Mobile Computing: Short Note

Posted by admin on May 7, 2019



Mobile Computing: A technology that allows transmission of data, via a computer, without having to be connected to a fixed physical link.
Mobile voice communication is widely established throughout the world and has had a very rapid increase in the number of subscribers to the various cellular networks over the last few years. An extension of this technology is the ability to send and receive data across these cellular networks. This is the principle of mobile computing.
Mobile data communication has become a very important and rapidly evolving technology as it allows users to transmit data from remote locations to other remote or fixed locations. This proves to be the solution to the biggest problem of business people on the move – mobility.
In this article we give an overview of existing cellular networks and describe in detail the CDPD technology which allows data communications across these networks. Finally, we look at the applications of Mobile Computing in the real world.


Mobile telephony took off with the introduction of cellular technology, which allowed the efficient utilization of frequencies, enabling the connection of a large number of users. During the 1980s, analogue technology was used. Among the most well-known systems were the NMT900 and NMT450 (Nordic Mobile Telephone) and AMPS (Advanced Mobile Phone Service). In the 1990s, digital cellular technology was introduced, with GSM (Global System for Mobile Communications) being the most widely accepted system around the world. Other such systems are DCS1800 (Digital Communication System) and PCS1900 (Personal Communication System).
A cellular network consists of mobile units linked to switching equipment, which interconnects the different parts of the network and allows access to the fixed Public Switched Telephone Network (PSTN). The technology is hidden from view; it is incorporated in a number of transceivers called Base Stations (BS). Every BS is located at a strategically selected place and covers a given area or cell – hence the name cellular communications. A number of adjacent cells grouped together form an area, and the corresponding BSs communicate through a so-called Mobile Switching Centre (MSC). The MSC is the heart of a cellular radio system. It is responsible for routing, or switching, calls from the originator to the destination. It can be thought of as managing the cell: it is responsible for set-up, routing control and termination of the call, for management of inter-MSC handover and supplementary services, and for collecting charging and accounting information. The MSC may be connected to other MSCs on the same network or to the PSTN.

Mobile Switching Centre

The frequencies used vary according to the cellular network technology implemented. For GSM, the 890–915 MHz range is used for transmission and 935–960 MHz for reception. The DCS technology uses frequencies in the 1800 MHz range, while PCS uses the 1900 MHz range.
Each cell has a number of channels associated with it. These are assigned to subscribers on demand. When a Mobile Station (MS) becomes ‘active’ it registers with the nearest BS. The corresponding MSC stores the information about that MS and its position. This information is used to direct incoming calls to the MS.
If during a call the MS moves to an adjacent cell, then a change of frequency will necessarily occur, since adjacent cells never use the same channels. This procedure is called handover and is the key to mobile communications. As the MS approaches the edge of a cell, the BS monitors the decrease in signal power. The strength of the signal is compared with adjacent cells, and the call is handed over to the cell with the strongest signal.
During the switch, the line is lost for about 400 ms. When the MS moves from one area to another, it registers itself with the new MSC. Its location information is updated, thus allowing MSs to be used outside their ‘home’ areas.
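The handover decision described above can be sketched as a comparison of measured signal strengths. The hysteresis margin below is illustrative (real networks tune such thresholds per deployment); it prevents a call from "ping-ponging" between two cells of similar strength at a cell edge.

```python
def handover_target(serving_rssi, neighbours, hysteresis=3.0):
    """Decide whether to hand a call over to a neighbouring cell.

    serving_rssi: signal strength (dBm) measured on the serving cell.
    neighbours:   {cell_id: rssi_dbm} for adjacent cells.
    Returns the cell to hand over to, or None to stay put.
    """
    best = max(neighbours, key=neighbours.get, default=None)
    # Hand over only if a neighbour is clearly stronger than the serving cell.
    if best is not None and neighbours[best] > serving_rssi + hysteresis:
        return best
    return None
```

A usage example: a mobile at the cell edge measuring -95 dBm on its serving cell would be handed to a -80 dBm neighbour, while a mobile still well covered would stay where it is.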


Data Communications is the exchange of data using existing communication networks. The term data covers a wide range of applications including File Transfer (FT), interconnection between Wide-Area-Networks (WAN), facsimile (fax), electronic mail, access to the internet and the World Wide Web (WWW).

Mobile Communications Overview

Data communications have been achieved using a variety of networks such as the PSTN, leased lines and, more recently, ISDN (Integrated Services Digital Network) and ATM (Asynchronous Transfer Mode)/Frame Relay. These networks are partly or totally analogue or digital, using technologies such as circuit switching and packet switching.
Circuit switching implies that data from one user (sender) to another (receiver) has to follow a prespecified path. If a link to be used is busy, the message cannot be redirected, a property which causes many delays.
Packet switching is an attempt to make better utilization of the existing network by splitting the message to be sent into packets. Each packet contains information about the sender, the receiver and the position of the packet in the message, as well as part of the actual message. There are many protocols defining the way packets can be sent from the sender to the receiver. The most widely used are the Virtual Circuit Switching system, which implies that packets have to be sent through the same path, and the Datagram system, which allows packets to be sent along various paths depending on network availability. Packet switching requires more equipment at the receiver, where the message has to be reconstructed.
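The splitting and reassembly described above can be sketched as follows. The field names and tiny payload size are illustrative, not taken from any particular protocol; the sequence number is what lets a Datagram receiver reorder packets that arrived along different paths.

```python
def packetize(message: bytes, sender: str, receiver: str, payload_size: int = 4):
    """Split a message into packets carrying sender, receiver and sequence number."""
    chunks = [message[i:i + payload_size]
              for i in range(0, len(message), payload_size)]
    return [{"src": sender, "dst": receiver, "seq": n,
             "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Receiver-side reconstruction: sort by sequence number (packets may
    arrive out of order in a Datagram system) and concatenate the payloads."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
```

Even if the packets arrive in reverse order, `reassemble` recovers the original message, which is exactly the extra work the receiver must perform in a packet-switched system.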
The introduction of mobility in data communications required a move from the Public Switched Data Network (PSDN) to other networks like the ones used by mobile phones. PCSI has come up with an idea called CDPD (Cellular Digital Packet Data) technology which uses the existing mobile network (frequencies used for mobile telephony).
Mobility implemented in data communications has a significant difference compared to voice communications. Mobile phones allow the user to move around and talk at the same time; the loss of the connection for 400 ms during handover is undetectable by the user. When it comes to data, 400 ms is not only detectable but causes huge distortion to the message. Therefore data can be transmitted from a mobile station only under the assumption that it remains stationary or within the same cell.


Today, the mobile data communications market is becoming dominated by a technology called CDPD.
There are other alternatives to this technology namely Circuit Switched Cellular, Specialized Mobile Radio and Wireless Data Networks.
CDPD’s principle lies in the usage of the idle time in between existing voice signals that are being sent across the cellular networks. The major advantage of this system is the fact that the idle time is not chargeable and so the cost of data transmission is very low. This may be regarded as the most important consideration by business individuals.
CDPD networks allow fixed or mobile users to connect to the network across a fixed link and a packet switched system respectively. Fixed users have a fixed physical link to the CDPD network. In the case of a mobile end user, the user can, if CDPD network facilities are non-existent, connect to existing circuit switched networks and transmit data via these networks. This is known as Circuit Switched CDPD (CS-CDPD).
Circuit Switched CDPD

Service coverage is a fundamental element of providing effective wireless solutions to users, and this method achieves that objective. Where CDPD is available, data is split into packets and a packet-switched network protocol is used to transport the packets across the network. This may be of either the Datagram or the Virtual Circuit Switching form.
The data packets are inserted on momentarily unoccupied voice frequencies during the idle time on the voice signals. CDPD networks have a network hierarchy with each level of the hierarchy doing its own specified tasks.

The hierarchy consists of the following levels:
• Mobile End User Interface
The mobile end user can transmit both data and voice signals using a single device, such as a Personal Digital Assistant or personal computer, connected to a Radio Frequency (RF) modem specially adapted with the antennae required to transmit data on the cellular network. Voice signals are transmitted via a mobile phone connected to the RF modem unit. RF modems transfer data in both forward and reverse channels using Gaussian Minimum Shift Keying (GMSK) modulation, a modified form of Frequency Shift Keying (FSK) with a modulation index of 0.5.
• Mobile Data Base Station (MDBS)
In each cell of the cellular reception area, there is a Mobile Data Base Station (MDBS) which is responsible for detection of idle time in voice channels, for relaying data between the mobile units and the Mobile Data Intermediate Systems (MDIS), sending of packets of data onto the appropriate unoccupied frequencies as well as receiving data packets and passing them to the appropriate Mobile end user within its domain.
o Detection of idle time.
This is achieved using a scanning receiver (also known as a sniffer) housed in the MDBS. The sniffer detects voice traffic by measuring the signal strength on a specific frequency, and hence detects idle channels.
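The sniffer's scan can be sketched as a threshold test on measured signal strength. The threshold value is an assumption for illustration only; a real MDBS would calibrate it against the local noise floor.

```python
IDLE_THRESHOLD_DBM = -110  # illustrative: below this, assume no voice carrier

def find_idle_channels(rssi_by_channel, threshold=IDLE_THRESHOLD_DBM):
    """Scan measured signal strengths ({channel: rssi_dbm}) and return the
    channels on which no voice traffic appears to be present."""
    return [ch for ch, rssi in sorted(rssi_by_channel.items())
            if rssi < threshold]
```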
o Relaying data packets between mobile units and networks.
If the sniffer detects two idle channels, then the MDBS establishes two RF air-links between the end user unit and itself. Two channels are required to achieve bidirectional communication. One channel is for forward communication from the MDBS to the mobile units. This channel is unique to each mobile unit and hence contention-free. The reverse channels are shared between a number of mobile units and, as a result, two mobile units sharing a reverse link cannot communicate with each other.
Reverse channels are accessed using a Digital Sense Multiple Access with Collision Detection (DSMA – CD) protocol which is similar to the protocol used in Ethernet communication which utilizes Carrier Sense Multiple Access with Collision Detection (CSMA – CD). This protocol allows the collision of two data packets on a common channel to be detected so that the Mobile unit can be alerted by the MDBS to retry transmission at a later time.
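The DSMA-CD cycle just described (sense the busy flag, transmit, back off and retry on collision) can be sketched as below. The channel-state and collision indications are stood in by callables, and the retry limit and backoff window are illustrative, not values from the CDPD specification.

```python
import random

def dsma_cd_send(channel_busy, collided, max_retries=5, seed=0):
    """Sketch of a DSMA-CD sender.

    channel_busy: callable returning True while the MDBS's busy flag is set.
    collided:     callable returning True if this transmission collided.
    Returns the attempt number on success, or None after max_retries.
    """
    rng = random.Random(seed)
    for attempt in range(max_retries):
        while channel_busy():
            pass  # digital sense: defer until the busy flag clears
        if not collided():
            return attempt  # transmission got through
        # Collision detected: back off a random number of slots and retry.
        slots = rng.randint(1, 2 ** (attempt + 1))
        _ = slots  # in real hardware this would be a timed wait
    return None  # gave up
```

The structure is the same as Ethernet's CSMA-CD, except that the mobile cannot hear the channel directly: it relies on the busy/collision flags the MDBS broadcasts on the forward channel, hence "digital sense".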
Once a link is established, the MDBS can quickly detect if and when a voice signal is ramping up (requesting this link), and within the 40 ms it takes for the voice signal to ramp up and get a link, the MDBS disconnects from the current air-link and finds another idle channel, establishing a new link. This is known as channel hopping.
The speed at which the MDBS hops channels ensures that the CDPD network is completely invisible to the existing cellular networks and it doesn’t interfere with transmission of existing voice channels.
When all voice channels are at capacity, extra frequencies specifically set aside for CDPD data can be utilized, although this scenario is very unlikely: each cell within the reception area typically has 57 channels, each of which is idle 25–30% of the time on average.
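A rough back-of-the-envelope calculation shows why that idle time is usually plenty. Using the figures above and the raw 19.2 kbit/s CDPD channel rate (mentioned later in this article), one cell's idle time alone is worth hundreds of kbit/s of data capacity:

```python
channels_per_cell = 57
idle_fraction = 0.25        # low end of the 25-30% idle time quoted above
channel_rate_kbps = 19.2    # raw CDPD channel rate

# Idle time across all channels is equivalent to this many full-time channels:
idle_channel_equivalent = channels_per_cell * idle_fraction   # 14.25 channels

# Aggregate raw capacity available for CDPD data in one cell:
idle_capacity_kbps = idle_channel_equivalent * channel_rate_kbps  # 273.6 kbit/s
```

This is only an upper bound on raw capacity; framing overhead and error-correction data (discussed below) reduce the usable throughput.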
• Mobile Data Intermediate Systems (MDIS)
Groups of MDBSs that control the cells in the cellular network reception area are connected to a higher-level entity in the network hierarchy, the Mobile Data Intermediate System (MDIS). Connection is made via a wideband trunk cable. Data packets are then relayed by the MDBS between mobile end users and the MDIS.
These MDISs use a Mobile Network Location Protocol (MNLP) to exchange location information about mobile end users within their domain. The MDIS maintains a database for each of the Mobile End Systems (M-ES) in its serving area. Each mobile unit has a fixed home area but may be located in any area where reception is available. So, if an MDIS unit receives a data packet addressed to a mobile unit that resides in its domain, it sends the data packet to the appropriate MDBS in its domain, which will forward it as required. If the data packet is addressed to a mobile unit in another group of cells, then the MDIS forwards the data packet to the appropriate MDIS using the forward channel. The MDIS units hide all mobility issues from systems in higher levels of the network hierarchy.
In the reverse direction, where messages are from the mobile end user, packets are routed directly to their destination and not necessarily through the mobile end user's home MDIS.
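The MDIS forwarding decision described above can be sketched as a simple lookup. The data-structure shapes (a packet dictionary carrying the destination's registered area and cell) are illustrative, not the MNLP wire format.

```python
def route_packet(packet, my_area, mdbs_by_cell, mdis_by_area):
    """Forwarding decision at one MDIS.

    packet:       {"dst_area": ..., "dst_cell": ...} from the location database.
    my_area:      the area this MDIS serves.
    mdbs_by_cell: {cell: mdbs_id} for cells in this MDIS's domain.
    mdis_by_area: {area: mdis_id} for peer MDISs.
    """
    area, cell = packet["dst_area"], packet["dst_cell"]
    if area == my_area:
        # Destination mobile is in our domain: relay via the serving MDBS.
        return ("mdbs", mdbs_by_cell[cell])
    # Otherwise hand the packet off to the MDIS serving that area.
    return ("mdis", mdis_by_area[area])
```

Because this decision is made entirely inside the MDIS layer, higher levels of the hierarchy never need to know where the mobile currently is, which is exactly the mobility-hiding property the text describes.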
• Intermediate Systems (IS)
The MDISs are interconnected by these ISs, which form the backbone of the CDPD system. These systems are unaware of the mobility of end users, as this is hidden by lower levels of the network hierarchy. The ISs are the systems that provide the CDPD interface to the various computer and phone networks.
The ISs relay data between MDISs and other ISs throughout the network. They can be connected to routers that support Internet and Open Systems Interconnection Connectionless Network Services (OSI-CLNS), to allow access to other cellular carriers and external land-based networks.


Several actions are necessary in order to obtain reliability and security over the network:
• User Authentication
The procedure which checks whether the subscriber identity transferred over the radio path corresponds with the details held in the network.
• User Anonymity
Instead of the actual directory telephone number, the International Mobile Subscriber Identity (IMSI) number is used within the network to uniquely identify a mobile subscriber.
• Fraud Prevention
Protection against impersonation of authorised users and fraudulent use of the network is required.
• Protection of User Data
All the signals within the network are encrypted and the identification key is never transmitted through the air. This ensures maximum network and data security.
The information needed for the above actions is stored in databases. The Home Location Register (HLR) stores information relating the Mobile Station (MS) to its network. This includes information for each MS on subscription levels, supplementary services and the current or most recently used network and location area. The Authentication Centre (AUC) provides the information to authenticate MSs using the network, in order to guard against possible fraud, stolen subscriber cards, or unpaid bills. The Visitor Location Register (VLR) stores information about subscription levels, supplementary services and location for a subscriber who is currently in, or has very recently been in, that area. It may also record whether a subscriber is currently active, thus avoiding delay and unnecessary use of the network in trying to call a switched-off terminal.
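The authentication step can be sketched as a challenge-response exchange: the network sends a random challenge, and both the SIM and the Authentication Centre compute a response from it using the shared secret key, which never crosses the radio path. SHA-256 below is only a stand-in for the operator's secret A3 algorithm (real GSM networks use operator-specific algorithms such as COMP128); the key and response sizes are likewise illustrative.

```python
import hashlib
import os

def a3_response(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for the operator's A3 algorithm: derive a short signed
    response (SRES) from the secret key Ki and the random challenge."""
    return hashlib.sha256(ki + rand).digest()[:4]

def authenticate(stored_ki: bytes, sim_ki: bytes) -> bool:
    """Challenge-response authentication. Only the challenge and the
    response travel over the air; Ki itself is never transmitted."""
    rand = os.urandom(16)                      # network's random challenge
    sres_from_sim = a3_response(sim_ki, rand)  # computed on the SIM card
    expected = a3_response(stored_ki, rand)    # computed in the AUC
    return sres_from_sim == expected
```

A SIM holding the correct key produces a matching response; a cloned card with the wrong key fails, which is how impersonation is prevented without ever exposing the key.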
The data packets are transmitted at speeds of typically 19.2 kbit/s to the MDBS, but actual throughput may be as low as 9.6 kbit/s due to the extra redundant data that is added to transmitted packets. This information includes the sender address, the receiver address and, in the case of Datagram switching, a packet ordering number. Check data is also added to allow error correction if bits are incorrectly received. Each data packet is encoded with the check data using a Reed-Solomon forward error correction code. The encoded sequence is then exclusive-ORed (XORed) with a pseudo-random sequence, to assist the MDBS and mobile units in bit synchronisation. The transmitted data is also encrypted to maintain system security.
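The pseudo-random scrambling step can be sketched as a repeating XOR. XOR is its own inverse, so the receiver recovers the frame by applying the same sequence again. The pattern below is made up for illustration; it is not the actual CDPD pseudo-random sequence, and the real airlink applies this after Reed-Solomon encoding.

```python
def scramble(data: bytes, pn: bytes) -> bytes:
    """XOR a frame with a repeating pseudo-random sequence. Applying the
    same function twice with the same sequence restores the original."""
    return bytes(b ^ pn[i % len(pn)] for i, b in enumerate(data))

PN = bytes([0x5A, 0xC3, 0x96, 0x3D])  # illustrative pseudo-random pattern
```

Note that plain OR would not work here: OR discards information and cannot be undone, which is why scrambling schemes use exclusive-OR.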
CDPD follows the OSI standard model for packet switched data communications. The CDPD architecture extends across layers one, two and three of the OSI layer model. The mobile end users handle the layer 4 functions (transport) and higher layers of the OSI model such as user interface.


The question that always arises when a business is thinking of buying a mobile computer is “Will it be worth it?”
In many fields of work, the ability to keep on the move is vital in order to utilise time efficiently. Efficient utilization of resources (e.g. staff) can mean substantial savings in transportation costs and other non-quantifiable benefits such as increased customer attention, the impact of on-site maintenance and improved intercommunication within the business.
The importance of Mobile Computers has been highlighted in many fields of which a few are described below:
• For Estate Agents
Estate agents can work either at home or out in the field. With mobile computers they can be more productive. They can obtain current real estate information by accessing multiple listing services, which they can do from home, office or car when out with clients. They can provide clients with immediate feedback regarding specific homes or neighborhoods, and with faster loan approvals, since applications can be submitted on the spot. Therefore, mobile computers allow them to devote more time to clients.
• Emergency Services
Ability to receive information on the move is vital where the emergency services are involved. Information regarding the address, type and other details of an incident can be dispatched quickly, via a CDPD system using mobile computers, to one or several appropriate mobile units which are in the vicinity of the incident.
Here the reliability and security implemented in the CDPD system would be of great advantage.

• In courts
Defense counsels can take mobile computers into court. When the opposing counsel references a case with which they are not familiar, they can use the computer to get direct, real-time access to online legal database services, where they can gather information on the case and related precedents. Mobile computers therefore allow immediate access to a wealth of information, making people better informed and prepared.
• In companies
Managers can use mobile computers in, say, critical presentations to major customers. They can access the latest market share information. During a short recess, they can revise the presentation to take advantage of this information. They can communicate with the office about possible new offers and call meetings to discuss responses to the new proposals. Therefore, mobile computers can leverage competitive advantages.
• Stock Information Collation/Control
In environments where access to stock is very limited, e.g. factory warehouses, the use of small portable electronic databases accessed via a mobile computer would be ideal.
Data collated can be written directly to a central database, via a CDPD network, which holds all stock information, so there is no need to transfer the data to the central computer at a later date. This ensures that, from the time a stock count is completed, there is no inconsistency between the data input on the portable computers and the central database.
• Credit Card Verification
At Point of Sale (POS) terminals in shops and supermarkets, when customers use credit cards for transactions, the intercommunication required between the bank central computer and the POS terminal, in order to effect verification of the card usage, can take place quickly and securely over cellular channels using a mobile computer unit. This can speed up the transaction process and relieve congestion at the POS terminals.
• Taxi/Truck Dispatch
Using the idea of a centrally controlled dispatcher with several mobile units (taxis), mobile computing allows the taxis to be given full details of the dispatched job as well as allowing the taxis to communicate information about their whereabouts back to the central dispatch office. This system is also extremely useful for secure deliveries, allowing a central computer to track and receive status information from all of its mobile secure delivery vans. Again, the security and reliability properties of the CDPD system shine through.

• Electronic Mail/Paging
Usage of a mobile unit to send and read emails is a very useful asset for any business individual, as it allows him/her to keep in touch with any colleagues as well as any urgent developments that may affect their work. Access to the Internet, using mobile computing technology, allows the individual to have vast arrays of knowledge at his/her fingertips.
Paging is also achievable here, giving even more intercommunication capability between individuals, using a single mobile computer device.


With the rapid technological advancements in Artificial Intelligence, Integrated Circuitry and increases in Computer Processor speeds, the future of mobile computing looks increasingly exciting.
With the emphasis increasingly on compact, small mobile computers, it may also be possible to have all the practicality of a mobile computer in the size of a hand held organizer or even smaller.
Use of Artificial Intelligence may allow mobile units to be the ultimate in personal secretaries, which can receive emails and paging messages, understand what they are about, and change the individual’s personal schedule according to the message. This can then be checked by the individual to plan his/her day.
The working lifestyle will change, with the majority of people working from home, rather than commuting. This may be beneficial to the environment as less transportation will be utilised. This mobility aspect may be carried further in that, even in social spheres, people will interact via mobile stations, eliminating the need to venture outside of the house.
This scary concept of a world full of inanimate zombies sitting, locked to their mobile stations, accessing every sphere of their lives via the computer screen becomes ever more real as technology, especially in the field of mobile data communications, rapidly improves and, as shown below, trends are very much towards ubiquitous or mobile computing.

Major Trends in Computing

Indeed, technologies such as interactive television and video image compression already imply a certain degree of mobility in the home, e.g. home shopping. Using the mobile data communication technologies discussed, this mobility may be pushed to the extreme.
The future of Mobile Computing is very promising indeed, although technology may go too far, causing detriment to society.

A GSM network is composed of several functional entities, whose functions and interfaces are defined.

The GSM network can be divided into three broad parts. The Mobile Station is carried by the subscriber; the Base Station Subsystem controls the radio link with the Mobile Station. The Network Subsystem, the main part of which is the Mobile services Switching Center, performs the switching of calls between the mobile and other fixed or mobile network users, as well as management of mobile services, such as authentication. Not shown is the Operations and Maintenance center, which oversees the proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates with the Mobile service Switching Center across the A interface.

Mobile Station

The mobile station (MS) consists of the physical equipment, such as the radio transceiver, display and digital signal processors, and a smart card called the Subscriber Identity Module (SIM). The SIM provides personal mobility, so that the user can have access to all subscribed services irrespective of both the location of the terminal and the use of a specific terminal. By inserting the SIM card into another GSM cellular phone, the user is able to receive calls at that phone, make calls from that phone, or receive other subscribed services.
The mobile equipment is uniquely identified by the International Mobile Equipment Identity (IMEI). The SIM card contains the International Mobile Subscriber Identity (IMSI), identifying the subscriber, a secret key for authentication, and other user information. The IMEI and the IMSI are independent, thereby providing personal mobility. The SIM card may be protected against unauthorized use by a password or personal identity number.

Base Station Subsystem

The Base Station Subsystem is composed of two parts, the Base Transceiver Station (BTS) and the Base Station Controller (BSC). These communicate across the specified A-bis interface, allowing (as in the rest of the system) operation between components made by different suppliers.
The Base Transceiver Station houses the radio transceivers that define a cell and handles the radio-link protocols with the Mobile Station. In a large urban area, there will potentially be a large number of BTSs deployed. The requirements for a BTS are ruggedness, reliability, portability, and minimum cost.
The Base Station Controller manages the radio resources for one or more BTSs. It handles radio-channel setup, frequency hopping, and handovers, as described below. The BSC is the connection between the mobile and the Mobile service Switching Center (MSC). The BSC also translates the 13 kbit/s voice channel used over the radio link to the standard 64 kbit/s channel used by the Public Switched Telephone Network or ISDN.

Network Subsystem

The central component of the Network Subsystem is the Mobile services Switching Center (MSC). It acts like a normal switching node of the PSTN or ISDN, and in addition provides all the functionality needed to handle a mobile subscriber, such as registration, authentication, location updating, handovers, and call routing to a roaming subscriber. These services are provided in conjunction with several functional entities, which together form the Network Subsystem. The MSC provides the connection to the public fixed network (PSTN or ISDN), and signalling between functional entities uses the ITU-T Signaling System Number 7 (SS7), used in ISDN and widely used in current public networks.
The Home Location Register (HLR) and Visitor Location Register (VLR), together with the MSC, provide the call routing and (possibly international) roaming capabilities of GSM. The HLR contains all the administrative information of each subscriber registered in the corresponding GSM network, along with the current location of the mobile. The current location of the mobile is in the form of a Mobile Station Roaming Number (MSRN) which is a regular ISDN number used to route a call to the MSC where the mobile is currently located. There is logically one HLR per GSM network, although it may be implemented as a distributed database.
The Visitor Location Register contains selected administrative information from the HLR, necessary for call control and provision of the subscribed services, for each mobile currently located in the geographical area controlled by the VLR. Although each functional entity can be implemented as an independent unit, most manufacturers of switching equipment implement one VLR together with one MSC, so that the geographical area controlled by the MSC corresponds to that controlled by the VLR, simplifying the signaling required. Note that the MSC contains no information about particular mobile stations – this information is stored in the location registers.
The other two registers are used for authentication and security purposes. The Equipment Identity Register (EIR) is a database that contains a list of all valid mobile equipment on the network, where each mobile station is identified by its International Mobile Equipment Identity (IMEI). An IMEI is marked as invalid if it has been reported stolen or is not type approved. The Authentication Center is a protected database that stores a copy of the secret key stored in each subscriber’s SIM card, which is used for authentication and ciphering of the radio channel.

Code division multiple access (CDMA)

CDMA is a channel access method utilized by various radio communication technologies. It should not be confused with the mobile phone standards called cdmaOne and CDMA2000 (which are often referred to as simply “CDMA”), which use CDMA as an underlying channel access method.
One of the basic concepts in data communication is the idea of allowing several transmitters to send information simultaneously over a single communication channel. This allows several users to share a bandwidth of frequencies. This concept is called multiplexing. CDMA employs spread-spectrum technology and a special coding scheme (where each transmitter is assigned a code) to allow multiple users to be multiplexed over the same physical channel. By contrast, time division multiple access (TDMA) divides access by time, while frequency-division multiple access (FDMA) divides it by frequency. CDMA is a form of “spread-spectrum” signaling, since the modulated coded signal has a much higher data bandwidth than the data being communicated.
An analogy to the problem of multiple access is a room (channel) in which people wish to communicate with each other. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but not other people. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can understand each other.
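The code-division idea can be demonstrated with two orthogonal Walsh codes sharing one channel: each user's bits are spread by their own chip code, the spread signals add together on the air, and correlating the combined signal with one user's code recovers that user's bits while the other user's contribution cancels out.

```python
def spread(bits, code):
    """Spread each data bit (+1/-1) into len(code) chips using the user's code."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Correlate the combined channel signal with one user's code; the sign
    of each correlation recovers that user's bits, while orthogonal users'
    contributions sum to zero."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else -1)
    return bits

# Two orthogonal (Walsh) chip codes sharing the same channel:
CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]

bits_a = [1, -1]
bits_b = [-1, -1]
# Both transmissions overlap in time and frequency; they simply add:
channel = [x + y for x, y in zip(spread(bits_a, CODE_A),
                                 spread(bits_b, CODE_B))]
```

Despreading `channel` with CODE_A yields `bits_a` and with CODE_B yields `bits_b`: each "listener" understands only its own "language", just as in the room analogy above. (Real CDMA systems use much longer codes and handle non-orthogonal, asynchronous arrivals; this is the idealized synchronous case.)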

General packet radio service (GPRS)

GPRS is a packet-oriented mobile data service available to users of the 2G cellular communication system Global System for Mobile Communications (GSM), as well as in 3G systems. In 2G systems, GPRS provides data rates of 56–114 kbit/s.

GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user actually is using the capacity or is in an idle state. GPRS is a best-effort packet switched service, as opposed to circuit switching, where a certain quality of service (QoS) is guaranteed during the connection for non-mobile users.
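The billing difference can be illustrated with a toy calculation. The rates are invented for the example; the point is only that an intermittently active user pays for the whole connection under circuit switching but only for the traffic actually moved under GPRS.

```python
def circuit_cost(minutes_connected, rate_per_minute):
    """Circuit-switched billing: pay for connection time, even while idle."""
    return minutes_connected * rate_per_minute

def gprs_cost(megabytes_transferred, rate_per_mb):
    """GPRS billing: pay only for traffic actually transferred."""
    return megabytes_transferred * rate_per_mb

# A 60-minute session that actually transfers only 2 MB of data
# (illustrative rates: 0.10/minute vs 0.50/MB):
circuit = circuit_cost(60, 0.10)  # charged for the full hour
packet = gprs_cost(2, 0.50)       # charged for 2 MB of traffic
```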

2G cellular systems combined with GPRS are often described as 2.5G, that is, a technology between the second (2G) and third (3G) generations of mobile telephony. It provides moderate speed data transfer, by using unused time division multiple access (TDMA) channels in, for example, the GSM system. Originally there was some thought to extend GPRS to cover other standards, but instead those networks are being converted to use the GSM standard, so that GSM is the only kind of network where GPRS is in use. GPRS is integrated into GSM Release 97 and newer releases. It was originally standardized by European Telecommunications Standards Institute (ETSI), but now by the 3rd Generation Partnership Project (3GPP).

GPRS was developed as a GSM response to the earlier CDPD and i-mode packet switched cellular technologies.


Wireless Networking:

A wireless network is any type of computer network that is wireless, and is commonly associated with a telecommunications network whose interconnections between nodes are implemented without the use of wires.[1] Wireless telecommunications networks are generally implemented with some type of remote information transmission system that uses electromagnetic waves, such as radio waves, for the carrier, and this implementation usually takes place at the physical level or “layer” of the network.
Wireless PAN
Wireless Personal Area Network (WPAN) is a type of wireless network that interconnects devices within a relatively small area, generally within reach of a person. For example, Bluetooth provides a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications.[3]
Wireless LAN
Wireless Local Area Network (WLAN) is a wireless alternative to a computer Local Area Network (LAN) that uses radio instead of wires to transmit data back and forth between computers in a small area such as a home, office, or school. Wireless LANs are standardized under the IEEE 802.11 series.

Screenshots of Wi-Fi network connections in Microsoft Windows: Figure 1 (left) shows that not all networks are encrypted (locked unless you have the key), which means anyone in range can access them; Figures 2 and 3 (middle and right) show that many networks are encrypted.
• Wi-Fi: Wi-Fi is a commonly used wireless network in computer systems that enables connection to the Internet or to other devices with Wi-Fi functionality. Wi-Fi networks broadcast radio waves that can be picked up by Wi-Fi receivers attached to different computers or mobile phones.
• Fixed wireless data: point-to-point links, typically over line-of-sight paths. It is often used in cities to connect networks in two or more buildings without physically wiring the buildings together.
Wireless MAN
Wireless Metropolitan area networks are a type of wireless network that connects several Wireless LANs.
WiMAX is the term used to refer to wireless MANs and is covered in IEEE 802.16d/802.16e.
Mobile devices networks
In recent decades with the development of smart phones, cellular telephone networks have been used to carry computer data in addition to telephone conversations:
• Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base station system, which then connects to the operation and support system; it then connects to the switching system, where the call is transferred to where it needs to go. GSM is the most common standard and is used for a majority of cell phones.[4]
• Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America. Sprint was the first carrier to offer PCS service.
• D-AMPS: D-AMPS, which stands for Digital Advanced Mobile Phone Service, is an upgraded version of AMPS that is being phased out due to advances in technology; the newer GSM networks are replacing the older system.

IEEE 802.11 is a set of standards for wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. They are maintained by the IEEE LAN/MAN Standards Committee (IEEE 802).



The original version of the standard, IEEE 802.11, was released in 1997 and clarified in 1999, but is today obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical-layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band.
Legacy 802.11 with direct-sequence spread spectrum was rapidly supplemented and popularized by 802.11b.

The 802.11a standard uses the same data link layer protocol and frame format as the original standard, but an OFDM-based air interface (physical layer). It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s range.
Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g, and in theory 802.11a signals cannot penetrate as far as those of 802.11b because they are absorbed more readily by walls and other solid objects in their path, due to their smaller wavelength. In practice 802.11b typically has a longer range at low speeds (802.11b will reduce speed to 5 Mbit/s or even 1 Mbit/s at low signal strengths). However, at higher speeds, 802.11a typically has the same or greater range due to less interference.
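The wavelength argument above is easy to check numerically: free-space wavelength is the speed of light divided by the carrier frequency. A small Python sketch (nominal band values only; real channels sit slightly above these figures):

```python
# Wavelength comparison for 802.11b/g (2.4 GHz) vs 802.11a (5 GHz).
# Shorter wavelengths are absorbed more readily by walls and other
# obstacles, which is one reason 802.11a has less range at low speeds.

C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_ghz: float) -> float:
    """Return the free-space wavelength in centimetres."""
    return C / (freq_ghz * 1e9) * 100

for band in (2.4, 5.0):
    print(f"{band} GHz -> {wavelength_cm(band):.1f} cm")
# 2.4 GHz -> 12.5 cm; 5.0 GHz -> 6.0 cm
```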
802.11b has a maximum raw data rate of 11 Mbit/s and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard) along with simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive wireless LAN technology.
802.11b devices suffer interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include: microwave ovens, Bluetooth devices, baby monitors and cordless telephones.
In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or about 22 Mbit/s average throughput.[4] 802.11g hardware is fully backwards compatible with 802.11b hardware and therefore is encumbered with legacy issues that reduce throughput when compared to 802.11a by ~21%.
The then-proposed 802.11g standard was rapidly adopted by consumers starting in January 2003, well before ratification, due to the desire for higher data rates, and reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, activity of an 802.11b participant will reduce the data rate of the overall 802.11g network.
Like 802.11b, 802.11g devices suffer interference from other products operating in the 2.4 GHz band.
Bluetooth is an open wireless protocol for exchanging data over short distances from fixed and mobile devices, creating personal area networks (PANs). It was originally conceived as a wireless alternative to RS232 data cables. It can connect several devices, overcoming problems of synchronization.
Bluetooth Implementation
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 frequencies. In its basic mode, the modulation is Gaussian frequency-shift keying (GFSK). It can achieve a gross data rate of 1 Mb/s. Bluetooth provides a way to connect and exchange information between devices such as mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles through a secure, globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency bandwidth. The Bluetooth specifications are developed and licensed by the Bluetooth Special Interest Group (SIG). The Bluetooth SIG consists of companies in the areas of telecommunication, computing, networking, and consumer electronics.
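The frequency-hopping behaviour described above can be illustrated with a small Python sketch. Note this is not the actual Bluetooth hop-selection algorithm (which is derived from the master's clock and device address); a seeded PRNG stands in for it, and the chunk size is arbitrary:

```python
import random

# Simplified sketch of frequency-hopping spread spectrum: data is
# chopped into chunks, and each chunk is sent on one of 79 channels in
# the 2.4 GHz ISM band (channel k sits at 2402 + k MHz).

NUM_CHANNELS = 79
BASE_MHZ = 2402

def hop_sequence(seed: int, length: int) -> list[int]:
    """Return a reproducible pseudo-random sequence of channel frequencies (MHz)."""
    rng = random.Random(seed)
    return [BASE_MHZ + rng.randrange(NUM_CHANNELS) for _ in range(length)]

def transmit(data: bytes, seed: int, chunk_size: int = 4):
    """Pair each chunk of data with the channel it would be sent on."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return list(zip(hop_sequence(seed, len(chunks)), chunks))

for freq, chunk in transmit(b"hello bluetooth!", seed=42):
    print(freq, chunk)
```

Because both sides derive the same sequence from a shared seed (in real Bluetooth, from the master's clock and address), the receiver knows which channel to listen on for each slot.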

Data broadcasting
The data broadcast approach is an efficient technique for disseminating data in mobile computing environments. To reduce the response time and the power consumption of the data broadcast approach, a mobile client may store frequently accessed data items in its cache. When a cached data item becomes out of date, the mobile client has to reaccess the new content of the data item from the broadcast channel. Reaccessing a cached data item may incur significant power consumption and suffer from a long delay. In this paper, we propose a data reaccess scheme which enables a mobile client to efficiently reaccess a cached data item. The strength of the proposed scheme lies in its capability to allow a mobile client to correctly reaccess its cached data items while the server inserts data items into, or deletes data items from, the broadcast structure in the course of data broadcasting. Our experiments show that the proposed scheme significantly reduces the tuning time required to reaccess a cached data item.
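The broadcast-plus-cache idea can be sketched as follows; the data structures and refresh policy are illustrative stand-ins, not the paper's actual reaccess scheme:

```python
# A server cyclically broadcasts (item_id, version, value) triples, and a
# mobile client tunes in only long enough to refresh the cached items
# whose versions have changed.

def broadcast_cycle(server_items: dict) -> list[tuple]:
    """One full broadcast cycle: every item with its current version."""
    return [(item_id, ver, val) for item_id, (ver, val) in server_items.items()]

def refresh_cache(cache: dict, cycle: list[tuple]) -> int:
    """Update stale cached items from the broadcast; return tuning count."""
    tuned = 0
    for item_id, ver, val in cycle:
        if item_id in cache and cache[item_id][0] < ver:
            cache[item_id] = (ver, val)   # reaccess: read the new content
            tuned += 1
    return tuned

server = {"a": (2, "new-a"), "b": (1, "old-b"), "c": (5, "new-c")}
cache = {"a": (1, "stale-a"), "b": (1, "old-b")}   # client caches a subset
print(refresh_cache(cache, broadcast_cycle(server)))  # prints 1: only "a" was stale
```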

Introduction to Mobile IP
IP version 4 assumes that a node's IP address uniquely identifies its physical attachment to the Internet. Therefore, when a correspondent host (CH) tries to send a packet to a mobile node (MN), that packet is routed to the MN's home network, independently of the MN's current point of attachment (this is because CHs do not have any knowledge of mobility).

When the MN is on its home network and a CH sends packets to it, the MN receives those packets and answers them as a normal host (this is one important requirement of Mobile IP). But if the MN is away from its home network, it needs an agent to work on its behalf. That agent is called the Home Agent (HA). The HA must be able to communicate with the MN the whole time it is "on-line", independently of the MN's current position; so the HA must know the physical location of the MN.

To make this possible, when the MN is away from home it must get a temporary address (called the care-of address), which is communicated to the HA to indicate its current point of attachment. This care-of address can be obtained in several ways, but most typically the MN gets it from an agent, in this case called a Foreign Agent (FA).

Therefore, when an MN is away from home and connected to a foreign network, it detects that it is on a different network and sends a registration request through the FA to the HA, requesting mobility capabilities for a period of time. The HA sends a registration reply back to the MN (through the FA) allowing or denying that registration. This applies when the Mobile Node is using a Foreign Agent for the registration; if the Mobile Node obtains the care-of address by other means, that step (registration through the FA) is not necessary.

If the HA allows the registration, it works as a proxy for the MN. When the MN's home network receives packets addressed to the MN, the HA intercepts those packets (using Proxy ARP), encapsulates them, and sends them to the care-of address, which is one of the addresses of the FA. The FA decapsulates those packets and forwards them to the MN (because it knows exactly where the MN is).

Encapsulation is the method used by the HA to deliver information to the MN: an extra IP header is put on top of the packet and the packet is tunnelled to the MN (when it is on a foreign network). Tunneling and encapsulation are defined in IP-in-IP tunneling and IP encapsulation within IP.
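A toy illustration of the HA/FA encapsulation path described above, with IP headers modelled as plain dictionaries and hypothetical host names; real IP-in-IP encapsulation operates on binary headers, of course:

```python
# HA side: the original packet, addressed to the MN's home address, is
# wrapped in an outer header addressed to the care-of address.
# FA side: the outer header is stripped and the inner packet delivered
# unchanged.

def encapsulate(packet: dict, care_of_address: str, home_agent: str) -> dict:
    """HA: add an outer header pointing at the care-of address."""
    return {"src": home_agent, "dst": care_of_address, "payload": packet}

def decapsulate(tunneled: dict) -> dict:
    """FA: strip the outer header, recovering the original packet."""
    return tunneled["payload"]

original = {"src": "correspondent.example", "dst": "mn.home.example",
            "payload": "application data"}
tunneled = encapsulate(original, care_of_address="fa.foreign.example",
                       home_agent="ha.home.example")
assert decapsulate(tunneled) == original  # the inner packet is untouched
```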

So, when the MN is on a foreign network, its home agent tunnels encapsulated packets to it via the FA. This continues until the lifetime expires (or the MN moves away). When this happens (time out), the MN must register again with its HA through the FA (if the MN obtains its care-of address by other means, it acts as its own FA).

When the MN moves to another network and detects this, it sends a new registration request through (once more) the new FA. In this case, the HA changes the MN's care-of address and forwards encapsulated packets to that new care-of address (which usually belongs to the FA). Some extensions of Mobile IP allow an MN to have several care-of addresses; the HA then sends the same information to all the care-of addresses. This is particularly useful when the MN is at the edges of cells in a wireless environment and is moving constantly.

The MN bases its movement detection mainly on the periodic advertisements that the FA (and HA) send to their local network. These messages are an extension of the ICMP Router Discovery messages and are called Agent Advertisements (because they advertise a valid agent for Mobile Nodes).

There are two different methods to detect network movement:

a) The first method is based on network prefixes. For further information see Mobile IP RFC 2002 (page 22). This method is not included in our current implementation.

b) The second method is based upon the Lifetime field within the main body of the ICMP Router Advertisement portion of the Agent Advertisement. The mobile node keeps track of that Lifetime; if it expires, the node sends an Agent Solicitation (asking for a new Agent Advertisement) and presumes that it has moved.

When the MN returns to its home network, it no longer requires mobility capabilities, so it sends a deregistration request to the HA, telling it that it is at home (to deactivate tunneling and remove the previous care-of address(es)).

At this point, the MN does not have to (de)register again until it moves away from its network. Movement detection is based on the same methods explained before.
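The lifetime-based movement detection of method (b) can be sketched as follows; the timing values, class shape, and return strings are illustrative:

```python
# The MN records the Lifetime carried in each Agent Advertisement.
# If no fresh advertisement arrives before it expires, the MN presumes
# it has moved and sends an Agent Solicitation.

class MobileNode:
    def __init__(self):
        self.agent = None
        self.expires_at = 0.0

    def on_agent_advertisement(self, agent: str, lifetime: float, now: float):
        """Record the advertising agent and when its advertisement expires."""
        self.agent = agent
        self.expires_at = now + lifetime

    def check(self, now: float) -> str:
        """Called periodically; decide whether we appear to have moved."""
        if self.agent and now > self.expires_at:
            self.agent = None
            return "send Agent Solicitation (presume moved)"
        return "still attached"

mn = MobileNode()
mn.on_agent_advertisement("FA-1", lifetime=9.0, now=0.0)
print(mn.check(now=5.0))   # still attached
print(mn.check(now=10.0))  # lifetime expired: solicit a new advertisement
```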

Wireless Application Protocol (commonly referred to as WAP)

WAP is an open international standard for application-layer network communications in a wireless communication environment. Its main use is to enable access to the Mobile Web from a mobile phone or PDA.

A WAP browser provides all of the basic services of a computer based web browser but simplified to operate within the restrictions of a mobile phone, such as its smaller view screen. WAP sites are websites written in, or dynamically converted to, WML (Wireless Markup Language) and accessed via the WAP browser.

Before the introduction of WAP, service providers had extremely limited opportunities to offer interactive data services. Interactive data applications are required to support now commonplace activities such as:

• Email by mobile phone
• Tracking of stock market prices
• Sports results
• News headlines
• Music downloads

Technical specifications:
• The WAP standard describes a protocol suite that allows the interoperability of WAP equipment and software with many different network technologies. The rationale for this was to build a single platform for competing network technologies such as GSM and IS-95 (also known as CDMA) networks.
• The bottom-most protocol in the suite is the WAP Datagram Protocol (WDP), which is an adaptation layer that makes every data network look a bit like UDP to the upper layers by providing unreliable transport of data with two 16-bit port numbers (origin and destination). WDP is considered by all the upper layers as one and the same protocol, which has several “technical realizations” on top of other “data bearers” such as SMS, USSD, etc. On native IP bearers such as GPRS, UMTS packet-radio service, or PPP on top of a circuit-switched data connection, WDP is in fact exactly UDP.
• WTLS provides a public-key cryptography-based security mechanism similar to TLS. Its use is optional.
• WTP provides transaction support (reliable request/response) adapted to the wireless world. WTP handles packet loss more effectively than TCP; such loss is common in 2G wireless technologies in most radio conditions, but is misinterpreted by TCP as network congestion.
• Finally, WSP is best thought of on first approach as a compressed version of HTTP.
This protocol suite allows a terminal to emit requests that have an HTTP or HTTPS equivalent to a WAP gateway; the gateway translates requests into plain HTTP.
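As noted above, on a native IP bearer WDP is exactly UDP: unreliable datagrams identified by 16-bit origin and destination ports. A minimal loopback sketch in Python (payload and port choice are illustrative):

```python
import socket

# One "WDP" datagram sent the way any UDP datagram is sent: the payload
# travels unreliably, tagged with a 16-bit origin and destination port.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS picks a free destination port
receiver.settimeout(5)
dest_port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"wsp-request-bytes", ("127.0.0.1", dest_port))

data, (addr, origin_port) = receiver.recvfrom(1024)
print(data, origin_port)                  # payload plus the 16-bit origin port
sender.close()
receiver.close()
```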
Wireless Application Environment (WAE)
In this space, application-specific markup languages are defined.
For WAP version 1.X, the primary language of the WAE is WML, which has been designed from scratch for handheld devices with phone-specific features. In WAP 2.0, the primary markup language is XHTML Mobile Profile.
TCP over wireless network
TCP is the most common transport protocol used on the Internet. It was designed primarily for wired networks. The characteristics of wireless links are very different from those of wired links, particularly in terms of loss behaviour.

This results in poor TCP performance over wireless links. In this paper, we propose enhancements to TCP so that it can differentiate between a loss due to congestion and a loss due to noise on the wireless link. With this knowledge, TCP can react differently to the two kinds of losses, leading to improved performance. The modified TCP is compatible with existing implementations.


Wireless links have fundamentally different characteristics than wired links. They are characterized by low bandwidth and error rates that are high, bursty and time-varying. This difference in error characteristics causes significant degradation in the performance of TCP, which was originally developed for wired networks. For example, TCP misinterprets packet loss over a wireless link as a sign of network congestion, causing poor throughput.

The problem has been studied by several researchers, and many solutions have been proposed.

Nanda et al. have suggested introducing reliability at the link layer for the wireless link, using a finite number of retransmissions. This approach does not completely shield the source transport layer from all losses on the wireless link. Also, studies have shown that link-layer retransmissions may interfere with TCP's end-to-end retransmissions, leading to degraded performance.
Bakre et al. and Yavatkar et al. advocate splitting the end-to-end TCP connection into two separate TCP connections, one over the wired network and the other over the wireless link, with the base station serving as the common point of the two connections. This approach isolates the transport layer from the erratic behavior of the wireless link. However, it does not preserve the semantics of TCP. Also, every packet incurs the overhead of going through TCP protocol processing twice at the base station. Further, it requires the base station to maintain state for every TCP connection passing through it.

Another approach adds a module called a snoop agent to the routing code at the base station. This agent monitors packets flowing on a TCP connection. It maintains a cache of unacknowledged TCP packets on a per-connection basis and performs local retransmission when it detects a packet loss. This approach does not completely shield the sender from losses on a wireless link. Also, it assumes that the wireless link is the last hop in the network path, and it requires the base station to maintain state information and a cache of unacknowledged packets for every TCP connection passing through it.
All the proposed solutions are based on the assumption that no changes can be made to existing TCP implementations. However, we note that newer hosts may run modified TCP implementations. We need only ensure that the modifications are backward-compatible and that older hosts do not notice a degradation in performance.

In this paper, we propose to augment TCP to respond to control signals sent from wireless gateway nodes. The control signals are such that they are unambiguously discarded by receiving hosts running older versions of TCP. The simulation results show that the scheme provides substantial performance improvement with low overhead.

A Simple Scheme

We first propose a simple solution. We conduct simulation experiments to show that it works in most circumstances, but our results also show that it does not improve performance under certain conditions. We propose a refinement to our basic scheme in the next section.
The basic idea is that if TCP can distinguish a packet loss due to congestion from one due to channel noise, algorithms can be developed to deal with the two kinds of losses differently. In this paper, we make the assumption that losses in the wired network are due to congestion, while those in the wireless network are related to noise. These are the same assumptions made by others. In order for TCP to be able to differentiate between the two kinds of losses, the network needs to generate some control information and send it to the source TCP entity. We can use ICMP for this purpose. We propose a new message type within ICMP to carry the required information. This has the additional advantage that nodes which do not understand this message type will simply discard the message. The new message type is called ICMP-DEFER.

2.1 Description of a Simple Scheme

In the proposed scheme, the base station generates an ICMP-DEFER message when the first attempt at transmitting a packet on the wireless link is unsuccessful. This policy ensures that within one round-trip time TCP will receive either an acknowledgment or an ICMP message, so that end-to-end retransmissions do not start while link-layer retransmissions may still be going on. A lack of both signals a congestion loss; thus TCP can distinguish between the two kinds of losses. The control information consists of the TCP and IP headers (a typical ICMP message), which is enough for the source TCP to decide which particular packet was lost on the wireless link. ICMP-DEFER messages are not retransmitted, so the overhead on the network is minimal.

TCP performs the following actions on receipt of an ICMP-DEFER message. If the retransmit timer is set for the segment indicated in the ICMP message, it postpones its expiry by the current estimate of the retransmission timeout (RTO) value. This avoids conflict between link-layer retransmissions and end-to-end retransmissions. We assume that one RTO is sufficient for the base station to exhaust all local retransmission attempts for a packet. If the wireless link remains in the error state for a longer duration, TCP will stop transmitting because there will be no acks, and the send window will not move.

If a segment needs to be retransmitted, TCP checks whether it has received an ICMP-DEFER message for this segment. If it has, it retransmits the segment without reducing cwnd. Also, when coming out of the fast recovery algorithm, it resets cwnd to its value before fast retransmit and fast recovery.
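The sender-side behaviour described above can be sketched as a small state machine; this is an illustration of the rules, not a TCP implementation, and the timer and cwnd handling are simplified:

```python
# On ICMP-DEFER for a segment: postpone that segment's retransmit timer
# by one RTO. On a retransmission for which a DEFER was seen: resend
# without shrinking cwnd (wireless loss); otherwise halve cwnd
# (congestion loss).

class DeferAwareSender:
    def __init__(self, rto: float, cwnd: int):
        self.rto = rto
        self.cwnd = cwnd
        self.timer = {}       # seq -> retransmit-timer expiry time
        self.deferred = set() # segments for which a DEFER was received

    def send(self, seq: int, now: float):
        self.timer[seq] = now + self.rto

    def on_icmp_defer(self, seq: int, now: float):
        """Wireless loss signalled: give link-layer retransmission time."""
        if seq in self.timer:
            self.timer[seq] += self.rto
        self.deferred.add(seq)

    def on_timeout(self, seq: int, now: float):
        """Retransmit; reduce cwnd only for congestion (no DEFER seen)."""
        if seq in self.deferred:
            self.deferred.discard(seq)          # wireless loss: keep cwnd
        else:
            self.cwnd = max(1, self.cwnd // 2)  # congestion loss
        self.send(seq, now)

s = DeferAwareSender(rto=1.0, cwnd=8)
s.send(1, now=0.0)
s.on_icmp_defer(1, now=0.5)   # timer pushed from 1.0 to 2.0
s.on_timeout(1, now=2.0)
print(s.cwnd)                  # prints 8: loss attributed to the wireless link
```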


Data Management in Mobile Computing :

Most mobile technologies physically support a broadcast mechanism, and a server can take advantage of push to disseminate information to all mobile clients in its cell. In a pull-based scheme, on the other hand, clients only access data when needed. These two facts introduce new mechanisms for data management, different from the traditional algorithms proposed for client-server distributed database systems. In this tutorial, I will discuss broadcast data management issues in a mobile environment. I will present taxonomies that categorize the proposed algorithms. These taxonomies provide insight into the tradeoffs inherent in data management algorithms in a mobile computing environment. I will give an executive summary of the problems, proposed models and solutions, and in the process compare and contrast existing algorithms for their merits and drawbacks. The tutorial will focus on the following topics:
Characteristics of data used in mobile Computing
• Mobile or nomadic computing
• Wireless
• Ubiquitous
• Personal
Architecture Issues
Key Issues: Mobility, Wireless Communications, and Portability

The location of mobile elements and therefore their point of attachment to the fixed network change as they move.
The consequences of mobility are numerous.
• The configuration of a system that includes mobile elements is not static. Thus,
1. In designing distributed algorithms, we can no longer rely on a fixed topology.
2. The center of activity, the system load, and locality change dynamically.
• Location management
1. The search cost to locate mobile elements is added to the cost of each communication involving them.
2. Efficient data structures, algorithms, and query execution plans must be devised for representing, managing, and querying the location of mobile elements, which is fast-changing data.
Mobility (continued)
• Heterogeneity
1. Connectivity becomes highly variant in performance and reliability. For instance, outdoors a mobile client may have to rely on low-bandwidth networks, while inside a building it may be offered reliable high-bandwidth connectivity, or even operate connected via wireline connections. Moreover, there may be areas with no adequate coverage, resulting in disconnections of various durations.
2. The number of devices in a network cell changes with time, and so do both the load at the base station and bandwidth availability.
3. There may also be variability in the provision of specific services, such as in the type of available printers or weather reports.
4. The resources available to a mobile element vary, for example, a docked computer has more memory or is equipped with a larger screen.
Mobility also raises very important security and authentication issues.
Wireless Medium
• Weak and Intermittent Connectivity
Wireless networks are more expensive, offer less bandwidth, and are less reliable than wireline networks.
Wireless communications face many obstacles because the surrounding environment interacts with the signal.
Thus, while the growth in wired network bandwidth has been tremendous (in current technology Ethernet provides 10 Mbps, FDDI 100 Mbps and ATM 155 Mbps), products for wireless communication achieve only 19 Kbps for packet radio communications and 9-14 Kbps for cellular telephony. The typical bandwidth of wireless LANs ranges from 250 Kbps to 2 Mbps and is expected to increase to 10 Mbps. Since the bandwidth is divided among the users sharing a cell, the deliverable bandwidth per user is even lower.
For radio transmission the error rate is so high that the effective bandwidth is limited to less than 10 Kbps. Thus, bandwidth is a scarce resource.
Furthermore, data transmission over the air is currently monetarily expensive.
Mobile elements may also voluntarily operate disconnected.

Wireless Medium (continued)
• Variant Connectivity
Wireless technologies vary on the degree of bandwidth and reliability they provide.
• Broadcast Facility
There is a high bandwidth broadcast channel from the base station to all mobile clients in its cell.
• Tariffs
For some networks (e.g., in cellular telephones), network access is charged per connection-time, while for others (e.g., in packet radio), it is charged per message (packet).
Portability of Mobile Elements
• Mobile elements are resource-poor when compared to static elements.
Mobile elements must be light and small to be easily carried around. Such considerations, in conjunction with a given cost and level of technology, will keep mobile elements having less resources than static elements, including memory, screen size and disk capacity.
This results in an asymmetry between static and mobile elements.
• Mobile elements rely on battery power.
Even with advances in battery technology, this concern will not cease to exist.
Concern for power consumption must span various levels in hardware and software design.
Mobile elements are more easily accidentally damaged, stolen, or lost.
Thus, they are less secure and reliable than static elements.

Caching and Replication in Mobile Data Management

Mobile computing refers to computing using devices that are not attached to a specific location; instead, their position (network or geographical) changes. Mobile computing can be traced back to file systems and the need for disconnected operation at the end of the 80s. With the rapid growth in mobile technologies and the cost-effectiveness of deploying wireless networks in the 90s, the goal of mobile computing became to support AAA (anytime, anywhere, any-form) access to data by users from their portable computers and mobile phones, devices with small displays and limited resources. This led to research in mobile data management, including transaction processing, query processing and data dissemination. A key characteristic of all these research efforts was the great emphasis on the mobile computing challenges, including:
• Intermittent Connectivity: computation must proceed despite short or long periods of network unavailability.
• Scarcity of Resources: due to the small sizes of portable devices, there are implicit restrictions in the availability of storage and computation capacity, and especially of energy.
• Mobility: the implications of mobility are varied. First, mobility introduces a number of technical challenges at the networking layer. It also offers a number of opportunities at the higher layers for explicitly using location information, either at the semantic level (for instance, for providing personalization) or at the performance level (for instance, for data prefetching). Finally, it amplifies heterogeneity.
In general, one can distinguish between single-hop and multi-hop underlying infrastructures. In single-hop infrastructures, each mobile device communicates with a stationary host, which corresponds to its point of attachment to the network; the rest of the routing is the responsibility of a stationary infrastructure, e.g., the Internet. In multi-hop wireless communication, on the other hand, an ad-hoc wireless network is formed in which wireless hosts participate in routing messages among each other. In both infrastructures, the hosts between the source (or sources) holding the data and the requester of the data (the data sink) form a dissemination tree. The hosts (mobile or stationary) that form the dissemination tree may store data and take part in the computation, thereby achieving in-network processing (e.g., aggregation). Locally caching or replicating data at the wireless host or at the intermediate nodes of the dissemination tree is important for improving system performance and availability.
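In-network aggregation over a dissemination tree can be sketched as follows; the tree shape and sensor readings are illustrative:

```python
# Each node aggregates (here, sums) the readings from its children before
# forwarding, so only one value travels on each link of the tree.

def aggregate(tree: dict, readings: dict, node: str) -> int:
    """Post-order sum of readings over the subtree rooted at node."""
    total = readings.get(node, 0)
    for child in tree.get(node, []):
        total += aggregate(tree, readings, child)
    return total

# sink <- {a, b};  a <- {c, d}
tree = {"sink": ["a", "b"], "a": ["c", "d"]}
readings = {"a": 1, "b": 2, "c": 3, "d": 4}
print(aggregate(tree, readings, "sink"))  # prints 10
```

Without in-network processing, every leaf reading would travel individually to the sink; with it, each intermediate node forwards a single aggregated value.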
Caching and replication generally attempt to guarantee that most data requests are for data held in main memory or local storage, negating the need to perform I/O or remote data retrieval. Hence, appropriate caching/replication schemes have traditionally been used to improve performance and reduce service time. In mobile environments the performance considerations go beyond simple speedups and data retrieval delays. In this article, we examine how caching and replication have been utilized in mobile data management, and more specifically in the first infrastructure, where data are cached at the mobile device in order to avoid excessive energy consumption and to cope with intermittent connectivity.
Consistency Levels
We consider the case in which a mobile computing device (such as a portable computer or cellular phone) is connected to the rest of the network through a wireless link. Wireless communication has a double impact on the mobile device: the limited bandwidth of wireless links increases the response times for accessing remote data, and transmitting as well as receiving data are high-energy-consumption operations.
The principal goal is to store appropriate pieces of data locally at the mobile device so that it can operate on its own data, thus reducing the need for communication that consumes both energy and bandwidth. At some point, operations performed at the mobile device must be synchronized with operations performed at other sites. The complexity of this synchronization depends greatly on whether updates are allowed at the mobile device. The main reason for allowing such updates is to sustain network disconnections. When there are no local updates, the important issue is disseminating updates from the rest of the network to the mobile device.
Synchronization depends on the level at which correctness is sought. This can be roughly categorized as replica-level correctness and transaction-level correctness. At the replica level, correctness or coherency requirements are expressed per item in terms of the allowable divergence among the values of the copies of each item. There are many ways to characterize the divergence among copies of an item.
For example, with quasi copies, the coherency or freshness requirements between a cached copy of an item and its primary at the server are specified by limiting

(a) the number of updates (versions) between them

(b) their distance in time, or

(c) the difference between their values.

At the transaction level, the strictest form of correctness is achieved through global serializability, which requires the execution of all transactions running at mobile and stationary hosts to be equivalent to some serial execution of the same transactions. In the case of replication, one-copy serializability provides equivalence with a serial execution on a one-copy database. One-copy serializability does not allow any divergence among copies.
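The quasi-copy conditions (a)-(c) above can be sketched as a single coherency predicate; all thresholds and values are illustrative:

```python
# A cached copy is still acceptable if it lags the primary by at most
# max_versions updates (a), at most max_age seconds (b), and at most
# max_delta in value (c).

def quasi_coherent(cached_ver: int, primary_ver: int,
                   cached_time: float, primary_time: float,
                   cached_val: float, primary_val: float,
                   max_versions: int = 3, max_age: float = 60.0,
                   max_delta: float = 5.0) -> bool:
    return (primary_ver - cached_ver <= max_versions         # (a) versions
            and primary_time - cached_time <= max_age        # (b) time
            and abs(primary_val - cached_val) <= max_delta)  # (c) value

# A cached quote two versions and 30 s behind, off by 1.2 units: still coherent.
print(quasi_coherent(5, 7, 100.0, 130.0, 20.3, 21.5))  # prints True
# Off by 10 units in value: the cached copy must be refreshed.
print(quasi_coherent(5, 7, 100.0, 130.0, 20.3, 30.3))  # prints False
```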
Many correctness criteria have been proposed besides serializability. A practical one is snapshot isolation: a transaction is allowed to read data from any database snapshot taken earlier than its start time. Also central are criteria that treat read-only transactions, i.e., transactions with no update operations, differently. Consistency of read-only transactions is achieved by ensuring that they read a database instance that does not violate any integrity constraints (as, for example, with snapshot isolation), while freshness of read-only transactions refers to the freshness of the values read.
Finally, relaxed-currency serializability allows update transactions to read out-of-date values as long as they satisfy some freshness constraints specified by the users.
There are two basic ways of propagating updates. Eager replication synchronizes all copies of an item within a single transaction, whereas with lazy replication, transactions for keeping replicas coherent run as separate, independent database transactions after the original transaction. One-copy serializability, as well as other forms of correctness, can be achieved through either eager or lazy update propagation.
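The difference between the two propagation styles can be illustrated with a toy replica set; the class and method names below are hypothetical:

```python
class Replicas:
    def __init__(self, n):
        self.copies = [dict() for _ in range(n)]
        self.pending = []  # queue of lazy refresh transactions

    def eager_write(self, key, value):
        # Eager: all copies are synchronized within the same transaction.
        for c in self.copies:
            c[key] = value

    def lazy_write(self, key, value):
        # Lazy: only the primary is updated; refreshes run later
        # as separate, independent transactions.
        self.copies[0][key] = value
        self.pending.append((key, value))

    def run_refresh(self):
        while self.pending:
            key, value = self.pending.pop(0)
            for c in self.copies[1:]:
                c[key] = value

r = Replicas(3)
r.lazy_write("x", 1)
print([c.get("x") for c in r.copies])  # [1, None, None] before the refresh
r.run_refresh()
print([c.get("x") for c in r.copies])  # [1, 1, 1]
```

The window between `lazy_write` and `run_refresh` is exactly the period during which copies may diverge, which is why lazy schemes need the coherency bounds discussed above.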
3 Two-tier Caching
In this section, we assume that data can be updated at the mobile device. The main motivation is support for disconnected operation. Disconnected operation refers to the autonomous operation of a mobile client when network connectivity becomes unavailable, for instance due to physical constraints, or undesirable, for example to reduce power consumption. Preloading or prefetching data to sustain a forthcoming disconnection is often termed hoarding. The content of the data to be prefetched may be determined:

(a) automatically by the system by utilizing implicit information, most often based on the past history of data references, or

(b) by instructions given explicitly by the users, as in profile-driven data prefetching, where a simple profile language is introduced for specifying the items to be prefetched along with their relative importance.

Additional information, such as a set of allowable operations or a characterization of the required data quality, may also be cached along with the data. For example, in the Pro-Motion infrastructure the unit of caching and replication is a compact: an object that encapsulates the cached data, operations for accessing the cached data, state information (such as the number of accesses to the object), consistency rules that must be followed to guarantee consistency, and obligations (such as deadlines).
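A compact in the Pro-Motion style might be sketched roughly as follows; the class shape, field names, and operations here are illustrative guesses, not the actual Pro-Motion API:

```python
import time

class Compact:
    """Sketch of a compact: cached data bundled with its allowed
    operations, state information, and an obligation (a deadline)."""
    def __init__(self, data, allowed_ops, deadline):
        self.data = data
        self.allowed_ops = set(allowed_ops)  # consistency rule: ops permitted offline
        self.accesses = 0                    # state information
        self.deadline = deadline             # obligation

    def invoke(self, op, *args):
        # The compact itself enforces which operations may touch the data.
        if op not in self.allowed_ops:
            raise PermissionError(f"operation {op!r} not permitted on this compact")
        self.accesses += 1
        return getattr(self, "_" + op)(*args)

    def _read(self, key):
        return self.data.get(key)

    def _write(self, key, value):
        self.data[key] = value

c = Compact({"price": 10}, allowed_ops={"read"}, deadline=time.time() + 3600)
print(c.invoke("read", "price"))  # 10; a "write" would raise PermissionError
```

Bundling rules and obligations with the data lets the mobile host police itself during a disconnection, instead of relying on the server.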
To allow concurrent operation at both the mobile client and other sites during disconnection, optimistic approaches to consistency control are typically deployed. Optimistic consistency maintenance protocols allow data to be accessed concurrently at multiple sites without a priori synchronization between the sites, potentially resulting in short-term inconsistencies. Such protocols trade off data quality for improved quality of service. Optimistic replication has been extensively studied as a means for consistency maintenance in distributed systems, and thorough surveys exist in the literature. In this paper, we present some representative examples of optimistic protocols in the context of mobile computing.
Consistent operation during disconnection has also been extensively addressed in the context of network partitioning. In this context, a network failure partitions the sites of a distributed database system into disconnected clusters. Various approaches have been proposed and reviewed in the literature. In general, protocols for network partitioning follow peer-to-peer models where transactions executed in any partition are of equal importance, whereas the related protocols in mobile computing most often treat transactions at the mobile host as second-class, for instance by considering their updates as tentative. Furthermore, disconnections in mobile computing are common, and some of them may be considered foreseeable.
Disconnections correspond to the extreme case of a total lack of connectivity. Other connectivity constraints, such as weak or intermittent connectivity, also affect the protocols for enforcing consistency. In general, weak connectivity is handled by appropriately revising those operations whose deployment involves the network. For instance, the frequency with which updates performed on the local data are propagated to the server may depend on connectivity conditions.
In early research in mobile computing, a general concern was whether issues such as connectivity or mobility should be transparent, or hidden, from the users. In this respect, adapting the levels of transaction or replica correctness to system conditions, such as the availability of connectivity or the quality of the network connection, and providing applications with less-than-strict notions of correctness can be seen as making such conditions visible to the users. This is also achieved by explicitly extending queries with quality-of-data specifications, for example for constraining the divergence between copies.
Some common characteristics of protocols for consistency in two-tier caching are:
• The propagation of updates performed at the mobile site generally follows lazy protocols.
• Reads are allowed on the local data, while updates of local data are tentative in the sense that they need to be further validated before commitment.
• For integrating operations at the mobile hosts with transactions at other sites, in the case of replica-level consistency, copies of an item are reconciled following some conflict-resolution protocol. At the transaction level, local transactions are validated against some application- or system-level criterion. If the criterion is met, the transaction is committed. Otherwise, the execution of the transaction is aborted, reconciled or compensated. Such actions may have cascading effects on other tentatively committed transactions that have seen the results of the transaction.
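The validate-then-commit-or-abort step, with cascading aborts, can be sketched as follows; the log format and the freshness-based acceptance criterion are illustrative assumptions:

```python
def reintegrate(tentative_log, server_state, acceptance):
    """Replay tentative transactions at the server on reconnection.
    Each must pass an application-level acceptance criterion to commit;
    otherwise it is aborted, and the abort cascades to later tentative
    transactions that read its writes."""
    committed, aborted, dirty = [], [], set()
    for txn in tentative_log:
        reads_dirty = any(k in dirty for k in txn["reads"])
        if not reads_dirty and acceptance(txn, server_state):
            server_state.update(txn["writes"])
            committed.append(txn["id"])
        else:
            aborted.append(txn["id"])
            dirty.update(txn["writes"])  # cascade: dependents must abort too
    return committed, aborted

# Acceptance here: the values the transaction read are unchanged at the server.
fresh = lambda txn, s: all(s.get(k) == v for k, v in txn["snapshot"].items())

server = {"x": 1, "y": 2}
log = [
    {"id": "t1", "reads": ["x"], "writes": {"x": 5}, "snapshot": {"x": 1}},
    {"id": "t2", "reads": ["z"], "writes": {"z": 7}, "snapshot": {"z": 99}},
    {"id": "t3", "reads": ["z"], "writes": {"w": 1}, "snapshot": {}},
]
print(reintegrate(log, server, fresh))  # (['t1'], ['t2', 't3'])
```

Note how t3 is aborted purely by cascade: it passed the criterion itself, but it read a value written by the aborted t2.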
Next, we present a number of consistency protocols that have been proposed for mobile computing.
Isolation-Only Transactions in Coda

Coda is one of the first file systems designed to support disconnections and weak connectivity. Coda introduced isolation-only transactions (IOTs) in file systems. An IOT is a sequence of file access operations. A transaction T is called a first-class transaction if it does not have any partitioned file access, i.e., the mobile host maintains a connection for every file it has accessed. Otherwise, T is called a second-class transaction. Whereas the result of a first-class transaction is immediately committed, a second-class transaction remains in the pending state until connectivity is restored. The result of a second-class transaction is held within the local cache and is visible only to subsequent accesses on the same host. Second-class transactions are guaranteed to be locally serializable among themselves. A first-class transaction is guaranteed to be serializable with all transactions that were previously resolved or committed at the fixed host. Upon reconnection, a second-class transaction T is validated against one of the following two serialization constraints.
The first is global serializability, which means that if a pending transaction's local result were written to the fixed host as is, it would be serializable with all previously committed or resolved transactions. The second is a stronger consistency criterion called global certifiability (GC), which requires that a pending transaction be globally serializable not only with, but also after, all previously committed or resolved transactions.
Two-tier Replication

With two-tier replication, replicated data have two versions at mobile nodes: master and tentative versions. A master version records the most recent value received while the site was connected. A tentative version records local updates. There are two types of transactions, analogous to second- and first-class IOTs: tentative and base transactions. A tentative transaction works on local tentative data and produces tentative data. A base transaction works only on master data and produces master data. Base transactions involve only connected sites. Upon reconnection, tentative transactions are reprocessed as base transactions. If they fail to meet some application-specific acceptance criteria, they are aborted.
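A minimal sketch of the master/tentative scheme, with hypothetical names and an arbitrary acceptance criterion (no negative stock):

```python
class MobileNode:
    """Two-tier replication sketch: each item has a master version (the
    most recent value received while connected) and possibly a tentative
    version recording local updates."""
    def __init__(self, master):
        self.master = dict(master)
        self.tentative = {}

    def read(self, key):
        # Tentative versions shadow master versions locally.
        return self.tentative.get(key, self.master.get(key))

    def tentative_write(self, key, value):
        self.tentative[key] = value

    def reconnect(self, server, accept):
        # Reprocess tentative transactions as base transactions; those
        # failing the application-specific acceptance criterion are aborted.
        for key, value in self.tentative.items():
            if accept(server, key, value):
                server[key] = value
        self.tentative.clear()
        self.master = dict(server)

server = {"stock": 4}
node = MobileNode(server)
node.tentative_write("stock", 3)   # local, tentative update
print(node.read("stock"))          # 3 locally, but the server still sees 4
node.reconnect(server, accept=lambda s, k, v: v >= 0)
print(server["stock"])             # 3: the base transaction succeeded
```

After `reconnect`, the node's master copy is refreshed from the server, so an aborted tentative update simply disappears.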
Two-Layer Transactions

With two-layer transactions, transactions that run solely at the mobile host are called weak, while the rest are called strict. A distinction is drawn between weak copies and strict copies. In contrast to strict copies, weak copies are only tentatively committed and may hold obsolete values. Weak transactions update weak copies, while strict transactions access strict copies located at any site. Weak copies are integrated with strict copies either when connectivity improves or when an application-defined freshness limit on the allowable deviation between weak and strict copies is exceeded. Before reconciliation, the results of weak transactions are visible only to weak transactions at the same site. Strict transactions are slower than weak transactions, since they involve the wireless link, but they guarantee permanence of updates and currency of reads.

During disconnection, applications can only use weak transactions. In this case, weak transactions have semantics similar to second-class IOTs and tentative transactions. Adaptability is achieved by restricting the number of strict transactions depending on the available connectivity and by adjusting the application-defined degree of divergence among copies.
Bayou

Bayou is built on a peer-to-peer architecture with a number of replicated servers weakly connected to each other. Bayou does not support full-fledged transactions. A user application can read any and write any available copy. Writes are propagated to other servers during pair-wise contacts called anti-entropy sessions. When a write is accepted by a Bayou server, it is initially deemed tentative. As in two-tier replication, each server maintains two views of the database: a copy that reflects only committed data and another full copy that also reflects the tentative writes currently known to the server. Eventually, each write is committed using a primary-commit scheme; that is, one server designated as the primary takes responsibility for committing updates. Because servers may receive writes from clients and other servers in different orders, servers may need to undo the effects of some previous tentative execution of a write operation and re-apply it. The Bayou system provides dependency checks for automatic conflict detection and merge procedures for resolution. Instead of transactions, Bayou supports sessions. A session is an abstraction for a sequence of read and write operations performed during the execution of an application. Session guarantees are enforced to avoid inconsistencies when accessing copies at different servers; for example, a session guarantee may be that read operations reflect previous writes, or that writes are propagated after the writes that logically precede them. Different degrees of connectivity are supported by individually selectable session guarantees, choices of committed or tentative data, and by placing an age parameter on reads. Arbitrary disconnections among Bayou's servers are also supported, since Bayou relies only on pair-wise communication. Thus, groups of servers may be disconnected from the rest of the system yet remain connected to each other.
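The pair-wise nature of anti-entropy can be sketched as a union of write logs keyed by a unique write id. Real Bayou sessions exchange ordered write logs and run dependency checks and merge procedures, which this toy version omits:

```python
def anti_entropy(a, b):
    """Pair-wise anti-entropy sketch: after a session, each of the two
    servers knows every tentative write the other knew."""
    merged = {**a, **b}   # union of the two write logs, keyed by write id
    a.clear(); a.update(merged)
    b.clear(); b.update(merged)

s1 = {"w1": ("x", 1)}     # server 1 knows write w1
s2 = {"w2": ("y", 2)}     # server 2 knows write w2
anti_entropy(s1, s2)
print(s1 == s2)           # True: both now hold {w1, w2}
```

Because knowledge spreads transitively through such pair-wise sessions, a group of servers partitioned from the primary can still converge among themselves, as the text describes.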
4 Update Dissemination
In this section, we consider data at the mobile device to be read-only, as in traditional client-server caching. In this case, the main issue is developing efficient protocols for disseminating server updates to mobile clients. Most such cache invalidation protocols developed for mobile computing focus on the case in which a large number of clients is attached to a single server. Often, the server is equipped with an efficient broadcast facility that allows it to propagate updates to all of its clients. Different assumptions are made on whether or not the server maintains information about which clients it is serving, what the contents of their caches are, and when their caches were last validated. Servers that hold such information are called stateful, while servers that do not are called stateless. Another issue pertinent to mobile computing is, again, handling disconnections, in particular ensuring that cache invalidations are received by clients despite any temporary network unavailability.
Update propagation may be either synchronous or asynchronous. In synchronous methods, the server broadcasts an invalidation report periodically. A client must listen for the report first to decide whether its cache is valid or not. Thus, each client is confident of the validity of its cache only as of the last invalidation report. This adds some latency to query processing, since to answer a query a client has to wait for the next invalidation report. In the case of disconnections, synchronous methods surpass asynchronous ones in that clients need only periodically tune in to read the invalidation report instead of continuously listening to the broadcast. However, if the client remains inactive longer than the period of the broadcast, the entire cache must be discarded unless special checking is deployed.
Invalidation protocols vary in the type of information they convey to the clients. In the case of replica-level correctness, it suffices that single read operations access current data. In this case, an invalidation may include just a list of the updated items or, in addition, their updated values. Including the updated values may be wasteful of bandwidth, especially when the corresponding items are cached at only a few clients. On the other hand, if the values are not included, the client must either discard the item from its cache or communicate with the server to receive the updated value. The reports can provide information for individual items or aggregate information for sets of items. In the case of transaction-level correctness, invalidation reports must include additional information regarding server transactions.
The efficiency of an update dissemination protocol depends on the connectivity behavior of the mobile clients: clients that are often connected are called workaholics, while clients that are often disconnected are called sleepers. Three synchronous strategies for stateless servers are considered. In the broadcasting-timestamps strategy (TS), the invalidation report contains the timestamps of the latest change for items that have been updated in the last w seconds. In the amnesic-terminals strategy (AT), the server only broadcasts the identifiers of the items that have changed since the last invalidation report. In the signatures strategy, signatures are broadcast. A signature is a checksum computed over the values of a number of items by applying data-compression techniques similar to those used for file comparison. Each of these strategies is shown to be effective for different types of clients. Signatures are best for long sleepers, that is, when the period of disconnection is long and hard to predict. The AT method is best for workaholics. Finally, TS is shown to be advantageous when the rate of queries is greater than the rate of updates, provided that the clients are not workaholics.
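The TS strategy can be sketched as follows; the report format and function names are illustrative, and timestamps are plain numbers rather than real clocks:

```python
def ts_report(update_log, w, now):
    """Server side of the broadcasting-timestamps (TS) strategy: report
    the latest-change timestamps of items updated in the last w seconds."""
    return {item: ts for item, ts in update_log.items() if now - ts <= w}

def apply_report(cache, last_heard, report, now, w):
    """Client side: a client silent for longer than the window must
    discard its whole cache; otherwise it drops only the items the
    report marks as updated since it last listened."""
    if now - last_heard > w:
        cache.clear()
    else:
        for item, ts in report.items():
            if ts > last_heard and item in cache:
                del cache[item]
    return now  # the new validation time

updates = {"a": 95, "b": 70}            # item -> time of latest change
report = ts_report(updates, w=30, now=100)
cache = {"a": "old", "b": "old", "c": "old"}
apply_report(cache, last_heard=90, report=report, now=100, w=30)
print(sorted(cache))                    # ['b', 'c']: only 'a' was invalidated
```

The AT strategy would broadcast only the identifiers (no timestamps), which is cheaper per report but forces any client that missed a report to discard everything.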
Another model of operation in the context of mobile databases is that of a broadcast or push model. In this model, the server (periodically) broadcasts data to a set of mobile clients. Clients monitor the broadcast and retrieve the data items they need as they arrive. Data of interest may also be cached locally at the client.
When clients read data from the broadcast, a number of different replica-level correctness models are reasonable. For example, if clients do not cache data, the server always broadcasts the most recent values, and there is no backchannel for on-demand data delivery, then the latest-value model arises naturally. In this model, clients read the most recent value of a data item. Several methods for enforcing transaction-level correctness have been proposed. With the invalidation method, the server broadcasts an invalidation report with the data items that have been updated since the broadcast of the previous report. Transactions that read obsolete items are aborted. With the serialization-graph testing (SGT) method, the server broadcasts control information related to conflicting operations. Clients use this information to ensure that their read-only transactions are serializable with the server transactions. With multiversion broadcast, multiple versions of each item are broadcast, so that client transactions always read a consistent database snapshot.
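The multiversion idea, picking for each item the newest version no later than the transaction's start, can be sketched as:

```python
def read_snapshot(broadcast_versions, start_ts, keys):
    """Multiversion-broadcast sketch: for each key, select the newest
    broadcast version no later than the transaction's start timestamp,
    so a read-only transaction sees a consistent database snapshot."""
    snap = {}
    for k in keys:
        candidates = [(ts, v) for ts, v in broadcast_versions[k] if ts <= start_ts]
        snap[k] = max(candidates)[1]  # newest version within the bound
    return snap

versions = {"x": [(1, "a"), (5, "b")], "y": [(2, "c"), (6, "d")]}
print(read_snapshot(versions, start_ts=4, keys=["x", "y"]))  # {'x': 'a', 'y': 'c'}
```

The cost of this approach is broadcast bandwidth: each item appears several times on the channel, once per retained version.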
A general theory of correctness for broadcast databases, as well as the fundamental properties underlying the techniques for enforcing it, has also been developed. Correctness characterizes the freshness of the values seen by the clients with regard to the values at the server, as well as the temporal discrepancy among the values read by the same transaction.
More recently, the concept of materialized views has been extended in the context of mobile databases to operate in a fashion similar to data caches supporting local query processing. As in traditional databases, materialized views in mobile databases provide a means to present different portions of the database based on users' perspectives and, as in data warehouses, they provide a means to support personalized information gathering from multiple data sources. Personalization is expressed in the form of view maintenance options for recomputation and incremental maintenance. These offer a finer grain of control and a balance between data availability and currency, the amount of wireless communication, and the cost of maintaining consistency. To better characterize these personalizations, recomputational consistency was introduced and materialized-view consistency was enhanced with new levels that correspond to specific view-currency customizations.

Mobile agent:

In computer science, a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.
Definition and overview
A mobile agent is a type of software agent with the features of autonomy, social ability, learning, and, most importantly, mobility.
More specifically, a mobile agent is a process that can transport its state from one environment to another, with its data intact, and still be able to perform appropriately in the new environment. Mobile agents decide when and where to move. Movement evolved from RPC methods. Just as a user directs an Internet browser to "visit" a website (the browser merely downloads a copy of the site, or one version of it in the case of dynamic websites), a mobile agent accomplishes a move through data duplication. When a mobile agent decides to move, it saves its own state, transports this saved state to the new host, and resumes execution from the saved state.
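The save-state/transport/resume cycle can be sketched with Python's pickle; note that this toy captures only the agent's state, whereas real mobile-agent systems also ship the agent's code to the destination host:

```python
import pickle

class Agent:
    """Toy mobile-agent sketch: the agent serializes its own state,
    'travels' as bytes, and resumes on the destination host."""
    def __init__(self):
        self.visited = []

    def run(self, host):
        self.visited.append(host)

    def move(self):
        return pickle.dumps(self)      # save state for transport

    @staticmethod
    def resume(blob):
        return pickle.loads(blob)      # reconstruct on the new host

a = Agent()
a.run("host-A")
blob = a.move()             # state leaves host A as a byte string...
b = Agent.resume(blob)      # ...and resumes on host B
b.run("host-B")
print(b.visited)            # ['host-A', 'host-B']
```

This mirrors the text: the move is really a duplication of saved state, after which execution continues on the destination.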
A mobile agent is a specific form of mobile code. However, in contrast to the Remote evaluation and Code on demand programming paradigms, mobile agents are active in that they can choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.
An open multi-agent system (MAS) is a system in which agents, which are owned by a variety of stakeholders, continuously enter and leave the system.
Reputation and Trust
The following are general concerns about trust and reputation in the mobile agent research area:
1. Source of trust information
• Direct experience
• Witness information
• Role-based rules
• Third-party references
What are the differences between trust and reputation systems?
Trust systems produce a score that reflects the relying party's subjective view of an entity's trustworthiness, whereas reputation systems produce an entity's (public) reputation score as seen by the whole community.
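The distinction can be made concrete: the same witness ratings yield one public reputation score, but different subjective trust scores depending on whom the relying party trusts. The weighting scheme below is an illustrative assumption, not a standard model:

```python
def reputation(ratings):
    """Reputation: a single public score aggregated over the whole community."""
    return sum(ratings.values()) / len(ratings)

def trust(ratings, witness_weights):
    """Trust: a relying party's subjective score, weighting each witness's
    rating by how much that particular party trusts the witness."""
    total = sum(witness_weights.get(w, 0.0) * r for w, r in ratings.items())
    norm = sum(witness_weights.get(w, 0.0) for w in ratings)
    return total / norm if norm else 0.0

ratings = {"alice": 0.9, "bob": 0.2}     # witnesses' ratings of one agent
print(reputation(ratings))               # one community-wide average
print(trust(ratings, {"alice": 1.0}))    # 0.9 for a party that trusts only alice
print(trust(ratings, {"bob": 1.0}))      # 0.2 for a party that trusts only bob
```

The same four sources of trust information listed above (direct experience, witness information, role-based rules, third-party references) would feed into `ratings` and `witness_weights` in a fuller model.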
Some advantages which mobile agents have over conventional agents:
• Computation bundles – converts computational client/server round trips to relocatable data bundles, reducing network load.
• Parallel processing – asynchronous execution on multiple heterogeneous network hosts
• Dynamic adaptation – actions are dependent on the state of the host environment
• Tolerant to network faults – able to operate without an active connection between client and server
• Flexible maintenance – to change an agent’s actions, only the source (rather than the computation hosts) must be updated
One particular advantage of remote deployment of software is increased portability, which makes system requirements less restrictive.
Security Threats for Mobile Agents

Threats to security generally fall into three main classes: disclosure of information, denial of service, and corruption of information. There are a variety of ways to examine these classes of threats in greater detail as they apply to agent systems. Here, we use the components of an agent system to categorize the threats, as a way to identify the possible source and target of an attack.

Certain computer software and hardware products and standards are identified in this paper for illustration purposes. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the computer software and hardware products and standards identified are necessarily the best available.

It is important to note that many of the threats discussed here have counterparts in conventional client-server systems and have always existed in some form in the past (e.g., executing any code from an unknown source, either downloaded from a network or supplied on a floppy disk). Mobile agents simply offer a greater opportunity for abuse and misuse, broadening the scale of threats significantly.

A number of models exist for describing agent systems; however, for discussing security issues it is sufficient to use a very simple one, consisting of only two main components: the agent and the agent platform. Here, an agent is comprised of the code and state information needed to carry out some computation. Mobility allows an agent to move, or hop, among agent platforms. The agent platform provides the computational environment in which an agent operates. The platform from which an agent originates is referred to as the home platform, and normally it is the most trusted environment for an agent. One or more hosts may comprise an agent platform, and an agent platform may support multiple computational environments, or meeting places, where agents can interact.

Four threat categories are identified: threats stemming from an agent attacking an agent platform, an agent platform attacking an agent, an agent attacking another agent on the agent platform, and other entities attacking the agent system. The last category covers the cases of an agent attacking an agent on another agent platform, and of an agent platform attacking another platform, since these attacks are primarily focused on the communications capability of the platform to exploit potential vulnerabilities. The last category also includes more conventional attacks against the underlying operating system of the agent platform.

2.1. Agent-to-Platform

The agent-to-platform category represents the set of threats in which agents exploit security weaknesses of an agent platform or launch attacks against an agent platform. This set of threats includes masquerading, denial of service and unauthorized access.


Masquerading

When an unauthorized agent claims the identity of another agent, it is said to be masquerading. The masquerading agent may pose as an authorized agent in an effort to gain access to services and resources to which it is not entitled. The masquerading agent may also pose as another unauthorized agent in an effort to shift the blame for any actions for which it does not want to be held accountable. A masquerading agent may damage the trust the legitimate agent has established in an agent community and its associated reputation.

Denial of Service

Mobile agents can launch denial of service attacks by consuming an excessive amount of the agent platform's computing resources. These denial of service attacks can be launched intentionally, by running attack scripts to exploit system vulnerabilities, or unintentionally, through programming errors. Program testing, configuration management, design reviews, independent testing, and other software engineering practices have been proposed to help reduce the risk of programmers intentionally, or unintentionally, introducing malicious code into an organization's computer systems.

Unauthorized Access

Access control mechanisms are used to prevent unauthorized users or processes from accessing services and resources for which they have not been granted permissions and privileges as specified by a security policy. Each agent visiting a platform must be subject to the platform's security policy. Applying the proper access control mechanisms requires the platform or agent to first authenticate a mobile agent's identity before it is instantiated on the platform. An agent that has access to a platform and its services without having the proper authorization can harm other agents and the platform itself. A platform that hosts agents representing various users and organizations must ensure that agents do not have read or write access to data for which they have no authorization, including access to residual data that may be stored in a cache or other temporary storage.

2.2. Agent-to-Agent

The agent-to-agent category represents the set of threats in which agents exploit security weaknesses of other agents or launch attacks against other agents. This set of threats includes masquerading, unauthorized access, denial of service and repudiation. Many agent platform components are also agents themselves. These platform agents provide system-level services such as directory services and inter-platform communication services. Some agent platforms allow direct inter-platform agent-to-agent communication, while others require all incoming and outgoing messages to go through a platform communication agent. These architecture decisions intertwine agent-to-agent and agent-to-platform security. This section addresses agent-to-agent security threats and leaves the discussion of platform-related threats to sections 2.1 and 2.3.


Masquerade

Agent-to-agent communication can take place directly between two agents or may require the participation of the underlying platform and the agent services it provides. In either case, an agent may attempt to disguise its identity in an effort to deceive the agent with which it is communicating. An agent may pose as a well-known vendor of goods and services, for example, and try to convince another unsuspecting agent to provide it with credit card numbers, bank account information, some form of digital cash, or other private information. Masquerading as another agent harms both the agent that is being deceived and the agent whose identity has been assumed, especially in agent societies where reputation is valued and used as a means to establish trust.

Denial of Service

In addition to launching denial of service attacks on an agent platform, agents can also
launch denial of service attacks against other agents. For example, repeatedly sending
messages to another agent, or spamming agents with messages, may place undue burden
on the message handling routines of the recipient. Agents that are being spammed may
choose to block messages from unauthorized agents, but even this task requires some
processing by the agent or its communication proxy. If an agent is charged by the number
of CPU cycles it consumes on a platform, spamming an agent may cause the spammed
agent to have to pay a monetary cost in addition to a performance cost. Agent
communication languages and conversation policies must ensure that a malicious agent
doesn’t engage another agent in an infinite conversation loop or engage the agent in
elaborate conversations with the sole purpose of tying up the agent’s resources. Malicious
agents can also intentionally distribute false or useless information to prevent other agents
from completing their tasks correctly or in a timely manner.


Repudiation

Repudiation occurs when an agent, participating in a transaction or communication, later
claims that the transaction or communication never took place. Whether the cause for
repudiation is deliberate or accidental, repudiation can lead to serious disputes that may
not be easily resolved unless the proper countermeasures are in place. An agent platform
cannot prevent an agent from repudiating a transaction, but platforms can ensure the
availability of sufficiently strong evidence to support the resolution of disagreements. This
evidence may deter an agent that values its reputation and the level of trust others place in
it, from falsely repudiating future transactions. Disagreements may arise not only when an
agent falsely repudiates a transaction, but also because imperfect business processes may
lead to different views of events. Repudiation often occurs within non-agent systems and
real-life business transactions within an organization. Documents are occasionally forged,
documents are often lost, created by someone without authorization, or modified without
being properly reviewed. Since an agent may repudiate a transaction as the result of a
misunderstanding, it is important that the agents and agent platforms involved in the
transaction maintain records to help resolve any dispute.

Unauthorized Access

If the agent platform has weak or no control mechanisms in place, an agent can directly
interfere with another agent by invoking its public methods (e.g., attempt buffer overflow,
reset to initial state, etc.), or by accessing and modifying the agent’s data or code.
Modification of an agent’s code is a particularly insidious form of attack, since it can
radically change the agent’s behavior (e.g., turning a trusted agent into malicious one). An
agent may also gain information about other agents’ activities by using platform services
to eavesdrop on their communications.

2.3. Platform-to-Agent
The platform-to-agent category represents the set of threats in which platforms
compromise the security of agents. This set of threats includes masquerading, denial of
service, eavesdropping, and alteration.


Masquerade

One agent platform can masquerade as another platform in an effort to deceive a mobile
agent as to its true destination and corresponding security domain. An agent platform
masquerading as a trusted third party may be able to lure unsuspecting agents to the
platform and extract sensitive information from these agents. The masquerading platform
can harm both the visiting agent and the platform whose identity it has assumed.

Denial of Service
When an agent arrives at an agent platform, it expects the platform to execute the agent’s
requests faithfully, provide fair allocation of resources, and abide by quality of service
agreements. A malicious agent platform, however, may ignore agent service requests,
introduce unacceptable delays for critical tasks such as placing market orders in a stock
market, simply not execute the agent's code, or even terminate the agent without notification.

Eavesdropping

The classical eavesdropping threat involves the interception and monitoring of secret
communications. The threat of eavesdropping, however, is further exacerbated in mobile
agent systems because the agent platform can not only monitor communications, but also
can monitor every instruction executed by the agent, all the unencrypted or public data it
brings to the platform, and all the subsequent data generated on the platform. Since the
platform has access to the agent’s code, state, and data, the visiting agent must be wary of
the fact that it may be exposing proprietary algorithms, trade secrets, negotiation
strategies, or other sensitive information.


Alteration

When an agent arrives at an agent platform it is exposing its code, state, and data to the
platform. Since an agent may visit several platforms under various security domains
throughout its lifetime, mechanisms must be in place to ensure the integrity of the agent’s
code, state, and data. A compromised or malicious platform must be prevented from
modifying an agent’s code, state, or data without being detected. Modification of an agent’s code, and thus the subsequent behavior of the agent on other platforms, can be
detected by having the original author digitally sign the agent’s code.
Agent platforms can also tamper with agent communications. Tampering with agent
communications, for example, could include deliberately changing data fields in financial
transactions or even changing a “sell” message to a “buy” message. This type of goal-oriented
alteration of the data is more difficult than simply corrupting a message, but the
attacker clearly has a greater incentive and reward, if successful, in a goal-oriented
alteration attack.

Other-to-Agent Platform
The other-to-agent platform category represents the set of threats in which external
entities, including agents and agent platforms, threaten the security of an agent platform.
This set of threats includes masquerading, denial of service, unauthorized access, and copy
and replay.

Masquerade

Agents can request platform services both remotely and locally. An agent on a remote
platform can masquerade as another agent and request services and resources for which it
is not authorized. Agents masquerading as other agents may act in conjunction with a
malicious platform to help deceive another remote platform or they may act alone. A remote platform can also masquerade as another platform and mislead unsuspecting
platforms or agents about its true identity.

Unauthorized Access

Remote users, processes, and agents may request resources for which they are not
authorized. Remote access to the platform and the host machine itself must be carefully
protected, since conventional attack scripts freely available on the Internet can be used to
subvert the operating system and directly gain control of all resources. Remote
administration of the platform’s attributes or security policy may be desirable for an
administrator that is responsible for several distributed platforms, but allowing remote
administration may make the system administrator’s account or session the target of an
attack.

Denial of Service

Agent platform services can be accessed both remotely and locally. The agent services
offered by the platform and inter-platform communications can be disrupted by common
denial of service attacks. Agent platforms are also susceptible to all the conventional
denial of service attacks aimed at the underlying operating system or communication
protocols. These attacks are tracked by organizations such as the Computer Emergency
Response Team (CERT) at Carnegie Mellon University and the Federal Computer
Incident Response Capability (FedCIRC).

Copy and Replay

Every time a mobile agent moves from one platform to another it increases its exposure to
security threats. A party that intercepts an agent, or agent message, in transit can attempt
to copy the agent, or agent message, and clone or retransmit it. For example, the
interceptor can capture an agent’s “buy order” and replay it several times, having the agent
buy more than the original agent had intended. The interceptor may copy and replay an
agent message or a complete agent.
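A standard countermeasure to copy-and-replay (not named in the text above, so this is an illustrative assumption) is to tag each agent message with a fresh random nonce and have the receiving platform reject any nonce it has already seen. A minimal sketch:

```python
import secrets

class ReplayGuard:
    """Rejects agent messages whose nonce has already been seen."""
    def __init__(self):
        self._seen = set()

    def accept(self, message: dict) -> bool:
        nonce = message["nonce"]
        if nonce in self._seen:
            return False  # replayed copy: refuse to act on it again
        self._seen.add(nonce)
        return True

def make_order(action: str, qty: int) -> dict:
    # Each legitimate "buy order" carries a fresh random nonce.
    return {"action": action, "qty": qty, "nonce": secrets.token_hex(8)}

guard = ReplayGuard()
order = make_order("buy", 100)
assert guard.accept(order) is True    # original order is executed once
assert guard.accept(order) is False   # replayed copy is rejected
```

In practice the nonce (or a timestamp) would also be covered by the message's signature so the interceptor cannot simply mint a new one.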

Security Requirements

The users of networked computer systems have four main security requirements:
confidentiality, integrity, availability, and accountability. The users of agent and mobile
agent frameworks also have these same security requirements. This section provides a
brief overview of these security requirements and how they apply to agent frameworks.


Confidentiality

Any private data stored on a platform or carried by an agent must remain confidential.
Agent frameworks must be able to ensure that their intra- and inter-platform
communications remain confidential. Eavesdroppers can gather information about an
agent’s activities not only from the content of the messages exchanged, but also from the
message flow from one agent to another agent or agents. Monitoring the message flow
may allow other agents to infer useful information without having access to the actual
message content. A burst of messages from one agent to another, for example, may be an
indication that an agent is in the market for a particular set of services offered by the other
agent.

Integrity

The agent platform must protect agents from unauthorized modification of their code,
state, and data and ensure that only authorized agents or processes carry out any
modification of shared data. The agent itself cannot prevent a malicious agent platform
from tampering with its code, state, or data, but the agent can take measures to detect this
tampering.


Accountability

Each process, human user, or agent on a given platform must be held accountable for its
actions. In order to be held accountable each process, human user, or agent must be
uniquely identified, authenticated, and audited. Example actions for which they must be
held accountable include: access to an object, such as a file, or making administrative
changes to a platform security mechanism.

Audit logs keep users and processes accountable for their actions. Mobile agents create
audit trails across several platforms, with each platform logging separate events and
possibly auditing different views of the same event (e.g., registration within a remote
directory). In the case where an agent may require access to information distributed across
different platforms it may be difficult to reconstruct a sequence of events. Platforms that
keep distributed audit logs must be able to maintain a concept of global time or ordering
of events.
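The text names no mechanism for this global ordering, so the following Lamport-logical-clock sketch is an illustrative assumption: each platform timestamps its own audit events, and advances its clock past the sender's timestamp whenever an agent (or event record) arrives from elsewhere, so cross-platform cause-and-effect is preserved in the merged log.

```python
class AuditLog:
    """Platform-local audit log ordered by a Lamport logical clock."""
    def __init__(self, platform: str):
        self.platform = platform
        self.clock = 0
        self.entries = []

    def record(self, event: str) -> int:
        # A purely local event just ticks the local clock.
        self.clock += 1
        self.entries.append((self.clock, self.platform, event))
        return self.clock

    def receive(self, sender_time: int, event: str) -> int:
        # Lamport rule: jump past the sender's timestamp, then tick.
        self.clock = max(self.clock, sender_time) + 1
        self.entries.append((self.clock, self.platform, event))
        return self.clock

a, b = AuditLog("platform-A"), AuditLog("platform-B")
t1 = a.record("agent registered in remote directory")
t2 = b.receive(t1, "agent arrived")
assert t2 > t1  # the cross-platform ordering of the two views is preserved
```

Merging the per-platform logs sorted by `(timestamp, platform)` then yields one total order consistent with causality.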

Accountability is also essential for building trust among agents and agent platforms. An
authenticated agent may comply with the security policies of the agent platform, but still
exhibit malicious behavior by lying or deliberately spreading false information. Additional
auditing may be helpful in proving the malicious agent’s attempts at deception. For
example, the communication acts of an ACL conversation could be logged, but this results
in costly overhead. If it is assumed that the malicious agent does not value its reputation
within an agent community, then it would be difficult to prevent malicious agents from
lying or spreading false information.


Availability

The agent platform must be able to ensure the availability of both data and services to
local and remote agents. The agent platform must be able to provide controlled
concurrency, support for simultaneous access, deadlock management, and exclusive
access as required. Shared data must be available in a usable form, capacity must be
available to meet service needs, and provisions for the fair allocation of resources and
timeliness of service must be made.


Anonymity

The agent platform may need to balance an agent’s need for privacy with the platform’s
need to hold the agent accountable for its actions. The platform may be able to keep the
agent’s identity secret from other agents and still maintain a form of reversible anonymity
where it can determine the agent’s identity if necessary and legal.


Countermeasures

Many conventional security techniques used in contemporary distributed applications
(e.g., client-server) also have utility as countermeasures within the mobile agent paradigm.
Moreover, there are a number of extensions to conventional techniques and techniques
devised specifically for controlling mobile code and executable content (e.g., Java applets)
that are applicable to mobile agent security. We review these countermeasures by
considering those techniques that can be used to protect agent platforms, separately from
those used to protect the agents that run on them.

Protecting the Agent Platform

One of the main concerns with an agent system implementation is ensuring that agents are
not able to interfere with one another or with the underlying agent platform. One common
approach for accomplishing this is to establish separate isolated domains for each agent
and the platform, and control all inter-domain access. In traditional terms this concept is
referred to as a reference monitor. An implementation of a reference monitor has the
following characteristics:

• It is always invoked and non-bypassable, mediating all accesses;
• It is tamper-proof; and
• It is small enough to be analyzed and tested.
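A minimal sketch of the reference-monitor idea (the policy structure and names here are illustrative assumptions, not any particular platform's API): agents hold no direct references to resources, so every access is forced through one small, testable mediation point.

```python
class ReferenceMonitor:
    """Mediates every access an agent makes to a platform resource."""
    def __init__(self, policy: dict):
        # policy maps an agent id to the set of resources it may access
        self._policy = policy

    def access(self, agent: str, resource: str) -> str:
        # Always invoked and non-bypassable: agents have no direct
        # handle to resources, only to this monitor.
        if resource not in self._policy.get(agent, set()):
            raise PermissionError(f"{agent} may not access {resource}")
        return f"{agent} accessed {resource}"

monitor = ReferenceMonitor({"agent-1": {"catalog"}})
assert monitor.access("agent-1", "catalog") == "agent-1 accessed catalog"
try:
    monitor.access("agent-2", "catalog")   # not in the policy
except PermissionError:
    pass  # unauthorized access is blocked, not silently allowed
```

The "small enough to be analyzed" property corresponds to keeping this class tiny: all security-relevant decisions live in one auditable place.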

Implementations of the reference monitor concept have been around since the early 1980’s
and employ a number of conventional security techniques, which are applicable to the
agent environment. Such conventional techniques include the following:
• Mechanisms to isolate processes from one another and from the control process,
• Mechanisms to control access to computational resources,
• Cryptographic methods to encipher information exchanges,
• Cryptographic methods to identify and authenticate users, agents, and platforms, and
• Mechanisms to audit security-relevant events occurring at the agent platform.

More recently developed techniques aimed at mobile code and mobile agent security have
for the most part evolved along these traditional lines. Techniques devised for protecting
the agent platform include the following:

• Software-Based Fault Isolation,
• Safe Code Interpretation,
• Signed Code,
• Authorization and Attribute Certificates,
• State Appraisal,
• Path Histories, and
• Proof Carrying Code.

Software-Based Fault Isolation

Software-Based Fault Isolation, as its name implies, is a method of isolating
application modules into distinct fault domains enforced by software. The technique
allows untrusted programs written in an unsafe language, such as C, to be executed safely
within the single virtual address space of an application.

Safe Code Interpretation

Agent systems are often developed using an interpreted script or programming language.
The main motivation for doing this is to support agent platforms on heterogeneous
computer systems. Moreover, the higher conceptual level of abstraction provided by an
interpretative environment can facilitate the development of the agent’s code. The
idea behind Safe Code Interpretation is that commands considered harmful can be either
made safe for or denied to an agent.
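A toy illustration of the "deny or make safe" idea (the command names are invented for the example; real systems such as Safe Tcl do this at the interpreter level): the agent can only invoke commands through a dispatcher, and anything outside the safe set is refused rather than executed.

```python
def interpret(command: str, *args):
    """A toy safe interpreter: only whitelisted commands run; anything
    potentially harmful is denied outright instead of being executed."""
    if command == "print":
        return str(args[0])
    if command == "add":
        return args[0] + args[1]
    # Commands like file deletion or raw system calls never reach the host.
    raise PermissionError(f"command {command!r} denied by interpreter")

assert interpret("add", 2, 3) == 5
try:
    interpret("delete_file", "/etc/passwd")
except PermissionError:
    pass  # the harmful command was denied, not run
```

"Making a command safe" would mean replacing its body with a restricted variant (e.g., confining file reads to a sandbox directory) instead of raising.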

Signed Code

A fundamental technique for protecting an agent system is signing code or other objects
with a digital signature. A digital signature serves as a means of confirming the
authenticity of an object, its origin, and its integrity. Typically the code signer is either the
creator of the agent, the user of the agent, or some entity that has reviewed the agent.
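The verification step can be sketched as follows. Note the hedge: for brevity this uses an HMAC as a stand-in for a real digital signature; actual code signing uses asymmetric signatures (e.g., RSA or Ed25519) so that platforms can verify without holding the signer's private key.

```python
import hashlib
import hmac

def sign_code(code: bytes, signer_key: bytes) -> bytes:
    # Stand-in for a digital signature. A real deployment would use an
    # asymmetric scheme, not a shared-key MAC.
    return hmac.new(signer_key, code, hashlib.sha256).digest()

def verify_code(code: bytes, signature: bytes, signer_key: bytes) -> bool:
    return hmac.compare_digest(sign_code(code, signer_key), signature)

key = b"creator-signing-key"
agent_code = b"def act(): return 'buy 10 shares'"
sig = sign_code(agent_code, key)

assert verify_code(agent_code, sig, key)                      # authentic, intact
assert not verify_code(agent_code + b"# tampered", sig, key)  # modification detected
```

This is exactly the detection mechanism mentioned earlier: a platform cannot be stopped from altering the code, but any alteration breaks the signature check on the next platform.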

State Appraisal

The goal of State Appraisal is to ensure that an agent has not been somehow
subverted due to alterations of its state information. The success of the technique relies on
the extent to which harmful alterations to an agent’s state can be predicted, and
countermeasures, in the form of appraisal functions, can be prepared before using the

Path Histories

The basic idea behind Path Histories is to maintain an authenticatable record of
the prior platforms visited by an agent, so that a newly visited platform can determine
whether to process the agent and what resource constraints to apply.

Proof Carrying Code

The approach taken by Proof Carrying Code obligates the code producer (e.g., the
author of an agent) to formally prove that the program possesses safety properties
previously stipulated by the code consumer (e.g., security policy of the agent platform).
Proof Carrying Code is a prevention technique, while code signing is an authenticity and
identification technique used to deter, but not prevent the execution of unsafe code. The
code and proof are sent together to the code consumer where the safety properties can be
verified.

Protecting Agents

While countermeasures directed towards platform protection are a direct evolution of
traditional mechanisms employed by trusted hosts, and emphasize active prevention
measures, countermeasures directed toward agent protection tend more toward detection
measures as a deterrent. This is due to the fact that an agent is completely susceptible to
an agent platform and cannot prevent malicious behavior from occurring, but may be able
to detect it.
Some more general-purpose techniques for
protecting an agent include the following:

• Partial Result Encapsulation,
• Mutual Itinerary Recording,
• Itinerary Recording with Replication and Voting,
• Execution Tracing,
• Environmental Key Generation,
• Computing with Encrypted Functions, and
• Obfuscated Code (Time Limited Black box).

Partial Result Encapsulation

One approach used to detect tampering by malicious hosts is to encapsulate the results of
an agent’s actions, at each platform visited, for subsequent verification, either when the
agent returns to the point of origin or possibly at intermediate points as well.
Encapsulation may be done for different purposes with different mechanisms, such as
providing confidentiality using encryption, or for integrity and accountability using digital
signatures.

Mutual Itinerary Recording

One interesting variation of Path Histories is a general scheme for allowing an agent’s
itinerary to be recorded and tracked by another cooperating agent and vice versa in a
mutually supportive arrangement. When moving between agent platforms, an agent
conveys the last platform, current platform, and next platform information to the
cooperating peer through an authenticated channel.

Itinerary Recording with Replication and Voting

A faulty agent platform can behave similarly to a malicious one. Therefore, applying fault
tolerant capabilities to this environment should help counter the effects of malicious
platforms.

Execution Tracing

Execution tracing is a technique for detecting unauthorized modifications of an agent
through the faithful recording of the agent’s behavior during its execution on each agent
platform.

Environmental Key Generation

Environmental Key Generation describes a scheme for allowing an agent to take
predefined action when some environmental condition is true. The approach centers on
constructing agents in such a way that upon encountering an environmental condition
(e.g., string match in search), a key is generated, which is used to unlock some executable
code cryptographically.
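The scheme can be sketched in a few lines. The hedges here: the XOR keystream is a toy stand-in for a real symmetric cipher, and the trigger string is invented for the example. The key is never carried by the agent; it only comes into existence when the environmental condition (the observed value) is actually met.

```python
import hashlib

def env_key(observation: str) -> bytes:
    # The key is derived from the observed environmental value itself,
    # so it cannot be extracted from the agent ahead of time.
    return hashlib.sha256(observation.encode()).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" standing in for a real symmetric cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"execute: report the match upstream"
locked = xor_stream(payload, env_key("TRIGGER: string match in search"))

# The agent carries only `locked`; the action unlocks only in the
# environment that produces the matching observation.
assert xor_stream(locked, env_key("TRIGGER: string match in search")) == payload
assert xor_stream(locked, env_key("some other observation")) != payload
```

Since XOR is its own inverse, the same function both locks and unlocks; a hostile platform inspecting the agent sees only the opaque `locked` blob.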

Computing with Encrypted Functions

The goal of Computing with Encrypted Functions is to determine a method whereby
mobile code can safely compute cryptographic primitives, such as a digital signature, even
though the code is executed in untrusted computing environments and operates
autonomously without interactions with the home platform, at least for certain
classes of functions. The technique, while very powerful, does not prevent denial of
service, replay, experimental extraction, and other forms of attack against the agent.

Obfuscated Code

Hohl gives an overview of the threats stemming from an agent encountering a
malicious host as motivation for Blackbox Security. The strategy behind this technique is
simple — scramble the code in such a way that no one is able to gain a complete
understanding of its function (i.e., specification and data), or to modify the resulting code
without detection.

Mobile Agent Applications and Security Scenarios

Mobile agent technology is beginning to make its way out of research labs and is finding
its way into many commercial application areas. The following section takes a look at
these application areas and discusses relevant security issues for typical scenarios.

Electronic Commerce

Mobile agent-based electronic commerce applications have been proposed and are being
developed for a number of diverse business areas, including contract negotiations, service
brokering, auctions, and stock trading.

Network Management

Mobile agents are also well suited for network management applications such as remote
network management, software distribution, and adaptive response to network events.
Most of the current network management software is based on the Simple Network
Management Protocol (SNMP).

Personal Digital Assistants (PDA)

Manufacturers of cell phones, personal organizers, car radios, and other consumer
electronic devices are introducing more and more functionality into their products, and
these devices are becoming a focus for agent developers.

Security, Design, and Performance Issues

A number of advantages of using mobile code and mobile agent computing paradigms
have been proposed. These advantages include: overcoming network latency,
reducing network load, executing asynchronously and autonomously, adapting
dynamically, operating in heterogeneous environments, and having robust and fault-tolerant
behavior.

Overcoming Network Latency

Mobile agent solutions have been proposed for critical systems that need to respond to
changes in their environments in real time. An example of such an application is the use of
mobile agents to control robots employed in distributed manufacturing processes. Mobile
agents have been offered as a solution, since they can be dispatched from a central
controller to act locally and directly execute the controller’s instructions.

Reducing Network Load

Mobile agents are well suited for search and analysis problems involving multiple
distributed resources that require specialized tasks that aren’t supported by the data server.
A mobile agent-based search and data analysis approach can help decrease
network traffic resulting from the transfer of large amounts of data across a network for
local processing.

Asynchronous Execution and Autonomy

A lot of attention is being focused on the use of mobile agents with mobile devices such as
cellular phones, personal digital assistants, automotive electronics, and military equipment
[4]. Their asynchronous execution and autonomy make them well-suited for applications
that use fragile or expensive network connections. A mobile agent can be launched and
continue to operate even after the machine that launched it is no longer connected to the
network.

Adapting Dynamically

Mobile agents have the ability to sense their execution environment and autonomously
react to changes.

Operating in Heterogeneous Environments

Since mobile agents are generally computer- and transport-layer-independent, and
dependent only on their execution environment, they offer an attractive approach for
heterogeneous system integration.

Robust and Fault-Tolerant Behavior

The ability of mobile agents to react dynamically to unfavorable situations and events
makes it easier to build robust and fault-tolerant distributed systems.

Application of mobile agents
Common applications include:
• Resource availability, discovery, monitoring
• Information retrieval, system information collection, support operations in the client/server paradigm
• Network management, remote collection of network throughput, available bandwidth monitoring, other remote machine network parameters
• Data replication and collation, server configuration backup, file collecting & sorting, other remote machine data backup
• Dynamic software deployment, remote install monitoring & gauging

PFS (Personal File System)
What is PFS
PFS (Personal File System) is a portable network file sharing system designed for mobile computers. It is constructed from file servers on stationary hosts and mobile clients. It keeps a cache store on the client and dynamically adapts to a variety of network speeds and bandwidths, including disconnection. All of PFS is implemented in user space on UNIX and communicates with the client kernel via traditional NFS, so PFS can run on a variety of UNIX variants.
In short, you can mount a file server’s disks onto your laptop computer over the network, and you can continue accessing these files even when your computer is disconnected from the network. There is no need to modify the operating system on either the server or the client!
Recently, mobile computers have become small, light, and powerful. Many users can perform their work on small laptops. These machines can also connect to networks just as stationary machines do.
In a well-networked environment, a mobile computer can act like a stationary one. Users can share and use programs and data over the network. However, once it moves into a disconnected environment, it must work with only its own resources.
Mobile computers connect to the network using a variety of methods, from fast Ethernet to slow wireless modems, and the file system must keep working even while disconnected.
In such environments, we can assume that a single user uses a mobile machine and that an optimistic file consistency guarantee is acceptable. Hence the name “Personal File System.”
Support for Disconnection
• The PFS client has a cache store. It works from the cache during disconnection and re-integrates with the file server after reconnection.
Automatic Adaptation to Network Environments
• PFS has multiple algorithms to synchronize files between server and clients. PFS measures file transfer performance itself and automatically switches to the algorithm that best fits the current environment.
• PFS is implemented as a set of user-level programs and depends on the Internet-standard TCP/IP and NFS. It also uses the Berkeley DB library and the memory map (mmap) system call, but these are not essential dependencies.
• Since all of PFS is implemented at user level, there is no need to modify the kernel on either the server or the client.
• It has run at least once on FreeBSD-2.2.8R, BSD/OS 3.1, RedHat Linux 5.2, SunOS 4.1.3, and Solaris 2.7.
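The adaptation logic can be pictured as follows. This is a hypothetical sketch: the text gives neither PFS's actual thresholds nor its algorithm names, so both the strategy names and the kbps cutoffs below are invented for illustration.

```python
def pick_sync_strategy(measured_kbps: float) -> str:
    """Hypothetical PFS-style adaptation: measure transfer performance,
    then pick a synchronization strategy to match the current link."""
    if measured_kbps <= 0:
        return "queue-locally"      # disconnected: work from the cache only
    if measured_kbps < 128:
        return "delta-sync"         # slow link: send only changed blocks
    return "whole-file-sync"        # fast link: transfer whole files

assert pick_sync_strategy(0) == "queue-locally"
assert pick_sync_strategy(56) == "delta-sync"
assert pick_sync_strategy(10_000) == "whole-file-sync"
```

The point of the design is that the client re-measures periodically, so a laptop moving from Ethernet to a wireless modem to full disconnection degrades gracefully without user intervention.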


Mobile ad hoc network (MANET)
A MANET, sometimes called a mobile mesh network, is a self-configuring network of mobile devices connected by wireless links.
Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. Each must forward traffic unrelated to its own use, and therefore be a router. The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic.
Such networks may operate by themselves or may be connected to the larger Internet.
MANETs are a kind of wireless ad hoc network that usually has a routable networking environment on top of a link-layer ad hoc network. They are also a type of mesh network, but many mesh networks are not mobile or not wireless.
The growth of laptops and 802.11/Wi-Fi wireless networking has made MANETs a popular research topic since the mid-to-late 1990s. Many academic papers evaluate protocols and abilities assuming varying degrees of mobility within a bounded space, usually with all nodes within a few hops of each other and usually with nodes sending data at a constant rate. Different protocols are then evaluated based on the packet drop rate, the overhead introduced by the routing protocol, and other measures.
Types of MANET
• Vehicular Ad Hoc Networks (VANETs) are used for communication among vehicles and between vehicles and roadside equipment.
• Intelligent vehicular ad hoc networks (InVANETs) use artificial intelligence to help vehicles behave intelligently during vehicle-to-vehicle collisions, accidents, drunken driving, etc.
• Internet-based mobile ad hoc networks (iMANETs) are ad hoc networks that link mobile nodes and fixed Internet-gateway nodes. In such networks, normal ad hoc routing algorithms don’t apply directly.
List of ad-hoc routing protocols
An ad hoc routing protocol is a convention, or standard, that controls how nodes decide which way to route packets between computing devices in a mobile ad hoc network.
In ad hoc networks, nodes do not start out familiar with the topology of their networks; instead, they have to discover it. The basic idea is that a new node may announce its presence and should listen for announcements broadcast by its neighbours. Each node learns about nodes nearby and how to reach them, and may announce that it, too, can reach them.
Note that in a wider sense, ad-hoc protocol can also be used literally, that is, to mean an improvised and often impromptu protocol established for a specific purpose.
The following is a list of some ad-hoc network routing protocols.
Pro-active (table-driven) routing
This type of protocol maintains fresh lists of destinations and their routes by periodically distributing routing tables throughout the network. The main disadvantages of such algorithms are:
1. A considerable amount of data must be maintained and exchanged.
2. Slow reaction to restructuring and failures.
Examples of pro-active algorithms are
• AWDS (Ad-hoc Wireless Distribution Service)
• CGSR (Clusterhead Gateway Switch Routing protocol)
• DFR (“Direction” Forward Routing)
• DBF (Distributed Bellman-Ford Routing Protocol)
• DSDV (Highly Dynamic Destination-Sequenced Distance Vector routing protocol)
• HSR (Hierarchical State Routing protocol)
• WRP (Wireless Routing Protocol)
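The table exchange behind distance-vector protocols like DSDV can be sketched in miniature. This is a deliberately simplified illustration (DSDV's sequence numbers, which prevent routing loops, are omitted): when a neighbor's periodic advertisement arrives, a node keeps whichever route offers the fewer hops.

```python
def merge_routing_table(own: dict, neighbor: str, advertised: dict) -> dict:
    """One step of pro-active (distance-vector) table maintenance: merge a
    neighbor's periodically advertised table into our own, keeping the
    shorter hop count for every destination. Entries are (hops, next_hop)."""
    merged = dict(own)
    for dest, hops in advertised.items():
        via_neighbor = hops + 1  # one extra hop to reach the neighbor itself
        if dest not in merged or via_neighbor < merged[dest][0]:
            merged[dest] = (via_neighbor, neighbor)
    return merged

# Node A knows only itself; neighbor B advertises its distances to C and D.
table = merge_routing_table({"A": (0, "A")}, "B", {"B": 0, "C": 1, "D": 2})
assert table["C"] == (2, "B")   # C is two hops away, reached via B
assert table["D"] == (3, "B")
```

Because every node does this on every periodic advertisement, routes to all destinations are ready before any data is sent, which is exactly the pro-active trade-off: fast forwarding at the cost of constant maintenance traffic.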

Reactive (on-demand) routing
This type of protocol finds a route on demand by flooding the network with Route Request packets. The main disadvantages of such algorithms are:
1. High latency in route finding.
2. Excessive flooding can lead to network clogging.
Examples of reactive algorithms are
• Ad-hoc On-demand Distance Vector
• Dynamic Source Routing
• Flow State in the Dynamic Source Routing
• Dynamic NIx-Vector Routing
• Dynamic MANET On-demand Routing
• Mobile Ad-hoc On-Demand Data Delivery Protocol
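The flooding-based discovery these protocols share can be sketched as a breadth-first search over the current link topology (a simplification: real protocols like AODV propagate the Route Request hop by hop and build reverse paths rather than computing centrally):

```python
from collections import deque

def discover_route(links: dict, source: str, dest: str):
    """Reactive (on-demand) discovery: flood a Route Request outward from
    the source; the first copy to reach the destination fixes the route."""
    frontier = deque([[source]])
    visited = {source}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dest:
            return path  # shortest-hop route found by the flood
        for neighbor in links.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)  # each node rebroadcasts only once
                frontier.append(path + [neighbor])
    return None  # the flood reached every node without finding dest

topology = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
assert discover_route(topology, "A", "D") == ["A", "B", "D"]
assert discover_route(topology, "D", "A") is None
```

The two listed disadvantages fall directly out of this picture: no route exists until the flood completes (latency), and every node rebroadcasts the request (clogging).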
Flow-oriented routing
This type of protocol finds a route on demand by following present flows. One option is to unicast consecutively when forwarding data while promoting a new link. The main disadvantages of such algorithms are:
1. Exploring new routes takes a long time without a priori knowledge.
2. May rely on existing traffic to compensate for missing knowledge about routes.
Examples of flow-oriented algorithms are
• GB (Gafni-Bertsekas)
• IERP (Interzone Routing Protocol, the reactive part of the ZRP)
• LBR (Link-life Based Routing)
• LMR (Lightweight Mobile Routing protocol) for ad hoc wireless networks
• VRR (Vehicular Reactive Routing protocol)

Adaptive (situation-aware) routing
This type of protocol combines the advantages of proactive and reactive routing. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding. Some metric must support the choice of reaction. The main disadvantages of such algorithms are:
1. The advantage depends on the number of nodes activated.
2. Reaction to traffic demand depends on the gradient of traffic volume.
An example of adaptive algorithms is
TORA (Temporally-Ordered Routing Algorithm)
Hybrid (both pro-active and reactive) routing
This type of protocol combines the advantages of proactive and reactive routing. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding. The choice of one or the other method requires predetermination for typical cases. The main disadvantages of such algorithms are:
1. The advantage depends on the number of nodes activated.
2. Reaction to traffic demand depends on the gradient of traffic volume.

Examples of hybrid algorithms are
• ARPAM, specialized for aeronautical MANETs
• HRPLS (Hybrid Routing Protocol for Large Scale Mobile Ad Hoc Networks with Mobile Backbones)
• OORP (Order One Routing Protocol) – proactive/reactive distance vector combined with a hierarchy that is not used to route data
• TORA
The main disadvantages of such algorithms are:
1. This method induces a delay for each transmission.
2. It has no relevance for network-powered transmission operated via a sufficient repeater infrastructure.
Multicast routing
• ABAM (On-Demand Associativity-Based Multicast)
• ADMR (Adaptive Demand-Driven Multicast Routing)
• AMRIS (Ad hoc Multicast Routing protocol utilizing Increasing id-numbers)
• AMRoute (Ad hoc Multicast Routing Protocol)
• BEMRP (Bandwidth-Efficient Multicast Routing Protocol)

