STUN stands for Simple Traversal of UDP (User Datagram Protocol) through NATs (Network Address Translators). It is a network protocol that enables a client behind a NAT (or multiple NATs) to find out its public address, the type of NAT it is behind, and the Internet-side port associated by the NAT with a particular local port. This whole process helps set up UDP communication between two hosts that are both behind NAT routers.
Protocol overview
STUN is a client-server protocol. A VoIP phone or software package typically includes a STUN client, which sends a request to a STUN server. The server replies with the public IP address of the NAT router and the port that the NAT opened to allow incoming traffic back into the network. The response also helps the STUN client identify the type of NAT in use, since different types of NATs handle incoming UDP packets differently. STUN works with Full Cone, Restricted Cone, and Port Restricted Cone NATs. (Restricted Cone and Port Restricted Cone NATs allow packets from an endpoint through to the client only after the client has sent a packet to that endpoint.) Symmetric NAT (also known as bi-directional NAT), which is frequently found in the networks of large companies, does not work with STUN: because the IP addresses of the STUN server and the endpoint are different, the NAT mapping created for the STUN server is different from the mapping the endpoint would need in order to send packets through to the client. See the article on network address translation for more information on this.
After the client has discovered its external address, it can communicate with its peers. If the NATs are full cone, either side can initiate communication; if they are restricted cone or port restricted cone, both sides must start transmitting together. The techniques described in the STUN RFC do not necessarily require the STUN protocol; they can be used in the design of any UDP protocol. STUN is particularly useful for protocols such as SIP, which use UDP packets to carry sound, video and text signaling traffic across the Internet. Because both endpoints are often behind NAT, a connection cannot be set up in the traditional way. The STUN server communicates on UDP port 3478, but the server will also hint clients to perform tests on an alternate IP address and port number (STUN servers have two IP addresses).
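To make the exchange concrete, here is a rough sketch (in Python) of the classic RFC 3489-style Binding Request and response parsing. The server host name is a placeholder, the code skips error handling and retransmission, and modern servers use the updated RFC 5389 format with XOR-MAPPED-ADDRESS, so treat this purely as an illustration of the idea.

```python
# Minimal STUN Binding Request sketch (RFC 3489 style), for illustration only.
# "stun.example.org" is a placeholder; substitute a real public STUN server.
import os
import socket
import struct

STUN_SERVER = ("stun.example.org", 3478)   # placeholder host, standard STUN port

def stun_public_mapping(local_port=54320, timeout=2.0):
    # Binding Request: type 0x0001, length 0 (no attributes), 16-byte transaction ID
    transaction_id = os.urandom(16)
    request = struct.pack("!HH", 0x0001, 0) + transaction_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(timeout)
    sock.sendto(request, STUN_SERVER)
    data, _ = sock.recvfrom(2048)
    sock.close()

    # Walk the attributes of the Binding Response looking for MAPPED-ADDRESS (0x0001)
    msg_type, msg_len = struct.unpack("!HH", data[:4])
    assert msg_type == 0x0101, "not a Binding Response"
    pos = 20
    while pos < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0001:                    # MAPPED-ADDRESS
            port = struct.unpack("!H", value[2:4])[0]
            ip = ".".join(str(b) for b in value[4:8])
            return ip, port
        pos += 4 + attr_len
    return None

if __name__ == "__main__":
    print(stun_public_mapping())   # e.g. ('203.0.113.7', 61002)
```

Run from behind a full cone or restricted cone NAT, this returns the public address and port that a peer would use to reach the client's local port.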
The demand for faster processors, memory and I/O is a familiar refrain in market applications ranging from personal computers and servers to networking systems, and from video games to office automation equipment. Once information is digitized, the speed at which it is processed becomes the foremost determinant of product success. Faster system speed leads to faster processing. Faster processing leads to faster system performance. Faster system performance results in greater success in the marketplace. This obvious logic has led a generation of processor and memory designers to focus on one overriding objective: squeezing more speed from processors and memory devices. Processor designers have responded with faster clock rates and super-pipelined architectures that use level 1 and level 2 caches to feed faster execution units even faster. Memory designers have responded with double data rate memories that allow data access on both the leading and trailing clock edges, doubling effective data access. I/O developers have responded by designing faster and wider I/O channels and introducing new protocols to meet anticipated I/O needs. Today, processors hit the market with 2+ GHz clock rates, memory devices provide sub-5 ns access times, and standard I/O buses are 32 and 64 bits wide, with new higher-speed protocols on the horizon.
Increased processor speeds, faster memories, and wider I/O channels are not always practical answers to the need for speed. The main problem is the integration of more and faster system elements. Faster execution units, faster memories and wider, faster I/O buses lead to crowding of more high-speed signal lines onto the physical printed circuit board. One aspect of the integration problem is the set of physical problems posed by speed.
HyperTransport technology has been designed to provide system architects with significantly more bandwidth, lower-latency responses, lower pin counts, compatibility with legacy PC buses, extensibility to new SNA buses, and transparency to operating system software, with little impact on peripheral drivers.
FluidFM: Combining AFM and nanofluidics for single cell applications
The Atomic Force Microscope (AFM) is a key tool for nanotechnology. This instrument has become the most widely used tool for imaging, measuring and manipulating matter at the nanoscale and in turn has inspired a variety of other scanning probe techniques. Originally the AFM was used to image the topography of surfaces, but by modifying the tip it is possible to measure other quantities (for example, electric and magnetic properties, chemical potentials, friction and so on), and also to perform various types of spectroscopy and analysis. Increasingly, the AFM is also becoming a tool for nanofabrication.
Relatively new is the use of AFM in cell biology. We wrote about this recently in a Spotlight that described a novel method to probe the mechanical properties of living and dead bacteria via AFM indentation experiments ("Dead or alive – nanotechnology technique tells the difference").
Researchers in Switzerland have now demonstrated novel cell biology applications using hollow force-controlled AFM cantilevers – a new device they have called FluidFM.
"The core of the invention is to have fixed already existing microchanneled cantilevers to an opportunely drilled AFM probeholder" Tomaso Zambelli tells Nanowerk. "In this way, the FluidFM is not restricted to air but can work in liquid environments. Since it combines a nanofluidics circuit, every soluble agent can be added to the solution to be dispensed. Moreover, the force feedback allows to approach very soft objects like cells without damaging them."
As cell biology moves towards single-cell technologies and applications, single-cell injection and extraction techniques are in high demand. Apart from this, however, the FluidFM could also be used for nanofabrication applications such as depositing a conductive polymer wire between two microelectrodes, or etching ultrafine structures out of solid materials using acids as the spray agent. The team has reported its findings in a recent paper in Nano Letters ("FluidFM: Combining Atomic Force Microscopy and Nanofluidics in a Universal Liquid Delivery System for Single Cell Applications and Beyond").
Zambelli originally realized that the technology of the atomic force microscope, normally used only to image cells, could be transformed into a microinjection system. The result of the development by Zambelli and his colleagues in the Laboratory of Biosensors and Bioelectronics at the Institute of Biomedical Technology at ETH Zurich and at the Swiss Center for Electronics and Microtechnology (CSEM) in Neuchâtel was the "fluid force microscope", the smallest automated nanosyringe currently in existence.
"Our FluidFM even operates under water or in other liquids – a precondition for being able to use the instrument to study cells" says Zambelli.
The force detection system of the FluidFM is so sensitive that the interactions between tip and sample can be reduced to the piconewton range, making it possible to bring the hollow cantilever into gentle but close contact with cells without puncturing or damaging the cell membrane.
On the other hand, if membrane perforation for intracellular injection is desired, this is simply achieved by selecting a higher force set point taking advantage of the extremely sharp tip (radius of curvature on the order of tens of nanometers).
To enable solutions to be injected into the cell through the needle, scientists at CSEM installed a microchannel in the cantilever. Substances such as medicinal active ingredients, DNA, and RNA can be injected into a cell through the tip. At the same time, samples can also be taken from a cell through the needle for subsequent analysis.
According to Zambelli, while this approach is similar to microinjection using glass pipettes, there are a number of essential differences.
"Microinjection uses optical microscopy to control the position of the glass pipette tip both in the xy plane and in the z direction (via image focusing)" he explains. "As consequence of the limited resolution of optical microscopy, subcellular domains cannot be addressed and tip contact with the cell membrane cannot be discriminated from tip penetration of the membrane. Cells are often lethally damaged and skilled personnel are required for microinjection."
"The limited resolution of this method and the absence of mechanical information contrast strongly with the high resolution imaging and the direct control of applied forces that are possible with AFM. Precise force feedback reduces potential damage to the cell; the cantilever geometry minimizes both the normal contact forces on the cell and the lateral vibrations of the tip that can tear the cell membrane during microinjection; the spatial resolution is determined by the submicrometer aperture so that injection into subcellular domains becomes easily achievable."
Experiments conducted by the Swiss team demonstrate the potential of the FluidFM in the field of single-cell biology through precise stimulation of selected cell domains with arbitrary soluble agents at well-defined times.
"We confidently expect that the inclusion of an electrode in the microfluidics circuit will allow a similar approach toward patch-clamping with force-controlled gigaseal formation," says Zambelli. "We will also explore other strategies at the single-cell level, such as the controlled perforation of the cell membrane for local extraction of cytoplasm."
Zambelli and his colleagues are convinced that their technology has great commercial potential. Rejecting offers from well-known manufacturers of atomic force microscopes to buy the patent for the FluidFM, they have founded Cytosurge LLC, a company dedicated to commercially developing the instrument.
Today, Zambelli's laboratory contains two prototypes of the instrument, which are being tested in collaboration with biologists.
Modern Communication Services
Society is becoming more informationally and visually oriented every day. Personal computing facilitates easy access, manipulation, storage, and exchange of information. These processes require reliable transmission of data. Communicating documents by images and the use of high-resolution graphics terminals provide a more natural and informative mode of human interaction than voice and data alone. Video teleconferencing enhances group interaction at a distance. High-definition entertainment video improves picture quality at the expense of higher transmission bit-rates, which may require new transmission means other than the present overcrowded radio spectrum. A modern telecommunications network (such as the broadband network) must provide all these different services (multi-services) to the user.
Differences between traditional (telephony) and modern communication services
Conventional telephony:
* uses the voice medium only
* connects only two telephones per call
* uses circuits of fixed bit rate
In contrast, modern communication services depart from conventional telephony in three essential respects. Modern communication services can be:
* multi-media
* multi-point, and
* multi-rate
These aspects are examined individually in the following three sub-sections.
* Multi-media: A multi-media call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication qualities, such as:
o bandwidth requirement
o signal latency within the network, and
o signal fidelity upon delivery by the network
Moreover, the information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.
* Multi-point: A multi-point call involves the setup of connections among more than two people. These connections can be multi-media. They can be one-way or two-way communications. These connections may be reconfigured many times within the duration of a call. A few examples will be used to contrast point-to-point communications with multi-point communications. Traditional voice calls are predominantly two-party calls, requiring a point-to-point connection using only the voice medium. Accessing pictorial information in a remote database would require a point-to-point connection that sends low bit-rate queries to the database and high bit-rate video from the database. Entertainment video applications are largely point-to-multi-point connections, requiring one-way communication of full motion video and audio from the program source to the viewers. Video teleconferencing involves connections among many parties, communicating voice, video, as well as data. Thus offering future services requires flexible management of the connection and media requests of a multi-point, multi-media communication call.
* Multi-rate: A multi-rate service network is one which allocates transmission capacity flexibly to connections. A multi-media network has to support a broad range of bit-rates demanded by connections, not only because there are many communication media, but also because a communication medium may be encoded by algorithms with different bit-rates. For example, audio signals can be encoded with bit-rates ranging from less than 1 kbit/s to hundreds of kbit/s, using different encoding algorithms with a wide range of complexity and quality of audio reproduction. Similarly, full motion video signals may be encoded with bit-rates ranging from less than 1 Mbit/s to hundreds of Mbit/s. Thus a network transporting both video and audio signals may have to integrate traffic with a very broad range of bit-rates (a rough numerical sketch follows this list).
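As a back-of-the-envelope illustration of the multi-rate requirement, the sketch below adds up the capacity demanded by one hypothetical multi-media call. The per-medium bit-rates are assumed, representative values, not figures taken from any particular codec or standard.

```python
# Back-of-the-envelope aggregate bit-rate of one multi-media call.
# The per-medium rates below are assumed, representative values.
media_kbit_s = {
    "voice (compressed)": 16,        # low-rate speech coding
    "voice (PCM)": 64,               # telephone-quality PCM
    "still images / data": 128,      # bursty query/response traffic
    "compressed video": 1_500,       # ~1.5 Mbit/s full-motion video
}

call = ["voice (PCM)", "compressed video", "still images / data"]
total = sum(media_kbit_s[m] for m in call)
print(f"aggregate demand: {total} kbit/s (~{total / 1000:.2f} Mbit/s)")
# A fixed 64 kbit/s circuit (conventional telephony) clearly cannot carry this;
# a multi-rate network must allocate capacity per connection.
```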
SyncML
SyncML (Synchronization Markup Language) is the former name (it is now referred to as Open Mobile Alliance Data Synchronization and Device Management) for a platform-independent data synchronization standard. Existing synchronization solutions have mostly been vendor-, application- or operating-system specific. The purpose of SyncML is to change this by offering an open standard as a replacement. Several major companies such as Motorola, Nokia, Sony Ericsson, LG, IBM and Siemens AG already support SyncML in their products, although LG does not support it in all its phone models, preferring its own proprietary syncing protocols such as LG Sync SPP. Philippe Kahn was instrumental in the founding vision for synchronization with Starfish Software, later acquired by Motorola. The founding vision, as expressed by Kahn, was "global synchronization and integration of wireless and wireline devices".
SyncML is most commonly thought of as a method to synchronize contact and calendar information (personal information manager) between some type of handheld device and a computer (personal, or network-based service), such as between a mobile phone and a personal computer. The new version of the specification includes support for push email, providing a standard protocol alternative to proprietary solutions like BlackBerry.
Some products are now using SyncML for more general information synchronization purposes, such as to synchronize project task information across a distributed group of team members. SyncML can also be used as a base for backup solutions.
Problem areas
* A fairly intricate and vague protocol specification has meant that, in general, there are major interworking problems between different servers and different clients.
* In addition to the server address, user name and password, SyncML requires a database name to be specified when opening a connection. This database name is not standardized, and different servers use different names for the same service: one server might use card while another uses ./contacts for the contact database (a request sketch follows this list).
* Only the over-the-air (OTA) interface has any degree of standardization (e.g. OMA CP 1.1, OTA 7.0), whereas synchronization over a local interface is not standardized and requires a device-specific solution, if one is available at all.
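To make the database-name issue concrete, here is a rough sketch of the first synchronization request a client might send, built with Python's standard xml.etree module. The server URL, device ID and the ./contacts datastore names are illustrative assumptions, and a real OMA DS client would also include credentials, message-size metadata and sync anchors.

```python
# Sketch of a SyncML (OMA DS) two-way sync Alert for a contact datastore.
# Server URL, device ID and datastore names are illustrative assumptions.
import xml.etree.ElementTree as ET

def build_sync_alert(server_url="http://sync.example.com/syncml",
                     device_id="IMEI:004999010640000",
                     remote_db="./contacts",   # server-side database name (varies by server!)
                     local_db="./contacts"):
    syncml = ET.Element("SyncML")

    hdr = ET.SubElement(syncml, "SyncHdr")
    ET.SubElement(hdr, "VerDTD").text = "1.2"
    ET.SubElement(hdr, "VerProto").text = "SyncML/1.2"
    ET.SubElement(hdr, "SessionID").text = "1"
    ET.SubElement(hdr, "MsgID").text = "1"
    ET.SubElement(ET.SubElement(hdr, "Target"), "LocURI").text = server_url
    ET.SubElement(ET.SubElement(hdr, "Source"), "LocURI").text = device_id

    body = ET.SubElement(syncml, "SyncBody")
    alert = ET.SubElement(body, "Alert")
    ET.SubElement(alert, "CmdID").text = "1"
    ET.SubElement(alert, "Data").text = "200"          # alert code 200 = two-way sync
    item = ET.SubElement(alert, "Item")
    ET.SubElement(ET.SubElement(item, "Target"), "LocURI").text = remote_db
    ET.SubElement(ET.SubElement(item, "Source"), "LocURI").text = local_db
    ET.SubElement(body, "Final")

    return ET.tostring(syncml, encoding="unicode")

print(build_sync_alert())
```

Whether the server expects ./contacts, card, or something else entirely in the Target LocURI is exactly the interoperability problem described above.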
Smart Card

A smart card, chip card, or integrated circuit card (ICC) is any pocket-sized card with embedded integrated circuits that can process data. This implies that it can receive input which is processed (by way of the ICC applications) and delivered as output. There are two broad categories of ICCs. Memory cards contain only non-volatile memory storage components, and perhaps some specific security logic. Microprocessor cards contain volatile memory and microprocessor components. The card is made of plastic, generally PVC, but sometimes ABS. The card may embed a hologram to prevent counterfeiting. Smart cards are also used as a form of strong authentication for single sign-on within large companies and organizations.
A Smart Card is a plastic card the size of a credit card with an integrated circuit built into it. This integrated circuit may consist only of EEPROM in the case of a memory card, or it may also contain ROM, RAM and even a CPU.
Most smart cards have been designed with the look and feel of a credit or debit card, but can function on at least three levels (credit - debit - personal information). Smart cards include a microchip as the central processing unit, random access memory (RAM) and data storage of around 10MB.
A smart card is a mini-computer without the display screen and keyboard. Smart cards contain a microchip with an integrated circuit capable of processing and storing thousands of bytes of electronic data. Due to the portability and size of smart cards they are seen as the next generation of data exchange.
Smart cards contain an operating system just like personal computers. Smart cards can store and process information and are fully interactive. Advanced smart cards also contain a file structure with secret keys and encryption algorithms. Due to the encrypted file system, data can be stored in separated files with full security.
Organizations are steadily migrating toward this technology. The days when a single mainframe handled every computing task are numbered. Today, tasks are increasingly delegated to small but dedicated smart cards. Their usefulness may soon exceed that of the standard computer for a variety of applications due, in part, to their portability and ease of use.
The smart card is an electronic recording device. Information in the microchip can instantaneously verify the cardholder's identity and any privileges to which the cardholder may be entitled. Information such as withdrawals, sales, and bills can be processed immediately and if/when necessary; those records can be transmitted to a central computer for file updating.
Smart cards are secure, compact and intelligent data carriers. Smart cards should be regarded as specialized computers capable of processing, storing and safeguarding thousands of bytes of data. Smart cards have electrical contacts and a thin metallic plate just above center line on one side of the card. Beneath this dime-sized plate is an integrated circuit (IC) chip containing a central processing unit (CPU), random access memory (RAM) and non-volatile data storage. Data stored in the smart card's microchip can be accessed only through the chip operating system (COS), providing a high level of data security. This security takes the form of passwords allowing a user to access parts of the IC chip's memory or encryption/decryption measures which translate the bytes stored in memory into useful information.
Smart cards typically hold 2,000 to 8,000 electronic bytes of data (the equivalent of several pages of data). Because those bytes can be electronically coded, the effective storage capacity of each card is significantly increased. Magnetic-stripe cards, such as those issued by banks and credit card companies, lack the security of microchips but remain inexpensive due to their status as a single-purpose card. Smart cards can be a carrier of multiple records for multiple purposes. Once those purposes are maximized, the smart card is often viewed as superior and, ultimately, less expensive. The distributed processing possible with smart cards reduces the need for ever-larger mainframe computers and the expense of local and long-distance phone circuits required to maintain an on-line connection to a central computer.
Overview
A "smart card" is also characterized as follows:
* Dimensions are normally credit card size. The ID-1 of ISO/IEC 7810 standard defines them as 85.60 × 53.98 mm. Another popular size is ID-000 which is 25 × 15 mm (commonly used in SIM cards). Both are 0.76 mm thick.
* Contains a security system with tamper-resistant properties (e.g. a secure cryptoprocessor, secure file system, human-readable features) and is capable of providing security services (e.g. confidentiality of information in the memory).
* Asset managed by way of a central administration system which exchanges information and configuration settings with the card through the security system. These exchanges include card hotlisting and updates of application data.
* Card data is transferred to the central administration system through card reading devices, such as ticket readers, ATMs etc.
Benefits
Smart cards can be used for identification, authentication, and data storage.
Smart cards provide a means of effecting business transactions in a flexible, secure, standard way with minimal human intervention.
Smart cards can provide strong authentication for single sign-on or enterprise single sign-on to computers, laptops, encrypted data, enterprise resource planning platforms such as SAP, etc.
History
The automated chip card was invented by German rocket scientist Helmut Gröttrup and his colleague Jürgen Dethloff in 1968; the patent was finally approved in 1982. The first mass use of the cards was for payment in French pay phones, starting in 1983 (Télécarte).
Roland Moreno actually patented his first concept of the memory card in 1974. In 1977, Michel Ugon from Honeywell Bull invented the first microprocessor smart card. In 1978, Bull patented the SPOM (Self Programmable One-chip Microcomputer) that defines the necessary architecture to auto-program the chip. Three years later, the very first "CP8" based on this patent was produced by Motorola. At that time, Bull had 1200 patents related to smart cards. In 2001, Bull sold its CP8 Division together with all its patents to Schlumberger. Subsequently, Schlumberger combined its smart card department and CP8 and created Axalto. In 2006, Axalto and Gemplus, at the time the world's no.2 and no.1 smart card manufacturers, merged and became Gemalto.
The second use was with the integration of microchips into all French debit cards (Carte Bleue) completed in 1992. When paying in France with a Carte Bleue, one inserts the card into the merchant's terminal, then types the PIN, before the transaction is accepted. Only very limited transactions (such as paying small autoroute tolls) are accepted without PIN.
Smart-card-based electronic purse systems (in which value is stored on the card chip, not in an externally recorded account, so that machines accepting the card need no network connectivity) were tried throughout Europe from the mid-1990s, most notably in Germany (Geldkarte), Austria (Quick), Belgium (Proton), France (Moneo), the Netherlands (Chipknip and Chipper), Switzerland ("Cash"), Norway ("Mondex"), Sweden ("Cash"), Finland ("Avant"), UK ("Mondex"), Denmark ("Danmønt") and Portugal ("Porta-moedas Multibanco").
The major boom in smart card use came in the 1990s, with the introduction of the smart-card-based SIM used in GSM mobile phone equipment in Europe. With the ubiquity of mobile phones in Europe, smart cards have become very common.
The international payment brands MasterCard, Visa, and Europay agreed in 1993 to work together to develop the specifications for the use of smart cards in payment cards used as either a debit or a credit card. The first version of the EMV system was released in 1994. In 1998 a stable release of the specifications was available. EMVco, the company responsible for the long-term maintenance of the system, upgraded the specification in 2000 and most recently in 2004. The goal of EMVco is to assure the various financial institutions and retailers that the specifications retain backward compatibility with the 1998 version.
With the exception of countries such as the United States of America, there has been significant progress in the deployment of EMV-compliant point-of-sale equipment and the issuance of debit and/or credit cards adhering to the EMV specifications. Typically, a country's national payment association, in coordination with MasterCard International, Visa International, American Express and JCB, develops detailed implementation plans assuring a coordinated effort by the various stakeholders involved.
The backers of EMV claim it is a paradigm shift in the way one looks at payment systems. In countries where banks do not currently offer a single card capable of supporting multiple account types, there may be merit to this statement. Though some banks in these countries are considering issuing one card that will serve as both a debit card and as a credit card, the business justification for this is still quite elusive. Within EMV a concept called Application Selection defines how the consumer selects which means of payment to employ for that purchase at the point of sale.
For the banks interested in introducing smart cards, the only quantifiable benefit is the ability to forecast a significant reduction in fraud, in particular from counterfeit, lost and stolen cards. The current level of fraud a country is experiencing, coupled with whether that country's laws assign the risk of fraud to the consumer or the bank, determines whether there is a business case for the financial institutions. Some critics claim that the savings are far less than the cost of implementing EMV, and thus many believe that the USA payments industry will opt to wait out the current EMV life cycle in order to implement new, contactless technology.
Smart cards with contactless interfaces are becoming increasingly popular for payment and ticketing applications such as mass transit. Visa and MasterCard have agreed to an easy-to-implement version currently being deployed (2004-2006) in the USA. Across the globe, contactless fare collection systems are being implemented to drive efficiencies in public transit. The various standards emerging are local in focus and are not compatible, though the MIFARE Standard card from Philips has a considerable market share in the US and Europe.
Smart cards are also being introduced in personal identification and entitlement schemes at regional, national, and international levels. Citizen cards, drivers' licenses, and patient card schemes are becoming more prevalent. For example, in Malaysia the compulsory national ID scheme MyKad includes eight different applications and has been rolled out to 18 million users. Contactless smart cards are being integrated into ICAO biometric passports to enhance security for international travel.
Contact smart card
Contact smart cards have a contact area, comprising several gold-plated contact pads, that is about 1 cm square. When inserted into a reader, the chip makes contact with electrical connectors that can read information from the chip and write information back.
The ISO/IEC 7816 and ISO/IEC 7810 series of standards define:
* the physical shape
* the positions and shapes of the electrical connectors
* the electrical characteristics
* the communications protocols, including the format of the commands sent to the card and the responses returned by the card
* robustness of the card
* the functionality
The cards do not contain batteries; energy is supplied by the card reader.
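For illustration, the command/response exchange defined by ISO/IEC 7816-4 is built from APDUs (application protocol data units). The sketch below assembles a SELECT-by-AID command APDU in Python; the application identifier is a made-up placeholder, and a real reader would transmit these bytes through a PC/SC library rather than print them.

```python
# Sketch of an ISO/IEC 7816-4 command APDU: CLA INS P1 P2 [Lc data] [Le].
# The application identifier (AID) below is a placeholder, not a real application.

def select_by_aid(aid: bytes) -> bytes:
    """Build a SELECT (by DF name / AID) command APDU."""
    cla, ins, p1, p2 = 0x00, 0xA4, 0x04, 0x00   # SELECT, by DF name, first occurrence
    lc = len(aid)                                # length of the command data field
    le = 0x00                                    # ask the card for its full response
    return bytes([cla, ins, p1, p2, lc]) + aid + bytes([le])

apdu = select_by_aid(bytes.fromhex("A000000000000000"))  # placeholder AID
print(apdu.hex(" "))
# -> 00 a4 04 00 08 a0 00 00 00 00 00 00 00 00
# The card answers with response data plus a two-byte status word,
# e.g. 90 00 for success or 6A 82 for "file not found".
```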
Smart Card Usage
The uses of smart cards are as versatile as any mini-computer. At a hospital emergency room, for example, the card could identify the person's health-insurance carrier and transfer all necessary information from the microchip to an admittance sheet. Tests, treatment, billing and prescriptions could be processed more quickly using the card. Major clinical findings could be added to the medical information section within the microchip.
In the U.S., smart cards are utilized in GSM mobile telephones, in DirecTV and EchoStar satellite receivers, and in the American Express Blue card.
Smart Card Operating Systems
Smart cards designed for specific applications may run proprietary operating systems. Smart cards designed with the capability to run multiple applications usually run MULTOS or Java Card.


ZigBee


ZigBee is a specification for a suite of high level communication protocols using small, low-power digital radios based on the IEEE 802.15.4-2003 standard for wireless personal area networks (WPANs), such as wireless headphones connecting with cell phones via short-range radio. The technology defined by the ZigBee specification is intended to be simpler and less expensive than other WPANs, such as Bluetooth. ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long battery life, and secure networking.
The ZigBee Alliance is a group of companies which maintain and publish the ZigBee standard.
Overview
ZigBee is a low-cost, low-power, wireless mesh networking proprietary standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications, the low power-usage allows longer life with smaller batteries, and the mesh networking provides high reliability and larger range.
The ZigBee Alliance, the standards body which defines ZigBee, also publishes application profiles that allow multiple OEM vendors to create interoperable products. The current list of application profiles, either published or in the works, is:
* Home Automation
* ZigBee Smart Energy
* Commercial Building Automation
* Telecommunication Applications
* Personal, Home, and Hospital Care
The relationship between IEEE 802.15.4 and ZigBee is similar to that between IEEE 802.11 and the Wi-Fi Alliance. The ZigBee 1.0 specification was ratified on 14 December 2004 and is available to members of the ZigBee Alliance. Most recently, the ZigBee 2007 specification was posted on 30 October 2007. The first ZigBee Application Profile, Home Automation, was announced 2 November 2007.
For non-commercial purposes, the ZigBee specification is available free to the general public. An entry level membership in the ZigBee Alliance, called Adopter, costs US$3500 annually and provides access to the as-yet unpublished specifications and permission to create products for market using the specifications.
ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide. The technology is intended to be simpler and less expensive than other WPANs such as Bluetooth. ZigBee chip vendors typically sell integrated radios and microcontrollers with between 60 KB and 128 KB of flash memory, such as the Freescale MC13213, the Ember EM250 and the Texas Instruments CC2430. Radios are also available stand-alone, to be used with any processor or microcontroller. Generally, the chip vendors also offer the ZigBee software stack, although independent stacks are also available.
"In the U.S., as of 2006, the retail price of a Zigbee-compliant transceiver is approaching $1, and the price for one radio, processor, and memory package is about $3." Comparatively, the price of consumer-grade Bluetooth chips is now under $3. In other countries the prices are higher. For example, in the UK (March 2009) the one-off cost to a hobbyist for a barebones ZigBee surface-mount transceiver IC varies from £5 to £9, with pre-assembled modules around £10 more (excluding aerials).
Because ZigBee can activate (go from sleep to active mode) in 15 ms or less, the latency can be very low and devices can be very responsive, particularly compared to Bluetooth wake-up delays, which are typically around three seconds. Because ZigBee devices can sleep most of the time, average power consumption can be very low, resulting in long battery life.
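As a rough illustration of why duty cycling yields multi-year battery life, the sketch below estimates average current draw from an assumed active current, sleep current, polling interval and battery capacity; all of the electrical figures are illustrative assumptions, not values from the ZigBee specification.

```python
# Duty-cycle estimate of average current for a sleepy ZigBee end device.
# All electrical figures are illustrative assumptions, not spec values.
ACTIVE_CURRENT_MA = 30.0     # radio + MCU awake (assumed)
SLEEP_CURRENT_MA = 0.001     # deep sleep, about 1 microamp (assumed)
AWAKE_TIME_S = 0.015         # ~15 ms wake/poll burst, as quoted above
POLL_INTERVAL_S = 10.0       # device wakes every 10 seconds (assumed)
BATTERY_MAH = 1000.0         # modest battery capacity (assumed)

duty = AWAKE_TIME_S / POLL_INTERVAL_S
avg_ma = ACTIVE_CURRENT_MA * duty + SLEEP_CURRENT_MA * (1 - duty)
hours = BATTERY_MAH / avg_ma
print(f"average draw ~ {avg_ma:.3f} mA, battery life ~ {hours / 24 / 365:.1f} years")
# With these assumptions: roughly 0.046 mA average and about 2.5 years of life.
```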
The first stack release is now called Zigbee 2004. The second stack release is called Zigbee 2006, and mainly replaces the MSG/KVP structure used in 2004 with a "cluster library". The 2004 stack is now more or less obsolete.
Zigbee 2007, now the current stack release, contains two stack profiles, stack profile 1 (simply called ZigBee), for home and light commercial use, and stack profile 2 (called ZigBee Pro). ZigBee Pro offers more features, such as multi-casting, many-to-one routing and high security with Symmetric-Key Key Exchange (SKKE), while ZigBee (stack profile 1) offers a smaller footprint in RAM and flash. Both offer full mesh networking and work with all ZigBee application profiles.
ZigBee 2007 is fully backward compatible with ZigBee 2006 devices: a ZigBee 2007 device may join and operate on a ZigBee 2006 network and vice versa. Due to differences in routing options, ZigBee Pro devices must become non-routing ZigBee End Devices (ZEDs) on a ZigBee 2006 or ZigBee 2007 network, just as ZigBee 2006 or ZigBee 2007 devices must become ZEDs on a ZigBee Pro network. The applications running on those devices work the same regardless of the stack profile beneath them.
Uses
ZigBee protocols are intended for use in embedded applications requiring low data rates and low power consumption. ZigBee's current focus is to define a general-purpose, inexpensive, self-organizing mesh network that can be used for industrial control, embedded sensing, medical data collection, smoke and intruder warning, building automation, home automation, etc. The resulting network will use very small amounts of power — individual devices must have a battery life of at least two years to pass ZigBee certification[8].
Typical application areas include
* Home Entertainment and Control — Smart lighting, advanced temperature control, safety and security, movies and music
* Home Awareness — Water sensors, power sensors, smoke and fire detectors, smart appliances and access sensors
* Mobile Services — m-payment, m-monitoring and control, m-security and access control, m-healthcare and tele-assist
* Commercial Building — Energy monitoring, HVAC, lighting, access control
* Industrial Plant — Process control, asset management, environmental management, energy management, industrial device control
Device types
There are three different types of ZigBee devices (a minimal sketch of their roles follows this list):
* ZigBee coordinator (ZC): The most capable device, the coordinator forms the root of the network tree and might bridge to other networks. There is exactly one ZigBee coordinator in each network since it is the device that started the network originally. It is able to store information about the network, including acting as the Trust Centre & repository for security keys.
* ZigBee Router (ZR): As well as running an application function, a router can act as an intermediate router, passing on data from other devices.
* ZigBee End Device (ZED): Contains just enough functionality to talk to the parent node (either the coordinator or a router); it cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time thereby giving long battery life. A ZED requires the least amount of memory, and therefore can be less expensive to manufacture than a ZR or ZC.
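As a rough illustration of these roles, and not based on any particular ZigBee stack's API, the sketch below models a coordinator at the root, a router that can relay traffic, and an end device that only ever hands frames to its parent. All class and method names are hypothetical.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

class Coordinator(Node):   # ZC: exactly one per network, root of the tree
    can_route = True

class Router(Node):        # ZR: runs an application and relays others' data
    can_route = True

class EndDevice(Node):     # ZED: sleepy leaf node, talks only to its parent
    can_route = False

    def send(self, payload):
        # A ZED hands every frame to its parent, which forwards it onwards.
        return f"{self.name} -> {self.parent.name}: {payload}"

zc = Coordinator("coordinator")
zr = Router("router-1", parent=zc)
sensor = EndDevice("sensor-1", parent=zr)
print(sensor.send("temperature=21.5"))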
Software and hardware
The software is designed to be easy to develop on small, inexpensive microprocessors. The radio design used by ZigBee has been carefully optimized for low cost in large scale production. It has few analog stages and uses digital circuits wherever possible.
Even though the radios themselves are inexpensive, the ZigBee Qualification Process involves a full validation of the requirements of the physical layer. This amount of concern about the Physical Layer has multiple benefits, since all radios derived from that semiconductor mask set would enjoy the same RF characteristics. On the other hand, an uncertified physical layer that malfunctions could cripple the battery lifespan of other devices on a ZigBee network. Where other protocols can mask poor sensitivity or other esoteric problems in a fade compensation response, ZigBee radios have very tight engineering constraints: they are both power and bandwidth constrained. Thus, radios are tested to the ISO 17025 standard with guidance given by Clause 6 of the 802.15.4-2006 Standard. Most vendors plan to integrate the radio and microcontroller onto a single chip.
ZigBee and ZigBee PRO technologies
All ZigBee technology devices implement a layered stack architecture. The ZigBee stack uses the IEEE 802.15.4 Physical (PHY) and Medium Access Control (MAC) Layers. In addition to the IEEE layers, the ZigBee Alliance has defined a set of standardized layers that sit on top of the IEEE layers and together these layers make up the ZigBee technology stack architecture.
The lower layers (including PHY/MAC, Network and Security layers) make up the ZigBee stack. On top of this there is an Application layer which will be specific to the particular "Profile" being implemented. The Profile contains the protocol that is specific to the application ZigBee is being used to implement.
Two types of ZigBee technology have been defined: ZigBee and ZigBee PRO. The differences between them relate to network architecture and available security; however, they have been designed to allow a mixture of the two types within one network under certain network configurations.
Global Approvals
In addition to the regulatory Radio, Safety and EMC requirements that apply to ZigBee devices, the ZigBee Alliance has put in place a certification process that must be satisfied prior to using the ZigBee name or logo.
The ZigBee Advantage
The ZigBee protocol was designed to carry data through the hostile RF environments that routinely exist in commercial and industrial applications.
ZigBee protocol features:
* Low duty cycle - Provides long battery life
* Low latency
* Support for multiple network topologies: Static, dynamic, star and mesh
* Direct Sequence Spread Spectrum (DSSS)
* Up to 65,000 nodes on a network
* 128-bit AES encryption – Provides secure connections between devices (see the AES-128 sketch after this list)
* Collision avoidance
* Link quality indication
* Clear channel assessment
* Retries and acknowledgements
* Support for guaranteed time slots and packet freshness
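ZigBee's security services are built on 128-bit AES used in a CCM* mode of operation. The sketch below, which assumes the third-party pyca/cryptography package is installed, shows generic AES-CCM authenticated encryption with a 128-bit key; it illustrates the primitive only and is not ZigBee's actual frame format, nonce construction or key hierarchy.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = os.urandom(16)                 # 128-bit key (randomly generated here)
aesccm = AESCCM(key, tag_length=16)  # AES-CCM authenticated encryption

nonce = os.urandom(13)               # 13-byte nonce, as CCM allows
header = b"frame-header"             # authenticated but not encrypted
payload = b"ON"                      # e.g. a light-switch command

ciphertext = aesccm.encrypt(nonce, payload, header)
assert aesccm.decrypt(nonce, ciphertext, header) == payload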
corDECT


corDECT is a wireless local loop standard developed in India by IIT Madras and Midas Communications (www.midascomm.com) at Chennai, under leadership of Prof Ashok Jhunjhunwala, based on the DECT digital cordless phone standard.
Overview
The technology is a fixed wireless option with extremely low capital costs, making it well suited to small start-ups that need to scale as well as to sparsely populated rural areas. It is particularly suitable for ICT4D projects; in India, n-Logue Communications has deployed it for exactly this purpose.
DECT stands for Digital Enhanced Cordless Telecommunications, a standard well suited to designing small-capacity wireless local loop (WLL) systems. These systems operate only under line-of-sight (LOS) conditions and are strongly affected by weather.
The system is designed for rural and suburban areas where subscriber density is low or medium. corDECT provides simultaneous voice and Internet access. The main parts of the system are described below.
DECT Interface Unit (DIU)
This is a 1000-line exchange that provides an E1 interface to the PSTN. It can serve up to 20 base stations, which are connected over ISDN links carrying both signalling and the power feed for the base stations over distances of up to 3 km.
Compact Base Station (CBS)
This is the radio fixed part of the DECT wireless local loop. A CBS is typically mounted on a tower top and can serve up to 50 subscribers generating 0.1 erlang of traffic each.
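To put the "50 subscribers at 0.1 erlang" figure in context, the short Python sketch below applies the standard Erlang B formula to the resulting 5 erlangs of offered traffic. The channel counts tried are assumed purely for illustration and are not corDECT specification values.

def erlang_b(traffic_erlangs, channels):
    """Blocking probability for the offered traffic on a given number of channels."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

offered = 50 * 0.1                    # 50 subscribers x 0.1 E each = 5 erlangs
for channels in (8, 10, 12):          # assumed channel counts, for comparison
    print(f"{channels} channels: blocking ~ {erlang_b(offered, channels):.1%}")

With around a dozen traffic channels, the computed blocking probability for 5 erlangs stays well under 1%, which is why a single base station can comfortably carry this subscriber load.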
Base Station Distributor (BSD)
This is a traffic aggregator used to extend the range of the wireless local loop; up to four CBSs can be connected to it.
Relay Base Station (RBS)
This is another technique used to extend the range of the corDECT wireless local loop, up to 25 km, by means of a radio relay chain.
Fixed Remote Station (FRS)
This is the subscriber-end equipment of the corDECT wireless local loop. It provides a standard telephone interface and Internet access at up to 70 kbit/s through an Ethernet port.
The new generation of corDECT technology, called Broadband corDECT, provides broadband Internet access over the wireless local loop.

Labels: Computer Science, Computer's Notes, corDECT, Seminar Topics, Seminars
GRID COMPUTING

Grid computing (or the use of computational grids) is the application of several computers to a single problem at the same time — usually to a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data.
One of the main strategies of grid computing is using software to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing is distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing. The size of a grid may vary from being small — confined to a network of computer workstations within a corporation, for example — to being large, public collaboration across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to an inter-nodes cooperation"[1]. This inter-/intra-nodes cooperation "across cyber-based collaborative organizations are also known as Virtual Organizations"[2].
It is a form of distributed computing whereby a “super and virtual computer” is composed of a cluster of networked loosely coupled computers acting in concert to perform very large tasks. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services.
What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general-purpose grid software libraries and middleware.
Grid computing is a technique in which the idle systems in a network and their otherwise wasted CPU cycles are put to use by uniting pools of servers, storage systems and networks into a single large virtual system whose resources are shared dynamically at runtime.
- High-performance computer clusters.
- Shared applications, data and computing resources.
Design considerations and variations
One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.
One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.
Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results as expected.
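The two preceding paragraphs describe two common safeguards: assigning each work unit redundantly to independent nodes and comparing their answers, and reassigning a unit when a node fails to report. The Python sketch below simulates both with made-up node behaviour; it illustrates the idea and is not the scheduling logic of any real grid or volunteer-computing middleware.

import random

WORK_UNITS = list(range(10))              # hypothetical work-unit IDs
NODES = [f"node-{i}" for i in range(6)]   # hypothetical participating nodes

def run_on_node(node, unit):
    """Simulate a node computing a result; it may time out or misbehave."""
    if random.random() < 0.15:
        return None                       # node dropped out / failed to report
    result = unit * unit                  # the "real" computation
    if random.random() < 0.05:
        result += 1                       # a faulty or malicious node
    return result

def compute_unit(unit):
    """Assign the unit to two distinct nodes until their answers agree."""
    while True:
        a, b = random.sample(NODES, 2)
        ra, rb = run_on_node(a, unit), run_on_node(b, unit)
        if ra is None or rb is None:
            continue                      # a node failed to report: reassign
        if ra == rb:
            return ra                     # agreement: accept the result
        # disagreement: discard both answers and retry with other nodes
        print(f"unit {unit}: {a} and {b} disagree, retrying")

results = {u: compute_unit(u) for u in WORK_UNITS}
print(results)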
The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.
In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system such as placing applications in virtual machines.
Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this trade off, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform).
Various middleware projects have created generic infrastructure, to allow diverse scientific and commercial projects to harness a particular associated grid, or for the purpose of setting up new grids. BOINC is a common one for academic projects seeking public volunteers; more are listed at the end of the article.
In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, Trust and Security, Virtual organization management, License Management, Portals and Data Management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid, in Ian Foster and Carl Kesselman's seminal work, "The Grid: Blueprint for a New Computing Infrastructure".
CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.
The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the “fathers of the grid[4].” They led the effort to create the Globus Toolkit incorporating not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.
In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid). Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems as exemplified by the AppLogic system from 3tera.
IMPORTANCE OF GRID COMPUTING
Flexible, Secure, Coordinated resource sharing.
Virtualization of distributed computing resources.
Give worldwide access to a network of distributed resources.
GRID REQUIREMENTS
Security
Resource Management
Data Management
Information Services
Fault Detection
Portability
TYPES OF GRID
Computational Grid
-computing power
Scavenging Grid
-desktop machines
Data Grid
-data access across multiple organizations
ARCHITECTURAL OVERVIEW
- A grid's computers can be thousands of miles apart, connected by Internet networking technologies.
- Grids can share processors and disk space.
Fabric : Provides the resources to which shared access is mediated by grid protocols.
Connectivity : Provides the core communication and authentication protocols.
Resource : Builds on Connectivity-layer communication and authentication protocols to negotiate, monitor and control the sharing of individual resources.
Collective : Coordinates multiple resources.
Application : Constructed by calling upon services defined at any layer.
GRID COMPONENTS
In a world-wide Grid environment, capabilities that the infrastructure needs to support include:
Remote storage
Publication of datasets
Security
Uniform access to remote resources
Publication of services and access cost
Composition of distributed applications
Discovery of suitable datasets
Discovery of suitable computational resources
Mapping and Scheduling of jobs
Submission, monitoring, steering of jobs execution
Movement of code
Enforcement of quality
Metering and accounting
GRID LAYERS
Grid Fabric layer
Core Grid middleware
User-level Grid middleware
Grid application and protocols
OPERATIONAL FLOW FROM USER’S PERSPECTIVE
- Installing core Grid middleware
- Resource brokering and application deployment services
COMPONENT INTERACTION
- Distributed application
- Grid resource broker
- Grid information service
- Grid market directory
- Broker identifies the list of computational resources
- Executes the job and returns results
- Metering system passes the resource information to the accounting system
- Accounting system reports resource share allocation to the user (a minimal sketch of this flow follows the list)
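A highly simplified sketch of that interaction, with entirely hypothetical function and field names, might look like the following; real brokers, information services and accounting systems are of course far more elaborate.

def discover_resources(information_service, market_directory):
    # Broker identifies candidate computational resources listed in the market directory.
    return [r for r in information_service if r["name"] in market_directory]

def run_job(job, resources):
    # Pick the cheapest resource and "execute" the job there (simulated locally here).
    chosen = min(resources, key=lambda r: r["cost_per_hour"])
    result = job["work"](*job["args"])
    usage = {"resource": chosen["name"], "hours": 0.5}   # assumed metering record
    return result, usage

def account(usage, market_directory):
    # Metering passes usage to accounting, which reports the charge to the user.
    return usage["hours"] * market_directory[usage["resource"]]

information_service = [{"name": "clusterA", "cost_per_hour": 2.0},
                       {"name": "clusterB", "cost_per_hour": 1.5}]
market_directory = {"clusterA": 2.0, "clusterB": 1.5}
job = {"work": sum, "args": ([1, 2, 3],)}

resources = discover_resources(information_service, market_directory)
result, usage = run_job(job, resources)
print("result:", result, "charge:", account(usage, market_directory))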
PROBLEMS AND PROMISES
PROBLEMS
- Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations
- Improving distributed management
- Improving the availability of data
- Providing researchers with a uniform user friendly environment
PROMISES
- Grid utilizes the idle time
- Its ability to make more cost-effective use of resources
- To solve problems that cannot be approached without an enormous amount of computing power.

Dynamic RAM Chip
Dynamic random access memories (DRAMs) are the simplest, and hence the smallest, of all semiconductor memories, containing only one transistor and one capacitor per cell. For that reason they are the most widely used memory type wherever high-density storage is needed, most obviously as the main memory in all types of computers. Static RAMs are faster, but their much larger cell size (up to six transistors per cell) keeps their densities one generation behind those that DRAMs can offer.
COMPUTER FORENSICS
Computer forensics is a branch of forensic science pertaining to legal evidence found in computers and digital storage mediums. Computer forensics is also known as digital forensics.
The goal of computer forensics is to explain the current state of a digital artifact. The term digital artifact can include a computer system, a storage medium (such as a hard disk or CD-ROM), an electronic document (e.g. an email message or JPEG image) or even a sequence of packets moving over a computer network. The explanation can be as straightforward as "what information is here?" and as detailed as "what is the sequence of events responsible for the present situation?"
The field of computer forensics also has sub branches within it such as firewall forensics, network forensics, database forensics and mobile device forensics.
There are many reasons to employ the techniques of computer forensics:
* In legal cases, computer forensic techniques are frequently used to analyze computer systems belonging to defendants (in criminal cases) or litigants (in civil cases).
* To recover data in the event of a hardware or software failure.
* To analyze a computer system after a break-in, for example, to determine how the attacker gained access and what the attacker did.
* To gather evidence against an employee that an organization wishes to terminate.
* To gain information about how computer systems work for the purpose of debugging, performance optimization, or reverse-engineering.
Special measures should be taken when conducting a forensic investigation if it is desired for the results to be used in a court of law. One of the most important measures is to assure that the evidence has been accurately collected and that there is a clear chain of custody from the scene of the crime to the investigator---and ultimately to the court. In order to comply with the need to maintain the integrity of digital evidence, British examiners comply with the Association of Chief Police Officers (A.C.P.O.) guidelines. These are made up of four principles as follows:-
Principle 1: No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.
Principle 2: In exceptional circumstances, where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.
Principle 3: An audit trail or other record of all processes applied to computer based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.
Principle 4: The person in charge of the investigation (the case officer) has overall responsibility for ensuring that the law and these principles are adhered to.
The Forensic Process
There are five basic steps in the computer forensics process:
1. Preparation (of the investigator, not the data)
2. Collection (the data)
3. Examination
4. Analysis
5. Reporting
The investigator must be properly trained to perform the specific kind of investigation that is at hand.
Tools that are used to generate reports for court should be validated. There are many tools to be used in the process. One should determine the proper tool to be used based on the case.
Collecting Digital Evidence
Digital evidence can be collected from many sources. Obvious sources include computers, cell phones, digital cameras, hard drives, CD-ROM, USB memory devices, and so on. Non-obvious sources include settings of digital thermometers, black boxes inside automobiles, RFID tags, and web pages (which must be preserved as they are subject to change).
Special care must be taken when handling computer evidence: most digital information is easily changed, and once changed it is usually impossible to detect that a change has taken place (or to revert the data back to its original state) unless other measures have been taken. For this reason it is common practice to calculate a cryptographic hash of an evidence file and to record that hash elsewhere, usually in an investigator's notebook, so that one can establish at a later point in time that the evidence has not been modified since the hash was calculated.
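A minimal Python sketch of that practice is shown below: hash the evidence file in chunks and record the digest, so the same computation can later demonstrate that the file is unchanged. The file path is a hypothetical example.

import hashlib

def hash_file(path, algorithm="sha256", chunk_size=1 << 20):
    """Hash a file in chunks so that very large evidence files fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

recorded = hash_file("evidence/disk_image.dd")   # digest noted in the case file
# ...later, to demonstrate the evidence has not changed since collection:
assert hash_file("evidence/disk_image.dd") == recorded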
Other specific practices that have been adopted in the handling of digital evidence include:
* Imaging computer media using a write-blocking tool to ensure that no data is added to the suspect device.
* Establishing and maintaining the chain of custody.
* Documenting everything that has been done.
* Using only tools and methods that have been tested and evaluated to validate their accuracy and reliability.
Some of the most valuable information obtained in the course of a forensic examination will come from the computer user. An interview with the user can yield valuable information about the system configuration, applications, encryption keys and methodology. Forensic analysis is much easier when analysts have the user's passphrases to access encrypted files, containers, and network servers.
In an investigation in which the owner of the digital evidence has not given consent to have his or her media examined (as in some criminal cases) special care must be taken to ensure that the forensic specialist has the legal authority to seize, copy, and examine the data. Sometimes authority stems from a search warrant. As a general rule, one should not examine digital information unless one has the legal authority to do so. Amateur forensic examiners should keep this in mind before starting any unauthorized investigation.
Live vs. Dead analysis
Traditionally computer forensic investigations were performed on data at rest---for example, the content of hard drives. This can be thought of as a dead analysis. Investigators were told to shut down computer systems when they were impounded for fear that digital time-bombs might cause data to be erased.
In recent years there has increasingly been an emphasis on performing analysis on live systems. One reason is that many current attacks against computer systems leave no trace on the computer's hard drive---the attacker only exploits information in the computer's memory. Another reason is the growing use of cryptographic storage: it may be that the only copy of the keys needed to decrypt the storage is held in the computer's memory, and turning off the computer will cause that information to be lost.
Imaging electronic media (evidence)
The process of creating an exact duplicate of the original evidentiary media is often called Imaging. Using a standalone hard-drive duplicator or software imaging tools such as DCFLdd or IXimager, the entire hard drive is completely duplicated. This is usually done at the sector level, making a bit-stream copy of every part of the user-accessible areas of the hard drive which can physically store data, rather than duplicating the filesystem. The original drive is then moved to secure storage to prevent tampering. During imaging, a write protection device or application is normally used to ensure that no information is introduced onto the evidentiary media during the forensic process.
The imaging process is verified by using the SHA-1 message digest algorithm (with a program such as sha1sum) or other still viable algorithms such as MD5. At critical points throughout the analysis, the media is verified again, known as "hashing", to ensure that the evidence is still in its original state. In corporate environments seeking civil or internal charges, such steps are generally overlooked due to the time required to perform them. They are essential for evidence that is to be presented in a court room, however.
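The following Python sketch ties the two preceding paragraphs together: it copies a source device block by block into an image file while hashing the stream, then re-hashes the image to confirm the copy. It assumes a Unix-like raw device path and that a hardware write blocker already protects the source; it is an illustration, not a substitute for validated imaging tools such as those named above.

import hashlib

SECTOR = 512            # typical sector size; real tools query the device
SRC = "/dev/sdb"        # hypothetical evidence drive (behind a write blocker)
IMG = "evidence.dd"     # destination image file

def image_and_hash(src, dst, block=SECTOR * 2048):
    """Bit-stream copy src to dst, hashing the data as it is read."""
    source_hash = hashlib.sha1()
    with open(src, "rb") as s, open(dst, "wb") as d:
        while chunk := s.read(block):
            source_hash.update(chunk)
            d.write(chunk)
    return source_hash.hexdigest()

def hash_file(path, block=SECTOR * 2048):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(block):
            h.update(chunk)
    return h.hexdigest()

acquired = image_and_hash(SRC, IMG)
assert hash_file(IMG) == acquired   # the image matches the acquired stream
print("SHA-1:", acquired)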
Collecting Volatile Data
If the machine is still active, any intelligence which can be gained by examining the applications currently open is recorded. If the machine is suspected of being used for illegal communications, such as terrorist traffic, not all of this information may be stored on the hard drive. If information stored solely in RAM is not recovered before powering down it may be lost. This results in the need to collect volatile data from the computer at the onset of the response.
Several Open Source tools are available to conduct an analysis of open ports, mapped drives (including through an active VPN connection), and open or mounted encrypted files (containers) on the live computer system. Utilizing open source tools and commercially available products, it is possible to obtain an image of these mapped drives and the open encrypted containers in an unencrypted format. Open Source tools for PCs include Knoppix and Helix. Commercial imaging tools include Access Data's Forensic Toolkit and Guidance Software's EnCase application.
The aforementioned Open Source tools can also scan RAM and Registry information to show recently accessed web-based email sites and the login/password combination used. Additionally these tools can also yield login/password for recently accessed local email applications including MS Outlook.
In the event that partitions with EFS are suspected to exist, the encryption keys to access the data can also be gathered during the collection process. With Microsoft's most recent addition, Vista, and Vista's use of BitLocker and the Trusted Platform Module (TPM), it has become necessary in some instances to image the logical hard drive volumes before the computer is shut down.
RAM can be analyzed for prior content after power loss, although as production methods become cleaner the impurities used to indicate a particular cell's charge prior to power loss are becoming less common. However, data held statically in an area of RAM for long periods of time is more likely to be detectable using these methods, and the likelihood of such recovery increases with the originally applied voltages, operating temperatures and duration of data storage. Holding unpowered RAM below −60 °C can extend the retention of residual data by an order of magnitude, improving the chances of successful recovery, but it can be impractical to do this during a field examination.
Analysis
All digital evidence must be analyzed to determine the type of information that is stored upon it. For this purpose, specialty tools are used that can display information in a format useful to investigators. Such forensic tools include: AccessData's FTK, Guidance Software's EnCase, and Brian Carrier's Sleuth Kit. In many investigations, numerous other tools are used to analyze specific portions of information.
Typical forensic analysis includes a manual review of material on the media, reviewing the Windows registry for suspect information, discovering and cracking passwords, keyword searches for topics related to the crime, and extracting e-mail and images for review.
Reporting
Once the analysis is complete, a report is generated. This report may be a written report, oral testimony, or some combination of the two.
The increasing use of telecommunications, particularly the development of e-commerce, is steadily increasing the opportunities for crime in many guises, especially IT-related crime. Developments in information technology have begun to pose new challenges for policing. Most professions have had to adapt to the digital age, and the police profession must be particularly adaptive, because criminal exploitation of digital technologies necessitates new types of criminal investigation. More and more, information technology is becoming the instrument of criminal activity. Investigating these sophisticated crimes, and assembling the necessary evidence for presentation in a court of law, will become a significant police responsibility. The application of computer technology to the investigation of computer-based crime has given rise to the field of forensic computing. This paper provides an overview of the field of forensic computing.