corDECT is a wireless local loop standard developed in India by IIT Madras and Midas Communications at Chennai, under the leadership of Prof. Ashok Jhunjhunwala. It is based on the DECT digital cordless phone standard.


The technology is a fixed wireless option with very low capital costs, making it well suited both to small start-ups that need to scale and to sparsely populated rural areas. It is also well suited to ICT4D projects; in India, n-Logue Communications is one organization that has deployed it for this purpose.

DECT stands for Digital Enhanced Cordless Telecommunications, a standard well suited to the design of small-capacity wireless local loop (WLL) systems. These systems operate only under line-of-sight (LOS) conditions and are strongly affected by weather.

The system is designed for rural and suburban areas where subscriber density is medium or low. A corDECT system provides simultaneous voice and Internet access. Its main parts are as follows.

DECT Interface Unit (DIU)
This is a 1000-line exchange that provides an E1 interface to the PSTN. It can serve up to 20 base stations, which are connected over ISDN links carrying both signalling and a power feed over distances of up to 3 km.

Compact Base Station (CBS)
This is the radio fixed part of the DECT wireless local loop. A CBS is typically mounted on a tower top and can serve up to 50 subscribers at 0.1 erlang of traffic each.

Base Station Distributor (BSD)
This is a traffic aggregator used to extend the range of the wireless local loop; up to four CBSs can be connected to it.

Relay Base Station (RBS)

This is another technique for extending the range of the corDECT wireless local loop, up to 25 km via a radio relay chain.

Fixed Remote Station (FRS)
This is the subscriber-end equipment of the corDECT wireless local loop. It provides a standard telephone interface and Internet access at up to 70 kbit/s through an Ethernet port.

The new generation of corDECT technology, called Broadband corDECT, provides broadband Internet access over the wireless local loop.



Grid computing (or the use of computational grids) is the application of several computers to a single problem at the same time — usually to a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data.

One of the main strategies of grid computing is using software to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing can be seen as distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing. Grids vary in size from small (confined to a network of computer workstations within a corporation, for example) to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to an inter-nodes cooperation"[1]. This inter-/intra-node cooperation "across cyber-based collaborative organizations are also known as Virtual Organizations"[2].

It is a form of distributed computing whereby a “super and virtual computer” is composed of a cluster of networked loosely coupled computers acting in concert to perform very large tasks. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services.

What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general-purpose grid software libraries and middleware.

Grid computing is a technique in which the idle systems in a network, and their otherwise "wasted" CPU cycles, are used efficiently by uniting pools of servers, storage systems and networks into a single large virtual system whose resources are shared dynamically at runtime.
- High-performance computing clusters.
- Sharing of application, data and computing resources.

Design considerations and variations

One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.

One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.
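The verification scheme described above (random assignment plus cross-checking of replicated work units) can be sketched as follows. Every name, and the simple majority rule, is an illustrative assumption rather than any specific grid system's API:

```python
import random
from collections import Counter

def run_with_redundancy(work_units, nodes, replicas=2):
    """Assign each work unit to `replicas` randomly chosen nodes and
    accept a result only when a majority of the replicas agree.
    `nodes` maps a node id to a function computing a result."""
    accepted, flagged = {}, []
    for unit in work_units:
        chosen = random.sample(list(nodes), replicas)
        results = Counter(nodes[n](unit) for n in chosen)
        answer, votes = results.most_common(1)[0]
        if votes >= (replicas // 2) + 1:
            accepted[unit] = answer
        else:
            flagged.append(unit)  # disagreement: reassign to other nodes
    return accepted, flagged
```

A discrepancy between replicas does not say which node is wrong, only that the unit needs to be re-run, which is why real systems track per-node agreement statistics over time.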

Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results as expected.

The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.

In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system such as placing applications in virtual machines.

Public systems, or those crossing administrative domains (including different departments in the same organization), often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this trade-off, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform).

Various middleware projects have created generic infrastructure, to allow diverse scientific and commercial projects to harness a particular associated grid, or for the purpose of setting up new grids. BOINC is a common one for academic projects seeking public volunteers; more are listed at the end of the article.

In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, Trust and Security, Virtual organization management, License Management, Portals and Data Management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.

The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid, in Ian Foster's and Carl Kesselman's seminal work, "The Grid: Blueprint for a New Computing Infrastructure."

CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home, harnessing the power of networked PCs worldwide to solve CPU-intensive research problems.

The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the "fathers of the grid"[4]. They led the effort to create the Globus Toolkit, incorporating not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of the services needed to create an enterprise or global grid.

In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid). Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems as exemplified by the AppLogic system from 3tera.

Flexible, secure, coordinated resource sharing.
Virtualization of distributed computing resources.
Worldwide access to a network of distributed resources.

Resource Management
Data Management
Information Services
Fault Detection

Computational Grid
-computing power
Scavenging Grid
-desktop machines
Data Grid
-data access across multiple organizations

- A grid's computers can be thousands of miles apart, connected by Internet networking technologies.
- Grids can share processors and drive space.

Fabric : Provides resources to which shared access is mediated by grid protocols.
Connectivity : Provides authentication solutions.
Resource : Builds on connectivity-layer communication and authentication protocols to negotiate and control sharing of individual resources.
Collective : Coordinates multiple resources.
Application : Constructed by calling upon services defined at any layer.

In a world-wide Grid environment, capabilities that the infrastructure needs to support include:
Remote storage
Publication of datasets
Uniform access to remote resources
Publication of services and access cost
Composition of distributed applications
Discovery of suitable datasets
Discovery of suitable computational resources
Mapping and Scheduling of jobs
Submission, monitoring, and steering of job execution
Movement of code
Enforcement of quality of service
Metering and accounting

Grid Fabric layer
Core Grid middleware
User-level Grid middleware
Grid application and protocols
- Installing core Grid middleware
- Resource brokering and application deployment services

- Distributed application
- Grid resource broker
- Grid information service
- Grid market directory
- Broker identifies the list of computational resources
- Executes the job and returns results
- Metering system passes the resource information to the accounting system
- Accounting system reports resource share allocation to the user
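The broker, metering and accounting steps listed above can be sketched as follows. Every name and data structure here is a hypothetical illustration, not any particular middleware's API:

```python
def run_job(job, resources, usage_log):
    """Toy grid broker: pick the least-loaded resource, run the job,
    and record usage so the accounting step can report shares later."""
    # Broker identifies a computational resource (here: lowest current load).
    target = min(resources, key=lambda r: r["load"])
    # Execute the job on the chosen resource and return the result.
    result = job["fn"](*job["args"])
    target["load"] += job["cost"]
    # Metering passes the resource usage information to accounting.
    usage_log.append({"resource": target["name"], "cost": job["cost"]})
    return result

def account(usage_log):
    """Accounting: report each resource's share of total consumption."""
    total = sum(u["cost"] for u in usage_log) or 1
    shares = {}
    for u in usage_log:
        shares[u["resource"]] = shares.get(u["resource"], 0) + u["cost"]
    return {name: cost / total for name, cost in shares.items()}
```

A real broker would of course query a grid information service for candidate resources and dispatch jobs remotely; the point here is only the identify/execute/meter/account sequence.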

- Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations
- Improving distributed management
- Improving the availability of data
- Providing researchers with a uniform user friendly environment
- Grid utilizes the idle time
- Its ability to make more cost-effective use of resources
- To solve problems that cannot be approached without an enormous amount of computing power.



EDGE is an enhancement to the GSM mobile cellular phone system and a step in the evolution towards 3G networks. The name EDGE stands for Enhanced Data rates for GSM Evolution. When applied to GSM/GPRS networks, EDGE dramatically increases data throughput as well as network capacity, providing three times the data capacity of GPRS. Using EDGE, operators can handle three times more subscribers than with GPRS, triple their data rate per subscriber, or add extra capacity to their voice communications. EDGE uses the same TDMA (Time Division Multiple Access) frame structure, logical channels and 200 kHz carrier bandwidth as today's GSM networks, which allows existing cell plans to remain intact. However, it uses a new modulation scheme, 8-PSK. In this seminar an overview of the EDGE technology, the modulation scheme and its applications will be discussed.
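As a rough illustration of the modulation change, the sketch below maps each 3-bit group onto one of eight equally spaced phases. This is deliberately simplified: the actual EDGE mapping also Gray-codes the bits and rotates successive symbols by 3π/8, which is omitted here:

```python
import cmath

def psk8_symbol(bits):
    """Map a 3-bit group to one of 8 equally spaced unit-circle phases.
    (Illustrative only; real EDGE also Gray-codes bits and applies a
    3*pi/8 rotation between consecutive symbols.)"""
    assert len(bits) == 3 and all(b in (0, 1) for b in bits)
    index = bits[0] * 4 + bits[1] * 2 + bits[2]
    return cmath.exp(2j * cmath.pi * index / 8)

def modulate(bitstream):
    """Convert a bitstream (length divisible by 3) into 8-PSK symbols."""
    return [psk8_symbol(bitstream[i:i + 3])
            for i in range(0, len(bitstream), 3)]
```

Because each symbol carries 3 bits instead of GMSK's 1 bit per symbol, the raw bit rate per timeslot triples, which is where the "three times GPRS" figure comes from.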

Dynamic RAM Chip


Dynamic random access memories (DRAMs) have the simplest, and hence the smallest, cell of all semiconductor memories, containing only one transistor and one capacitor per cell. For that reason they are the most widely used memory type wherever high-density storage is needed, most obviously as the main memory in all types of computers. Static RAMs are faster, but their much larger cell size (up to six transistors per cell) keeps their densities one generation behind those that DRAMs can offer.

Disease Detection Using Bio-robotics

This seminar deals with the design and the development of a bio-robotic system based on fuzzy logic to diagnose and monitor the neuro-psychophysical conditions of an individual. The system, called DDX, is portable without losing efficiency and accuracy in diagnosis and also provides the ability to transfer diagnosis through a remote communication interface, in order to monitor the daily health of a patient. DDX is a portable system, involving multiple parameters such as reaction time, speed, strength and tremor which are processed by means of fuzzy logic. The resulting output can be visualized through a display or transmitted by a communication interface.


The MIL-STD-1553B bus is a differential serial bus used in military and space equipment. It comprises multiple redundant bus connections and communicates at 1 Mbit/s.

The bus has a single active bus controller (BC) and up to 31 remote terminals (RTs). The BC manages all data transfers on the bus using a command/status protocol. The bus controller initiates every transfer by sending a command word, followed by data if required. The selected RT responds with a status word, and data if required.

The 1553B command word contains a five-bit RT address, a transmit/receive bit, a five-bit sub-address and a five-bit word count. The address field allows for 32 RTs on the bus; however, only 31 RTs may be connected, since RT address 31 is used to indicate a broadcast transfer, i.e. one that all RTs should accept. Each RT has 30 sub-addresses reserved for data transfers; the other two sub-addresses (0 and 31) are reserved for mode codes used for bus control functions such as synchronization. Data transfers contain up to 32 16-bit data words.
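The command-word layout described above can be decoded with a few shifts and masks. This sketch assumes only the 16-bit field layout given in the text (most-significant field first); it is not taken from any particular 1553 library:

```python
def parse_command_word(word):
    """Decode a 16-bit MIL-STD-1553B command word: 5-bit RT address,
    1 transmit/receive bit, 5-bit sub-address, 5-bit word count,
    packed most-significant field first."""
    assert 0 <= word <= 0xFFFF
    return {
        "rt_address":  (word >> 11) & 0x1F,  # 31 indicates broadcast
        "transmit":    bool((word >> 10) & 1),
        "sub_address": (word >> 5) & 0x1F,   # 0 and 31 select mode codes
        "word_count":  word & 0x1F,          # 0 conventionally encodes 32
    }
```

When the sub-address is 0 or 31, the final five bits are interpreted as a mode code rather than a word count.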



Computer forensics is a branch of forensic science pertaining to legal evidence found in computers and digital storage mediums. Computer forensics is also known as digital forensics.

The goal of computer forensics is to explain the current state of a digital artifact. The term digital artifact can include a computer system, a storage medium (such as a hard disk or CD-ROM), an electronic document (e.g. an email message or JPEG image) or even a sequence of packets moving over a computer network. The explanation can be as straightforward as "what information is here?" and as detailed as "what is the sequence of events responsible for the present situation?"

The field of computer forensics also has sub branches within it such as firewall forensics, network forensics, database forensics and mobile device forensics.

There are many reasons to employ the techniques of computer forensics:

* In legal cases, computer forensic techniques are frequently used to analyze computer systems belonging to defendants (in criminal cases) or litigants (in civil cases).
* To recover data in the event of a hardware or software failure.
* To analyze a computer system after a break-in, for example, to determine how the attacker gained access and what the attacker did.
* To gather evidence against an employee that an organization wishes to terminate.
* To gain information about how computer systems work for the purpose of debugging, performance optimization, or reverse-engineering.

Special measures should be taken when conducting a forensic investigation if it is desired for the results to be used in a court of law. One of the most important measures is to assure that the evidence has been accurately collected and that there is a clear chain of custody from the scene of the crime to the investigator, and ultimately to the court. In order to comply with the need to maintain the integrity of digital evidence, British examiners comply with the Association of Chief Police Officers (A.C.P.O.) guidelines. These are made up of the following four principles:

Principle 1: No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.

Principle 2: In exceptional circumstances, where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.

Principle 3: An audit trail or other record of all processes applied to computer based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.

Principle 4: The person in charge of the investigation (the case officer) has overall responsibility for ensuring that the law and these principles are adhered to.

The Forensic Process

There are five basic steps in computer forensics:
1. Preparation (of the investigator, not the data)
2. Collection (the data)
3. Examination
4. Analysis
5. Reporting

The investigator must be properly trained to perform the specific kind of investigation that is at hand.

Tools that are used to generate reports for court should be validated. There are many tools to be used in the process. One should determine the proper tool to be used based on the case.

Collecting Digital Evidence

Digital evidence can be collected from many sources. Obvious sources include computers, cell phones, digital cameras, hard drives, CD-ROM, USB memory devices, and so on. Non-obvious sources include settings of digital thermometers, black boxes inside automobiles, RFID tags, and web pages (which must be preserved as they are subject to change).

Special care must be taken when handling computer evidence: most digital information is easily changed, and once changed it is usually impossible to detect that a change has taken place (or to revert the data back to its original state) unless other measures have been taken. For this reason it is common practice to calculate a cryptographic hash of an evidence file and to record that hash elsewhere, usually in an investigator's notebook, so that one can establish at a later point in time that the evidence has not been modified since the hash was calculated.
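The hash-and-record practice can be sketched with Python's standard hashlib. The function names are illustrative, not part of any forensic toolkit:

```python
import hashlib

def hash_evidence(path, algorithm="sha256", chunk_size=1 << 20):
    """Compute a cryptographic hash of an evidence file, reading it in
    chunks so that large disk images need not fit in memory. The digest
    is what the examiner records to prove the file is unchanged."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_evidence(path, recorded_digest, algorithm="sha256"):
    """Re-hash the file and compare it with the digest recorded earlier."""
    return hash_evidence(path, algorithm) == recorded_digest
```

Any modification to the file, even a single flipped bit, produces a completely different digest, which is what makes the recorded hash useful as tamper evidence.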

Other specific practices that have been adopted in the handling of digital evidence include:

* Imaging computer media using a write-blocking tool to ensure that no data is added to the suspect device.
* Establishing and maintaining the chain of custody.
* Documenting everything that has been done.
* Using only tools and methods that have been tested and evaluated to validate their accuracy and reliability.

Some of the most valuable information obtained in the course of a forensic examination will come from the computer user. An interview with the user can yield valuable information about the system configuration, applications, encryption keys and methodology. Forensic analysis is much easier when analysts have the user's passphrases to access encrypted files, containers, and network servers.

In an investigation in which the owner of the digital evidence has not given consent to have his or her media examined (as in some criminal cases) special care must be taken to ensure that the forensic specialist has the legal authority to seize, copy, and examine the data. Sometimes authority stems from a search warrant. As a general rule, one should not examine digital information unless one has the legal authority to do so. Amateur forensic examiners should keep this in mind before starting any unauthorized investigation.

Live vs. Dead analysis

Traditionally, computer forensic investigations were performed on data at rest (for example, the content of hard drives). This can be thought of as dead analysis. Investigators were told to shut down computer systems when they were impounded, for fear that digital time-bombs might cause data to be erased.

In recent years there has increasingly been an emphasis on performing analysis on live systems. One reason is that many current attacks against computer systems leave no trace on the computer's hard drive: the attacker exploits only information in the computer's memory. Another reason is the growing use of cryptographic storage: the only copy of the keys needed to decrypt the storage may exist in the computer's memory, and turning off the computer will cause that information to be lost.

Imaging electronic media (evidence)

The process of creating an exact duplicate of the original evidentiary media is often called Imaging. Using a standalone hard-drive duplicator or software imaging tools such as DCFLdd or IXimager, the entire hard drive is completely duplicated. This is usually done at the sector level, making a bit-stream copy of every part of the user-accessible areas of the hard drive which can physically store data, rather than duplicating the filesystem. The original drive is then moved to secure storage to prevent tampering. During imaging, a write protection device or application is normally used to ensure that no information is introduced onto the evidentiary media during the forensic process.

The imaging process is verified using the SHA-1 message digest algorithm (with a program such as sha1sum) or other still-viable algorithms such as MD5. At critical points throughout the analysis, the media is verified again (known as "hashing") to ensure that the evidence is still in its original state. In corporate environments pursuing civil or internal action, such steps are often skipped because of the time they require; they are essential, however, for evidence that is to be presented in a courtroom.

Collecting Volatile Data

If the machine is still active, any intelligence which can be gained by examining the applications currently open is recorded. If the machine is suspected of being used for illegal communications, such as terrorist traffic, not all of this information may be stored on the hard drive. If information stored solely in RAM is not recovered before powering down it may be lost. This results in the need to collect volatile data from the computer at the onset of the response.

Several Open Source tools are available to conduct an analysis of open ports, mapped drives (including through an active VPN connection), and open or mounted encrypted files (containers) on the live computer system. Utilizing open source tools and commercially available products, it is possible to obtain an image of these mapped drives and the open encrypted containers in an unencrypted format. Open Source tools for PCs include Knoppix and Helix. Commercial imaging tools include Access Data's Forensic Toolkit and Guidance Software's EnCase application.

The aforementioned Open Source tools can also scan RAM and Registry information to show recently accessed web-based email sites and the login/password combination used. Additionally these tools can also yield login/password for recently accessed local email applications including MS Outlook.

In the event that partitions with EFS are suspected to exist, the encryption keys to access the data can also be gathered during the collection process. With Microsoft's most recent addition, Vista, and Vista's use of BitLocker and the Trusted Platform Module (TPM), it has become necessary in some instances to image the logical hard drive volumes before the computer is shut down.

RAM can sometimes be analyzed for prior content after power loss, although as production methods become cleaner, the impurities used to infer a particular cell's charge prior to power loss are becoming less common. Data held statically in one area of RAM for long periods is more likely to be detectable by these methods, and the likelihood of recovery increases with the originally applied voltage, the operating temperature, and the duration of storage. Holding unpowered RAM below −60 °C helps preserve the residual data by an order of magnitude, improving the chances of successful recovery; however, it can be impractical to do this during a field examination.


All digital evidence must be analyzed to determine the type of information that is stored upon it. For this purpose, specialty tools are used that can display information in a format useful to investigators. Such forensic tools include: AccessData's FTK, Guidance Software's EnCase, and Brian Carrier's Sleuth Kit. In many investigations, numerous other tools are used to analyze specific portions of information.

Typical forensic analysis includes a manual review of material on the media, reviewing the Windows registry for suspect information, discovering and cracking passwords, keyword searches for topics related to the crime, and extracting e-mail and images for review.


Once the analysis is complete, a report is generated. This report may be a written report, oral testimony, or some combination of the two.

The increasing use of telecommunications, particularly the development of e-commerce, is steadily increasing the opportunities for crime in many guises, especially IT-related crime. Developments in information technology have begun to pose new challenges for policing. Most professions have had to adapt to the digital age, and the police profession must be particularly adaptive, because criminal exploitation of digital technologies necessitates new types of criminal investigation. More and more, information technology is becoming the instrument of criminal activity. Investigating these sophisticated crimes, and assembling the necessary evidence for presentation in a court of law, will become a significant police responsibility. The application of computer technology to the investigation of computer-based crime has given rise to the field of forensic computing. This paper provides an overview of the field of forensic computing.

Coexistence & Migration


The migration of IPv4 to IPv6 will not happen overnight. Rather, there will be a period of transition when both protocols are in use over the same infrastructure. To address this, the designers of IPv6 have created technologies and types of addresses so that nodes can communicate with each other in a mixed environment, even if they are separated by an IPv4 infrastructure. This article describes IPv4 and IPv6 coexistence and migration technologies and how these technologies are supported by the IPv6 protocol for the Windows .NET Server 2003 family. This article is intended for network engineers and support professionals who are already familiar with basic networking concepts, TCP/IP, and IPv6.

Brain Computer Interface


A Brain Computer Interface (BCI) is a device that enables people to interact with computer-based systems through conscious control of their thoughts; more generally, a BCI is any system that can derive meaningful information directly from the user's brain activity in real time. The most important current application of BCIs is the restoration of a communication channel for patients with locked-in syndrome. Most current BCIs are non-invasive. Electrodes pick up the brain's electrical activity and carry it to amplifiers, which amplify the signal approximately ten thousand times and then pass it, via an analog-to-digital converter, to a computer for processing. The computer processes the EEG signal and uses it to accomplish tasks such as communication and environmental control.

Block Oriented Instrument Software Design

A new method for writing instrumentation software is proposed. It is based on the abstract description of the instrument operation and combines the advantages of a reconfigurable instrument and interchangeability of the instrumentation modules. The proposed test case is the implementation of a microwave network analyzer for nonlinear systems based on VISA and plug and play instrument drivers.

Modern instruments and instrumentation setups are typically built around generic hardware and custom software. The disadvantage is that the amount of software required to operate such a device is very large. An acceptable development time with a reasonably low number of software bugs can therefore only be achieved if software is maximally reused from earlier developments. Most attempts have used a two-step approach. In the first step, the transport interface between computer and instrument is abstracted; this step has always been quite successful. The first transport abstraction stems from the IEEE-488 interface. Afterwards, SICL and VISA were developed to support multiple transport buses (IEEE-488, RS-232 and, later, Ethernet and IEEE-1394). These methods use a file as the conceptual model of an instrument: the commands sent to the file are independent of the transmission medium, and medium dependency is localized in the initialization call alone. Most interfaces that can be used for instrument control are therefore supported by these frameworks.

In the second step, the instrumentation commands are abstracted to enable interchangeability of similar pieces of instrumentation. Here the situation has always been much less clear-cut, since only end users have something to gain from instrument interchangeability. An abstract model for programming instrumentation setups is proposed which is easy to use and general enough for complex setups.


The increasing complexity of microelectronic circuitry, as witnessed by multi-chip modules and systems-on-a-chip, and the rapid growth of manufacturing process automation require that more effective and efficient testing and fault diagnosis techniques be developed to improve system reliability, reduce system downtime, and enhance productivity. As a design philosophy, built-in test (BIT) is receiving increasing attention from the research community. This paper presents an overview of BIT research in several areas of industry, including semiconductor manufacturing.



In recent years, automation technology has migrated to new methods of transferring information. Increasingly, field-level devices such as sensors and actuators have internal intelligence capabilities and higher communication demands. The AS-i bus system provides the solution for a digital serial interface with a single unshielded two-wire cable which replaces traditional cable harness parallel wiring between masters and slaves.

AS-i technology is compatible with any fieldbus or device network. Low-cost gateways exist to use AS-i with CAN, PROFIBUS, Interbus, FIP, LON, RS-485 and RS-232.

AS-i uses insulation-piercing connection technology and follows the ISO/OSI model to implement master/slave communication.

64-Point FT Chip


A fixed-point, 16-bit word-length, 64-point FFT/IFFT processor was developed primarily for use in the baseband processor of an OFDM-based IEEE 802.11a wireless LAN. The 64-point FFT is realized by decomposing it into a two-dimensional structure of 8-point FFTs. This approach reduces the number of required complex multiplications compared to the conventional radix-2 64-point FFT algorithm. The complex multiplications are realized using shift-and-add operations, so the processor does not use a two-input digital multiplier, nor does it need any RAM or ROM for internal storage of coefficients. The core area of the chip is 6.8 mm². The average dynamic power consumption is 41 mW at a 20 MHz operating frequency and a 1.8 V supply voltage. The processor completes one parallel-to-parallel 64-point FFT computation in 23 cycles, so it can be used in any application that requires fast operation as well as low power consumption.
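The 8 × 8 decomposition can be illustrated in a few lines. This floating-point sketch shows only the structure (two passes of 8-point DFTs with twiddle-factor multiplications between them), not the fixed-point, multiplier-free arithmetic of the actual chip:

```python
import cmath

def dft(x):
    """Naive N-point DFT, used as the 8-point building block and as
    the reference against which the decomposition is checked."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) for k in range(N)]

def fft64_via_8x8(x):
    """64-point DFT via the Cooley-Tukey N = 8 x 8 decomposition."""
    assert len(x) == 64
    # Pass 1: 8-point DFT over n1 for each column n2 of x[8*n1 + n2].
    cols = [dft([x[8 * n1 + n2] for n1 in range(8)]) for n2 in range(8)]
    # Twiddle stage: multiply by W64^(n2*k1).
    tw = [[cols[n2][k1] * cmath.exp(-2j * cmath.pi * n2 * k1 / 64)
           for n2 in range(8)] for k1 in range(8)]
    # Pass 2: 8-point DFT over n2 for each k1; output index k = k1 + 8*k2.
    X = [0j] * 64
    for k1 in range(8):
        row = dft(tw[k1])
        for k2 in range(8):
            X[k1 + 8 * k2] = row[k2]
    return X
```

The saving the chip exploits is visible here: instead of one 64-point transform, only sixteen 8-point transforms plus twiddle multiplications are needed, and the 8-point butterflies and twiddles can be built from shifts and adds.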

Microelectronic pill


A “microelectronic pill” is basically a multichannel sensor used for remote biomedical measurements using microtechnology. It has been developed for the internal study and detection of diseases and abnormalities in the gastrointestinal (GI) tract, where restricted access prevents the use of a traditional endoscope. The measured parameters include real-time remote recording of temperature, pH, conductivity and dissolved oxygen in the GI tract.

This paper deals with the design of the “microelectronic pill”, which mainly consists of an outer biocompatible capsule encasing four-channel microsensors, a control chip, a discrete-component radio transmitter and two silver-oxide cells.

Electronic Nose (E-NOSE)


An electronic nose is a device intended to detect odors or flavors.

An electronic nose (e-nose) is a device that identifies the specific components of an odor and analyzes its chemical makeup to identify it. An electronic nose consists of a mechanism for chemical detection, such as an array of electronic sensors, and a mechanism for pattern recognition, such as a neural network. Electronic noses have been around for several years but have typically been large and expensive. Current research is focused on making the devices smaller, less expensive, and more sensitive. The smallest version, a nose-on-a-chip, is a single computer chip containing both the sensors and the processing components.

An odor is composed of molecules, each of which has a specific size and shape. Each of these molecules has a correspondingly sized and shaped receptor in the human nose. When a specific receptor receives a molecule, it sends a signal to the brain and the brain identifies the smell associated with that particular molecule. Electronic noses based on the biological model work in a similar manner, albeit substituting sensors for the receptors, and transmitting the signal to a program for processing, rather than to the brain. Electronic noses are one example of a growing research area called biomimetics, or biomimicry, which involves human-made applications patterned on natural phenomena.

Electronic noses were originally used for quality control applications in the food, beverage and cosmetics industries. Current applications include detection of odors specific to diseases for medical diagnosis, and detection of pollutants and gas leaks for environmental protection.

Over the last decade, “electronic sensing” or “e-sensing” technologies have undergone important developments from a technical and commercial point of view. The expression “electronic sensing” refers to the capability of reproducing human senses using sensor arrays and pattern recognition systems. Since 1982, research has been conducted to develop technologies, commonly referred to as electronic noses, that could detect and recognize odors and flavors. The stages of the recognition process are similar to human olfaction and are performed for identification, comparison, quantification and other applications. However, hedonic evaluation is a specificity of the human nose given that it is related to subjective opinions. These devices have undergone much development and are now used to fulfill industrial needs.

Other techniques to analyze odors

In industry, aroma assessment is usually performed by human sensory analysis, by chemosensors, or by gas chromatography (GC, GC/MS). The latter technique gives information about volatile organic compounds, but the correlation between analytical results and actual odor perception is not direct because of potential interactions between several odorous components.

Electronic Nose working principle

The electronic nose was developed in order to mimic human olfaction that functions as a non-separative mechanism: i.e. an odor / flavor is perceived as a global fingerprint.

Electronic noses include three major parts: a sample delivery system, a detection system, and a computing system.

The sample delivery system enables the generation of the headspace (volatile compounds) of a sample, which is the fraction analyzed. The system then injects this headspace into the detection system of the electronic nose. The sample delivery system is essential to guarantee constant operating conditions.

The detection system, which consists of a sensor set, is the “reactive” part of the instrument. When in contact with volatile compounds, the sensors react, which means they experience a change of electrical properties. Each sensor is sensitive to all volatile molecules, but each in its own specific way. Most electronic noses use sensor arrays that react to volatile compounds on contact: the adsorption of volatile compounds on the sensor surface causes a physical change of the sensor. A specific response is recorded by the electronic interface transforming the signal into a digital value. Recorded data are then computed based on statistical models.

The more commonly used sensors include metal oxide semiconductors (MOS), conducting polymers (CP), quartz crystal microbalance, surface acoustic wave (SAW), and field effect transistors (MOSFET).

In recent years, other types of electronic noses have been developed that utilize mass spectrometry or ultra fast gas chromatography as a detection system.

The computing system works to combine the responses of all of the sensors, which represents the input for the data treatment. This part of the instrument performs global fingerprint analysis and provides results and representations that can be easily interpreted. Moreover, the electronic nose results can be correlated to those obtained from other techniques (sensory panel, GC, GC/MS).

How to perform an analysis

To perform an analysis, the instrument compares the measured global fingerprint to those contained in its database. Thus it can perform qualitative or quantitative analysis.
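As a minimal sketch of this comparison step (the sensor values, odor classes and four-sensor array below are invented for illustration), a nearest-neighbour match against a fingerprint database could look like:

```python
import math

# Hypothetical reference fingerprints: mean sensor-array responses
# recorded during training for known odors (values are illustrative).
reference_db = {
    "coffee":  [0.82, 0.10, 0.55, 0.31],
    "vanilla": [0.12, 0.76, 0.20, 0.64],
    "spoiled": [0.45, 0.40, 0.90, 0.15],
}

def identify(fingerprint):
    """Return the database odor whose fingerprint is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(reference_db, key=lambda name: dist(reference_db[name], fingerprint))

print(identify([0.80, 0.12, 0.57, 0.29]))  # → coffee
```

Commercial instruments use richer statistical models (PCA, discriminant analysis, neural networks), but the principle is the same: classify a whole-array fingerprint against trained references.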

Range of applications

Electronic nose instruments are used by Research & Development laboratories, Quality Control laboratories and process & production departments for various purposes:

in R&D laboratories for:

* Formulation or reformulation of products
* Benchmarking with competitive products
* Shelf life and stability studies
* Selection of raw materials
* Packaging interaction effects
* Simplification of consumer preference test

in Quality Control laboratories for at-line quality control such as:

* Conformity of raw materials, intermediate and final products
* Batch to batch consistency
* Detection of contamination, spoilage, adulteration
* Origin or vendor selection
* Monitoring of storage conditions

In process and production departments for:

* Managing raw material variability
* Comparison with a reference product
* Measurement and comparison of the effects of the manufacturing process on products
* Follow-up of cleaning-in-place process efficiency
* Scale-up monitoring
* Cleaning-in-place monitoring

Various application notes describe analyses in areas such as flavor & fragrance, food & beverage, packaging, pharmaceuticals, cosmetics & perfumes, and chemicals. More recently, such instruments can also address public concerns about olfactory nuisance, using networks of on-field devices.

Deep Web


The deep Web (also called Deepnet, the invisible Web, dark Web or the hidden Web) refers to World Wide Web content that is not part of the surface Web, which is indexed by standard search engines.

Mike Bergman, credited with coining the phrase, has said that searching on the Internet today can be compared to dragging a net across the surface of the ocean; a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed. Most of the Web's information is buried far down on dynamically generated sites, and standard search engines do not find it. Traditional search engines cannot "see" or retrieve content in the deep Web – those pages do not exist until they are created dynamically as the result of a specific search. The deep Web is several orders of magnitude larger than the surface Web.


Bergman, in a seminal, early paper on the deep Web published in the Journal of Electronic Publishing, mentioned that Jill Ellsworth used the term invisible Web in 1994 to refer to websites that are not registered with any search engine. Bergman cited a January 1996 article by Frank Garcia:

"It would be a site that's possibly reasonably designed, but they didn't bother to register it with any of the search engines. So, no one can find them! You're hidden. I call that the invisible Web."

Another early use of the term invisible Web was by Bruce Mount and Matthew B. Koll of Personal Library Software, in a description of the @1 deep Web tool found in a December 1996 press release.

The first use of the specific term deep Web, now generally accepted, occurred in the aforementioned 2001 Bergman study.


In 2000, it was estimated that the deep Web contained approximately 7,500 terabytes of data and 550 billion individual documents. Estimates based on extrapolations from a study done at University of California, Berkeley, show that the deep Web consists of about 91,000 terabytes. By contrast, the surface Web (which is easily reached by search engines) is only about 167 terabytes; the Library of Congress, in 1997, was estimated to have perhaps 3,000 terabytes.

Deep resources

Deep Web resources may be classified into one or more of the following categories:

* Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.

* Unlinked content: pages which are not linked to by other pages, which may prevent Web crawling programs from accessing the content. This content is referred to as pages without backlinks (or inlinks).

* Private Web: sites that require registration and login (password-protected resources).

* Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).

* Limited access content: sites that limit access to their pages in a technical way (e.g., using the Robots Exclusion Standard, CAPTCHAs, or no-cache Pragma HTTP headers which prohibit search engines from browsing them and creating cached copies).

* Scripted content: pages that are only accessible through links produced by JavaScript as well as content dynamically downloaded from Web servers via Flash or AJAX solutions.

* Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not handled by search engines.


To discover content on the Web, search engines use web crawlers that follow hyperlinks. This technique is ideal for discovering resources on the surface Web but is often ineffective at finding deep Web resources. For example, these crawlers do not attempt to find dynamic pages that are the result of database queries, due to the infinite number of queries that are possible. It has been noted that this can be (partially) overcome by providing links to query results, but this could unintentionally inflate the popularity (e.g., PageRank) of a member of the deep Web.

One way to access the deep Web is via federated-search-based search engines. Such search tools are being designed to retrieve information from the deep Web. These tools identify and interact with searchable databases, aiming to provide access to deep Web content.

Another way to explore the deep Web is by using human crawlers instead of algorithmic crawlers. In this paradigm, referred to as Web harvesting, humans find interesting links of the deep Web that algorithmic crawlers can't find. This human-based computation technique to discover the deep Web has been used by the StumbleUpon service since February 2002.

In 2005, Yahoo! made a small part of the deep Web searchable by releasing Yahoo! Subscriptions. This search engine searches through a few subscription-only Web sites. Some subscription websites display their full content to search engine robots so they will show up in user searches, but then show users a login or subscription page when they click a link from the search engine results page.

Crawling the deep Web

Researchers have been exploring how the deep Web can be crawled in an automatic fashion. In 2001, Sriram Raghavan and Hector Garcia-Molina presented an architectural model for a hidden-Web crawler that used key terms provided by users or collected from the query interfaces to query a Web form and crawl the deep Web resources. Alexandros Ntoulas, Petros Zerfos, and Junghoo Cho of UCLA created a hidden-Web crawler that automatically generated meaningful queries to issue against search forms. Their crawler generated promising results, but the problem is far from being solved, as the authors recognized. Another effort is DeepPeep, a project of the University of Utah sponsored by the National Science Foundation, which gathered hidden-Web sources (Web forms) in different domains based on novel focused-crawler techniques.

Commercial search engines have begun exploring alternative methods to crawl the deep Web. The Sitemap Protocol (first developed by Google) and mod oai are mechanisms that allow search engines and other interested parties to discover deep Web resources on particular Web servers. Both mechanisms allow Web servers to advertise the URLs that are accessible on them, thereby allowing automatic discovery of resources that are not directly linked to the surface Web. Google's deep Web surfacing system pre-computes submissions for each HTML form and adds the resulting HTML pages into the Google search engine index. The surfaced results account for a thousand queries per second to deep Web content. In this system, the pre-computation of submissions is done using three algorithms: (1) selecting input values for text search inputs that accept keywords, (2) identifying inputs which accept only values of a specific type (e.g., date), and (3) selecting a small number of input combinations that generate URLs suitable for inclusion into the Web search index.
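A toy sketch of the surfacing idea follows. The form fields, candidate values and URL scheme are invented; a real system mines candidate values from the site itself and filters the generated pages for usefulness:

```python
import itertools

# Hypothetical model of an HTML search form: each input maps to a small
# set of candidate values, echoing algorithms (1) and (2) described above.
form = {
    "keyword": ["camry", "civic"],   # (1) values mined for a free-text input
    "year": ["2007", "2008"],        # (2) a typed input that accepts only years
}

def surface_urls(action, inputs, limit=3):
    """(3) Emit a small set of form fill-ins as crawlable GET URLs."""
    names = sorted(inputs)
    combos = itertools.product(*(inputs[n] for n in names))
    urls = []
    for combo in itertools.islice(combos, limit):
        query = "&".join(f"{n}={v}" for n, v in zip(names, combo))
        urls.append(f"{action}?{query}")
    return urls

for u in surface_urls("http://example.com/search", form):
    print(u)
```

Each emitted URL denotes a result page that would otherwise only exist after a user submitted the form, which is exactly what makes it indexable once surfaced.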

Classifying resources

It is difficult to automatically determine if a Web resource is a member of the surface Web or the deep Web. If a resource is indexed by a search engine, it is not necessarily a member of the surface Web, because the resource could have been found using another method (e.g., the Sitemap Protocol, mod oai, OAIster) instead of traditional crawling. If a search engine provides a backlink for a resource, one may assume that the resource is in the surface Web. Unfortunately, search engines do not always provide all backlinks to resources. Even if a backlink does exist, there is no way to determine if the resource providing the link is itself in the surface Web without crawling all of the Web. Furthermore, a resource may reside in the surface Web, but it has not yet been found by a search engine. Therefore, if we have an arbitrary resource, we cannot know for sure if the resource resides in the surface Web or deep Web without a complete crawl of the Web.

The concept of classifying search results by topic was pioneered by Yahoo! Directory search and is gaining importance as search becomes more relevant in day-to-day decisions. However, most of the work here has been in categorizing the surface Web by topic. For classification of deep Web resources, Ipeirotis et al. presented an algorithm that classifies a deep Web site into the category that generates the largest number of hits for some carefully selected, topically focused queries. Deep Web directories under development include OAIster at the University of Michigan, Intute at the University of Manchester, INFOMINE at the University of California at Riverside, and DirectSearch (by Gary Price). This classification poses a challenge while searching the deep Web, whereby two levels of categorization are required. The first level is to categorize sites into vertical topics (e.g., health, travel, automobiles) and sub-topics according to the nature of the content underlying their databases.

The more difficult challenge is to categorize and map the information extracted from multiple deep Web sources according to end-user needs. Deep Web search reports cannot display URLs like traditional search reports. End users expect their search tools to not only find what they are looking for quickly, but to be intuitive and user-friendly. In order to be meaningful, the search reports have to offer some depth to the nature of content that underlie the sources or else the end-user will be lost in the sea of URLs that do not indicate what content lies underneath them. The format in which search results are to be presented varies widely by the particular topic of the search and the type of content being exposed. The challenge is to find and map similar data elements from multiple disparate sources so that search results may be exposed in a unified format on the search report irrespective of their source.


The lines between search engine content and the deep Web have begun to blur, as search services start to provide access to part or all of once-restricted content. An increasing amount of deep Web content is opening up to free search as publishers and libraries make agreements with large search engines. In the future, deep Web content may be defined less by opportunity for search than by access fees or other types of authentication.

Content on the deep Web

When we refer to the deep Web, we are usually talking about the following:

* The content of databases. Databases contain information stored in tables created by such programs as Access, Oracle, SQL Server, and MySQL. (There are other types of databases, but we will focus on database tables for the sake of simplicity.) Information stored in databases is accessible only by query. In other words, the database must somehow be searched and the data retrieved and then displayed on a Web page. This is distinct from static, self-contained Web pages, which can be accessed directly. A significant amount of valuable information on the Web is generated from databases.
* Non-text files such as multimedia, images, software, and documents in formats such as Portable Document Format (PDF). For example, see Digital Image Resources on the Deep Web for a good indication of what is out there for images.
* Content available on sites protected by passwords or other restrictions. Some of this is fee-based content, such as subscription content paid for by libraries and available to their users based on various authentication schemes.
* Special content not presented as Web pages, such as full text articles and books
* Dynamically-changing, updated content
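The first item, database content, can be made concrete with a small sketch (hypothetical table and query; Python's built-in sqlite3 stands in for a site's backend database). The result page simply does not exist until a query runs:

```python
import sqlite3

# A site's backend database (schema and rows are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patents (id INTEGER, title TEXT)")
conn.executemany("INSERT INTO patents VALUES (?, ?)",
                 [(1, "Wireless local loop"), (2, "Biochip sensor array")])

def render_results(keyword):
    """Build an HTML fragment on the fly -- there is no static page to crawl."""
    rows = conn.execute(
        "SELECT id, title FROM patents WHERE title LIKE ?", (f"%{keyword}%",)
    ).fetchall()
    return "".join(f"<li>{rid}: {title}</li>" for rid, title in rows)

# The page content materializes only in response to this specific query:
print(render_results("Wireless"))
```

A link-following crawler never issues the query, so it never sees the page; that is precisely what places such content in the deep Web.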

This is usually the basic, "traditional" list. In these days of the social Web, let's consider adding new content to our list of deep Web sources. For example:

* Blog postings
* Comments
* Discussions and other communicative activities on social networking sites
* Bookmarks and citations stored on social bookmarking sites

As you can see, based on these few examples, the deep Web is expanding.

Tips for dealing with deep Web content

* Vertical search can solve some of the problems with the deep Web. With vertical search, you can query an index or database focused on a specific topic, industry, type of content, geographical location, language, file type, Web site, piece of data, and so on. For example, consider MedNar and PubMed to search for medical topics. On the social Web, there are search engines for blogs, RSS feeds, Twitter content, and so on. See the tutorial on Vertical Search Engines for more information.
* Use a general search engine to search for a vertical search engine. For example, a Google search on "stock market search" will retrieve sites that allow you to search for current stock prices, market news, etc. This may be thought of as split level searching. For the first level, search for the database site. For the second level, go to the site and search the database itself for the information you want.
* A number of general search engines will search the deep Web for related content subsequent to an initial search. For example, try a search on Google for "World Trade Center" and select the Images tab. This will retrieve many pages of images of the World Trade Center. Look for this type of feature on other search engines.
* Try to figure out which kind of information might be stored in a database. There is no general rule, but think about large listings of things with a common theme. A few examples of databased content include:
* phone books
* "people finders", such as lists of professionals (doctors, lawyers)
* patents
* laws
* dictionary definitions
* items for sale in a Web store or on Web-based auctions
* digital exhibits
* images and multimedia
* full text articles and books
* Information that is new and dynamically changing in content will appear on the deep Web. Look to the deep Web for late breaking items, such as:
* news
* job postings
* available airline flights, hotel rooms
* stock and bond prices, market averages
* The social Web often jumps on a late-breaking situation with news items and commentary. Blogs, Twitter, and other social networking environments sometimes get out the word before more traditional sources.
* Topical coverage on the deep Web is extremely varied. This presents a challenge, since it is impossible to anticipate what might turn up.

These limitations are, however, being overcome by the new search engine crawlers (such as Pipl) being designed today. These new crawlers are designed to identify, interact with, and retrieve information from deep Web resources and searchable databases. Mechanisms such as the Sitemap Protocol (developed by Google) and mod oai also increase the yield of deep Web searches by allowing Web servers to advertise to search engines the URLs that are accessible on them.

Another solution, being developed by several search engines such as Alacra, Northern Light and CloserLookSearch, is specialty search engines that focus only on particular topics or subject areas. This allows a search engine to narrow its scope and perform a more in-depth search of the deep Web by querying password-protected and dynamic databases.

KNX (standard)


KNX is a standardised (EN 50090, ISO/IEC 14543), OSI-based network communications protocol for intelligent buildings. KNX is the successor to, and convergence of, three previous standards: the European Home Systems Protocol (EHS), BatiBUS, and the European Installation Bus (EIB). The KNX standard is administered by the Konnex Association.

The standard is based on the communication stack of EIB but enlarged with the physical layers, configuration modes and application experience of BatiBUS and EHS.

KNX defines several physical communication media:

* Twisted pair wiring (inherited from the BatiBUS and EIB Instabus standards)
* Powerline networking (inherited from EIB and EHS; similar to that used by X10)
* Radio
* Infrared
* Ethernet (also known as EIBnet/IP or KNXnet/IP)

KNX is designed to be independent of any particular hardware platform. A KNX Device Network can be controlled by anything from an 8-bit microcontroller to a PC, according to the needs of a particular implementation. The most common form of installation is over twisted pair medium.

KNX is approved as an open standard to:

* International standard (ISO/IEC 14543-3)
* European Standard (CENELEC EN 50090 and CEN EN 13321-1)
* China Guo Biao (GB/Z 20965)

KNX has more than 100 members/manufacturers including:

* Bosch
* Miele & Cie KG
* ON Semiconductor
* Schneider Electric Industries S.A.
* Siemens
* Uponor corporation
* Jung

There are three categories of KNX device:

* A-mode or "Automatic mode" devices automatically configure themselves, and are intended to be sold to and installed by the end user.
* E-mode or "Easy mode" devices require basic training to install. Their behaviour is pre-programmed, but has configuration parameters that need to be tailored to the user's requirements.
* S-mode or "System mode" devices are used in the creation of bespoke building automation systems. S-mode devices have no default behaviour, and must be programmed and installed by specialist technicians.

ADSL - Asymmetric Digital Subscriber Line

Digital Subscriber Line (DSL) is a technology that brings high-bandwidth information to homes and small businesses over the existing two-wire copper telephone lines. Since DSL works on the existing telephone infrastructure, DSL systems are considered a key means of opening the bottleneck of the existing telephone network, as telephone companies seek cost-effective ways of providing much higher speeds to their customers. DSL assumes that digital data does not require conversion into analog form and back. This gives it two main advantages. Digital data is transmitted to your computer directly as digital data, which allows the phone company to use a much wider bandwidth for transmitting it to you, giving the user a huge boost in bandwidth compared to analog modems. In addition, DSL uses the existing phone line and in most cases does not require an additional one. The digital signal can be separated, or filtered, so that part of the bandwidth can carry an analog signal for normal telephone calls while a computer is connected to the Internet. This gives "always-on" Internet access and does not tie up the phone line. No more busy signals, no more dropped connections, and no more waiting for someone in the household to get off the phone.

Because analog transmission uses only a small portion of the information-carrying capacity of copper wires, the maximum amount of data you can receive using an ordinary modem is about 56 Kbps (thousands of bits per second). With ISDN you can receive up to 128 Kbps. The ability of your computer to receive information is thus constrained by the fact that the telephone company converts information that arrives as digital data into analog form for your telephone line, and requires your modem to change it back into digital. In other words, the analog transmission between your home or business and the phone company is a bandwidth bottleneck. DSL, however, offers users a choice of speeds ranging from 144 Kbps to 1.5 Mbps, that is, 2.5 to 25 times faster than a standard 56 Kbps dial-up modem. This digital service can be used to deliver bandwidth-intensive applications like streaming audio/video, online games, application programs, telephone calling, video conferencing and other high-bandwidth services.
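The quoted speeds translate into very different transfer times. A quick back-of-the-envelope check (a hypothetical 5 MB file, ideal line rates, no protocol overhead assumed):

```python
# Back-of-the-envelope transfer times for a hypothetical 5 MB file at the
# line rates quoted above (ideal rates; protocol overhead is ignored).
FILE_BITS = 5 * 8 * 10**6            # 5 megabytes (decimal) expressed in bits

def transfer_seconds(rate_bps):
    return FILE_BITS / rate_bps

for label, rate in [("56 Kbps dial-up", 56_000),
                    ("144 Kbps DSL", 144_000),
                    ("1.5 Mbps DSL", 1_500_000)]:
    print(f"{label}: {transfer_seconds(rate):.0f} s")

# 1.5 Mbps is roughly 1_500_000 / 56_000 ≈ 27x the dial-up rate,
# consistent with the "up to about 25 times faster" comparison above.
```

The ratio of line rates, not any change in the file, is what drives the difference: the same 40 million bits move in minutes over dial-up but in under half a minute at 1.5 Mbps.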



The development of biochips is a major thrust of the rapidly growing biotechnology industry, which encompasses a very diverse range of research efforts including genomics, proteomics, and pharmaceuticals, among other activities. Advances in these areas are giving scientists new methods for unraveling the complex biochemical processes occurring inside cells, with the larger goal of understanding and treating human diseases. At the same time, the semiconductor industry has been steadily perfecting the science of microminiaturization. The merging of these two fields in recent years has enabled biotechnologists to begin packing their traditionally bulky sensing tools into smaller and smaller spaces, onto so-called biochips. These chips are essentially miniaturized laboratories that can perform hundreds or thousands of simultaneous biochemical reactions. Biochips enable researchers to quickly screen large numbers of biological analytes for a variety of purposes, from disease diagnosis to detection of bioterrorism agents.

A biochip is a collection of miniaturized test sites (microarrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher output and speed. Biochips can also be used to perform techniques such as electrophoresis or PCR using microfluidics technology (Fan, 2009; Cady, 2009).

== History ==

The first biosensor, demonstrated by Leland Clark in the early 1960s, coupled the enzyme glucose oxidase to an oxygen electrode, thereby relating oxygen levels to glucose concentration. This and similar biosensors became known as enzyme electrodes, and are still in use today.

In 1953, Watson and Crick announced their discovery of the now familiar double helix structure of DNA molecules and set the stage for genetics research that continues to the present day (Nelson, 2000). The development of sequencing techniques in 1977 by Gilbert (Maxam, 1977) and Sanger (Sanger, 1977) (working separately) enabled researchers to directly read the genetic codes that provide instructions for protein synthesis. This research showed how hybridization of complementary single oligonucleotide strands could be used as a basis for DNA sensing. Two additional developments enabled the technology used in modern DNA-based biosensors. First, in 1983 Kary Mullis invented the polymerase chain reaction (PCR) technique (Nelson, 2000), a method for amplifying DNA concentrations. This discovery made possible the detection of extremely small quantities of DNA in samples. Second, in 1986 Hood and coworkers devised a method to label DNA molecules with fluorescent tags instead of radiolabels (Smith, 1986), thus enabling hybridization experiments to be observed optically.