
Spintronics

Spintronics (a neologism meaning "spin transport electronics"), also known as magnetoelectronics, is an emerging technology that exploits the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.

History

The research field of spintronics emerged from experiments on spin-dependent electron transport phenomena in solid-state devices done in the 1980s, including the observation of spin-polarized electron injection from a ferromagnetic metal into a normal metal by Johnson and Silsbee (1985), and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origins can be traced back further to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow, and initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990.

Conventional electronic devices rely on the transport of electrical charge carriers - electrons - in a semiconductor such as silicon. Now, however, physicists are trying to exploit the 'spin' of the electron rather than its charge to create a remarkable new generation of 'spintronic' devices which will be smaller, more versatile and more robust than those currently making up silicon chips and circuit elements. The potential market is worth hundreds of billions of dollars a year.

All spintronic devices act according to a simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. Spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared to the tens of femtoseconds during which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications, and, potentially, for quantum computing where electron spin would represent a bit (called a qubit) of information.

Magnetoelectronics, Spin Electronics, and Spintronics are different names for the same thing: the use of electrons' spins (not just their electrical charge) in information circuits.

Theory

Electrons are spin-1/2 fermions and therefore constitute a two-state system with spin "up" and spin "down". To make a spintronic device, the primary requirements are to have a system that can generate a current of spin polarized electrons comprising more of one spin species—up or down—than the other (called a spin injector), and a separate system that is sensitive to the spin polarization of the electrons (spin detector). Manipulation of the electron spin during transport between injector and detector (especially in semiconductors) via spin precession can be accomplished using real external magnetic fields or effective fields caused by spin-orbit interaction.

Spin polarization in non-magnetic materials can be achieved either through the Zeeman effect in large magnetic fields and low temperatures, or by non-equilibrium methods. In the latter case, the non-equilibrium polarization will decay over a timescale called the "spin lifetime". Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond) but in semiconductors the lifetimes can be very long (microseconds at low temperatures), especially when the electrons are isolated in local trapping potentials (for instance, at impurities, where lifetimes can be milliseconds).

Metals-based spintronic devices

The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common application of this effect is a giant magnetoresistance (GMR) device. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.
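The resistance contrast can be illustrated with the textbook two-current (Mott) model, in which spin-up and spin-down electrons form two parallel conduction channels. The short Python sketch below uses arbitrary illustrative resistance values, not measured data:

# Minimal sketch of the two-current (Mott) model for a GMR spin valve.
# r_low / r_high are illustrative resistances seen by electrons whose spin is
# aligned / anti-aligned with a layer's magnetization (arbitrary units).

def series(r1, r2):
    return r1 + r2

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

r_low, r_high = 1.0, 4.0

# Parallel magnetizations: one spin channel sees low-low, the other high-high.
r_p = parallel(series(r_low, r_low), series(r_high, r_high))

# Antiparallel magnetizations: each spin channel sees one low and one high layer.
r_ap = parallel(series(r_low, r_high), series(r_high, r_low))

gmr_ratio = (r_ap - r_p) / r_p
print(f"R_parallel = {r_p:.2f}, R_antiparallel = {r_ap:.2f}, GMR = {gmr_ratio:.0%}")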

Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.

Other metals-based spintronics devices:

* Tunnel Magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers.
* Spin Torque Transfer, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device.

Applications

The storage density of hard drives is rapidly increasing along an exponential growth curve, in part because spintronics-enabled devices like GMR and TMR sensors have increased the sensitivity of the read head which measures the magnetic state of small magnetic domains (bits) on the spinning platter. The doubling period for the areal density of information storage is twelve months, much shorter than Moore's Law, which observes that the number of transistors that can cheaply be incorporated in an integrated circuit doubles every two years.
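To put the two quoted doubling periods side by side, the growth rates compound as follows (simple arithmetic, not data from the text):

# Compound-growth comparison of the quoted doubling periods: areal density
# doubling every 12 months vs. transistor count doubling every 24 months.
years = 10
density_growth = 2 ** (years * 12 / 12)
transistor_growth = 2 ** (years * 12 / 24)
print(f"After {years} years: areal density x{density_growth:.0f}, "
      f"transistor count x{transistor_growth:.0f}")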

MRAM, or magnetic random access memory, uses a grid of magnetic storage elements called magnetic tunnel junctions (MTJs). MRAM is nonvolatile (unlike charge-based DRAM in today's computers), so information is stored even when power is turned off, potentially providing instant-on computing. Motorola developed a first-generation 256 Kb MRAM based on a single magnetic tunnel junction and a single transistor, with a read/write cycle of under 50 nanoseconds (Everspin, Motorola's spin-off, has since developed a 4 Mbit version). There are two second-generation MRAM techniques currently in development: Thermal Assisted Switching (TAS), which is being developed by Crocus Technology, and Spin Torque Transfer (STT), on which Crocus, Hynix, IBM, and several other companies are working.

Another design in development, called Racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic metal wire.

Semiconductor-based spintronic devices

In early efforts, spin-polarized electrons were generated via optical orientation, using circularly polarized photons at the bandgap energy incident on semiconductors with appreciable spin-orbit interaction (like GaAs and ZnSe). Although electrical spin injection can be achieved in metallic systems by simply passing a current through a ferromagnet, the large impedance mismatch between ferromagnetic metals and semiconductors prevented efficient injection across metal-semiconductor interfaces. Solutions to this problem are to use ferromagnetic semiconductor sources (like manganese-doped gallium arsenide, GaMnAs), to increase the interface resistance with a tunnel barrier, or to use hot-electron injection.

Spin detection in semiconductors is another challenge, which has been met with the following techniques:

* Faraday/Kerr rotation of transmitted/reflected photons
* Circular polarization analysis of electroluminescence
* Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals)
* Ballistic spin filtering

The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon, the most important semiconductor for electronics.

Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is the demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation. This is called the Hanle effect.
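In the simplest steady-state picture the measured spin signal falls off with transverse field as a Lorentzian whose width is set by the spin lifetime; the g-factor and 1 ns lifetime below are illustrative assumptions, not values from the text:

# Sketch of the simplest (steady-state) Hanle curve: injected spin polarization
# precessing about a transverse field B decays as a Lorentzian in B.

import numpy as np

MU_B = 9.274e-24      # Bohr magneton (J/T)
HBAR = 1.055e-34      # reduced Planck constant (J*s)

def hanle(b_tesla, s0=1.0, g=2.0, tau_s=1e-9):
    """Steady-state spin signal vs. transverse magnetic field."""
    omega_larmor = g * MU_B * b_tesla / HBAR       # precession frequency (rad/s)
    return s0 / (1.0 + (omega_larmor * tau_s) ** 2)

for b in (0.0, 0.005, 0.01, 0.05):                 # fields in tesla
    print(f"B = {b*1e3:5.1f} mT  ->  S/S0 = {hanle(b):.3f}")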

Applications

Advantages of semiconductor-based spintronics applications are potentially lower power use and a smaller footprint than electrical devices used for information processing. Also, applications such as semiconductor lasers using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope.

Disease Detection Using Bio-robotics

This seminar deals with the design and the development of a bio-robotic system based on fuzzy logic to diagnose and monitor the neuro-psychophysical conditions of an individual. The system, called DDX, is portable without losing efficiency and accuracy in diagnosis and also provides the ability to transfer diagnosis through a remote communication interface, in order to monitor the daily health of a patient. DDX is a portable system, involving multiple parameters such as reaction time, speed, strength and tremor which are processed by means of fuzzy logic. The resulting output can be visualized through a display or transmitted by a communication interface.

DIGITAL DATA BUS (MIL-STD-1553B)

The MIL-STD-1553B bus is a differential serial bus used in military and space equipment. It comprises multiple redundant bus connections and communicates at 1 Mbit per second.

The bus has a single active bus controller (BC) and up to 31 remote terminals (RTs). The BC manages all data transfers on the bus using a command and status protocol. The bus controller initiates every transfer by sending a command word, and data if required. The selected RT responds with a status word, and data if required.

The 1553B command word contains a five-bit RT address, a transmit/receive bit, a five-bit sub-address and a five-bit word count. This allows for 32 RTs on the bus. However, only 31 RTs may be connected, since RT address 31 is used to indicate a broadcast transfer, i.e. all RTs should accept the following data. Each RT has 30 sub-addresses reserved for data transfers. The other two sub-addresses (0 and 31) are reserved for mode codes used for bus control functions. Data transfers contain up to 32 16-bit data words. Mode code command words are used for bus control functions such as synchronization.
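One way to see the field layout is to pack and unpack the 16 information bits of a command word in software. The sketch below covers only the data-transfer case and ignores the on-wire sync pattern, the parity bit and mode-code commands:

# Sketch of packing/unpacking the 16 information bits of a 1553B command word
# (5-bit RT address, transmit/receive bit, 5-bit sub-address, 5-bit word count).

def pack_command(rt_addr, transmit, sub_addr, word_count):
    assert 0 <= rt_addr <= 31 and 0 <= sub_addr <= 31
    assert 1 <= word_count <= 32
    wc_field = word_count % 32          # a count of 32 is encoded as 0b00000
    return (rt_addr << 11) | (int(transmit) << 10) | (sub_addr << 5) | wc_field

def unpack_command(word):
    return {
        "rt_addr":    (word >> 11) & 0x1F,
        "transmit":   bool((word >> 10) & 0x1),
        "sub_addr":   (word >> 5) & 0x1F,
        "word_count": ((word & 0x1F) or 32),   # field value 0 means 32 words
    }

cmd = pack_command(rt_addr=5, transmit=False, sub_addr=3, word_count=4)
print(hex(cmd), unpack_command(cmd))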

Block Oriented Instrument Software Design



A new method for writing instrumentation software is proposed. It is based on an abstract description of the instrument operation and combines the advantages of a reconfigurable instrument with interchangeability of the instrumentation modules. The proposed test case is the implementation of a microwave network analyzer for nonlinear systems based on VISA and plug-and-play instrument drivers.

Modern instruments and instrumentation setups are likely to be built up around generic hardware and custom software. The disadvantage is that the amount of software required to operate such a device is very high. An acceptable development time with a reasonably low number of software bugs can therefore only be obtained if the software is maximally reused from earlier developments. Most attempts use a two-step approach. In the first step, the transport interface between computer and instrument is abstracted. This first step has always been quite successful. The first transport abstraction stems from the IEEE-488 interface. Afterward, SICL and VISA were developed to support multiple transport buses (IEEE-488, RS-232 and later Ethernet and IEEE-1394). These methods use a file as the conceptual model for an instrument: the commands sent to the file are independent of the transmission medium, and medium dependency is localized only in the initialization call. Most interfaces that can be used for instrumentation control are, hence, supported by these frameworks.
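Today this file-like, medium-independent model is what a VISA layer exposes. A minimal sketch using the PyVISA library (the resource addresses are placeholders, and the instrument is assumed to answer the standard *IDN? query) might look like this, with the medium dependency confined to the resource string:

# Minimal sketch of VISA-style transport abstraction using PyVISA.
# Only the resource string encodes the transport medium; the rest of the code
# is identical for GPIB, serial or Ethernet. The addresses are examples.

import pyvisa

rm = pyvisa.ResourceManager()

# Medium-dependent part: pick one resource string for your setup, e.g.
# "GPIB0::12::INSTR", "ASRL1::INSTR" or "TCPIP0::192.168.1.50::INSTR".
inst = rm.open_resource("GPIB0::12::INSTR")

# Medium-independent part: commands are just strings written to a "file".
print(inst.query("*IDN?"))      # standard IEEE-488.2 identification query
inst.close()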

In the second step, the instrumentation commands are abstracted to enable interchangeability of similar pieces of instrumentation. Here the situation has always been much less obvious, since only end-users have something to gain from instrument interchangeability. An abstract model for programming instrumentation setups is proposed which is easy to use and general enough for complex setups.

BIT FOR INTELLIGENT SYSTEM DESIGN

The increasing complexity of microelectronic circuitry, as witnessed by multi-chip modules and systems-on-a-chip, and the rapid growth of manufacturing process automation require that more effective and efficient testing and fault diagnosis techniques be developed to improve system reliability, reduce system downtime, and enhance productivity. As a design philosophy, built-in test (BIT) is receiving increasing attention from the research community. This paper presents an overview of BIT research in several areas of industry, including semiconductor manufacturing.

ACTUATOR (AS-i)

In recent years, automation technology has migrated to new methods of transferring information. Increasingly, field-level devices such as sensors and actuators have internal intelligence capabilities and higher communication demands. The AS-i bus system provides the solution for a digital serial interface with a single unshielded two-wire cable which replaces traditional cable harness parallel wiring between masters and slaves.

AS-i technology is compatible with any fieldbus or device network. Low-cost gateways exist to use AS-i with CAN, PROFIBUS, Interbus, FIP, LON, RS-485 and RS-232.

AS-i uses insulation-piercing (penetration) technology to connect devices to the cable, and follows the ISO/OSI model to implement the master/slave communication.

64-Point FFT Chip

A fixed-point, 16-bit word-length, 64-point FFT/IFFT processor was developed primarily for application in an OFDM-based IEEE 802.11a wireless LAN baseband processor. The 64-point FFT is realized by decomposing it into a two-dimensional structure of 8-point FFTs. This approach reduces the number of required complex multiplications compared to the conventional radix-2 64-point FFT algorithm. The complex multiplications are realized using shift-and-add operations, so the processor does not use a two-input digital multiplier. It also does not need any RAM or ROM for internal storage of coefficients. The core area of this chip is 6.8 mm². The average dynamic power consumption is 41 mW at a 20 MHz operating frequency and a 1.8 V supply voltage. The processor completes one parallel-to-parallel 64-point FFT computation in 23 cycles, so it can be used for any application that requires fast operation as well as low power consumption.
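The 8 x 8 decomposition mentioned above is the standard Cooley-Tukey split of a 64-point DFT into two passes of 8-point FFTs with twiddle factors in between. The NumPy sketch below checks the idea against a direct FFT; it works in floating point, whereas the chip uses 16-bit fixed point and shift-and-add multipliers:

# Sketch of the 8 x 8 decomposition of a 64-point FFT: reshape the input into
# an 8 x 8 array, take 8-point FFTs along one axis, apply twiddle factors, then
# take 8-point FFTs along the other axis.

import numpy as np

def fft64_8x8(x):
    x = np.asarray(x, dtype=complex).reshape(8, 8)   # index n = 8*n1 + n2
    stage1 = np.fft.fft(x, axis=0)                   # 8-point FFTs over n1
    k1 = np.arange(8).reshape(8, 1)
    n2 = np.arange(8).reshape(1, 8)
    twiddle = np.exp(-2j * np.pi * k1 * n2 / 64)     # inter-stage twiddle factors
    stage2 = np.fft.fft(stage1 * twiddle, axis=1)    # 8-point FFTs over n2
    return stage2.T.reshape(64)                      # output index k = 8*k2 + k1

x = np.random.randn(64) + 1j * np.random.randn(64)
print(np.allclose(fft64_8x8(x), np.fft.fft(x)))      # True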

Microelectronic pill

A “microelectronic pill” is basically a multichannel sensor used for remote biomedical measurements using microtechnology. It has been developed for the internal study and detection of diseases and abnormalities in the gastrointestinal (GI) tract, where restricted access prevents the use of a traditional endoscope. The measurement parameters for detection include real-time remote recording of temperature, pH, conductivity and dissolved oxygen in the GI tract.

This paper deals with the design of the microelectronic pill, which mainly consists of an outer biocompatible capsule encasing four-channel microsensors, a control chip, a discrete-component radio transmitter and two silver oxide cells.

Electronic Nose (E-NOSE)

An electronic nose is a device intended to detect odors or flavors.

An electronic nose (e-nose) is a device that identifies the specific components of an odor and analyzes its chemical makeup to identify it. An electronic nose consists of a mechanism for chemical detection, such as an array of electronic sensors, and a mechanism for pattern recognition, such as a neural network. Electronic noses have been around for several years but have typically been large and expensive. Current research is focused on making the devices smaller, less expensive, and more sensitive. The smallest version, a nose-on-a-chip, is a single computer chip containing both the sensors and the processing components.

An odor is composed of molecules, each of which has a specific size and shape. Each of these molecules has a correspondingly sized and shaped receptor in the human nose. When a specific receptor receives a molecule, it sends a signal to the brain and the brain identifies the smell associated with that particular molecule. Electronic noses based on the biological model work in a similar manner, albeit substituting sensors for the receptors, and transmitting the signal to a program for processing, rather than to the brain. Electronic noses are one example of a growing research area called biomimetics, or biomimicry, which involves human-made applications patterned on natural phenomena.

Electronic noses were originally used for quality control applications in the food, beverage and cosmetics industries. Current applications include detection of odors specific to diseases for medical diagnosis, and detection of pollutants and gas leaks for environmental protection.

Over the last decade, “electronic sensing” or “e-sensing” technologies have undergone important developments from a technical and commercial point of view. The expression “electronic sensing” refers to the capability of reproducing human senses using sensor arrays and pattern recognition systems. Since 1982, research has been conducted to develop technologies, commonly referred to as electronic noses, that could detect and recognize odors and flavors. The stages of the recognition process are similar to human olfaction and are performed for identification, comparison, quantification and other applications. Hedonic evaluation, however, remains specific to the human nose, since it depends on subjective opinions. These devices have undergone much development and are now used to fulfill industrial needs.

Other techniques to analyze odors

In industry, aroma assessment is usually performed by human sensory analysis, by chemosensors, or by gas chromatography (GC, GC/MS). The latter technique gives information about volatile organic compounds, but the correlation between analytical results and actual odor perception is not direct, due to potential interactions between several odorous components.

Electronic Nose working principle

The electronic nose was developed in order to mimic human olfaction, which functions as a non-separative mechanism: i.e. an odor/flavor is perceived as a global fingerprint.

Electronic noses include three major parts: a sample delivery system, a detection system, and a computing system.

The sample delivery system enables the generation of the headspace (volatile compounds) of a sample, which is the fraction analyzed. The system then injects this headspace into the detection system of the electronic nose. The sample delivery system is essential to guarantee constant operating conditions.

The detection system, which consists of a sensor set, is the “reactive” part of the instrument. When in contact with volatile compounds, the sensors react, which means they experience a change of electrical properties. Each sensor is sensitive to all volatile molecules, but each in its specific way. Most electronic noses use sensor arrays that react to volatile compounds on contact: the adsorption of volatile compounds on the sensor surface causes a physical change of the sensor. A specific response is recorded by the electronic interface, transforming the signal into a digital value. Recorded data are then computed based on statistical models.

The more commonly used sensors include metal oxide semiconductors (MOS), conducting polymers (CP), quartz crystal microbalance, surface acoustic wave (SAW), and field effect transistors (MOSFET).

In recent years, other types of electronic noses have been developed that utilize mass spectrometry or ultra fast gas chromatography as a detection system.

The computing system works to combine the responses of all of the sensors, which represents the input for the data treatment. This part of the instrument performs global fingerprint analysis and provides results and representations that can be easily interpreted. Moreover, the electronic nose results can be correlated to those obtained from other techniques (sensory panel, GC, GC/MS).

How to perform an analysis

In an analysis, the instrument compares the measured global fingerprint to those contained in its database; it can thus perform qualitative or quantitative analysis.
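As a toy illustration of this database comparison (not any vendor's algorithm), each measurement can be treated as a vector of sensor responses and matched to the nearest stored reference fingerprint; the sensor count and numbers below are invented:

# Toy sketch of "global fingerprint" pattern recognition for an e-nose:
# an unknown sample is matched to the nearest reference fingerprint stored
# in a small database. All values are invented for illustration.

import numpy as np

# Reference database: class name -> mean response of a 6-sensor array.
references = {
    "fresh":   np.array([0.2, 0.1, 0.4, 0.3, 0.2, 0.1]),
    "spoiled": np.array([0.8, 0.7, 0.5, 0.9, 0.6, 0.7]),
}

def classify(sample):
    """Return the reference class whose fingerprint is closest (Euclidean)."""
    return min(references, key=lambda name: np.linalg.norm(sample - references[name]))

unknown = np.array([0.75, 0.65, 0.55, 0.85, 0.55, 0.65])
print(classify(unknown))        # -> "spoiled"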

Range of applications

Electronic nose instruments are used by Research & Development laboratories, Quality Control laboratories and process & production departments for various purposes:

in R&D laboratories for:

* Formulation or reformulation of products
* Benchmarking with competitive products
* Shelf life and stability studies
* Selection of raw materials
* Packaging interaction effects
* Simplification of consumer preference test

in Quality Control laboratories for at-line quality control, such as:

* Conformity of raw materials, intermediate and final products
* Batch to batch consistency
* Detection of contamination, spoilage, adulteration
* Origin or vendor selection
* Monitoring of storage conditions.

In process and production departments for:

* Managing raw material variability
* Comparison with a reference product
* Measurement and comparison of the effects of manufacturing process on products
* Following-up cleaning in place process efficiency
* Scale-up monitoring
* Cleaning in place monitoring.

Various application notes describe analyses in areas such as flavor and fragrance, food and beverage, packaging, pharmaceuticals, cosmetics and perfumes, and chemicals. More recently, electronic noses can also address public concerns about olfactory nuisance, using networks of on-field devices for monitoring.

Adaptive cruise control System
Adaptive cruise control is an automotive cruise control system that automatically slows down the car if it is moving too close to the vehicle in front of it. A radar or laser unit located behind the grille determines the speed and distance of the vehicle ahead. When the distance is computed to be safe again, the system accelerates the car back to its last speed setting. It is also called "active cruise control" and "intelligent cruise control".

Autonomous cruise control is an optional cruise control system appearing on some more upscale vehicles. The system goes under many different trade names according to the manufacturer. These systems use either a radar or laser setup allowing the vehicle to slow when approaching another vehicle and accelerate again to the preset speed when traffic allows. ACC technology is widely regarded as a key component of future generations of smart cars.
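The decision logic can be caricatured in a few lines: hold the set speed unless the measured range implies less than a chosen time gap to the vehicle ahead. The gains and the 2-second gap below are illustrative assumptions, not any manufacturer's calibration:

# Toy sketch of adaptive-cruise-control decision logic.

def acc_speed_command(set_speed, own_speed, range_m, closing_speed, time_gap=2.0):
    """Return a new target speed (m/s) for one control step."""
    if range_m is None:                              # no vehicle detected ahead
        return set_speed
    desired_gap = max(own_speed * time_gap, 10.0)    # never plan closer than 10 m
    if range_m < desired_gap:
        # Too close: slow down in proportion to the gap error and closing rate.
        brake = 0.1 * (desired_gap - range_m) + 0.5 * max(closing_speed, 0.0)
        return max(0.0, own_speed - brake)
    return min(set_speed, own_speed + 0.5)           # gap is safe: resume set speed

# At 30 m/s with a car 40 m ahead and closing at 5 m/s, the target speed drops:
print(acc_speed_command(set_speed=33.0, own_speed=30.0, range_m=40.0, closing_speed=5.0))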

Types

Laser-based systems are significantly lower in cost than radar-based systems; however, laser-based ACC systems do not detect and track vehicles well in adverse weather conditions, nor do they track extremely dirty (non-reflective) vehicles very well. Laser-based sensors must be exposed; the sensor (a fairly large black box) is typically found in the lower grille, offset to one side of the vehicle.

Radar-based sensors can be hidden behind plastic fascias; however, the fascias may look different from a vehicle without the feature. For example, Mercedes packages the radar behind the upper grille in the center; however, the Mercedes grille on such applications contains a solid plastic panel in front of the radar with painted slats to simulate the slats on the rest of the grille.

Radar-based systems are available on many luxury cars as an option for approx. 1000-3000 USD/euro. Laser-based systems are available on some near luxury and luxury cars as an option for approx. 400-600 USD/euro.

Cooperating systems

Radar-based ACC often features a pre-crash system, which warns the driver and/or provides brake support if there is a high risk of a collision. In certain cars it is also combined with a lane-keeping system, which provides power-steering assist to reduce the steering input burden in corners when the cruise control system is activated.

Examples of vehicles with adaptive cruise control
  • 2005 Acura RL
  • Audi A4, A5, A6, A8, Q7
  • BMW 7 Series, 5 series, 6 series, 3 series (Active Cruise Control)
  • 2004 Cadillac DTS, STS, XLR
  • 2007 Chrysler 300C
  • 2006 Ford Mondeo, Taurus, S-Max, Galaxy
  • 2003 Honda Inspire, Accord, Legend
  • Hyundai Genesis (Smart Cruise Control, delayed)
  • Infiniti M, Q45, QX56, G35, FX35/45/50 and G37
  • 1999 Jaguar XK-R, S-Type, XJ, XF
  • 2000 Lexus LS430/460 (laser and radar), RX (laser and radar), GS, IS, ES 350, and LX 570
  • Lincoln MKS, MKT
  • 1998 Nissan Cima, Nissan Primera T-Spec Models (Intelligent Cruise Control)
  • 1998 Mercedes-Benz S-Class, E-Class, CLS-Class, SL-Class, CL-Class, M-Class, GL-Class, CLK-Class (Distronic, removed in 2009 from certain US models)
  • Range Rover Sport
  • Renault Vel Satis
  • Subaru Legacy & Outback Japan-spec called SI-Cruise
  • 1997 Toyota Celsior, Sienna (XLE Limited Edition), Avalon, Sequoia (Platinum Edition), Prius, Avensis
  • Volkswagen Passat, Phaeton, Touareg, 2009 Golf
  • Volvo S80, V70, XC70, XC60

Radio Frequency Identification (RFID) is an automatic identification method, relying on storing and remotely retrieving data using devices called RFID tags or transponders. An RFID tag is a small object that can be attached to or incorporated into a product, animal, or person. RFID tags contain silicon chips and antennas to enable them to receive and respond to radio-frequency queries from an RFID transceiver. Passive tags require no internal power source, whereas active tags require a power source.

RFID tags can be passive, semi-passive, or active.

Active

Unlike passive and semi-passive RFID tags, active RFID tags (also known as beacons) have their own internal power source which is used to power any ICs and generate the outgoing signal. They are often called beacons because they broadcast their own signal. They may have longer range and larger memories than passive tags, as well as the ability to store additional information sent by the transceiver. To economize power consumption, many beacon concepts operate at fixed intervals. At present, the smallest active tags are about the size of a coin. Many active tags have practical ranges of tens of meters, and a battery life of up to 10 years.


Semi-passive

Semi-passive RFID tags are very similar to passive tags except for the addition of a small battery. This battery allows the tag IC to be constantly powered, which removes the need for the aerial to be designed to collect power from the incoming signal. Aerials can therefore be optimized for the backscattering signal. Semi-passive RFID tags respond faster, and are therefore read more reliably, than passive tags.

Passive

Passive RFID tags have no internal power supply. The minute electrical current induced in the antenna by the incoming radio frequency signal provides just enough power for the CMOS integrated circuit (IC) in the tag to power up and transmit a response. Most passive tags signal by backscattering the carrier signal from the reader. This means that the aerial (antenna) has to be designed both to collect power from the incoming signal and to transmit the outbound backscatter signal. The response of a passive RFID tag is not necessarily just an ID number (GUID): the tag chip can contain nonvolatile EEPROM (electrically erasable programmable read-only memory) for storing data.

Lack of an onboard power supply means that the device can be quite small: commercially available products exist that can be embedded under the skin. As of 2006, the smallest such devices measured 0.15 mm × 0.15 mm and were thinner than a sheet of paper (7.5 micrometers).[4] The addition of the antenna creates a tag that varies from the size of a postage stamp to the size of a postcard. Passive tags have practical read distances ranging from about 2 mm (ISO 14443) up to a few meters (EPC and ISO 18000-6), depending on the chosen radio frequency and antenna design/size. Due to their simplicity in design they are also suitable for manufacture with a printing process for the antennae.

Passive RFID tags do not require batteries, and can be much smaller and have an unlimited life span. Non-silicon tags made from polymer semiconductors are currently being developed by several companies globally. Simple laboratory printed polymer tags operating at 13.56 MHz were demonstrated in 2005 by both PolyIC (Germany) and Philips (The Netherlands). If successfully commercialized, polymer tags will be roll printable, like a magazine, and much less expensive than silicon-based tags.

Because passive tags are cheaper to manufacture and have no battery, the majority of RFID tags in existence are of the passive variety. As of 2005, these tags cost an average of Euro 0.20 ($0.24 USD) at high volumes.

INFORMATION SEARCH, ANALYSIS AND PRESENTATION (ISAP)

One of the primary focuses of technical education is to produce employable graduates and technicians. Though sound knowledge of the subjects is imparted, ISAP plays a vital role in employability. Graduates and technicians without good communication skills would not be able to face a placement committee. ISAP is necessary not only for placement but also for job success.

For candidates who are entering their careers for the first time, training in the following areas is highly recommended:

1. Communication Skill.
2. Presentation Skill.
3. Writing Skill.
4. Group Discussion.
5. Case Study.
6. Preparing job hunting material.
7. Resume preparation.
8. Facing the interview.
9. Career exploration.
10. Use of communication technologies.

Barriers to Communication

There are a number of barriers to communication which produce noise and prevent the achievement of the desired result. Some of these are:

1. Badly encoded message.
2. Disturbance in the transmission channel.
3. Poor retention.
4. Inattention by the receiver.
5. Unclarified assumptions.
6. Mistrust between sender and receiver.
7. Premature evaluation of the message.
8. Differences in perception.
9. Misinterpretation of the message.
10. Selection of the wrong variety of language.

Classification of Communication

1. Non-verbal communication
2. Verbal communication

Non-Verbal communication:

Non-verbal communication refers to all external stimuli other than spoken or written words, including body motion, characteristics of appearance, characteristics of voice, and the use of space and distancing.

All these non-verbal clues taken together are also known as body language.

Verbal Communication:

Verbal Communication includes oral communication and written communication.
Body language plays a significant role in oral communication. In day-to-day oral communication we keep interpreting non-verbal cues without being aware that we are doing so.

Definition

A research paper is defined as a documented prose work incorporating the results of an organised analysis of a subject. In its style, structure and approach it closely resembles a formal report, but in certain respects a research paper differs from a formal report, which is always prompted by a specific need of the organisation.

Research paper and Articles:

The characteristic features of a research paper are:
1. A research paper may be written on any subject (social, cultural, scientific, technical, etc.), but the treatment is scholarly and supported by evidence.
2. There is a relatively high concentration of certain writing techniques, such as definition, classification, interpretation and description of a process.
3. Its formal elements are generally those of a report, and the writing is characterised by the use of graphic aids and scientific, technical or specialised vocabulary.
4. The emphasis is on presenting information accurately and concisely, and the writer maintains an attitude of complete impartiality and objectivity.

Types

There are three types of research paper:
1. Library research paper (also called a term paper or theme paper).
2. Scientific paper.
3. Technical paper.

Library Research Paper:

There are three preparatory steps in writing such a paper, namely:
1. Selecting a subject.
2. Locating sources of information.
3. Note making.

Usually such a paper is written under the guidance of an instructor and the student is advised how to go about it.

The terms scientific paper and technical paper are often used interchangeably to refer to a research paper. When a distinction is made, it is on the basis of the contents.

LIGHT EMITTING POLYMER

This seminar is about polymers that can emit light when a voltage is applied to them. The structure comprises a thin film of semiconducting polymer sandwiched between two electrodes (cathode and anode). When electrons and holes are injected from the electrodes, these charge carriers recombine, which leads to the emission of light. The band gap, i.e. the energy difference between the valence band and the conduction band, determines the wavelength (colour) of the emitted light.
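The band-gap/colour relation is simply lambda = h*c / E_g; a quick check with typical, purely illustrative gap energies:

# Wavelength emitted for a given band gap: lambda = h*c / E_g.
H = 6.626e-34    # Planck constant (J*s)
C = 2.998e8      # speed of light (m/s)
EV = 1.602e-19   # joules per electron-volt

def wavelength_nm(band_gap_ev):
    return H * C / (band_gap_ev * EV) * 1e9

for gap in (1.9, 2.3, 2.8):          # roughly red, green and blue emitters
    print(f"E_g = {gap} eV  ->  {wavelength_nm(gap):.0f} nm")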
Light-emitting polymer displays are usually made by an ink-jet printing process, in which red, green and blue polymer solutions are jetted into well-defined areas on the substrate. This is possible because PLEDs are soluble in common organic solvents like toluene and xylene. Film thickness uniformity is obtained either by multi-passing (slow) or by heads with drive-per-nozzle technology. The pixels are controlled using an active or passive matrix. The advantages include low cost, small size, no viewing-angle restrictions, low power requirements, biodegradability, etc. They are poised to replace the LCDs used in laptops and the CRTs used in desktop computers today. Future applications include flexible displays which can be folded, wearable displays with interactive features, camouflage, etc.

The origins of polymer OLED technology go back to the discovery of conducting polymers in 1977, which earned the co-discoverers (Alan J. Heeger, Alan G. MacDiarmid and Hideki Shirakawa) the 2000 Nobel Prize in Chemistry. Following this discovery, researchers at Cambridge University, UK, discovered in 1990 that conducting polymers also exhibit electroluminescence, and the light-emitting polymer (LEP) was born.

IRIS SCAN

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most distinctive phenotypic feature visible in a person's face is the detailed texture of each eye's iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte “iris code”. Statistical decision theory generates identification decisions from exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131,000 when a decision criterion is adopted that would equalize the False Accept and False Reject error rates.
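The exclusive-OR comparison amounts to computing the fraction of disagreeing bits (the Hamming distance) between two 512-byte codes. The sketch below uses random bytes as stand-ins for real iris codes and omits the masking of bits hidden by eyelids or reflections:

# Fraction of differing bits (Hamming distance) between two 512-byte codes.

import os

def hamming_distance(code_a: bytes, code_b: bytes) -> float:
    """Fraction of differing bits between two equal-length codes."""
    assert len(code_a) == len(code_b)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (8 * len(code_a))

code1 = os.urandom(512)   # random stand-ins for two 512-byte iris codes
code2 = os.urandom(512)
print(f"different eyes: HD ~ {hamming_distance(code1, code2):.3f}")   # near 0.5
print(f"same eye:       HD = {hamming_distance(code1, code1):.3f}")   # 0.0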

Reliable automatic recognition of persons has long been an attractive goal. As in all pattern recognition problems, the key issue is the relation between interclass and intra-class variability: objects can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. Iris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. The iris has the great mathematical advantage that its pattern variability among different persons is enormous. In addition, as an internal (yet externally visible) organ of the eye, the iris is well protected from the environment and stable over time. As a planar object its image is relatively insensitive to angle of illumination, and changes in viewing angle cause only affine transformations; even the non-affine pattern distortion caused by pupillary dilation is readily reversible. Finally, the ease of localizing eyes in faces, and the distinctive annular shape of the iris, facilitates reliable and precise isolation of this feature and the creation of a size-invariant representation.
Algorithms developed by Dr. John Daugman at Cambridge are today the basis for all iris recognition systems worldwide.

Code-division-duplexing

Reducing interference in a cellular system is the most effective approach to increasing radio capacity and transmission data rate in the wireless environment. Therefore, reducing interference is a difficult and important challenge in wireless communications. In every two-way communication system it is necessary to use separate channels to transmit information in each direction. This is called duplexing. Currently there exist only two duplexing technologies in wireless communications, Frequency division duplexing (FDD) and time division duplexing (TDD). FDD has been the primary technology used in the first three generations of mobile wireless because of its ability to isolate interference. TDD is seemingly a more spectral efficient technology but has found limited use because of interference and coverage problems.

Code-division duplexing (CDD) is an innovative solution that can eliminate all kinds of interference. CDMA is the best multiple access scheme when compared to all others for combating interference. However, the codes in CDMA can be more than one type of code. A set of smart codes can make a high-capacity CDMA system very effective without adding other technologies. The smart code plus TDD is called CDD. This paper will elaborate on a set of smart codes that will make an efficient CDD system a reality. The CDMA system based on this is known as the LAS-CDMA, where LAS is a set of smart codes. LAS-CDMA is a new coding technology that will increase the capacity and spectral efficiency of mobile networks. The advanced technology uses a set of smart codes to restrict interference, a property that adversely affects the efficiency of CDMA networks.

To utilize spectrum efficiently, two transmission techniques need to be considered: one is a multiple access scheme and the other a duplexing system. There are three multiple access schemes namely TDMA, FDMA and CDMA. The industry has already established the best multiple access scheme, code-division multiple access (CDMA), for 3G systems. The next step is to select the best duplexing system. Duplexing systems are used for two-way communications. Presently, there are only two duplexing systems used: frequency-division duplexing (FDD), and time-division duplexing (TDD). The former uses different frequencies to handle incoming and outgoing signals. The latter uses a single frequency but different time slots to handle incoming and outgoing signals.

In current cellular duplexing systems, FDD has been the appropriate choice, not TDD. Currently, all cellular systems use frequency-division duplexing in an attempt to eliminate interference from adjacent cells. The use of many technologies has limited the effects of interference, but certain types of interference still remain. Time-division duplexing has not been used for mobile cellular systems because it is even more susceptible to different forms of interference. TDD can only be used for small confined-area systems.

Code-division duplexing is an innovative solution that can eliminate all kinds of interference. Eliminating all types of interference makes CDD the most spectrum efficient duplexing system.

One of the key criteria in evaluating a communication system is its spectral efficiency, or the system capacity for a given system bandwidth, or sometimes the total data rate supported by the system. For a given bandwidth, the system capacity of narrowband radio systems is dimension limited, while the system capacity of a traditional CDMA system is interference limited. Traditional CDMA systems are all self-interference systems. Three types of interference are usually considered: ISI, or InterSymbol Interference, which is created by the multi-path replicas of the useful signal itself; MAI, or Multiple Access Interference, which is the interference created by the signals (and their multi-path replicas) from other users onto the useful signal; and ACI, or Adjacent Cell Interference, which is all the interfering signals from the adjacent cells onto the useful signal.

Traditional synchronous CDMA systems employ almost exclusively Walsh-Hadamard orthogonal codes, jointly with PN sequences, Gold codes, Kasami codes, etc. In these systems, due to the difficulty of timing synchronization and the large cross-correlation values around the origin, there exists a “near-far” effect, so that in a typical system fast power control has to be employed in order to keep a uniform received signal level at the base station. On the other hand, in the forward channel all the signals' power must be kept at a uniform level. Since the transmitted power of one user interferes with the others, and may even interfere with itself, if one of the users in the system increases its power unilaterally, all other users' power must be increased simultaneously; otherwise the controlled power regime of the system will be destroyed and the capacity will be drastically decreased. This is because any radio channel, especially a mobile channel, is a random, time-varying, time-dispersive channel due to the multi-path effect, so that the multi-path components of the received signal do not arrive at the receiver simultaneously.
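The orthogonality, and its fragility under delay, is easy to see numerically: rows of a Sylvester-construction Hadamard matrix have zero cross-correlation when chip-aligned, but a one-chip relative delay can produce a cross-correlation as large as the desired signal. A small NumPy sketch:

# Rows of a Sylvester-construction Hadamard matrix are the Walsh-Hadamard codes:
# orthogonal at zero offset (what synchronous CDMA relies on), but a one-chip
# relative delay, as caused by multipath, can create strong cross-correlation.

import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

H = hadamard(8)
c_a, c_b = H[3], H[2]
print(int(c_a @ c_b))               # 0: orthogonal when chip-aligned
print(int(c_a @ np.roll(c_b, 1)))   # -8: one-chip delay gives full-strength MAI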

Sensitive skin

Sensitive skin is a large-area, flexible array of sensors with data processing capabilities, which can be used to cover the entire surface of a machine or even a part of a human body. Depending on the skin electronics, it endows its carrier with an ability to sense its surroundings via the skin’s proximity, touch, pressure, temperature, chemical/biological, or other sensors. Sensitive skin devices will make possible the use of unsupervised machines operating in unstructured, unpredictable surroundings among people, among many obstacles, outdoors on a crowded street, undersea, or on faraway planets. Sensitive skin will make machines “cautious” and thus friendly to their environment. This will allow us to build machine helpers for the disabled and elderly, bring sensing to human prosthetics, and widen the scale of machines’ use in service industry. With their ability to produce and process massive data flow, sensitive skin devices will make yet another advance in the information revolution. This paper surveys the state of the art and research issues that need to be resolved in order to make sensitive skin a reality.

Electromagnetic bomb (E-bomb)

An electromagnetic bomb or E-bomb is a weapon designed to disable electronics with an electromagnetic pulse (EMP) that can couple with electrical/electronic systems to produce damaging current and voltage surges by electromagnetic induction. The effects are usually not noticeable beyond 10 km of the blast radius unless the device is nuclear or specifically designed to produce an electromagnetic pulse. Small nuclear weapons detonated at high altitudes can produce a strong enough signal to disrupt or damage electronics many miles from the focus of the explosion. During a nuclear EMP, the magnetic flux lines of the Earth alter the dispersion of energy so that it radiates very little to the North, but spreads out East, West, and South of the blast. The signal is divided into several time components, and can result in thousands of volts per meter of electromagnetic energy ranging from extreme negative to extreme positive polarities. This energy can travel long distances on power lines and through the air.

Effects

These weapons are not directly responsible for the loss of lives, but can disable some of the electronic systems on which industrialized nations are highly dependent.

Devices that are susceptible to EMP damage, from most to least vulnerable:

1. Integrated circuits (ICs), CPUs, silicon chips.
2. Transistors and diodes.
3. Inductors, electric motors
4. Vacuum tubes: also known as thermionic valves, gold-coated tubes can easily survive and are commonly found in "hardened" electronics like MIG fighter jets' control systems.

Transistor technology is likely to fail and old vacuum equipment survive. However, different types of transistors and ICs show different sensitivity to electromagnetism; bipolar ICs and transistors are much less sensitive than FETs and especially MOSFETs. To protect sensitive electronics, a Faraday cage must be placed around the item. Some makeshift Faraday cages have been suggested, such as aluminium foil, although such a cage would be rendered useless if any conductors passed through, such as power cords or antennas. A Faraday cage is meant to harmlessly route the signal around the electronics inside, but the conductors on the inside must be insulated from spurious currents that are induced as the signal passes around the surface of the cage. Hardened buildings employ the use of special EM gasketing on doors, special attention to conductive surfaces on the outside, and optical isolators on antennas. The electrical supply to a hardened building must be located at a surprising depth underground in order not to "couple" with the signal, and if the electrical supply is connected to a standard power grid, the EMP will send a large surge (large enough to burn out lightning arrestors) into the power supplies of sensitive electronics.

A comprehensive 2008 report of many of the details of probable EMP effects on the equipment and infrastructure of the United States and other industrialized countries is available in the Critical National Infrastructures Report written by the scientific members of the United States EMP Commission.

History

The electromagnetic pulse was first observed during high-altitude nuclear weapon detonations.

Electromagnetic weapons are still mostly classified and research surrounding them is highly secret. Military speculators and experts generally think that E-bombs use explosively pumped flux compression generator technology as their power source, though a relatively small (10 kt) nuclear bomb, exploded between 30 and 300 miles up in the atmosphere, could send out enough power to damage electronics from coast to coast in the US. The US Army Corps of Engineers issued a publicly available pamphlet in the late 1990s that discusses in detail how to harden a facility against "HEMP" - high-altitude electromagnetic pulse. It describes how water pipes, antennas, electrical lines, and windows allow EMP to enter a building.

According to some reports, the U.S. Navy used experimental E-bombs during the 1991 Gulf War. These bombs utilized warheads that converted the energy of conventional explosives into a pulse of radio energy. CBS News also reported that the U.S. dropped an E-bomb on Iraqi TV during the 2003 invasion of Iraq, but this has not been confirmed.

The Soviet Union conducted significant research into producing nuclear weapons specially designed for upper atmospheric detonations, a decision that was later followed by the United States and the United Kingdom. Only the Soviets ultimately produced any significant quantity of such warheads, most of which were disarmed following the Reagan-era arms talks. EMP-specialized nuclear weapon designs belong to the third generation of nuclear weapons.

Anyone who's been through a prolonged power outage knows that it's an extremely trying experience. Within an hour of losing electricity, you develop a healthy appreciation of all the electrical devices you rely on in life. A couple hours later, you start pacing around your house. After a few days without lights, electric heat or TV, your stress level shoots through the roof.

But in the grand scheme of things, that's nothing. If an outage hits an entire city, and there aren't adequate emergency resources, people may die from exposure, companies may suffer huge productivity losses and millions of dollars of food may spoil. If a power outage hit on a much larger scale, it could shut down the electronic networks that keep governments and militaries running. We are utterly dependent on power, and when it's gone, things get very bad, very fast.

An electromagnetic bomb, or e-bomb, is a weapon designed to take advantage of this dependency. But instead of simply cutting off power in an area, an e-bomb would actually destroy most machines that use electricity. Generators would be useless, cars wouldn't run, and there would be no chance of making a phone call. In a matter of seconds, a big enough e-bomb could thrust an entire city back 200 years or cripple a military unit.

The U.S. military has been pursuing the idea of an e-bomb for decades, and many believe it now has such a weapon in its arsenal. On the other end of the scale, terrorist groups could be building low-tech e-bombs to inflict massive damage on the United States.

The Basic Idea
The basic idea of an e-bomb -- or more broadly, an electromagnetic pulse (EMP) weapon -- is pretty simple. These sorts of weapons are designed to overwhelm electrical circuitry with an intense electromagnetic field.

If you've read How Radio Works or How Electromagnets Work, then you know an electromagnetic field in itself is nothing special. The radio signals that transmit AM, FM, television and cell phone calls are all electromagnetic energy, as are ordinary light, microwaves and X-rays.

For our purposes, the most important thing to understand about electromagnetism is that electric current generates magnetic fields and changing magnetic fields can induce electric current. This page from How Radio Works explains that a simple radio transmitter generates a magnetic field by fluctuating electrical current in a circuit. This magnetic field, in turn, can induce an electrical current in another conductor, such as a radio receiver antenna. If the fluctuating electrical signal represents particular information, the receiver can decode it.

A low intensity radio transmission only induces sufficient electrical current to pass on a signal to a receiver. But if you greatly increased the intensity of the signal (the magnetic field), it would induce a much larger electrical current. A big enough current would fry the semiconductor components in the radio, disintegrating it beyond repair.

Picking up a new radio would be the least of your concerns, of course. The intense fluctuating magnetic field could induce a massive current in just about any other electrically conductive object -- for example phone lines, power lines and even metal pipes. These unintentional antennas would pass the current spike on to any other electrical components down the line (say, a network of computers hooked up to phone lines). A big enough surge could burn out semiconductor devices, melt wiring, fry batteries and even explode transformers.
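A back-of-the-envelope Faraday's-law estimate shows why such surges can be damaging: the average EMF around a loop is the change in magnetic flux divided by the rise time. The field strength, rise time and loop area below are invented round numbers, not figures from the text:

# Average EMF for a magnetic field ramping linearly from zero to its peak:
# EMF = (peak B * loop area) / rise time.

def induced_emf(peak_b_tesla, rise_time_s, loop_area_m2):
    """Average EMF for a field ramping linearly from zero to its peak."""
    return peak_b_tesla * loop_area_m2 / rise_time_s

# e.g. a 0.01 T pulse rising in 100 ns through a 0.1 m^2 loop of wiring:
print(f"{induced_emf(0.01, 100e-9, 0.1):.0f} V")   # 10000 V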

There are a number of possible ways of generating and "delivering" such a magnetic field. In the next section, we'll look at a few possible EMP weaponry concepts.

E-Bomb Effects
The United States is drawn to EMP technology because it is potentially non-lethal, but is still highly destructive. An E-bomb attack would leave buildings standing and spare lives, but it could destroy a sizeable military.

There is a range of possible attack scenarios. Low-level electromagnetic pulses would temporarily jam electronics systems, more intense pulses would corrupt important computer data and very powerful bursts would completely fry electric and electronic equipment.

In modern warfare, the various levels of attack could accomplish a number of important combat missions without racking up many casualties. For example, an e-bomb could effectively neutralize:

* vehicle control systems
* targeting systems, on the ground and on missiles and bombs
* communications systems
* navigation systems
* long and short-range sensor systems

EMP weapons could be especially useful in an invasion of Iraq, because a pulse might effectively neutralize underground bunkers. Most of Iraq's underground bunkers are hard to reach with conventional bombs and missiles. A nuclear blast could effectively demolish many of these bunkers, but this would take a devastating toll on surrounding areas. An electromagnetic pulse could pass through the ground, knocking out the bunker's lights, ventilation systems, communications -- even electric doors. The bunker would be completely uninhabitable.

U.S. forces are also highly vulnerable to EMP attack, however. In recent years, the U.S. military has added sophisticated electronics to the full range of its arsenal. This electronic technology is largely built around consumer-grade semiconductor devices, which are highly sensitive to any power surge. More rudimentary vacuum tube technology would actually stand a better chance of surviving an e-bomb attack.

A widespread EMP attack in any country would compromise a military's ability to organize itself. Ground troops might have perfectly functioning non-electric weapons (like machine guns), but they wouldn't have the equipment to plan an attack or locate the enemy. Effectively, an EMP attack could reduce any military unit into a guerilla-type army.

While EMP weapons are generally considered non-lethal, they could easily kill people if they were directed towards particular targets. If an EMP knocked out a hospital's electricity, for example, any patient on life support would die immediately. An EMP weapon could also neutralize vehicles, including aircraft, causing catastrophic accidents.

In the end, the most far-reaching effect of an e-bomb could be psychological. A full-scale EMP attack in a developed country would instantly bring modern life to a screeching halt. There would be plenty of survivors, but they would find themselves in a very different world.

Three-dimensional integrated circuits (3-D ICs)

In electronics, a three-dimensional integrated circuit (3D IC, 3D-IC, or 3-D IC) is a chip with two or more layers of active electronic components, integrated both vertically and horizontally into a single circuit. The semiconductor industry is hotly pursuing this promising technology in many different forms, but it is not yet widely used; consequently, the definition is still somewhat fluid.

3D ICs vs. 3D packaging

3D packaging saves space by stacking separate chips in a single package. This packaging, known as System in Package (SiP) or Chip Stack MCM, does not integrate the chips into a single circuit. The chips in the package communicate with off-chip signaling, much as if they were mounted in separate packages on a normal circuit board. In contrast, a 3D IC is a single chip. All components on the layers communicate with on-chip signaling, whether vertically or horizontally. Essentially, a 3D IC bears the same relation to a 3D package that an SoC bears to a circuit board.

Manufacturing technologies

As of 2008 there are four ways to build a 3D IC:

Monolithic – Electronic components and their connections (wiring) are built in layers on a single semiconductor wafer, which is then diced into 3D ICs. There is only one substrate, hence no need for aligning, thinning, bonding, or through-silicon vias. Applications of this method are currently limited because creating normal transistors requires enough heat to destroy any existing wiring.

Wafer-on-Wafer – Electronic components are built on two or more semiconductor wafers, which are then aligned, bonded, and diced into 3D ICs. Each wafer may be thinned before or after bonding. Vertical connections are either built into the wafers before bonding or else created in the stack after bonding. These “through-silicon vias” (TSVs) pass through the silicon substrate(s) between active layers and/or between an active layer and an external bond pad.

Die-on-Wafer – Electronic components are built on two semiconductor wafers. One wafer is diced; the singulated dies are aligned and bonded onto die sites of the second wafer. As in the wafer-on-wafer method, thinning and TSV creation are performed either before or after bonding. Additional dies may be added to the stacks before dicing.

Die-on-Die – Electronic components are built on multiple dies, which are then aligned and bonded. Thinning and TSV creation may be done before or after bonding.

Benefits

3D ICs offer many significant benefits, including:

Footprint – More functionality fits into a small space. This extends Moore’s Law and enables a new generation of tiny but powerful devices.

Speed – The average wire length becomes much shorter. Because the propagation delay of an unrepeated on-chip wire grows roughly with the square of its length (both its resistance and its capacitance grow with the length), shorter wires translate directly into higher overall performance (see the numeric sketch after this list).

Power – Keeping a signal on-chip reduces its power consumption by ten to a hundred times. Shorter wires also reduce power consumption because they present less parasitic capacitance. A smaller power budget means less heat generation, longer battery life, and a lower cost of operation.

Design – The vertical dimension adds a higher order of connectivity and opens a world of new design possibilities.

Heterogeneous integration – Circuit layers can be built with different processes, or even on different types of wafers. This means that components can be optimized to a much greater degree than if they were built together on a single wafer. Even more interesting, components with completely incompatible manufacturing processes could be combined in a single device.

Circuit security – The stacked structure hinders attempts to reverse engineer the circuitry. Sensitive circuits may also be divided among the layers in such a way as to obscure the function of each layer.

Bandwidth – 3D integration allows large numbers of vertical vias between the layers. This allows construction of wide-bandwidth buses between functional blocks in different layers. A typical example would be a processor+memory 3D stack, with the cache memory stacked on top of the processor. This arrangement allows a bus much wider than the typical 128 or 256 bits between the cache and the processor. Wide buses in turn alleviate the memory wall problem.
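The Speed, Power, and Bandwidth points above are essentially back-of-the-envelope arithmetic, and a small numeric sketch makes the scaling explicit. All constants below (wire resistance and capacitance per millimetre, supply voltage, activity factor, clock frequency, bus widths) are assumed placeholder values for illustration, not data for any real process.

```python
# Back-of-the-envelope scaling arguments for the 3D-IC benefits above.
# All constants are illustrative placeholders, not real process data.

R_PER_MM = 1000.0      # wire resistance, ohms per mm (assumed)
C_PER_MM = 0.2e-12     # wire capacitance, farads per mm (assumed)

def wire_delay(length_mm):
    """Elmore delay of an unrepeated RC wire: ~0.5*R*C, so it grows with length^2."""
    r = R_PER_MM * length_mm
    c = C_PER_MM * length_mm
    return 0.5 * r * c

def dynamic_power(c_load, v_dd=1.0, f_hz=1e9, activity=0.1):
    """Dynamic switching power P = a * C * V^2 * f; less capacitance, less power."""
    return activity * c_load * v_dd**2 * f_hz

def bus_bandwidth(width_bits, f_hz=1e9):
    """Peak bus bandwidth in bytes per second = width * frequency / 8."""
    return width_bits * f_hz / 8

# Halving a wire's length cuts its delay by ~4x and its switching power by ~2x.
print(wire_delay(10.0) / wire_delay(5.0))                               # ~4.0
print(dynamic_power(C_PER_MM * 10.0) / dynamic_power(C_PER_MM * 5.0))   # ~2.0

# A 1024-bit vertical bus at 1 GHz versus a conventional 128-bit bus.
print(bus_bandwidth(1024) / 1e9, "GB/s vs", bus_bandwidth(128) / 1e9, "GB/s")
```

The exact numbers do not matter; the point is the scaling: delay falls quadratically with wire length, switching power falls linearly with capacitance, and bandwidth rises linearly with bus width.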

Asynchronous circuit

Asynchronous circuit

An asynchronous circuit is a circuit in which the parts are largely autonomous. They are not governed by a clock circuit or global clock signal, but instead need only wait for the signals that indicate completion of instructions and operations. These signals are specified by simple data transfer protocols. This digital logic design is contrasted with a synchronous circuit which operates according to clock timing signals.
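To make the idea of "signals that indicate completion" concrete, here is a minimal, purely sequential walk-through of a four-phase (return-to-zero) request/acknowledge handshake, one of the simplest of these data transfer protocols. The wire names and data values are invented for the illustration; real implementations are concurrent hardware, not a Python loop.

```python
# Minimal behavioural walk-through of a four-phase (return-to-zero) handshake.
# 'req' and 'ack' stand for the request and acknowledge wires between two
# stages; the data values are invented for the example.

def four_phase_transfer(value, wires, log):
    wires["data"] = value       # sender places data on the data wires
    wires["req"] = 1            # phase 1: sender raises request
    log.append(("req up", dict(wires)))
    received = wires["data"]    # receiver latches the data...
    wires["ack"] = 1            # phase 2: ...and raises acknowledge
    log.append(("ack up", dict(wires)))
    wires["req"] = 0            # phase 3: sender sees ack and drops request
    log.append(("req down", dict(wires)))
    wires["ack"] = 0            # phase 4: receiver drops ack; channel is idle again
    log.append(("ack down", dict(wires)))
    return received

wires, log = {"req": 0, "ack": 0, "data": None}, []
for v in [7, 42]:
    assert four_phase_transfer(v, wires, log) == v
for event, state in log:
    print(event, state)
```

Each transfer completes when the channel returns to its idle state, so no clock is needed to decide when the next transfer may begin.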

Theoretical foundations

Petri nets are an attractive and powerful model for reasoning about asynchronous circuits. However, Petri nets have been criticized by Carl Hewitt for their lack of physical realism. Other models of concurrency that can describe asynchronous circuits have since been developed, including the Actor model and process calculi.

The term asynchronous logic is used to describe a variety of design styles, which use different assumptions about circuit properties. These vary from the bundled delay model - which uses 'conventional' data processing elements with completion indicated by a locally generated delay model - to delay-insensitive design - where arbitrary delays through circuit elements can be accommodated. The latter style tends to yield circuits which are larger and slower than synchronous (or bundled data) implementations, but which are insensitive to layout and parametric variations and are thus "correct by design."
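A common ingredient of delay-insensitive design is dual-rail encoding, in which each logical bit travels on two wires and a completion detector tells the receiver when the whole word has arrived, with no timing assumption about the individual wires. The sketch below is a minimal software illustration of that encoding; the helper names are invented for the example.

```python
# Dual-rail (1-of-2) encoding: each logical bit is carried on two wires.
# (1, 0) encodes logical 0, (0, 1) encodes logical 1, and (0, 0) is the
# "spacer"/empty state. A bit is valid only when exactly one rail is high,
# so the receiver can detect completion without any timing assumption.

def encode_dual_rail(bits):
    return [(1, 0) if b == 0 else (0, 1) for b in bits]

def completion_detected(rails):
    return all(rail0 != rail1 for (rail0, rail1) in rails)

def decode_dual_rail(rails):
    assert completion_detected(rails), "word not yet valid"
    return [rail1 for (_rail0, rail1) in rails]

word = [1, 0, 1, 1]
rails = encode_dual_rail(word)
print(completion_detected(rails))   # True: every bit has arrived
print(decode_dual_rail(rails))      # [1, 0, 1, 1]

rails[2] = (0, 0)                   # one bit still in the spacer state
print(completion_detected(rails))   # False: the receiver keeps waiting
```

The price of this robustness is visible in the sketch: two wires per bit plus completion-detection logic, which is part of why delay-insensitive circuits tend to be larger than their bundled-data counterparts.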

Benefits

Different classes of asynchronous circuitry offer different advantages. Below is a list of the advantages offered by quasi-delay-insensitive (QDI) circuits, generally agreed to be the most "pure" form of asynchronous logic that retains computational universality; a minimal sketch of the Muller C-element, a gate widely used as a building block in such circuits, follows the list. Less pure forms of asynchronous circuitry offer better performance at the cost of compromising one or more of these advantages:

* Robust handling of metastability in arbiters.
* Early completion of a circuit when it is known that the inputs which have not yet arrived are irrelevant.
* Possibly lower power consumption because no transistor ever transitions unless it is performing useful computation (clock gating in synchronous designs is an imperfect approximation of this ideal). Also, clock drivers can be removed which can significantly reduce power consumption. However, when using certain encodings, asynchronous circuits may require more area, which can result in increased power consumption if the underlying process has poor leakage properties (for example, deep submicrometer processes used prior to the introduction of high-K dielectrics).
* Freedom from the ever-worsening difficulties of distributing a high-fanout, timing-sensitive clock signal.
* Better modularity and composability.
* Far fewer assumptions about the manufacturing process are required (most assumptions are timing assumptions).
* Circuit speed is adapted on the fly to changing temperature and voltage conditions rather than being locked at the speed mandated by worst-case assumptions.
* Immunity to transistor-to-transistor variability in the manufacturing process, which is one of the most serious problems facing the semiconductor industry as dies shrink.
* Less severe electromagnetic interference. Synchronous circuits create a great deal of EMI in the frequency band at (or very near) their clock frequency and its harmonics; asynchronous circuits generate EMI patterns which are much more evenly spread across the spectrum.
* Local signaling eliminates the need for global synchronization, which brings further potential advantages over synchronous designs: lower power consumption, easier design reuse, improved noise immunity and electromagnetic compatibility, and greater tolerance of process variations and external voltage fluctuations.
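As promised above, here is a minimal behavioural model of the Muller C-element, a state-holding gate whose output goes high only when all of its inputs are high, goes low only when all of its inputs are low, and otherwise holds its previous value. The model is written only to illustrate that truth behaviour, not any particular hardware realization.

```python
# Behavioural model of a Muller C-element, a canonical state-holding gate
# in quasi-delay-insensitive design. The output changes only when all
# inputs agree; otherwise it holds its previous value.

class CElement:
    def __init__(self, initial=0):
        self.output = initial

    def update(self, *inputs):
        if all(v == 1 for v in inputs):
            self.output = 1
        elif all(v == 0 for v in inputs):
            self.output = 0
        # inputs disagree: hold the previous output
        return self.output

c = CElement()
print(c.update(0, 0))  # 0: all inputs low
print(c.update(1, 0))  # 0: inputs disagree, output holds
print(c.update(1, 1))  # 1: all inputs high
print(c.update(0, 1))  # 1: inputs disagree, output holds
```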

Disadvantages

* Increased complexity
* More difficult to design
* Performance analysis of asynchronous circuits is a complicated problem


Applications

Asynchronous CPU

Asynchronous CPUs are one of several ideas for radically changing CPU design.

Unlike a conventional processor, a clockless processor (asynchronous CPU) has no central clock to coordinate the progress of data through the pipeline. Instead, stages of the CPU are coordinated using logic devices called "pipeline controls" or "FIFO sequencers." Basically, the pipeline controller clocks the next stage of logic when the existing stage is complete, so a central clock is unnecessary (a minimal behavioural sketch of this flow control appears after the lists below). It may actually be easier to implement high-performance devices in asynchronous, rather than clocked, logic:

* components can run at different speeds on an asynchronous CPU, whereas all major components of a clocked CPU must remain synchronized with the central clock;
* a traditional CPU cannot "go faster" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because the supply voltage or bus speed is higher, or the ambient temperature lower, than the 'normal' or expected conditions.

Proponents of asynchronous logic believe these capabilities would provide the following benefits:

* lower power dissipation for a given performance level, and
* the highest possible execution speeds.
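As a rough illustration of the "pipeline control" idea described above, the sketch below models a self-timed pipeline in which each stage hands its data forward as soon as the next stage is empty. It is a token-flow approximation, not a gate-level Muller pipeline; the stage depth and data values are invented for the example, and the sequential loop merely stands in for the local handshakes that real hardware would perform concurrently.

```python
# Minimal behavioural sketch of self-timed pipeline flow control: a stage
# passes its data forward as soon as the next stage is empty, with no
# global clock. The loop below is only a software stand-in for the local
# request/acknowledge handshakes between neighbouring stages.

def step(stages):
    """One round of local handshakes, scanned from the output end backwards."""
    for i in reversed(range(len(stages) - 1)):
        if stages[i] is not None and stages[i + 1] is None:
            stages[i + 1] = stages[i]   # successor "acknowledges" by accepting
            stages[i] = None

def run(inputs, depth=4):
    stages, outputs = [None] * depth, []
    pending = list(inputs)
    while pending or any(s is not None for s in stages):
        if stages[-1] is not None:          # drain the output stage
            outputs.append(stages[-1])
            stages[-1] = None
        step(stages)
        if pending and stages[0] is None:   # feed the input stage
            stages[0] = pending.pop(0)
    return outputs

print(run(["a", "b", "c"]))   # ['a', 'b', 'c'] arrive in order, no clock needed
```

Each item advances as soon as its neighbour is ready, which is exactly the behaviour a pipeline controller or FIFO sequencer provides in hardware.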

The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU (i.e., a synchronous circuit), and many tools "enforce synchronous design practices". Making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastability problems. The group that designed the AMULET processors (listed below), for example, developed a tool called LARD to cope with the complex design of AMULET3.

Despite the difficulty of doing so, numerous asynchronous CPUs have been built, including:

* the ORDVAC (?) and the (identical) ILLIAC I (1951);
* the ILLIAC II (1962);
* the Caltech Asynchronous Microprocessor, the world's first asynchronous microprocessor (1988);
* the ARM-implementing AMULET (1993 and 2000);
* the asynchronous implementation of MIPS R3000, dubbed MiniMIPS (1998);
* the SEAforth multi-core processor (2008) from Charles H. Moore.

The ILLIAC II was the first completely asynchronous, speed-independent processor design ever built; it was the most powerful computing machine known at the time.

DEC PDP-16 Register Transfer Modules (ca. 1973) allowed the experimenter to construct asynchronous, 16-bit processing elements. Delays for each module were fixed and based on the module's worst-case timing.

The Caltech Asynchronous Microprocessor (1988), designed and manufactured at Caltech, was the world's first fully quasi-delay-insensitive processor. During demonstrations, the researchers amazed viewers by loading a simple program which ran in a tight loop, pulsing one of the output lines after each instruction. This output line was connected to an oscilloscope. When a cup of hot coffee was placed on the chip, the pulse rate (the effective "clock rate") naturally slowed down to adapt to the worsening performance of the heated transistors. When liquid nitrogen was poured on the chip, the instruction rate shot up with no additional intervention. Additionally, at lower temperatures, the voltage supplied to the chip could be safely increased, which also improved the instruction rate -- again, with no additional configuration.