FluidFM: Combining AFM and nanofluidics for single cell applications

The Atomic Force Microscope (AFM) is a key tool for nanotechnology. This instrument has become the most widely used tool for imaging, measuring and manipulating matter at the nanoscale and in turn has inspired a variety of other scanning probe techniques. Originally the AFM was used to image the topography of surfaces, but by modifying the tip it is possible to measure other quantities (for example, electric and magnetic properties, chemical potentials, friction and so on), and also to perform various types of spectroscopy and analysis. Increasingly, the AFM is also becoming a tool for nanofabrication.

Relatively new is the use of AFM in cell biology. We wrote about this recently in a Spotlight that described a novel method to probe the mechanical properties of living and dead bacteria via AFM indentation experiments ("Dead or alive – nanotechnology technique tells the difference").
Researchers in Switzerland have now demonstrated novel cell biology applications using hollow force-controlled AFM cantilevers – a new device they have called FluidFM.

"The core of the invention is to have fixed already existing microchanneled cantilevers to an opportunely drilled AFM probeholder" Tomaso Zambelli tells Nanowerk. "In this way, the FluidFM is not restricted to air but can work in liquid environments. Since it combines a nanofluidics circuit, every soluble agent can be added to the solution to be dispensed. Moreover, the force feedback allows to approach very soft objects like cells without damaging them."

As cell biology is moving towards single-cell technologies and applications, single-cell injection and extraction techniques are in high demand. Apart from this, however, the FluidFM could also be used for nanofabrication applications such as depositing a conductive polymer wire between two microelectrodes, or etching ultrafine structures out of solid materials using acids as the spray agent. The team has reported their findings in a recent paper in Nano Letters ("FluidFM: Combining Atomic Force Microscopy and Nanofluidics in a Universal Liquid Delivery System for Single Cell Applications and Beyond").

Zambelli originally realized that the technology of the atomic force microscope, normally used only to image cells, could be transformed into a microinjection system. The result of the development by Zambelli and his colleagues in the Laboratory of Biosensors and Bioelectronics at the Institute of Biomedical Technology at ETH Zurich and at the Swiss Center for Electronics and Microtechnology (CSEM) in Neuchâtel was the "fluid force microscope", currently the smallest automated nanosyringe in existence.

"Our FluidFM even operates under water or in other liquids – a precondition for being able to use the instrument to study cells" says Zambelli.

The force detection system of the FluidFM is so sensitive that the interactions between tip and sample can be reduced to the piconewton range, making it possible to bring the hollow cantilever into gentle but close contact with cells without puncturing or damaging the cell membrane.

On the other hand, if membrane perforation for intracellular injection is desired, this is simply achieved by selecting a higher force set point, taking advantage of the extremely sharp tip (radius of curvature on the order of tens of nanometers).

To enable solutions to be injected into the cell through the needle, scientists at CSEM installed a microchannel in the cantilever. Substances such as medicinal active ingredients, DNA, and RNA can be injected into a cell through the tip. At the same time, samples can also be taken from a cell through the needle for subsequent analysis.

According to Zambelli, while this approach is similar to microinjection using glass pipettes, there are a number of essential differences.

"Microinjection uses optical microscopy to control the position of the glass pipette tip both in the xy plane and in the z direction (via image focusing)" he explains. "As consequence of the limited resolution of optical microscopy, subcellular domains cannot be addressed and tip contact with the cell membrane cannot be discriminated from tip penetration of the membrane. Cells are often lethally damaged and skilled personnel are required for microinjection."

"The limited resolution of this method and the absence of mechanical information contrast strongly with the high resolution imaging and the direct control of applied forces that are possible with AFM. Precise force feedback reduces potential damage to the cell; the cantilever geometry minimizes both the normal contact forces on the cell and the lateral vibrations of the tip that can tear the cell membrane during microinjection; the spatial resolution is determined by the submicrometer aperture so that injection into subcellular domains becomes easily achievable."

Experiments conducted by the Swiss team demonstrate the potential of the FluidFM in the field of single-cell biology through precise stimulation of selected cell domains with arbitrary soluble agents at well-defined times.

"We confidently expect that the inclusion of an electrode in the microfluidics circuit will allow a similar approach toward patch-clamping with force controlled gigaseal formation," says Zambelli. "We will also explore other strategies at the single-cell level, such as the controlled perforation of the cell membrane for local extraction of cytoplasm

"Zambelli and his colleagues are convinced that their technology has great commercial potential. Rejecting offers from well-known manufacturers of atomic force microscopes for the sale of the patent for the FluidFM, they have founded Cytosurge LLC, a company dedicated to commercially develop the instrument.

Today, Zambelli's laboratory contains two prototypes of the instrument, which are being tested in collaboration with biologists.



Do viruses and all the other nasties in cyberspace matter? Do they really do much harm? Imagine that no one has updated your anti-virus software for a few months. When they do, you find that your accounts spreadsheets are infected with a new virus that changes figures at random. Naturally you keep backups. But you might have been backing up infected files for months. How do you know which figures to trust? Now imagine that a new email virus has been released. Your company is receiving so many emails that you decide to shut down your email gateway altogether and miss an urgent order from a big customer.

Imagine that a friend emails you some files he found on the Internet. You open them and trigger a virus that mails confidential documents to everyone in your address book, including your competitors. Finally, imagine that you accidentally send another company a report that carries a virus. Will they feel safe doing business with you again? Today new viruses sweep the planet in hours and virus scares are major news. A computer virus is a computer program that can spread across computers and networks by making copies of itself, usually without the user’s knowledge. Viruses can have harmful side effects. These can range from displaying irritating messages to deleting all the files on your computer.

A virus program has to be run before it can infect your computer. Viruses have ways of making sure that this happens. They can attach themselves to other programs or hide in code that is run automatically when you open certain types of files. The virus can copy itself to other files or disks and make changes on your computer. Virus side effects, often called the payload, are the aspect of most interest to users. Harmful side effects include password-protecting documents on a particular day or mailing information about the user and machine to a remote address. The various kinds of viruses include macro viruses, parasitic or file viruses, and boot viruses. E-mail is the biggest source of viruses; usually they come as attachments to emails.

The Internet has enabled viruses to spread around the globe. The threat level depends on the particular code used in the web pages and on the security measures taken by service providers and by you. One way to prevent virus infections is anti-virus software. Anti-virus software can detect viruses, prevent access to infected files and often eliminate the infection.

Computer viruses are starting to affect mobile phones too. Mobile phone viruses are still rare and unlikely to cause much damage, but anti-virus experts expect that as mobile phones become more sophisticated they will be targeted by virus writers. Some firms are already working on anti-virus software for mobile phones. VBS/Timo-A, Love Bug, Timofonica, CABIR, aka ACE-? and UNAVAILABLE are some of the viruses that affect mobile phones.

What is a virus?
A computer virus is a computer program that can spread across computers and networks by making copies of itself, usually without the user’s knowledge. Viruses can have harmful side-effects. These can range from displaying irritating messages to deleting all the files on your computer.
Evolution of viruses
In the mid-1980s Basit and Amjad Alvi of Lahore, Pakistan discovered that people were pirating their software. They responded by writing the first computer virus, a program that would put a copy of itself and a copyright message on any floppy disk copies their customers made. From these simple beginnings, an entire virus counter-culture has emerged. Today new viruses sweep the planet in hours and virus scares are major news.
How does a virus infect computers?

A virus program has to be run before it can infect your computer. Viruses have ways of making sure that this happens. They can attach themselves to other programs or hide in code that is run automatically when you open certain types of files. You might receive an infected file on a disk, in an email attachment, or in a download from the internet. As soon as you launch the file, the virus code runs. Then the virus can copy itself to other files or disks and make changes on your computer.
Who writes viruses?
Virus writers don’t gain in financial or career terms; they rarely achieve real fame; and, unlike hackers, they don’t usually target particular victims, since viruses spread too indiscriminately. Virus writers tend to be male, under 25 and single. Viruses also give their writers powers in cyberspace that they could never hope to have in the real world.
Virus side effects (payload)
Virus side-effects are often called the payload. Viruses can disable computer hardware, change the figures in an accounts spreadsheet at random, adversely affect your email contacts and business reputation, and attack web servers. Some examples:
Messages - WM97/Jerk displays the message ‘I think (user’s name) is a big stupid jerk!’
Denying access - WM97/NightShade password-protects the current document on Friday 13th.
Data theft - Troj/LoveLet-A emails information about the user and machine to an address in the Philippines.
Corrupting data - XM/Compatable makes changes to the data in Excel spreadsheets.
Deleting data - Michelangelo overwrites parts of the hard disk on March 6th.
Disabling hardware - CIH or Chernobyl (W95/CIH-10xx) attempts to overwrite the BIOS on April 26th, making the machine unusable.
Crashing servers - Melissa or ExploreZip, which spread via email, can generate so much mail that servers crash.
There is a threat to confidentiality too. Melissa can forward documents, which may contain sensitive information, to anyone in your address book. Viruses can seriously damage your credibility. If you send infected documents to customers, they may refuse to do business with you or demand compensation. Sometimes you risk embarrassment as well as a damaged business reputation. WM/Polypost, for example, places copies of your documents in your name on alt.sex usenet newsgroups.
Trojan horses
Trojan horses are programs that do things that are not described in their specifications. The user runs what they think is a legitimate program, allowing it to carry out hidden, often harmful, functions. For example, Troj/Zulu claims to be a program for fixing the ‘millennium bug’ but actually overwrites the hard disk. Trojan horses are sometimes used as a means of infecting a user with a computer virus.
Backdoor Trojans
A backdoor Trojan is a program that allows someone to take control of another user’s PC via the internet. Like other Trojans, a backdoor Trojan poses as legitimate or desirable software. When it is run (usually on a Windows 95/98 PC), it adds itself to the PC’s startup routine. The Trojan can then monitor the PC until it makes a connection to the internet. Once the PC is on-line, the person who sent the Trojan can use software on their computer to open and close programs on the infected computer, modify files and even send items to the printer. Subseven and Back Orifice are among the best known backdoor Trojans.
Worms
Worms are similar to viruses but do not need a carrier (like a macro or a boot sector); they are a subtype of viruses. Worms simply create exact copies of themselves and use communications between computers to spread. Many viruses, such as Kakworm (VBS/Kakworm) or Love Bug (VBS/LoveLet-A), behave like worms and use email to forward themselves to other users.
Boot sector viruses
Boot sector viruses were the first type of virus to appear. They spread by modifying the boot sector, which contains the program that enables your computer to start up. When you switch on, the hardware looks for the boot sector program – which is usually on the hard disk, but can be on floppy or CD – and runs it. This program then loads the rest of the operating system into memory. A boot sector virus replaces the original boot sector with its own, modified version (and usually hides the original somewhere else on the hard disk). When you next start up, the infected boot sector is used and the virus becomes active. You can only become infected if you boot up your computer from an infected disk, e.g. a floppy disk that has an infected boot sector. Many boot sector viruses are now quite old.
Those written for DOS machines do not usually spread on Windows 95, 98, Me, NT or 2000 computers, though they can sometimes stop them from starting up properly. Boot viruses infect System Boot Sectors (SBS) and Master Boot Sectors (MBS). The MBS is located on all physical hard drives. It contains, among other data, the partition table (information about how a physical disk is divided into logical disks) and a short program that can interpret the partition information to find out where the SBS is located. The MBS is operating system independent. The SBS contains, among other data, a program whose purpose is to find and run an operating system. Because floppy diskettes are exchanged more frequently than program files, boot viruses are able to propagate more effectively than file viruses.
Form - A virus that is still widespread ten years after it first appeared. The original version triggers on the 18th of each month and produces a click when keys are pressed on the keyboard.
Parity Boot - A virus that may randomly display the message ‘PARITY CHECK’ and freeze the operating system. The message resembles a genuine error message displayed when the computer’s memory is faulty.
Parasitic virus (File virus)
Parasitic viruses, also known as file viruses, attach themselves to programs (or ‘executables’) and act as part of the program. When you start a program infected with a file virus, the virus is launched first. To hide itself, the virus then runs the original program. The operating system on your computer sees the virus as part of the program you were trying to run and gives it the same rights. These rights allow the virus to copy itself, install itself in memory or release its payload. These viruses can also infect over networks.
The internet has made it easier than ever to distribute programs, giving these viruses new opportunities to spread.
Jerusalem - On Friday 13th, deletes every program run on the computer.
CIH (Chernobyl) - On the 26th of certain months, this virus will overwrite part of the BIOS chip, making the computer unusable. The virus also overwrites the hard disk.
Remote Explorer - WNT/RemExp (Remote Explorer) infects Windows NT executables. It was the first virus that could run as a service, i.e. run on NT systems even when no one is logged in.
Parasitic viruses infect executables by companion, link, overwrite, insert, prepend and append techniques.

a) Companion virus
A companion virus does not modify its host directly. Instead it maneuvers the operating system into executing the virus instead of the host file. Sometimes this is done by renaming the host file to some other name and then giving the virus file the name of the original program. Alternatively, the virus infects an .EXE file by creating a .COM file with the same name in the same directory. DOS will always execute a .COM file first if only the program name is given, so if you type “EDIT” at a DOS prompt and there are both an EDIT.COM and an EDIT.EXE in the same directory, EDIT.COM is executed.
b) Linking Virus
A link virus makes changes in the low-level workings of the file system, so that program names no longer point to the original programs but to a copy of the virus. This makes it possible to have only one instance of the virus, to which all program names point.

Limits and Fits, Tolerance Dimensioning

Definitions:

nominal size: The size designation used for general identification. The nominal size of a shaft and a hole are the same. This value is often expressed as a fraction.

basic size: The exact theoretical size of a part. This is the value from which limit dimensions are computed. Basic size is a four-decimal-place equivalent to the nominal size. The number of significant digits implies the accuracy of the dimension.

example: nominal size = 1 1/4, basic size = 1.2500

design size: The ideal size for each component (shaft and hole) based upon a selected fit. The difference between the design size of the shaft and the design size of the hole is equal to the allowance of the fit. The design size of a part corresponds to the Maximum Material Condition (MMC): that is, the largest shaft permitted by the limits and the smallest hole. Emphasis is placed upon the design size in the writing of the actual limit dimension, so the design size is placed in the top position of the pair.

tolerance: The total amount by which a dimension is allowed to vary. For fractional linear dimensions we have assumed a bilateral tolerance of 1/64 inch. For the fit of a shaft/hole combination, the tolerance is considered to be unilateral, that is, it is applied in only one direction from the design size of the part. Standards for limits and fits state that tolerances are applied such that the hole size can only vary larger from the design size and the shaft size smaller.
basic hole system: The most common system for limit dimensions. In this system the design size of the hole is taken to be equivalent to the basic size for the pair (see above). This means that the lower (in size) limit of the hole dimension is equal to the design size. The basic hole system is more frequently used since most hole-generating devices (for example, drills and reamers) are of fixed size. When designing with purchased components that have fixed outer diameters (bearings, bushings, etc.), a basic shaft system may be used.

allowance: The allowance is the intended difference in the sizes of mating parts. This allowance may be positive (indicated with a "+" symbol), which means there is intended clearance between the parts; negative ("-"), for intentional interference; or zero, if the two parts are intended to be the "same size". This last case is common in selective assembly.
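As an illustration of how these terms relate (not part of the original notes; the numerical values below are hypothetical), a short Python sketch computes limit dimensions under the basic hole system with unilateral tolerances:

```python
def limit_dimensions(basic_size, hole_tol, shaft_tol, allowance):
    """Limit dimensions for a basic hole system with unilateral
    tolerances: the hole may only vary larger from its design size,
    the shaft only smaller. A positive allowance means clearance."""
    hole_low = basic_size               # hole design size = basic size (MMC of hole)
    hole_high = basic_size + hole_tol   # hole tolerance applied upward only
    shaft_high = basic_size - allowance # shaft design size (MMC of shaft)
    shaft_low = shaft_high - shaft_tol  # shaft tolerance applied downward only
    r = lambda x: round(x, 4)           # four decimal places, as for basic sizes
    return (r(hole_low), r(hole_high)), (r(shaft_low), r(shaft_high))

# Hypothetical clearance fit: basic size 1.2500, tolerances 0.0008, allowance 0.0010
hole, shaft = limit_dimensions(1.2500, 0.0008, 0.0008, 0.0010)
print(hole)   # (1.25, 1.2508)
print(shaft)  # (1.2482, 1.249)
```

Note that the design sizes (smallest hole, largest shaft) sit in the "top position" of each pair in the sense described above, and their difference is exactly the allowance.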

The extreme permissible values of a dimension are known as limits. The degree of tightness or looseness between two mating parts that are intended to act together is known as the fit of the parts. The character of the fit depends upon the use of the parts. Thus, the fit between members that move or rotate relative to each other, such as a shaft rotating in a bearing, is considerably different from the fit that is designed to prevent any relative motion between two parts, such as a wheel attached to an axle.

In selecting and specifying limits and fits for various applications, the interests of interchangeable manufacturing require that (1) standard definitions of terms relating to limits and fits be used; (2) preferred basic sizes be selected wherever possible to reduce material and tool costs; (3) limits be based upon a series of preferred tolerances and allowances; and (4) a uniform system of applying tolerances (bilateral or unilateral) be used.

Introduction to CAN (Controller Area Network)


CAN was originally developed by the German company Robert Bosch for use in cars, to provide a cost-effective communications bus for in-car electronics and an alternative to expensive, cumbersome and unreliable wiring looms and connectors. The car industry continues to use CAN for an increasing number of applications, but because of its proven reliability and robustness, CAN is now also being used in many other control applications.

Intra-vehicular communication:
A typical vehicle has a large number of electronic control systems. The growth of automotive electronics is a result of:
- customers' wish for better comfort and better safety
- government requirements for improved emission control
- reduced fuel consumption

Some of these control systems are:
- engine timing
- gearbox and carburetor throttle control
- anti-lock braking systems (ABS)
- acceleration skid control (ASC)

The complexity of the functions implemented by these electronic control systems necessitates communication between them. In addition, a number of systems are being developed which will cover more than one device. For example:
- ASC requires the interplay of the engine timing and carburetor control in order to reduce torque when drive-wheel slippage occurs.
- In electronic gearbox control, the ease of gear changing can be improved by a brief adjustment to ignition timing.

How do we connect these control devices?
With conventional systems, data is exchanged by means of dedicated signal lines.
But this is becoming increasingly difficult and expensive as control functions become ever more complex.

In the case of complex control systems in particular, the number of connections cannot be increased much further.

Solution: Use Field bus networks for connecting the control devices


Field buses are communication technologies and products used in vehicular, automation and process control industries.
Proprietary Field buses
Proprietary Field buses are the intellectual property of a particular company or body.

Open Field buses
For a Field bus to be open, it must satisfy the following criteria:
- The full Field bus specification must be published and available at a reasonable price.
- Critical ASIC components must be available, also at a reasonable price.
- There must be a well-defined validation process, open to all Field bus users.

Field bus Advantages:

I. Reduces the complexity of the control system in terms of hardware outlay.
II. As a result of the reduced complexity of the control system, project design engineering is made simpler, more efficient and consequently less expensive.
III. By selecting a recognized and well-established system, the Field bus equipment in your plant or plants becomes interchangeable between suppliers.
IV. The need to be concerned about connections, compatibility and other potential problems is eliminated.

What constitutes a Field bus?
The specification of a Field bus should ideally cover all seven layers of the OSI model, as shown below.

FEATURES OF CAN
CAN features are as follows:
- CAN is a robust protocol – essential for automotive applications.
- ISO 11898 and SAE J2411 are open standards.
- Well documented and fully supported worldwide.
- Choice of three CAN physical layer options: high-speed (HS) for high data rates, fault-tolerant (FT) for additional robustness, and single-wire (SW) for minimum wiring.
- Any node can access the bus when the bus is quiet.
- Non-destructive bit-wise arbitration allows 100% use of bandwidth without loss of data.
- Variable message priority based on the 11-bit (or 29-bit) packet identifier.
- Peer-to-peer and multi-cast reception.
- Automatic error detection, signalling and retries.
- Data packets up to 8 bytes long.


CAN is a fast serial bus designed to provide an efficient, reliable and very economical link between sensors and actuators. CAN uses a twisted-pair cable to communicate at speeds of up to 1 Mbit/s with up to 40 devices. Originally developed to simplify the wiring in automobiles, CAN field buses are now used in machine and factory automation products as well.

CAN History

In the early 1980s, engineers at Bosch were evaluating existing serial bus systems regarding their possible use in passenger cars. Because none of the available network protocols were able to fulfill the requirements of the automotive engineers, Uwe Kiencke started the development of a new serial bus system in 1983.

The new bus protocol was mainly supposed to add new functionality – the reduction of wiring harnesses was just a by-product, but not the driving force behind the development of CAN. Engineers from Mercedes-Benz got involved early on in the specification phase of the new serial bus system, and so did Intel as the potential main semiconductor vendor. Professor Dr. Wolfhard Lawrenz from the University of Applied Science in Braunschweig-Wolfenbüttel, Germany, who had been hired as a consultant, gave the new network protocol the name ‘Controller Area Network’. Professor Dr. Horst Wettstein from the University of Karlsruhe also provided academic assistance. In February of 1986, CAN was born: at the SAE congress in Detroit, the new bus system developed by Bosch was introduced as ‘Automotive Serial Controller Area Network’. Uwe Kiencke, Siegfried Dais and Martin Litschel introduced the multi-master network protocol. It was based on a non-destructive arbitration mechanism, which would grant bus access to the message with the highest priority without any delays.

There was no central bus master. Furthermore, the fathers of CAN – the individuals mentioned above plus Bosch employees Wolfgang Borst, Wolfgang Botzenhard, Otto Karl, Helmut Schilling, and Jan Unruh – had implemented several error detection mechanisms. The error handling also included the automatic disconnection of faulty bus nodes in order to keep up the communication between the remaining nodes. The transmitted messages were not identified by the node address of the transmitter or the receiver of the message (as in almost all other bus systems), but rather by their content. The identifier representing the content of the message also had the function of specifying the priority of the message within the system.
A lot of presentations and publications describing this innovative communication protocol followed, until in mid 1987 – two months ahead of schedule – Intel delivered the first CAN controller chip, the 82526.

It was the very first hardware implementation of the CAN protocol. In only four years, an idea had become reality. Shortly thereafter, Philips Semiconductors introduced the 82C200. These two earliest ancestors of the CAN controllers were quite different concerning acceptance filtering and message handling. On one hand, the FullCAN concept favored by Intel required less CPU load from the connected micro-controller than the BasicCAN implementation chosen by Philips. On the other hand, the FullCAN device was limited regarding the number of messages that could be received. The BasicCAN controller also required less silicon. In today’s CAN controllers, the ‘grandchildren’, very often different concepts of acceptance filtering and message handling have been implemented in the same module, making the misleading terms BasicCAN and FullCAN obsolete.


Communication is identical for all implementations of CAN. However, there are two principal hardware implementations. The two implementations are known as Basic CAN and Full CAN.
Basic CAN
In Basic CAN configurations there is a tight link between the CAN controller and the associated microcontroller. The microcontroller, which will have other system-related functions to administer, will be interrupted to deal with every CAN message.

Full CAN
Full CAN devices contain additional hardware to provide a message "server" that automatically receives and transmits CAN messages without interrupting the associated microcontroller. Full CAN devices carry out extensive acceptance filtering on incoming messages, service simultaneous requests, and generally reduce the load on the microcontroller.
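As a rough model (not from the original text; the mask and code values are hypothetical), the acceptance filtering done in hardware by Full CAN devices can be sketched as a mask/code comparison on the identifier:

```python
def accepts(identifier, code, mask):
    """Accept a message when the identifier bits selected by the mask
    match the filter code (bits where mask is 1 must match exactly)."""
    return (identifier ^ code) & mask == 0

# Hypothetical filter: accept the 11-bit identifiers 0x100-0x107
# (the upper eight bits must match; the lowest three are "don't care").
MASK, CODE = 0x7F8, 0x100
print(accepts(0x103, CODE, MASK))  # True  - within the accepted range
print(accepts(0x200, CODE, MASK))  # False - filtered out; the CPU is never interrupted
```

Real controllers implement this comparison in hardware, which is exactly how they spare the microcontroller an interrupt for irrelevant messages.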
Network sizes
The number of nodes that can exist on a single network is, theoretically, limited only by the number of available identifiers. However, the drive capabilities of currently available devices impose greater restrictions. Depending on the device types, up to 32 or 64 nodes per network is normal, but at least one manufacturer now provides devices that will allow networks of 110 nodes.



Data messages transmitted from any node on a CAN bus do not contain addresses of either the transmitting node or of any intended receiving node. Instead, the content of the message (e.g. Revolutions per Minute, Hopper Full, X-ray Dosage, etc.) is labeled by an identifier that is unique throughout the network. All other nodes on the network receive the message and each performs an acceptance test on the identifier to determine if the message, and thus its content, is relevant to that particular node. If the message is relevant, it will be processed; otherwise it is ignored. The unique identifier also determines the priority of the message. The lower the numerical value of the identifier, the higher the priority. In situations where two or more nodes attempt to transmit at the same time, a non-destructive arbitration technique guarantees that messages are sent in order of priority and that no messages are lost.
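The arbitration behaviour described above can be sketched in Python (an illustrative model, not a real bus: the bus level is the wired-AND of all transmitted bits, with dominant = 0):

```python
def arbitrate(identifiers, width=11):
    """Simulate non-destructive bitwise arbitration: nodes transmit
    their identifiers MSB-first; a node drops out as soon as it sends
    a recessive bit (1) but reads back a dominant bit (0)."""
    contenders = list(identifiers)
    for bit in range(width - 1, -1, -1):
        bus = min((ident >> bit) & 1 for ident in contenders)  # wired-AND of all bits
        # Nodes whose transmitted bit differs from the bus level stop transmitting.
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]  # the surviving message

# Three nodes start transmitting at the same time:
winner = arbitrate([0x65A, 0x123, 0x7FF])
print(hex(winner))  # 0x123 - the lowest identifier has the highest priority
```

Because the losing nodes merely stop transmitting and retry later, no message is destroyed: the highest-priority message goes through undisturbed.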

Bit encoding
CAN uses Non-Return-to-Zero (NRZ) encoding (with bit stuffing) for data communication on a differential two-wire bus. The use of NRZ encoding ensures compact messages with a minimum number of transitions and high resilience to external disturbance.

The physical bus
The two-wire bus is usually a twisted pair (shielded or unshielded). Flat pair (telephone-type) cable also performs well but generates more noise itself, and may be more susceptible to external sources of noise.

The CAN protocol is an international standard defined in ISO 11898. Besides the CAN protocol itself, the conformance test for the CAN protocol is defined in ISO 16845, which guarantees the interchangeability of CAN chips.

CAN is based on a broadcast communication mechanism with a message-oriented transmission protocol. It defines message contents rather than stations and station addresses. Every message has a message identifier, which is unique within the whole network and defines both the content and the priority of the message.

Content-oriented addressing is particularly important when several stations compete for bus access (bus arbitration). It allows for a modular concept and also permits the reception of multiple data and the synchronization of distributed processes. Also, data transmission is not based on the availability of specific types of stations, which allows simple servicing and upgrading of the network.

Message formats

CAN distinguishes four message formats: data, remote, error, and overload frames. Here we limit the discussion to the data frame. A data frame begins with the start-of-frame (SOF) bit. It is followed by an eleven-bit identifier and the remote transmission request (RTR) bit. The identifier and the RTR bit form the arbitration field. The control field consists of six bits and indicates how many bytes of data follow in the data field. The data field can be zero to eight bytes. The data field is followed by the cyclic redundancy checksum (CRC) field, which enables the receiver to check if the received bit sequence was corrupted. The two-bit acknowledgment (ACK) field is used by the transmitter to receive an acknowledgment of a valid frame from any receiver.

The end of a message frame is signalled through a seven-bit end-of-frame (EOF). There is also an extended data frame with a twenty-nine-bit identifier (instead of eleven bits). The CAN protocol was internationally standardized in 1993 as ISO 11898-1. The development of CAN was mainly motivated by the need for new functionality, but it also reduced the amount of wiring needed. The use of CAN in the automotive industry has led to mass production of CAN controllers. Today, CAN controllers are integrated on many microcontrollers and available at low cost.


The main properties of CAN include:

• Any node can access the bus when the bus is quiet
• Non-destructive bit-wise arbitration, allowing 100% use of the bandwidth without loss of data
• Variable message priority based on the 11-bit (or 29-bit) packet identifier
• Peer-to-peer and multicast reception
• Automatic error detection, signalling and retries
• Data packets up to 8 bytes long
Error detection and error handling are important for the performance of CAN. Because of complementary error detection mechanisms, the probability of an undetected error is very small. Error detection is done in five different ways in CAN: bit monitoring, bit stuffing, frame check, ACK check, and CRC. Bit monitoring simply means that each transmitter monitors the bus level and signals a bit error if the level does not agree with the transmitted signal. (Bit monitoring is not done during the arbitration phase.)

After transmitting five identical bits, a node always transmits the opposite bit; this extra bit is removed by the receiver. The procedure, called bit stuffing, can also be used to detect errors. The frame check consists of checking that the fixed bits of the frame have the values they are supposed to have, e.g., that the EOF consists of seven recessive bits. During the ACK slot in the message frame, all receivers are supposed to send a dominant level. If the transmitter, which transmits a recessive level, does not detect the dominant level, an error is signalled by the ACK check mechanism.
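The stuffing rule described above can be sketched in Python. This is illustrative code, not a protocol implementation; a real CAN receiver would additionally flag six equal consecutive bits as a stuff error rather than silently removing them:

```python
def stuff(bits):
    """Insert a complementary bit after every run of five equal bits."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            s = 1 - b
            out.append(s)                 # stuff bit: the opposite level
            run_bit, run_len = s, 1       # the stuff bit starts a new run
    return out

def destuff(bits):
    """Remove the stuff bits again (no error checking in this sketch)."""
    out, run_bit, run_len, i = [], None, 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5 and i + 1 < len(bits):
            i += 1                        # skip the stuff bit
            run_bit, run_len = bits[i], 1
        i += 1
    return out

print(stuff([1, 1, 1, 1, 1]))  # -> [1, 1, 1, 1, 1, 0]
```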

Finally, the CRC check means that every receiver calculates a checksum based on the message and compares it with the CRC field of the message. Every receiver node thus tries to detect errors within each message. If an error is detected, the incorrect message is immediately and automatically retransmitted. Compared with other network protocols, this mechanism leads to high data integrity and a short error recovery time. CAN thus provides elaborate procedures for error handling, including retransmission and reinitialization. These procedures have to be studied carefully for each application to ensure that the automated error handling is in line with the system requirements.
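The CRC check can be illustrated with the commonly cited CAN CRC-15 polynomial, 0x4599. This is a sketch of the shift-register computation; the register-level details of real controllers differ:

```python
CAN_CRC15_POLY = 0x4599  # x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1

def crc15(bits):
    """Shift-register CRC-15 over a sequence of 0/1 bits (initial value 0)."""
    crc = 0
    for b in bits:
        feedback = b ^ ((crc >> 14) & 1)
        crc = (crc << 1) & 0x7FFF
        if feedback:
            crc ^= CAN_CRC15_POLY
    return crc

# A receiver recomputes the CRC over message + CRC field; a remainder of
# zero means no corruption was detected.
message = [1, 0, 1, 1, 0, 0, 1, 0, 1]
checksum = crc15(message)
crc_field = [(checksum >> i) & 1 for i in range(14, -1, -1)]  # MSB first
print(crc15(message + crc_field))  # -> 0
```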


CAN networks can be used as an embedded communication system for microcontrollers as well as an open communication system for intelligent devices. The CAN serial bus system, originally developed for use in automobiles, is increasingly being used in industrial fieldbus systems, and the similarities between the two domains are remarkable: in both cases, the major requirements include low cost, the ability to function in a difficult electrical environment, a high degree of real-time capability, and ease of use. Some users, for example in the field of medical engineering, opted for CAN because they have to meet particularly stringent safety requirements.

Similar problems are faced by manufacturers of other equipment with very high safety or reliability requirements (e.g. robots, lifts and transportation systems). CAN controllers and interface chips are physically small. They are available as low-cost, off-the-shelf components, operate at high, real-time speeds, and work in harsh environments. All these properties have led to CAN being used in a wide range of applications beyond the car industry. The benefits of reduced cost and improved reliability that the car industry gains by using CAN are now available to manufacturers of a wide range of products.

For example:

• Marine control and navigation systems
• Elevator control systems
• Agricultural machinery
• Production line control systems
• Machine tools
• Large optical telescopes
• Photocopiers
• Medical systems
• Paper making and processing machinery


The Controller Area Network (CAN) is a serial bus communications protocol developed by Bosch in the early 1980s. It defines a standard for efficient and reliable communication between sensor, actuator, controller, and other nodes in real-time applications. CAN is the de facto standard in a large variety of networked embedded control systems.

The early CAN development was mainly supported by the vehicle industry: CAN is found in a variety of passenger cars, trucks, boats, spacecraft, and other types of vehicles. The protocol is also widely used today in industrial automation and other areas of networked embedded control, with applications in diverse products such as production machinery, medical equipment, building automation, weaving machines, and wheelchairs. In the automotive industry, embedded control has grown from stand-alone systems to highly integrated and networked control systems. By networking electro-mechanical subsystems, it becomes possible to modularize functionalities and hardware, which facilitates reuse and adds capabilities. Fig. 1 shows an example of an electronic control unit (ECU) mounted on a diesel engine of a Scania truck. The ECU handles the control of engine, turbofan, etc., but also the CAN communication. Combining networks and mechatronic modules makes it possible to reduce both the cabling and the number of connectors, which facilitates production and increases reliability. Introducing networks in vehicles also makes it possible to carry out diagnostics more efficiently and to coordinate the operation of the separate subsystems.

Heavy Vehicles

Most existing vehicle model libraries are designed primarily for cars. Heavy vehicles have a number of subsystems that are not present in passenger cars; in particular, the engine/transmission system includes devices like an exhaust brake and possibly a retarder. Furthermore, the cooling system plays a more prominent role than in cars, and coolant is often used by both the engine and the transmission.

Signalling Bus

A key issue in an architecture that contains both physical plant and controller models is the handling of electrical signals. The controllers need to exchange data among themselves, and they need to exchange signals with sensors and actuators. For our applications the actual signalling behaviour is not that important; an ideal communications model is sufficient. For the communication between a plant and its controller, standard library in-ports and out-ports are used. The communication between the controllers was a tougher case. Two implementations of the same controller may not have the same signalling needs, so it must be possible to change the set of signals sent between control units. Separate input and output ports for all links between control units in the vehicle would create an undecipherable graphical mess; some type of signalling bus is needed. Both the standard library bus connectors and the type of bus used in the vehicle modelling architecture proposed by Tiller et al. were evaluated. We did not find enough information about the inter-controller communication in the Tiller paper to implement that system. Our main problem was to find a way of having compatible connectors in all controllers without modifying the code of every controller when a signal was added to the bus. The standard library bus does not solve that problem, since it requires all signals to be declared in the connector. Eventually we chose a simpler solution based on a common connector called "CAN" with a replaceable variable, called "protocol", which contains all the signals. The protocol variable can easily be redeclared into a type that contains exactly the signals broadcast on the bus in a particular model. Different implementations of the CAN connector are used for different signal buses in the vehicle.

Most of our control units are implemented through external function calls, so the drawback of having no convenient graphical way of converting a signal from in-port/out-port to bus format is minor.

Modern Communication Services


Society is becoming more information- and visually-oriented every day. Personal computing facilitates easy access, manipulation, storage, and exchange of information, and these processes require reliable data transmission. Communicating documents by images, and the use of high-resolution graphics terminals, provide a more natural and informative mode of human interaction than voice and data alone. Video teleconferencing enhances group interaction at a distance. High-definition entertainment video improves picture quality at the expense of higher transmission bit-rates, which may require new transmission means other than the present overcrowded radio spectrum. A modern telecommunications network (such as a broadband network) must provide all these different services (multi-services) to the user.
Differences between traditional (telephony) and modern communication services

Conventional telephony:

* uses the voice medium only
* connects only two telephones per call
* uses circuits of fixed bit-rate

In contrast, modern communication services depart from conventional telephony in three essential respects. Modern communication services can be:

* multi-media
* multi-point, and
* multi-rate

These aspects are examined individually in the following three sub-sections.

* Multi-media: A multi-media call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication qualities, such as:
o bandwidth requirement
o signal latency within the network, and
o signal fidelity upon delivery by the network

Moreover, the information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.

* Multi-point: A multi-point call involves the setup of connections among more than two people. These connections can be multi-media, can be one-way or two-way communications, and may be reconfigured many times within the duration of a call. A few examples will contrast point-to-point with multi-point communications. Traditional voice calls are predominantly two-party calls, requiring a point-to-point connection using only the voice medium. Accessing pictorial information in a remote database requires a point-to-point connection that sends low bit-rate queries to the database and high bit-rate video from the database. Entertainment video applications are largely point-to-multi-point connections, requiring one-way communication of full-motion video and audio from the program source to the viewers. Video teleconferencing involves connections among many parties, communicating voice, video, as well as data. Thus, offering future services requires flexible management of the connection and media requests of a multi-point, multi-media communication call.
* Multi-rate: A multi-rate service network is one that allocates transmission capacity flexibly to connections. A multi-media network has to support a broad range of bit-rates demanded by connections, not only because there are many communication media, but also because a communication medium may be encoded by algorithms with different bit-rates. For example, audio signals can be encoded with bit-rates ranging from less than 1 kbit/s to hundreds of kbit/s, using encoding algorithms with a wide range of complexity and quality of audio reproduction. Similarly, full-motion video signals may be encoded with bit-rates ranging from less than 1 Mbit/s to hundreds of Mbit/s. Thus a network transporting both video and audio signals may have to integrate traffic with a very broad range of bit-rates.

Light tree


The concept of the light tree is introduced in a wavelength-routed optical network, which employs wavelength-division multiplexing (WDM).
Depending on the underlying physical topology, networks can be classified into three generations:

a) First generation: these networks do not employ fiber-optic technology; instead they employ copper-based or microwave technology, e.g. Ethernet.
b) Second generation: these networks use optical fibers for data transmission, but switching is performed in the electronic domain, e.g. FDDI.
c) Third generation: in these networks both data transmission and switching are performed in the optical domain, e.g. WDM.

WDM wide area networks employ tunable lasers and filters at access nodes and optical/electronic switches at routing nodes. An access node may transmit signals on different wavelengths, which are coupled into the fiber using wavelength multiplexers. An optical signal passing through an optical wavelength-routing switch (WRS) may be routed from an input fiber to an output fiber without undergoing opto-electronic conversion.

A light path is an all-optical channel, which may be used to carry circuit-switched traffic, and it may span multiple fiber links. A light path is set up by assigning a particular wavelength to it on each of these links. In the absence of wavelength converters, a light path must occupy the same wavelength on all the fiber links it traverses; this is known as the wavelength continuity constraint.

A light path can create logical (or virtual) neighbors out of nodes that may be geographically far apart from each other. A light path carries not only the direct traffic between the nodes it interconnects, but also traffic from nodes upstream of the source to nodes downstream of the destination. A major objective of light path communication is to reduce the number of hops a packet has to traverse.

Under light path communication, the network employs an equal number of transmitters and receivers because each light path operates on a point-to-point basis. However, this approach cannot fully utilize all of the wavelengths on all of the fiber links in the network, nor can it fully exploit the switching capability of each WRS.
A light tree is a point-to-multipoint all-optical channel, which may span multiple fiber links. Hence, a light tree enables single-hop communication between a source node and a set of destination nodes. Thus, a light-tree-based virtual topology can significantly reduce the hop distance, thereby increasing the network throughput.

Implementing light trees requires:

1. Multicast-capable wavelength-routing switches (MWRS) at every node in the network.
2. More optical amplifiers in the network.
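The transmitter-count argument above can be made concrete with a toy calculation. This is purely illustrative (not a network model); it only restates that a point-to-point light path needs one source transmitter per destination, whereas one light tree reaches all destinations:

```python
def transmitters_lightpaths(destinations):
    # One point-to-point light path (hence one source transmitter) per destination.
    return len(destinations)

def transmitters_lighttree(destinations):
    # A single point-to-multipoint light tree reaches every destination.
    return 1 if destinations else 0

dests = ["B", "C", "D", "E"]
print(transmitters_lightpaths(dests), transmitters_lighttree(dests))  # -> 4 1
```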



LIDAR (Light Detection and Ranging) is an optical remote-sensing technology that measures properties of scattered light to find the range and/or other information about a distant target. The prevalent method of determining distance to an object or surface is to use laser pulses. As in the similar radar technology, which uses radio waves instead of light, the range to an object is determined by measuring the time delay between transmission of a pulse and detection of the reflected signal. LIDAR technology has applications in geomatics, archaeology, geography, geology, geomorphology, seismology, remote sensing and atmospheric physics.[1] Other terms for LIDAR include ALSM (Airborne Laser Swath Mapping) and laser altimetry. The acronym LADAR (Laser Detection and Ranging) is often used in military contexts. The term laser radar is also in use, but is misleading because such systems use laser light and not the radio waves that are the basis of conventional radar.
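The pulsed time-of-flight principle behind laser ranging is simple enough to state in code (a minimal sketch; the function name is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(delay_s):
    """Distance to the target from the round-trip time of a laser pulse.
    The pulse travels out and back, hence the division by two."""
    return C * delay_s / 2.0

# A pulse echo arriving 2 microseconds after transmission:
print(range_from_delay(2e-6))  # -> 299.792458 metres
```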

General description

The primary difference between lidar and radar is that lidar uses much shorter wavelengths of the electromagnetic spectrum, typically in the ultraviolet, visible, or near infrared. In general, it is only possible to image a feature or object of about the same size as the wavelength or larger. Lidar is thus highly sensitive to aerosols and cloud particles and has many applications in atmospheric research and meteorology.

An object needs to produce a dielectric discontinuity in order to reflect the transmitted wave. At radar (microwave or radio) frequencies, a metallic object produces a significant reflection. However, non-metallic objects, such as rain and rocks, produce weaker reflections, and some materials may produce no detectable reflection at all, meaning some objects or features are effectively invisible at radar frequencies. This is especially true for very small objects (such as single molecules and aerosols).

Lasers provide one solution to these problems. The beam densities and coherence are excellent. Moreover, the wavelengths are much smaller than can be achieved with radio systems, ranging from about 10 micrometers down to the UV (ca. 250 nm). At such wavelengths, the waves are "reflected" very well from small objects. This type of reflection is called backscattering. Different types of scattering are used for different lidar applications; the most common are Rayleigh scattering, Mie scattering and Raman scattering, as well as fluorescence. These wavelengths are ideal for making measurements of smoke and other airborne particles (aerosols), clouds, and air molecules.

A laser typically has a very narrow beam which allows the mapping of physical features with very high resolution compared with radar. In addition, many chemical compounds interact more strongly at visible wavelengths than at microwaves, resulting in a stronger image of these materials. Suitable combinations of lasers can allow for remote mapping of atmospheric contents by looking for wavelength-dependent changes in the intensity of the returned signal.

Lidar has been used extensively for atmospheric research and meteorology. With the deployment of GPS in the 1980s, precision positioning of aircraft became possible. GPS-based surveying technology has made airborne surveying and mapping applications possible and practical, and many such systems have been developed using downward-looking lidar instruments mounted in aircraft or satellites. A recent example is NASA's Experimental Advanced Research Lidar.

LIDAR is an acronym for LIght Detection And Ranging.
What can you do with LIDAR?

* Measure distance
* Measure speed
* Measure rotation
* Measure chemical composition and concentration

of a remote target where the target can be a clearly defined object, such as a vehicle, or a diffuse object such as a smoke plume or clouds.


Beyond the applications mentioned above, LIDAR is used in a wide variety of other fields.


LiDAR has many applications in the field of archaeology, including aiding in the planning of field campaigns, mapping features beneath forest canopy, and providing an overview of broad, continuous features that may be indistinguishable on the ground. LiDAR can also provide archaeologists with the ability to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. LiDAR-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation. For example, at Fort Beausejour - Fort Cumberland National Historic Site, Canada, previously undiscovered archaeological features related to the siege of the Fort in 1755 have been mapped. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hillshades of the DEM created with artificial illumination from various angles. LiDAR's ability to produce high-resolution datasets quickly and relatively cheaply is a further advantage. Beyond efficiency, its ability to penetrate forest canopy has led to the discovery of features that were not distinguishable through traditional geo-spatial methods and are difficult to reach through field surveys.


The first LIDARs were used for studies of atmospheric composition, structure, clouds, and aerosols. Initially based on ruby lasers, LIDARs for meteorological applications were constructed shortly after the invention of the laser and represent one of the first applications of laser technology.

Elastic backscatter LIDAR is the simplest type of lidar and is typically used for studies of aerosols and clouds. The backscattered wavelength is identical to the transmitted wavelength, and the magnitude of the received signal at a given range depends on the backscatter coefficient of scatterers at that range and the extinction coefficients of the scatterers along the path to that range. The extinction coefficient is typically the quantity of interest.
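The dependence just described is the standard single-scattering lidar equation, P(r) = K · β(r)/r² · exp(−2∫₀ʳ α(r′) dr′), where β is the backscatter coefficient and α the extinction coefficient. A rough numerical sketch follows; the system constant K and the crude rectangle-rule quadrature are illustrative assumptions:

```python
import math

def received_power(r, beta, alpha, K=1.0, dr=1.0):
    """Single-scattering elastic lidar equation (textbook sketch):
        P(r) = K * beta(r) / r**2 * exp(-2 * integral_0^r alpha(r') dr')
    beta and alpha are callables giving the backscatter / extinction
    profiles; K lumps pulse energy, receiver area and efficiency."""
    n = int(r / dr)
    optical_depth = sum(alpha(i * dr) for i in range(n)) * dr  # crude quadrature
    return K * beta(r) / r**2 * math.exp(-2.0 * optical_depth)

# Homogeneous atmosphere: the return falls off as exp(-2*alpha*r) / r^2.
p = received_power(1000.0, beta=lambda r: 1e-6, alpha=lambda r: 1e-4)
```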

Differential Absorption LIDAR (DIAL) is used for range-resolved measurements of a particular gas in the atmosphere, such as ozone, carbon dioxide, or water vapor. The LIDAR transmits two wavelengths: an "on-line" wavelength that is absorbed by the gas of interest and an "off-line" wavelength that is not. The differential absorption between the two wavelengths is a measure of the concentration of the gas as a function of range. DIAL LIDARs are essentially dual-wavelength elastic backscatter LIDARs.
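The DIAL retrieval follows the standard relation n(r) = (1/2Δσ) · d/dr ln[P_off(r)/P_on(r)], where Δσ is the differential absorption cross-section. A finite-difference sketch (variable names are illustrative):

```python
import math

def dial_concentration(p_on, p_off, dr, delta_sigma):
    """Range-resolved gas number density from the standard DIAL equation:
        n(r) = 1/(2*delta_sigma) * d/dr ln(P_off(r) / P_on(r))
    p_on / p_off: received-power samples spaced dr metres apart;
    delta_sigma: on-line minus off-line absorption cross-section."""
    densities = []
    for i in range(len(p_on) - 1):
        dlog = (math.log(p_off[i + 1] / p_on[i + 1])
                - math.log(p_off[i] / p_on[i]))
        densities.append(dlog / (2.0 * delta_sigma * dr))
    return densities
```

With a synthetic on-line signal that halves every range bin while the off-line signal is flat, the retrieval returns the constant density ln(2)/(2Δσ·dr) in every bin.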

Raman LIDAR is also used for measuring the concentration of atmospheric gases, but can also be used to retrieve aerosol parameters as well. Raman LIDAR exploits inelastic scattering to single out the gas of interest from all other atmospheric constituents. A small portion of the energy of the transmitted light is deposited in the gas during the scattering process, which shifts the scattered light to a longer wavelength by an amount that is unique to the species of interest. The higher the concentration of the gas, the stronger the magnitude of the backscattered signal.

Doppler LIDAR is used to measure wind speed along the beam by measuring the frequency shift of the backscattered light. Scanning LIDARs, such as NASA's HARLIE LIDAR, have been used to measure atmospheric wind velocity in a large three-dimensional cone. ESA's wind mission ADM-Aeolus will be equipped with a Doppler LIDAR system in order to provide global measurements of vertical wind profiles. A Doppler LIDAR system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition. Doppler LIDAR systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer and wind shear data. Both pulsed and continuous-wave systems are being used: pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous-wave systems rely on detector focusing.
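The frequency-to-speed conversion is the usual two-way Doppler relation, Δf = 2v/λ. A one-line sketch (the example wavelength and shift are made-up numbers):

```python
def radial_wind_speed(wavelength_m, doppler_shift_hz):
    """Line-of-sight wind speed from the Doppler shift of the backscatter.
    The factor of two accounts for the shift occurring on both the outgoing
    and the returning path: delta_f = 2*v/lambda."""
    return wavelength_m * doppler_shift_hz / 2.0

# A 1.55 um coherent lidar observing a 12.9 MHz shift:
v = radial_wind_speed(1.55e-6, 12.9e6)   # ~10 m/s along the beam
```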


In geology and seismology, a combination of aircraft-based LIDAR and GPS has evolved into an important tool for detecting faults and measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, USA. It is also being used to measure uplift at Mt. St. Helens by comparing data from before and after the 2004 uplift. Airborne LIDAR systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system is NASA's ICESat, which includes a LIDAR system for this purpose. NASA's Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis.

Physics and Astronomy

A world-wide network of observatories uses lidars to measure the distance to reflectors placed on the moon, allowing the moon's position to be measured with mm precision and tests of general relativity to be done. MOLA, the Mars Orbiting Laser Altimeter, used a LIDAR instrument in a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet.

In September, 2008, NASA's Phoenix Lander used LIDAR to detect snow in the atmosphere of Mars.

In atmospheric physics, LIDAR is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. LIDAR can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles.

At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, LIDAR Thomson scattering is used to determine electron density and temperature profiles of the plasma.

Biology and conservation

LIDAR has also found many applications in forestry: canopy heights, biomass measurements, and leaf area can all be studied using airborne LIDAR systems. Similarly, LIDAR is used by many industries, including energy and railroad companies and departments of transportation, as a faster way of surveying. Topographic maps can also be generated readily from LIDAR data, including for recreational uses such as the production of orienteering maps.

In oceanography, LiDAR is used for estimation of phytoplankton fluorescence and generally biomass in the surface layers of the ocean. Another application is airborne lidar bathymetry of sea areas too shallow for hydrographic vessels.

Redwood ecology

The Save-the-Redwoods League is undertaking a project to map the tall redwoods on California's northern coast. LIDAR allows research scientists not only to measure the height of previously unmapped trees, but also to determine the biodiversity of the redwood forest. Dr. Stephen Sillett, who is working with the League on the North Coast LIDAR project, claims this technology will be useful in directing future efforts to preserve and protect ancient redwood trees.

Military and law enforcement

One situation where LIDAR has a notable non-scientific application is in traffic speed law enforcement, where it is used for vehicle speed measurement as an alternative to radar guns. The technology for this application is small enough to be mounted in a hand-held camera "gun" and permits a particular vehicle's speed to be determined from a stream of traffic. Unlike radar, which relies on Doppler shifts to measure speed directly, police lidar relies on the principle of time-of-flight to calculate speed. The equivalent radar-based systems are often not able to isolate particular vehicles from the traffic stream and are generally too large to be hand held. LIDAR has the distinct advantage of being able to pick out one vehicle in a cluttered traffic situation, as long as the operator is aware of the limitations imposed by the range and beam divergence. Contrary to popular belief, LIDAR does not suffer from "sweep" error when the operator uses the equipment correctly and when the LIDAR unit is equipped with algorithms able to detect when sweep has occurred. A combination of signal strength monitoring, receive gate timing, target position prediction and pre-filtering of the received signal wavelength prevents this from occurring. Should the beam illuminate sections of the vehicle with different reflectivity, or should the aspect of the vehicle change during measurement such that the received signal strength changes, the LIDAR unit will reject the measurement, thereby producing speed readings of high integrity. For LIDAR units to be used in law enforcement applications, a rigorous approval procedure is usually completed before deployment. Jelly-bean-shaped vehicles are usually equipped with a vertical registration plate that, when illuminated, returns a high-integrity reflection to the LIDAR; many reflections and an averaging technique in the speed measurement process increase the integrity of the speed reading.
In locations that do not require a front or rear registration plate to be fitted, headlamps and rear reflectors provide an almost ideal retro-reflective surface, overcoming the reflections from uneven or non-compliant reflective surfaces and thereby eliminating "sweep" error. Concern that LIDAR is somehow unreliable stems largely from the proprietary nature of this processing: most traffic LIDAR systems send out a stream of approximately 100 pulses over the span of three-tenths of a second, and a "black box" proprietary statistical algorithm picks and chooses which progressively shorter reflections to retain from the pulses over that fraction of a second.
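The pulse-train speed estimate amounts to fitting a slope to range-versus-time samples. The following is a hedged sketch of that idea only; as noted above, the actual algorithms in deployed units are proprietary:

```python
def speed_from_ranges(times_s, ranges_m):
    """Least-squares slope of range vs. time over a pulse train, the
    time-of-flight analogue of the stream of ~100 pulses described above.
    A negative slope means the vehicle is approaching."""
    n = len(times_s)
    mt = sum(times_s) / n
    mr = sum(ranges_m) / n
    num = sum((t - mt) * (r - mr) for t, r in zip(times_s, ranges_m))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den

# A vehicle closing at 30 m/s (~108 km/h), sampled 100 times over 0.3 s:
ts = [i * 0.003 for i in range(100)]
rs = [120.0 - 30.0 * t for t in ts]
print(round(speed_from_ranges(ts, rs), 1))  # -> -30.0
```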

Military applications are not yet known to be in place and are possibly classified, but a considerable amount of research is underway in their use for imaging. Their higher resolution makes them particularly good for collecting enough detail to identify targets, such as tanks. Here the name LADAR is more common.

Five LIDAR units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge.


Lidar has been used to create Adaptive Cruise Control (ACC) systems for automobiles. Systems such as those by Siemens and Hella use a lidar device mounted in the front of the vehicle, often on the bumper, to monitor the distance between the vehicle and any vehicle in front of it. If the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to accelerate to the speed preset by the driver.


3-D imaging is done with both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser radar system that applies the so-called gated viewing technique, which uses a pulsed laser and a fast gated camera. There are ongoing military research programmes in Sweden, Denmark, the USA and the UK on 3-D gated viewing imaging at ranges of several kilometers, with range resolution and accuracy better than ten centimeters.

Coherent Imaging Lidar is possible using Synthetic Array Heterodyne Detection which is a form of Optical heterodyne detection that enables a staring single element receiver to act as though it were an imaging array.

Imaging LIDAR can also be performed using arrays of high-speed detectors and modulation-sensitive detector arrays, typically built on single chips using CMOS and hybrid CMOS/CCD fabrication techniques. In these devices each pixel performs some local processing, such as demodulation or gating at high speed, down-converting the signals to video rate so that the array may be read like a camera. Using this technique many thousands of pixels/channels may be acquired simultaneously. In practical systems the limitation is the light budget rather than parallel acquisition.

LIDAR has been used in the recording of a music video without cameras. The video for the song "House of Cards" by Radiohead is believed to be the first use of real-time 3D laser scanning to record a music video.

3D Mapping

Airborne LIDAR sensors are used by companies in the remote-sensing field to create point clouds of the Earth's surface for further processing (e.g. in forestry).


Radar is an object-detection system that uses electromagnetic waves to identify the range, altitude, direction, or speed of both moving and fixed objects such as aircraft, ships, motor vehicles, weather formations, and terrain. The term RADAR was coined in 1941 as an acronym for radio detection and ranging; it has since entered the English language as a standard word, radar, losing the capitalization. Radar was originally called RDF (Radio Direction Finder, a name now used for a quite different device) in the United Kingdom.

A radar system has a transmitter that emits microwaves or radio waves. These waves are in phase when emitted, and when they strike an object they are scattered in all directions. Part of the signal is thus reflected back, and if the target is moving the echo undergoes a slight change of wavelength (and thus frequency). The receiver is usually, but not always, in the same location as the transmitter. Although the returned signal is usually very weak, it can be amplified through the use of electronic techniques in the receiver and in the antenna configuration. This enables radar to detect objects at ranges where other emissions, such as sound or visible light, would be too weak to detect. Radar uses include meteorological detection of precipitation, measuring ocean surface waves, air traffic control, police detection of speeding traffic, measuring the speed of baseballs, and military applications.
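The frequency change from a moving target is the Doppler shift. A minimal sketch, assuming a monostatic radar and a non-relativistic target (the function name is invented for illustration):

```python
def doppler_shift_hz(radial_velocity_mps, carrier_freq_hz):
    """Doppler shift of an echo from a target moving at
    radial_velocity_mps (positive = closing).  The factor of 2
    arises because the wave is shifted on both legs of the
    round trip: f_d = 2 * v / wavelength."""
    c = 299_792_458.0  # speed of light, m/s
    wavelength = c / carrier_freq_hz
    return 2.0 * radial_velocity_mps / wavelength
```

At X-band (10 GHz, 3 cm wavelength), a target closing at 30 m/s produces a shift of about 2 kHz, small compared with the carrier but easily measured by a coherent receiver.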

RAdio Detection And Ranging ,in short RADAR relies on sending and receiving electromagnetic radiation, usually in the form of radio waves (see Radio) or microwaves. Electromagnetic radiation is energy that moves in waves at or near the speed of light. The characteristics of electromagnetic waves depend on their wavelength. Gamma rays and X rays have very short wavelengths. Visible light is a tiny slice of the electromagnetic spectrum with wavelengths longer than X rays, but shorter than microwaves. Radar systems use long-wavelength electromagnetic radiation in the microwave and radio ranges. Because of their long wavelengths, radio waves and microwaves tend to reflect better than shorter wavelength radiation, which tends to scatter or be absorbed before it gets to the target. Radio waves at the long-wavelength end of the spectrum will even reflect off of the atmospheres ionosphere, a layer of electrically-charged particles in the earths atmosphere. The challenges for radar are stealth technology,clutter,jamming. It has certain applications like the traffic control,maritime navigation,millitary safety,air traffic control,meteorology etc.


Several inventors, scientists, and engineers contributed to the development of radar. The first to use radio waves to detect "the presence of distant metallic objects" was Christian Hülsmeyer, who in 1904 demonstrated the feasibility of detecting the presence of a ship in dense fog, but not its distance. He received Reichspatent Nr. 165546 for his pre-radar device in April 1904, and later patent 169154 for a related amendment for ranging. He also received a patent[9] in England for his telemobiloscope on September 22, 1904.

In August 1917 Nikola Tesla first established principles regarding frequency and power level for the first primitive radar units. He stated, " by their [standing electromagnetic waves] use we may produce at will, from a sending station, an electrical effect in any particular region of the globe; [with which] we may determine the relative position or course of a moving object, such as a vessel at sea, the distance traversed by the same, or its speed."

Before the Second World War, developments by the Americans, the Germans, the French, the Soviets, and the British led to the modern version of radar. In 1934 the Frenchman Émile Girardeau stated he was building a radar system "conceived according to the principles stated by Tesla" and obtained a patent (French Patent n° 788795 in 1934) for a working dual radar system, a part of which was installed on the liner Normandie in 1935. The same year, the American Dr. Robert M. Page tested the first monopulse radar, and the Soviet military engineer P. K. Oschepkov, in collaboration with the Leningrad Electrophysical Institute, produced an experimental apparatus, RAPID, capable of detecting an aircraft within 3 km of a receiver.[16] In the same vein, the Hungarian Zoltán Bay produced a working model by 1936 at the Tungsram laboratory.

However, it was the British who were the first to fully exploit radar as a defence against aircraft attack. This was spurred on by fears that the Germans were developing death rays. British scientists asked by the Air Ministry to investigate concluded, after a study of the possibility of propagating electromagnetic energy and its likely effects, that a death ray was impractical but that detection of aircraft appeared feasible. Robert Watson-Watt demonstrated to his superiors the capabilities of a working prototype and patented the device in 1935 (British Patent GB593017). It served as the basis for the Chain Home network of radars to defend Great Britain.

The war precipitated research to find better resolution, more portability and more features for radar. The post-war years have seen the use of radar in fields as diverse as air traffic control, weather monitoring, astrometry and road speed control.


The radar dish, or antenna, transmits pulses of radio waves or microwaves which bounce off any object in their path. The object returns a tiny part of the wave's energy to a dish or antenna which is usually located at the same site as the transmitter. The time it takes for the reflected waves to return to the dish enables a computer to calculate how far away the object is, its radial velocity and other characteristics.
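The basic time-of-flight calculation behind this ranging is straightforward; the fragment below is an illustrative sketch, not the processing chain of any particular radar:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_echo_delay(delay_s):
    """The pulse travels out and back, so the one-way range is
    half the round-trip distance: R = c * t / 2."""
    return C * delay_s / 2.0

def max_unambiguous_range(prf_hz):
    """Echoes arriving after the next pulse has been sent are
    ambiguous, so a pulse repetition frequency of prf_hz limits
    unambiguous range to c / (2 * PRF)."""
    return C / (2.0 * prf_hz)
```

A 1 ms echo delay, for example, corresponds to a target roughly 150 km away, which is why long-range surveillance radars use low pulse repetition frequencies.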


Electromagnetic waves reflect (scatter) from any large change in the dielectric or diamagnetic constants. This means that a solid object in air or a vacuum, or other significant change in atomic density between the object and what is surrounding it, will usually scatter radar (radio) waves. This is particularly true for electrically conductive materials, such as metal and carbon fiber, making radar particularly well suited to the detection of aircraft and ships. Radar absorbing material, containing resistive and sometimes magnetic substances, is used on military vehicles to reduce radar reflection. This is the radio equivalent of painting something a dark color.

Radar waves scatter in a variety of ways depending on the size (wavelength) of the radio wave and the shape of the target. If the wavelength is much shorter than the target's size, the wave will bounce off in a way similar to the way light is reflected by a mirror. If the wavelength is much longer than the size of the target, the target is polarized (positive and negative charges are separated), like a dipole antenna. This is described by Rayleigh scattering, an effect that creates the Earth's blue sky and red sunsets. When the two length scales are comparable, there may be resonances. Early radars used very long wavelengths that were larger than the targets and received a vague signal, whereas some modern systems use shorter wavelengths (a few centimeters or shorter) that can image objects as small as a loaf of bread.
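The size-to-wavelength comparison above can be summarized in a small classifier. The regime boundaries used here (0.1 and 10) are conventional rules of thumb, not sharp physical limits:

```python
def scattering_regime(target_size_m, wavelength_m):
    """Classify the radar scattering regime by the ratio of
    characteristic target size to wavelength."""
    ratio = target_size_m / wavelength_m
    if ratio < 0.1:
        # Wavelength much longer than the target: the target acts
        # like a polarized dipole (Rayleigh scattering).
        return "Rayleigh"
    elif ratio > 10.0:
        # Wavelength much shorter than the target: mirror-like
        # reflection (optical regime).
        return "optical"
    else:
        # Comparable scales: resonant lobes in the return.
        return "resonance"
```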

Short radio waves reflect from curves and corners, in a way similar to glint from a rounded piece of glass. The most reflective targets for short wavelengths have 90° angles between the reflective surfaces. A structure consisting of three flat surfaces meeting at a single corner, like the corner on a box, will always reflect waves entering its opening directly back at the source. These so-called corner reflectors are commonly used as radar reflectors to make otherwise difficult-to-detect objects easier to detect, and are often found on boats in order to improve their detection in a rescue situation and to reduce collisions.
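For the triangular trihedral corner reflectors of the kind fitted to boats, the peak radar cross section grows with the fourth power of the edge length. A sketch using the standard textbook formula, valid when the edge is large compared with the wavelength:

```python
import math

def trihedral_rcs(edge_m, wavelength_m):
    """Peak radar cross section (m^2) of a triangular trihedral
    corner reflector with edge length edge_m:
        sigma = 4 * pi * a**4 / (3 * lambda**2)
    Assumes edge_m >> wavelength_m (optical regime)."""
    return 4.0 * math.pi * edge_m**4 / (3.0 * wavelength_m**2)
```

A modest 30 cm reflector at X-band (3 cm wavelength) thus presents a cross section of tens of square metres, far larger than the boat's own physical area as seen by the radar.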

For similar reasons, objects attempting to avoid detection will angle their surfaces in a way to eliminate inside corners and avoid surfaces and edges perpendicular to likely detection directions, which leads to "odd" looking stealth aircraft. These precautions do not completely eliminate reflection because of diffraction, especially at longer wavelengths. Half wavelength long wires or strips of conducting material, such as chaff, are very reflective but do not direct the scattered energy back toward the source. The extent to which an object reflects or scatters radio waves is called its radar cross section.


In the transmitted radar signal, the electric field is perpendicular to the direction of propagation, and this direction of the electric field is the polarization of the wave. Radars use horizontal, vertical, linear and circular polarization to detect different types of reflections. For example, circular polarization is used to minimize the interference caused by rain. Linear polarization returns usually indicate metal surfaces. Random polarization returns usually indicate a fractal surface, such as rocks or soil, and are used by navigation radars.


Radar systems must overcome unwanted signals in order to focus only on the actual targets of interest. These unwanted signals may originate from internal and external sources, both passive and active. The ability of the radar system to overcome these unwanted signals defines its signal-to-noise ratio (SNR). SNR is defined as the ratio of a signal power to the noise power within the desired signal.

In less technical terms, SNR compares the level of a desired signal (such as targets) to the level of background noise. The higher a system's SNR, the better it is in isolating actual targets from the surrounding noise signals.
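In decibel form, the comparison is simply a logarithmic power ratio (an illustrative fragment, assuming both powers are in the same units):

```python
import math

def snr_db(signal_power_w, noise_power_w):
    """Signal-to-noise ratio expressed in decibels:
    SNR_dB = 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)
```

A target echo of 1 nW against a 1 pW noise floor, for instance, gives an SNR of 30 dB, comfortably above typical detection thresholds of roughly 12-13 dB.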


Signal noise is an internal source of random variations in the signal, which is generated by all electronic components. Noise typically appears as random variations superimposed on the desired echo signal received in the radar receiver. The lower the power of the desired signal, the more difficult it is to discern it from the noise (similar to trying to hear a whisper while standing near a busy road). Noise figure is a measure of the noise produced by a receiver compared to an ideal receiver, and this needs to be minimized.

Noise is also generated by external sources, most importantly the natural thermal radiation of the background scene surrounding the target of interest. In modern radar systems, due to the high performance of their receivers, the internal noise is typically about equal to or lower than the external scene noise. An exception is if the radar is aimed upwards at clear sky, where the scene is so "cold" that it generates very little thermal noise.

There will also be flicker noise, caused by electron transit; because it varies as 1/f, it is much lower than thermal noise at high frequencies. Hence, in pulse radar, the system is always heterodyne. See intermediate frequency.
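The receiver noise floor discussed above can be estimated from the thermal noise power k·T·B degraded by the receiver's noise figure. A sketch, assuming the conventional 290 K reference temperature:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def noise_floor_dbm(bandwidth_hz, noise_figure_db, temp_k=290.0):
    """Receiver noise floor in dBm: thermal noise power k*T*B plus
    the receiver's noise figure.  At 290 K this reduces to the
    familiar rule of thumb -174 dBm/Hz + 10*log10(B) + NF."""
    thermal_w = K_BOLTZMANN * temp_k * bandwidth_hz
    thermal_dbm = 10.0 * math.log10(thermal_w * 1000.0)  # W -> dBm
    return thermal_dbm + noise_figure_db
```

A 1 MHz receiver with a 3 dB noise figure, for example, has a noise floor of roughly -111 dBm; every dB shaved off the noise figure directly improves detection range.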


Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to the radar operators. Such targets include natural objects such as ground, sea, precipitation (such as rain, snow or hail), sand storms, animals (especially birds), atmospheric turbulence, and other atmospheric effects, such as ionosphere reflections and meteor trails. Clutter may also be returned from man-made objects such as buildings and, intentionally, by radar countermeasures such as chaff.

Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the centre of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range, since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna.

While some clutter sources may be undesirable for some radar applications (such as storm clouds for air-defence radars), they may be desirable for others (meteorological radars in this example). Clutter is considered a passive interference source, since it only appears in response to radar signals sent by the radar.

There are several methods of detecting and neutralizing clutter. Many of these methods rely on the fact that clutter tends to appear static between radar scans. Therefore, when comparing echoes from subsequent scans, desirable targets will appear to move, and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (meteorological radars seek the opposite effect, and therefore use linear polarization in order to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.
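The scan-to-scan comparison described above can be sketched as a simple canceller. This is an illustrative fragment only; real moving-target-indication filters operate on complex pulse returns rather than scalar amplitudes:

```python
def cancel_static_clutter(scan_prev, scan_curr, threshold):
    """Simple scan-to-scan canceller: cells whose amplitude is
    essentially unchanged between scans are treated as static
    clutter and zeroed; cells that changed by more than
    `threshold` are kept as candidate moving targets."""
    return [
        curr if abs(curr - prev) > threshold else 0.0
        for prev, curr in zip(scan_prev, scan_curr)
    ]
```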

Constant False Alarm Rate (CFAR, a form of Automatic Gain Control, or AGC) is a method relying on the fact that clutter returns far outnumber echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software controlled, and affected the gain with greater granularity, in specific detection cells.
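A minimal cell-averaging CFAR over a one-dimensional range profile might look like the following sketch (the parameter names and default values are invented for illustration):

```python
def ca_cfar(cells, num_train=4, num_guard=1, scale=3.0):
    """Cell-averaging CFAR: for each cell under test, estimate the
    local clutter level from num_train training cells on each side
    (skipping num_guard guard cells next to the test cell), and
    declare a detection when the cell exceeds scale times that
    local average."""
    n = len(cells)
    detections = []
    for i in range(n):
        train = []
        # Training cells to the left of the guard cells.
        for j in range(i - num_guard - num_train, i - num_guard):
            if 0 <= j < n:
                train.append(cells[j])
        # Training cells to the right of the guard cells.
        for j in range(i + num_guard + 1, i + num_guard + 1 + num_train):
            if 0 <= j < n:
                train.append(cells[j])
        if train:
            threshold = scale * sum(train) / len(train)
            detections.append(cells[i] > threshold)
        else:
            detections.append(False)
    return detections
```

Because the threshold tracks the local clutter average, the false-alarm rate stays roughly constant as the clutter level varies across the scene, which is exactly the behaviour the name describes.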

Clutter may also originate from multipath echoes from valid targets due to ground reflection, atmospheric ducting or ionospheric reflection/refraction. This clutter type is especially bothersome, since it appears to move and behave like other normal (point) targets of interest, thereby creating a ghost. In a typical scenario, an aircraft echo is multipath-reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or, worse, eliminating it on the basis of jitter or physical impossibility. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. In newer Air Traffic Control (ATC) radar equipment, algorithms are used to identify false targets by comparing the current pulse returns to those adjacent, as well as calculating return improbabilities from the calculated height, distance, and radar timing.
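The below-ground/above-ceiling filtering described above can be sketched with a flat-earth height estimate. Real systems correct for earth curvature and atmospheric refraction, and the altitude limits here are invented for illustration:

```python
import math

def plausible_echo(range_m, elevation_deg,
                   min_height_m=0.0, max_height_m=20000.0):
    """Reject obvious ghosts by discarding echoes whose computed
    height is below ground or above any plausible aircraft
    altitude.  Flat-earth approximation:
        height = range * sin(elevation)."""
    height = range_m * math.sin(math.radians(elevation_deg))
    return min_height_m <= height <= max_height_m
```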


Radar jamming refers to radio frequency signals originating from sources outside the radar, transmitting in the radar's frequency and thereby masking targets of interest. Jamming may be intentional, as with an electronic warfare (EW) tactic, or unintentional, as with friendly forces operating equipment that transmits using the same frequency range. Jamming is considered an active interference source, since it is initiated by elements outside the radar and in general unrelated to the radar signals.

Jamming is problematic to radar since the jamming signal only needs to travel one way (from the jammer to the radar receiver) whereas the radar echoes travel two ways (radar to target and back) and are therefore significantly reduced in power by the time they return to the radar receiver. Jammers can therefore be much less powerful than their jammed radars and still effectively mask targets along the line of sight from the jammer to the radar (mainlobe jamming). Jammers have the added effect of affecting radars along other lines of sight, due to the radar receiver's sidelobes (sidelobe jamming).
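The one-way versus two-way asymmetry can be made concrete: with antenna gains and cross sections folded into the constants for simplicity, echo power falls off as 1/R^4 while jamming power falls off only as 1/R^2. A sketch:

```python
def echo_power(pt, range_m):
    """Radar echo power: the signal spreads out to the target and
    back, so it scales as 1/R**4 (gains and cross section are
    folded into pt for this illustration)."""
    return pt / range_m**4

def jammer_power(pj, range_m):
    """Jamming power only travels one way, so it scales as 1/R**2."""
    return pj / range_m**2
```

At 100 km, the jammer enjoys a path-loss advantage of R^2 = 10^10 over the echo, which is why even a low-power jammer co-located with the target can mask it.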

Mainlobe jamming can generally only be reduced by narrowing the mainlobe solid angle, and can never fully be eliminated when directly facing a jammer which uses the same frequency and polarization as the radar. Sidelobe jamming can be overcome by reducing receiving sidelobes in the radar antenna design and by using an omnidirectional antenna to detect and disregard non-mainlobe signals. Other anti-jamming techniques are frequency hopping and polarization. See Electronic counter-counter-measures for details.

Interference has recently become a problem for C-band (5.66 GHz) meteorological radars with the proliferation of 5.4 GHz band WiFi equipment.

Radar engineering

A radar's components are:

* A transmitter that generates the radio signal with an oscillator such as a klystron or a magnetron and controls its duration by a modulator.
* A waveguide that links the transmitter and the antenna.
* A duplexer that serves as a switch between the antenna and the transmitter or the receiver for the signal when the antenna is used in both situations.
* A receiver. Knowing the shape of the desired received signal (a pulse), an optimal receiver can be designed using a matched filter.
* An electronic section that controls all those devices and the antenna to perform the radar scan ordered by software.
* A link to end users.

Antenna design

Radio signals broadcast from a single antenna will spread out in all directions, and likewise a single antenna will receive signals equally from all directions. This leaves the radar with the problem of deciding where the target object is located.

Early systems tended to use omni-directional broadcast antennas, with directional receiver antennas which were pointed in various directions. For instance, the first system to be deployed, Chain Home, used two straight antennas at right angles for reception, each on a different display. The maximum return would be detected with an antenna at right angles to the target, and a minimum with the antenna pointed directly at it (end on). The operator could determine the direction to a target by rotating the antenna so that one display showed a maximum while the other showed a minimum.

One serious limitation with this type of solution is that the broadcast is sent out in all directions, so the amount of energy in the direction being examined is a small part of that transmitted. To get a reasonable amount of power on the "target", the transmitting aerial should also be directional.

Parabolic reflector

More modern systems use a steerable parabolic "dish" to create a tight broadcast beam, typically using the same dish as the receiver. Such systems often combine two radar frequencies in the same antenna in order to allow automatic steering, or radar lock.

Parabolic reflectors can be either symmetric parabolas or spoiled parabolas:

* Symmetric parabolic antennas produce a narrow "pencil" beam in both the X and Y dimensions and consequently have a higher gain. The NEXRAD Pulse-Doppler weather radar uses a symmetric antenna to perform detailed volumetric scans of the atmosphere.
* Spoiled parabolic antennas produce a narrow beam in one dimension and a relatively wide beam in the other. This feature is useful if target detection over a wide range of angles is more important than target location in three dimensions. Most 2D surveillance radars use a spoiled parabolic antenna with a narrow azimuthal beamwidth and wide vertical beamwidth. This beam configuration allows the radar operator to detect an aircraft at a specific azimuth but at an indeterminate height. Conversely, so-called "nodder" height finding radars use a dish with a narrow vertical beamwidth and wide azimuthal beamwidth to detect an aircraft at a specific height but with low azimuthal precision.
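The gain/beamwidth trade-off behind these antenna choices follows from the aperture size. A sketch using common rules of thumb; the k ≈ 70 beamwidth constant and 60% aperture efficiency are typical assumed values, not properties of any specific radar:

```python
import math

def beamwidth_deg(wavelength_m, diameter_m, k=70.0):
    """Approximate half-power beamwidth of a parabolic dish:
    theta ~ k * lambda / D degrees, with k roughly 65-80
    depending on the aperture illumination."""
    return k * wavelength_m / diameter_m

def dish_gain_db(wavelength_m, diameter_m, efficiency=0.6):
    """Gain of a circular aperture:
    G = efficiency * (pi * D / lambda)**2, expressed in dB."""
    g = efficiency * (math.pi * diameter_m / wavelength_m) ** 2
    return 10.0 * math.log10(g)
```

A 7 m dish at S-band (10 cm wavelength), for example, gives a pencil beam of about 1 degree and a gain of roughly 45 dB; spoiling the parabola in one axis widens the beam and lowers the gain in that plane.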

Types of scan

* Primary Scan: A scanning technique where the main antenna aerial is moved to produce a scanning beam; examples include circular scan, sector scan, etc.
* Secondary Scan: A scanning technique where the antenna feed is moved to produce a scanning beam, examples include conical scan, unidirectional sector scan, lobe switching etc.
* Palmer Scan: A scanning technique that produces a scanning beam by moving the main antenna and its feed. A Palmer Scan is a combination of a Primary Scan and a Secondary Scan.