STUN (Simple Traversal of UDP through NATs, where UDP is the User Datagram Protocol and NAT is a Network Address Translator) is a network protocol that enables a client behind a NAT (or multiple NATs) to discover its public address, the type of NAT in front of it, and the Internet-side port that the NAT has associated with a particular local port. This information helps set up UDP communication between two hosts that are both behind NAT routers.
Protocol overview
STUN is a client-server protocol. A VoIP phone or software package typically includes a STUN client, which sends a request to a STUN server. The server replies with the public IP address of the NAT router and the port that the NAT opened to allow incoming traffic back into the network. The response also lets the STUN client identify the type of NAT in use, since different types of NAT handle incoming UDP packets differently. STUN works with Full Cone, Restricted Cone, and Port Restricted Cone NATs. (Restricted Cone and Port Restricted Cone NATs allow packets from an endpoint through the NAT to the client only after the client has sent a packet to that endpoint.) Symmetric NAT (also known as bi-directional NAT), which is frequently found in the networks of large companies, does not work with STUN: because the IP addresses of the STUN server and the endpoint differ, the NAT mapping created for the STUN server is different from the mapping the endpoint would need to send packets through to the client. See Network Address Translation for more on this.
After the client has discovered its external address, it can communicate with its peers. When both NATs are full cone, either side can initiate communication; if they are restricted cone or port restricted cone, both sides must start transmitting together. The techniques described in the STUN RFC do not necessarily require the STUN protocol itself; they can be used in the design of any UDP protocol. STUN is especially useful for protocols like SIP, which use UDP packets to carry sound, video and text signaling traffic across the Internet. As both endpoints are often behind NAT, a connection cannot be set up in the traditional way. A STUN server listens on UDP port 3478, but it will direct clients to perform tests against an alternate IP address and port number as well (STUN servers have two IP addresses).
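The classic (RFC 3489) STUN wire format is simple: a 20-byte header (message type, body length, 16-byte transaction ID) followed by type-length-value attributes. The following Python sketch is illustrative only; it builds a Binding Request and extracts the public address from the MAPPED-ADDRESS attribute of a Binding Response. In practice the request bytes would be sent over UDP to a server on port 3478.

```python
import os
import socket
import struct

def build_binding_request():
    """Build a classic (RFC 3489) STUN Binding Request.

    Header layout: 2-byte message type (0x0001 = Binding Request),
    2-byte message length (0 here -- no attributes), 16-byte
    random transaction ID used to match the response.
    """
    transaction_id = os.urandom(16)
    return struct.pack("!HH", 0x0001, 0) + transaction_id, transaction_id

def parse_mapped_address(response):
    """Return (ip, port) from the MAPPED-ADDRESS attribute (type 0x0001)
    of a STUN Binding Response (message type 0x0101)."""
    msg_type, msg_len = struct.unpack("!HH", response[:4])
    if msg_type != 0x0101:
        raise ValueError("not a Binding Response")
    body = response[20:20 + msg_len]
    offset = 0
    while offset < len(body):
        attr_type, attr_len = struct.unpack("!HH", body[offset:offset + 4])
        value = body[offset + 4:offset + 4 + attr_len]
        if attr_type == 0x0001:  # MAPPED-ADDRESS: pad, family, port, IPv4
            family, port = struct.unpack("!xBH", value[:4])
            return socket.inet_ntoa(value[4:8]), port
        offset += 4 + attr_len
    return None
```

A client would send the request with `sock.sendto(request, (server, 3478))` and pass the received datagram to `parse_mapped_address`; the transaction ID in the reply should be checked against the one sent.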
Labels: Computer Science, Seminar Topics, Seminars
The demand for faster processors, memory and I/O is a familiar refrain in market applications ranging from personal computers and servers to networking systems, and from video games to office automation equipment. Once information is digitized, the speed at which it is processed becomes the foremost determinant of product success. Faster system speed leads to faster processing; faster processing leads to faster system performance; faster system performance results in greater success in the marketplace. This logic has led a generation of processor and memory designers to focus on one overriding objective: squeezing more speed from processors and memory devices. Processor designers have responded with faster clock rates and super-pipelined architectures that use level 1 and level 2 caches to feed faster execution units even faster. Memory designers have responded with dual data rate memories that allow data access on both the leading and trailing clock edges, doubling data access. I/O developers have responded by designing faster and wider I/O channels and introducing new protocols to meet anticipated I/O needs. Today, processors hit the market with 2+ GHz clock rates, memory devices provide sub-5 ns access times, and standard I/O buses are 32 and 64 bits wide, with new higher-speed protocols on the horizon.
Increased processor speeds, faster memories and wider I/O channels are not always practical answers to the need for speed, however. The main problem is integration of more and faster system elements: faster execution units, faster memories and wider, faster I/O buses crowd more high-speed signal lines onto the physical printed circuit board. One aspect of the integration problem is the set of physical problems posed by speed itself.
HyperTransport technology has been designed to provide system architects with significantly more bandwidth, low-latency responses, lower pin counts, compatibility with legacy PC buses, extensibility to new SNA buses, and transparency to operating-system software, with little impact on peripheral drivers.
Labels: Computer Science, Seminar Topics, Seminars
FluidFM: Combining AFM and nanofluidics for single cell applications
The Atomic Force Microscope (AFM) is a key tool for nanotechnology. This instrument has become the most widely used tool for imaging, measuring and manipulating matter at the nanoscale and in turn has inspired a variety of other scanning probe techniques. Originally the AFM was used to image the topography of surfaces, but by modifying the tip it is possible to measure other quantities (for example, electric and magnetic properties, chemical potentials, friction and so on), and also to perform various types of spectroscopy and analysis. Increasingly, the AFM is also becoming a tool for nanofabrication.
Relatively new is the use of AFM in cell biology. We wrote about this recently in a Spotlight that described a novel method to probe the mechanical properties of living and dead bacteria via AFM indentation experiments ("Dead or alive – nanotechnology technique tells the difference").
Researchers in Switzerland have now demonstrated novel cell biology applications using hollow force-controlled AFM cantilevers – a new device they have called FluidFM.
"The core of the invention is to have fixed already existing microchanneled cantilevers to an opportunely drilled AFM probeholder," Tomaso Zambelli tells Nanowerk. "In this way, the FluidFM is not restricted to air but can work in liquid environments. Since it incorporates a nanofluidic circuit, any soluble agent can be added to the solution to be dispensed. Moreover, the force feedback allows one to approach very soft objects like cells without damaging them."
As cell biology is moving towards single cell technologies and applications, single cell injection or extraction techniques are in high demand. Apart from this, however, the FluidFM could also be used for nanofabrication applications such as depositing a conductive polymer wire between two microelectrodes, or etching ultrafine structures out of solid materials using acids as the spray agent. The team has reported their findings in a recent paper in Nano Letters ("FluidFM: Combining Atomic Force Microscopy and Nanofluidics in a Universal Liquid Delivery System for Single Cell Applications and Beyond").
Zambelli originally realized that the technology of the atomic force microscope, normally used only to image cells, could be transformed into a microinjection system. The result of the development by Zambelli and his colleagues in the Laboratory of Biosensors and Bioelectronics at the Institute of Biomedical Technology at ETH Zurich and at the Swiss Center for Electronics and Microtechnology (CSEM) in Neuchâtel was the "fluid force microscope", currently the smallest automated nanosyringe in existence.
"Our FluidFM even operates under water or in other liquids – a precondition for being able to use the instrument to study cells," says Zambelli.
The force detection system of the FluidFM is so sensitive that the interactions between tip and sample can be reduced to the piconewton range, making it possible to bring the hollow cantilever into gentle but close contact with cells without puncturing or damaging the cell membrane.
On the other hand, if membrane perforation for intracellular injection is desired, this is simply achieved by selecting a higher force set point taking advantage of the extremely sharp tip (radius of curvature on the order of tens of nanometers).
To enable solutions to be injected into the cell through the needle, scientists at CSEM installed a microchannel in the cantilever. Substances such as medicinal active ingredients, DNA, and RNA can be injected into a cell through the tip. At the same time, samples can also be taken from a cell through the needle for subsequent analysis.
According to Zambelli, while this approach is similar to microinjection using glass pipettes, there are a number of essential differences.
"Microinjection uses optical microscopy to control the position of the glass pipette tip both in the xy plane and in the z direction (via image focusing)," he explains. "As a consequence of the limited resolution of optical microscopy, subcellular domains cannot be addressed and tip contact with the cell membrane cannot be discriminated from tip penetration of the membrane. Cells are often lethally damaged and skilled personnel are required for microinjection."
"The limited resolution of this method and the absence of mechanical information contrast strongly with the high resolution imaging and the direct control of applied forces that are possible with AFM. Precise force feedback reduces potential damage to the cell; the cantilever geometry minimizes both the normal contact forces on the cell and the lateral vibrations of the tip that can tear the cell membrane during microinjection; the spatial resolution is determined by the submicrometer aperture so that injection into subcellular domains becomes easily achievable."
Experiments conducted by the Swiss team demonstrate the potential of the FluidFM in the field of single-cell biology through precise stimulation of selected cell domains with arbitrary soluble agents at well-defined times.
"We confidently expect that the inclusion of an electrode in the microfluidics circuit will allow a similar approach toward patch-clamping with force-controlled gigaseal formation," says Zambelli. "We will also explore other strategies at the single-cell level, such as the controlled perforation of the cell membrane for local extraction of cytoplasm."
Zambelli and his colleagues are convinced that their technology has great commercial potential. Rejecting offers from well-known manufacturers of atomic force microscopes to buy the patent for the FluidFM, they have founded Cytosurge LLC, a company dedicated to commercially developing the instrument.
Today, Zambelli's laboratory contains two prototypes of the instrument, which are being tested in collaboration with biologists.
Virus
Do viruses and all the other nasties in cyberspace matter? Do they really do much harm? Imagine that no one has updated your anti-virus software for a few months. When they do, you find that your accounts spreadsheets are infected with a new virus that changes figures at random. Naturally you keep backups. But you might have been backing up infected files for months. How do you know which figures to trust? Now imagine that a new email virus has been released. Your company is receiving so many emails that you decide to shut down your email gateway altogether and miss an urgent order from a big customer.
Imagine that a friend emails you some files he found on the Internet. You open them and trigger a virus that mails confidential documents to everyone in your address book, including your competitors. Finally, imagine that you accidentally send another company a report that carries a virus. Will they feel safe doing business with you again? Today new viruses sweep the planet in hours and virus scares are major news. A computer virus is a computer program that can spread across computers and networks by making copies of itself, usually without the user’s knowledge. Viruses can have harmful side effects. These can range from displaying irritating messages to deleting all the files on your computer.
A virus program has to be run before it can infect your computer. Viruses have ways of making sure that this happens. They can attach themselves to other programs or hide in code that is run automatically when you open certain types of files. The virus can copy itself to other files or disks and make changes on your computer. Virus side effects, often called the payload, are the aspect of most interest to users. Password-protecting documents on a particular day, or mailing information about the user and machine to some address, are examples of harmful side effects. Various kinds of viruses include macro viruses, parasitic or file viruses, and boot viruses. E-mails are the biggest source of viruses; usually they come as attachments.
The Internet has spread viruses around the globe. The threat level depends on the particular code used in web pages and on the security measures taken by service providers and by you. One way to counter viruses is anti-virus software, which can detect viruses, prevent access to infected files and often eliminate the infection.
Computer viruses are starting to affect mobile phones too. Such viruses are still rare and unlikely to cause much damage, but anti-virus experts expect that as mobile phones become more sophisticated they will be targeted by virus writers, and some firms are already working on anti-virus software for mobile phones. VBS/Timo-A, Love Bug, Timofonica, CABIR (aka ACE-?) and UNAVAILABLE are some of the viruses that affect mobile phones.
BASIC CONCEPTS
What is a virus?
A computer virus is a computer program that can spread across computers and networks by making copies of itself, usually without the user’s knowledge. Viruses can have harmful side-effects. These can range from displaying irritating messages to deleting all the files on your computer.
Evolution of virus
In the mid-1980s Basit and Amjad Alvi of Lahore, Pakistan discovered that people were pirating their software. They responded by writing the first computer virus, a program that would put a copy of itself and a copyright message on any floppy disk copies their customers made. From these simple beginnings, an entire virus counter-culture has emerged. Today new viruses sweep the planet in hours and virus scares are major news.
How does a virus infect computers?
A virus program has to be run before it can infect your computer. Viruses have ways of making sure that this happens. They can attach themselves to other programs or hide in code that is run automatically when you open certain types of files. You might receive an infected file on a disk, in an email attachment, or in a download from the internet. As soon as you launch the file, the virus code runs. Then the virus can copy itself to other files or disks and make changes on your computer.
Who writes viruses?
Virus writers don’t gain in financial or career terms; they rarely achieve real fame; and, unlike hackers, they don’t usually target particular victims, since viruses spread too indiscriminately. Virus writers tend to be male, under 25 and single. Viruses give their writers powers in cyberspace that they could never hope to have in the real world.
Virus side effects(Payload)
Virus side-effects are often called the payload. Viruses can disable computer hardware, change figures in accounts spreadsheets at random, adversely affect email contacts and business reputation, and attack web servers. Some examples:
Messages - WM97/Jerk displays the message ‘I think (user’s name) is a big stupid jerk!’
Denying access - WM97/NightShade password-protects the current document on Friday 13th.
Data theft - Troj/LoveLet-A emails information about the user and machine to an address in the Philippines.
Corrupting data - XM/Compatable makes changes to the data in Excel spreadsheets.
Deleting data - Michelangelo overwrites parts of the hard disk on March 6th.
Disabling hardware - CIH or Chernobyl (W95/CIH-10xx) attempts to overwrite the BIOS on April 26th, making the machine unusable.
Crashing servers - Melissa or ExploreZip, which spread via email, can generate so much mail that servers crash.
There is a threat to confidentiality too. Melissa can forward documents, which may contain sensitive information, to anyone in your address book. Viruses can seriously damage your credibility. If you send infected documents to customers, they may refuse to do business with you or demand compensation. Sometimes you risk embarrassment as well as a damaged business reputation. WM/Polypost, for example, places copies of your documents in your name on alt.sex usenet newsgroups.
VIRUSES AND VIRUS LIKE PROGRAMMES
Trojan horses
Trojan horses are programs that do things that are not described in their specifications. The user runs what they think is a legitimate program, allowing it to carry out hidden, often harmful, functions. For example, Troj/Zulu claims to be a program for fixing the ‘millennium bug’ but actually overwrites the hard disk. Trojan horses are sometimes used as a means of infecting a user with a computer virus.
Backdoor Trojans
A backdoor Trojan is a program that allows someone to take control of another user’s PC via the internet. Like other Trojans, a backdoor Trojan poses as legitimate or desirable software. When it is run (usually on a Windows 95/98 PC), it adds itself to the PC’s startup routine. The Trojan can then monitor the PC until it makes a connection to the internet. Once the PC is on-line, the person who sent the Trojan can use software on their computer to open and close programs on the infected computer, modify files and even send items to the printer. Subseven and Back Orifice are among the best known backdoor Trojans.
Worms
Worms are similar to viruses but do not need a carrier (like a macro or a boot sector); they are a subtype of viruses. Worms simply create exact copies of themselves and use communications between computers to spread. Many viruses, such as Kakworm (VBS/Kakworm) or Love Bug (VBS/LoveLet-A), behave like worms and use email to forward themselves to other users.
Boot sector viruses
Boot sector viruses were the first type of virus to appear. They spread by modifying the boot sector, which contains the program that enables your computer to start up. When you switch on, the hardware looks for the boot sector program – which is usually on the hard disk, but can be on floppy or CD – and runs it. This program then loads the rest of the operating system into memory. A boot sector virus replaces the original boot sector with its own, modified version (and usually hides the original somewhere else on the hard disk). When you next start up, the infected boot sector is used and the virus becomes active. You can only become infected if you boot up your computer from an infected disk, e.g. a floppy disk that has an infected boot sector. Many boot sector viruses are now quite old.
Those written for DOS machines do not usually spread on Windows 95, 98, Me, NT or 2000 computers, though they can sometimes stop them from starting up properly. Boot viruses infect System Boot Sectors (SBS) and Master Boot Sectors (MBS). The MBS is located on all physical hard drives. It contains, among other data, the partition table (information about how a physical disk is divided into logical disks) and a short program that can interpret the partition information to find out where the SBS is located. The MBS is operating-system independent. The SBS contains, among other data, a program whose purpose is to find and run an operating system. Because floppy diskettes are exchanged more frequently than program files, boot viruses are able to propagate more effectively than file viruses. Two examples:
Form - A virus that is still widespread ten years after it first appeared. The original version triggers on the 18th of each month and produces a click when keys are pressed on the keyboard.
Parity Boot - A virus that may randomly display the message ‘PARITY CHECK’ and freeze the operating system. The message resembles a genuine error message displayed when the computer’s memory is faulty.
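The MBS layout described above (boot code, then the partition table, then a boot signature) can be made concrete with a short Python sketch that parses a 512-byte master boot sector. This is a simplified, illustrative parser; a boot-sector virus works by overwriting the boot-code region that this sketch skips over and stashing the original sector elsewhere on the disk.

```python
import struct

def parse_mbr(sector):
    """Parse a 512-byte Master Boot Sector.

    Classic MBR layout: bytes 0-445 are boot code, bytes 446-509 hold
    four 16-byte partition entries, and bytes 510-511 are the boot
    signature 0x55AA that the BIOS checks before running the boot code.
    """
    assert len(sector) == 512
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    partitions = []
    for i in range(4):
        entry = sector[446 + 16 * i:446 + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]          # bootable flag, type
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:                              # type 0 = unused slot
            partitions.append({"bootable": status == 0x80,
                               "type": ptype,
                               "lba_start": lba_start,
                               "sectors": num_sectors})
    return partitions
```

Reading the first 512 bytes of a disk image and passing them to `parse_mbr` shows where each logical disk (and hence each system boot sector) begins, which is exactly the information the short MBS program uses to locate the SBS.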
Parasitic virus (File virus)
Parasitic viruses, also known as file viruses, attach themselves to programs (or ‘executables’) and act as part of the program. When you start a program infected with a file virus, the virus is launched first. To hide itself, the virus then runs the original program. The operating system on your computer sees the virus as part of the program you were trying to run and gives it the same rights. These rights allow the virus to copy itself, install itself in memory or release its payload. These viruses can also infect over networks.
The internet has made it easier than ever to distribute programs, giving these viruses new opportunities to spread.
Jerusalem- On Friday 13th deletes every program run on the computer.
CIH (Chernobyl) - On the 26th of certain months, this virus will overwrite part of the BIOS chip, making the computer unusable. The virus also overwrites the hard disk.
Remote Explorer - WNT/RemExp (Remote Explorer) infects Windows NT executables. It was the first virus that could run as a service, i.e. run on NT systems even when no one is logged in. Parasitic viruses infect executables by companion, link, overwrite, insert, prepend and append techniques.
a) Companion virus
A companion virus does not modify its host directly. Instead it manoeuvres the operating system into executing the virus instead of the host file. Sometimes this is done by renaming the host file to some other name and then giving the virus file the name of the original program. Alternatively, the virus infects an .EXE file by creating a .COM file with the same name in the same directory. DOS will always execute a .COM file first if only the program name is given, so if you type “EDIT” at a DOS prompt and there is an EDIT.COM and an EDIT.EXE in the same directory, EDIT.COM is executed.
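The .COM-before-.EXE precedence that a companion virus exploits can be sketched as a small lookup function. This is a simplified model of DOS's resolution rules (real DOS also tries .BAT and searches the PATH), written in Python purely to illustrate the ordering:

```python
# DOS resolves a bare command name by extension priority: .COM first,
# then .EXE, then .BAT. A companion virus exploits exactly this by
# dropping EDIT.COM into the same directory as the real EDIT.EXE.
DOS_PRIORITY = [".COM", ".EXE", ".BAT"]

def dos_resolve(command, directory_listing):
    """Return the filename a DOS-style shell would execute for a bare
    command name, given the files present in the current directory."""
    names = {name.upper() for name in directory_listing}
    for ext in DOS_PRIORITY:
        candidate = command.upper() + ext
        if candidate in names:
            return candidate
    return None
```

With only `EDIT.EXE` present, `dos_resolve("edit", ...)` returns the legitimate program; once a companion file `EDIT.COM` appears, the same command silently resolves to it instead.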
b) Linking Virus
A link virus makes changes in the low-level workings of the file system, so that program names no longer point to the original program but to a copy of the virus. This makes it possible to have only one instance of the virus, to which all program names point.
Do viruses and all the other nasties in cyberspace matter? Do they really do much harm? Imagine that no one has updated your anti-virus software for a few months. When they do, you find that your accounts spreadsheets are infected with a new virus that changes figures at random. Naturally you keep backups. But you might have been backing up infected files for months. How do you know which figures to trust? Now imagine that a new email virus has been released. Your company is receiving so many emails that you decide to shut down your email gateway altogether and miss an urgent order from a big customer.
Imagine that a friend emails you some files he found on the Internet. You open them and trigger a virus that mails confidential documents to everyone in your address book including your competitors. Finally, imagine that you accidentally send another company, a report that carries a virus. Will they feel safe to do business with you again? Today new viruses sweep the planet in hours and virus scares are major news. A computer virus is a computer program that can spread across computers and networks by making copies of itself, usually without the user’s knowledge. Viruses can have harmful side effects. These can range from displaying irritating messages to deleting all the files on your computer.
A virus program has to be run before it can infect your computer. Viruses have ways of making sure that this happens. They can attach themselves to other programs or hide in code that is run automatically when you open certain types of files. The virus can copy itself to other files or disks and make changes on your computer. Virus side effects, often called the payload, are the aspect of mostinterest to users. Password-protecting the documents on a particular day, mailing information about the user and machine to an address somewhere are some of the harmful side effects of viruses. Various kinds of viruses include macro virus, parasitic or file virus, Boot virus, E-mails are the biggest source of viruses. Usually they come as attachments with emails.
The Internet caused the spreading of viruses around the globe. The threat level depends on the particular code used in the WebPages and the security measures taken by service providers and by you. One solution to prevent the viruses is anti-virus softwares. Anti-virus software can detect viruses, prevent access to infected files and often eliminate the infection.
Computer viruses are starting to affect mobile phones too. The virus is rare and is unlikely to cause much damage. Anti-virus experts expect that as mobile phones become more sophisticated they will be targeted by virus writers. Some firms are already working on anti-virus software for mobile phones. VBS/Timo-A, Love Bug,Timofonica,CABIR,aka ACE-? and UNAVAILABLE are some of the viruses that affect the mobile phones
BASIC CONCEPTS
What is a virus?
A computer virus is a computer program that can spread across computers and networks by making copies of itself, usually without the user’s knowledge. Viruses can have harmful side-effects. These can range from displaying irritating messages to deleting all the files on your computer.
Evolution of virus
In the mid-1980s Basit and Amjad Alvi of Lahore, Pakistan discovered that people were pirating their software. They responded by writing the first computer virus, a program that would put a copy of itself and a copyright message on any floppy disk copies their customers made. From these simple beginnings, an entire virus counter-culture has emerged. Today new viruses sweep the planet in hours and virus scares are major news
How does a virus infect computers?
A virus program has to be run before it can infect your computer. Viruses have ways of making sure that this happens. They can attach themselves to other programs or hide in code that is run automatically when you open certain types of files. You might receive an infected file on a disk, in an email attachment, or in a download from the internet. As soon as you launch the file, the virus code runs. Then the virus can copy itself to other files or disks and make changes on your computer.
Who writes viruses?
Virus writers don’t gain in financial or career terms; they rarely achieve real fame; and, unlike hackers, they don’t usually target particular victims, since viruses spread too indiscriminately. Virus writers tend to be male, under 25 and single. Viruses also give their writers powers in cyberspace that they could never hope to have in the real world.
Virus side effects(Payload)
Virus side-effects are often called the payload. Viruses can disable computer hardware, change the figures in accounting spreadsheets at random, adversely affect email contacts and business reputation, and attack web servers.
Messages - WM97/Jerk displays the message ‘I think (user’s name) is a big stupid jerk!’
Denying access - WM97/NightShade password-protects the current document on Friday 13th.
Data theft - Troj/LoveLet-A emails information about the user and machine to an address in the Philippines.
Corrupting data - XM/Compatable makes changes to the data in Excel spreadsheets.
Deleting data - Michelangelo overwrites parts of the hard disk on March 6th.
Disabling hardware - CIH or Chernobyl (W95/CIH-10xx) attempts to overwrite the BIOS on April 26th, making the machine unusable.
Crashing servers - Melissa or Explore Zip, which spread via email, can generate so much mail that servers crash.
There is a threat to confidentiality too. Melissa can forward documents, which may contain sensitive information, to anyone in your address book. Viruses can seriously damage your credibility. If you send infected documents to customers, they may refuse to do business with you or demand compensation. Sometimes you risk embarrassment as well as a damaged business reputation. WM/Polypost, for example, places copies of your documents in your name on alt.sex usenet newsgroups.
VIRUSES AND VIRUS LIKE PROGRAMMES
Trojan horses
Trojan horses are programs that do things that are not described in their specifications. The user runs what they think is a legitimate program, allowing it to carry out hidden, often harmful, functions. For example, Troj/Zulu claims to be a program for fixing the ‘millennium bug’ but actually overwrites the hard disk. Trojan horses are sometimes used as a means of infecting a user with a computer virus.
Backdoor Trojans
A backdoor Trojan is a program that allows someone to take control of another user’s PC via the internet. Like other Trojans, a backdoor Trojan poses as legitimate or desirable software. When it is run (usually on a Windows 95/98 PC), it adds itself to the PC’s startup routine. The Trojan can then monitor the PC until it makes a connection to the internet. Once the PC is on-line, the person who sent the Trojan can use software on their computer to open and close programs on the infected computer, modify files and even send items to the printer. Subseven and Back Orifice are among the best known backdoor Trojans.
Worms
Worms are similar to viruses but do not need a carrier (like a macro or a boot sector); they are a subtype of viruses. Worms simply create exact copies of themselves and use communications between computers to spread. Many viruses, such as Kakworm (VBS/Kakworm) or Love Bug (VBS/LoveLet-A), behave like worms and use email to forward themselves to other users.
Boot sector viruses
Boot sector viruses were the first type of virus to appear. They spread by modifying the boot sector, which contains the program that enables your computer to start up. When you switch on, the hardware looks for the boot sector program – which is usually on the hard disk, but can be on floppy or CD – and runs it. This program then loads the rest of the operating system into memory. A boot sector virus replaces the original boot sector with its own, modified version (and usually hides the original somewhere else on the hard disk). When you next start up, the infected boot sector is used and the virus becomes active. You can only become infected if you boot up your computer from an infected disk, e.g. a floppy disk that has an infected boot sector. Many boot sector viruses are now quite old.
Those written for DOS machines do not usually spread on Windows 95, 98, Me, NT or 2000 computers, though they can sometimes stop them from starting up properly. Boot viruses infect System Boot Sectors (SBS) and Master Boot Sectors (MBS). The MBS is located on all physical hard drives. It contains, among other data, information about the partition table (information about how a physical disk is divided into logical disks), and a short program that can interpret the partition information to find out where the SBS is located. The MBS is operating system independent. The SBS contains, among other data, a program whose purpose is to find and run an operating system. Because floppy diskettes are exchanged more frequently than program files, boot viruses are able to propagate more effectively than file viruses.
Form - A virus that is still widespread ten years after it first appeared. The original version triggers on the 18th of each month and produces a click when keys are pressed on the keyboard.
Parity Boot - A virus that may randomly display the message ‘PARITY CHECK’ and freeze the operating system. The message resembles a genuine error message displayed when the computer’s memory is faulty.
Parasitic virus (File virus)
Parasitic viruses, also known as file viruses, attach themselves to programs (or ‘executables’) and act as part of the program. When you start a program infected with a file virus, the virus is launched first. To hide itself, the virus then runs the original program. The operating system on your computer sees the virus as part of the program you were trying to run and gives it the same rights. These rights allow the virus to copy itself, install itself in memory or release its payload. These viruses can also spread over networks.
The internet has made it easier than ever to distribute programs, giving these viruses new opportunities to spread.
Jerusalem- On Friday 13th deletes every program run on the computer.
CIH (Chernobyl) - On the 26th of certain months, this virus will overwrite part of the BIOS chip, making the computer unusable. The virus also overwrites the hard disk.
Remote Explorer - WNT/RemExp (Remote Explorer) infects Windows NT executables. It was the first virus that could run as a service, i.e. run on NT systems even when no-one is logged in. Parasitic viruses infect executables using companion, link, overwrite, insert, prepend and append techniques.
a) Companion virus
A companion virus does not modify its host directly. Instead it maneuvers the operating system into executing the virus instead of the host file. Sometimes this is done by renaming the host file to some other name and then giving the virus file the name of the original program. Alternatively, the virus infects an .EXE file by creating a .COM file with the same name in the same directory. DOS always executes a .COM file first if only the program name is given, so if you type "EDIT" at a DOS prompt and both EDIT.COM and EDIT.EXE exist in the same directory, EDIT.COM is executed.
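The companion trick relies entirely on this executable search order, which can be sketched harmlessly in a few lines of Python. This is an illustration of DOS name resolution only, not of any virus, and the file names are hypothetical examples:

```python
# DOS tries these extensions in order when given a bare program name.
DOS_SEARCH_ORDER = [".COM", ".EXE", ".BAT"]

def resolve(command, files_in_directory):
    """Return the file DOS would execute for a bare command name."""
    for ext in DOS_SEARCH_ORDER:
        candidate = command.upper() + ext
        if candidate in files_in_directory:
            return candidate
    return None

# With both EDIT.COM and EDIT.EXE present, the .COM file wins --
# which is why a companion virus dropped as EDIT.COM runs instead
# of the legitimate EDIT.EXE.
print(resolve("EDIT", {"EDIT.EXE", "EDIT.COM"}))  # EDIT.COM
print(resolve("EDIT", {"EDIT.EXE"}))              # EDIT.EXE
```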
b) Linking Virus
A link virus makes changes in the low-level workings of the file system, so that program names no longer point to the original program, but to a copy of the virus. This makes it possible to have only one instance of the virus, to which all program names point.
Limits and Fits, Tolerance Dimensioning
Definitions:
nominal size: The size designation used for general identification. The nominal size of a shaft and a hole are the same. This value is often expressed as a fraction.
basic size: The exact theoretical size of a part. This is the value from which limit dimensions are computed. Basic size is a four-decimal-place equivalent to the nominal size. The number of significant digits implies the accuracy of the dimension.
example: nominal size = 1 1/4, basic size = 1.2500
design size: The ideal size for each component (shaft and hole) based upon a selected fit. The difference between the design size of the shaft and the design size of the hole is equal to the allowance of the fit. The design size of a part corresponds to the Maximum Material Condition (MMC), that is, the largest shaft permitted by the limits and the smallest hole. Emphasis is placed upon the design size in the writing of the actual limit dimension, so the design size is placed in the top position of the pair.
tolerance: The total amount by which a dimension is allowed to vary. For fractional linear dimensions we have assumed a bilateral tolerance of 1/64 inch. For the fit of a shaft/hole combination, the tolerance is considered to be unilateral, that is, it is only applied in one direction from the design size of the part. Standards for limits and fits state that tolerances are applied such that the hole size can only vary larger from design size and the shaft size smaller.
basic hole system: The most common system for limit dimensions. In this system the design size of the hole is taken to be equivalent to the basic size for the pair (see above). This means that the lower (in size) limit of the hole dimension is equal to the design size. The basic hole system is more frequently used since most hole-generating devices are of fixed size (for example, drills, reamers, etc.). When designing with purchased components that have fixed outer diameters (bearings, bushings, etc.), a basic shaft system may be used.
allowance: The allowance is the intended difference in the sizes of mating parts. This allowance may be positive (indicated with a "+" symbol), which means there is intended clearance between the parts; negative ("-"), for intentional interference; or "zero allowance" if the two parts are intended to be the "same size". This last case is common in selective assembly.
The extreme permissible values of a dimension are known as limits. The degree of tightness or looseness between two mating parts that are intended to act together is known as the fit of the parts. The character of the fit depends upon the use of the parts. Thus, the fit between members that move or rotate relative to each other, such as a shaft rotating in a bearing, is considerably different from the fit that is designed to prevent any relative motion between two parts, such as a wheel attached to an axle.
In selecting and specifying limits and fits for various applications, the interests of interchangeable manufacturing require that (1) standard definitions of terms relating to limits and fits be used; (2) preferred basic sizes be selected wherever possible to reduce material and tool costs; (3) limits be based upon a series of preferred tolerances and allowances; and (4) a uniform system of applying tolerances (bilateral or unilateral) be used.
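To tie the definitions above together, here is a minimal sketch of limit dimensioning under the basic hole system. The allowance and tolerance values are made-up examples, not figures from any standard fit table:

```python
# Limit dimensions for a clearance fit under the basic hole system:
# the hole design size equals the basic size, tolerances are applied
# unilaterally (hole varies larger only, shaft smaller only), and the
# design sizes correspond to Maximum Material Condition.
def basic_hole_limits(basic_size, allowance, hole_tol, shaft_tol):
    """Return ((hole lower, hole upper), (shaft upper, shaft lower))."""
    hole_lower = round(basic_size, 4)                          # design size
    hole_upper = round(basic_size + hole_tol, 4)               # larger only
    shaft_upper = round(basic_size - allowance, 4)             # design size (MMC)
    shaft_lower = round(basic_size - allowance - shaft_tol, 4) # smaller only
    return (hole_lower, hole_upper), (shaft_upper, shaft_lower)

# Example: basic size 1.2500 in, clearance allowance 0.0020 in,
# a tolerance of 0.0010 in on each part.
hole, shaft = basic_hole_limits(1.2500, 0.0020, 0.0010, 0.0010)
print(hole)   # (1.25, 1.251)
print(shaft)  # (1.248, 1.247)
```

Note that the design sizes (1.2500 for the hole, 1.2480 for the shaft) appear first in each pair, matching the convention that the design size is written in the top position.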
Modern Communication Services
Society is becoming more informationally and visually oriented every day. Personal computing facilitates easy access, manipulation, storage, and exchange of information. These processes require reliable transmission of data. Communicating documents by images and the use of high-resolution graphics terminals provide a more natural and informative mode of human interaction than voice and data alone. Video teleconferencing enhances group interaction at a distance. High-definition entertainment video improves picture quality at the expense of higher transmission bit-rates, which may require new transmission means other than the present overcrowded radio spectrum. A modern telecommunications network (such as the broadband network) must provide all these different services (multi-services) to the user.
Differences between traditional (telephony) and modern communication services
Conventional telephony:
* uses the voice medium only
* connects only two telephones per call
* uses circuits of fixed bit rate
In contrast, modern communication services depart from the conventional telephony service in these three essential aspects. Modern communication services can be:
* Multimedia
* multi-point, and
* multi-rate
These aspects are examined individually in the following three sub-sections.
* Multi-media: A multi-media call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication qualities, such as:
o bandwidth requirement
o signal latency within the network, and
o signal fidelity upon delivery by the network
Moreover, the information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.
* Multi-point: A multi-point call involves the setup of connections among more than two people. These connections can be multi-media. They can be one way or two way communications. These connections may be reconfigured many times within the duration of a call. A few examples will be used to contrast point-to-point communications versus multi-point communications. Traditional voice calls are predominantly two party calls, requiring a point-to-point connection using only the voice medium. To access pictorial information in a remote database would require a point-to-point connection that sends low bit-rate queries to the database, and high bit-rate video from the database. Entertainment video applications are largely point-to-multi-point connections, requiring one way communication of full motion video and audio from the program source to the viewers. Video teleconferencing involves connections among many parties, communicating voice, video, as well as data. Thus, offering future services requires flexible management of the connection and media requests of a multi-point, multi-media communication call.
* Multi-rate: A multi-rate service network is one which allocates transmission capacity flexibly to connections. A multi-media network has to support a broad range of bit-rates demanded by connections, not only because there are many communication media, but also because a communication medium may be encoded by algorithms with different bit-rates. For example, audio signals can be encoded with bit-rates ranging from less than 1 kbit/s to hundreds of kbit/s, using different encoding algorithms with a wide range of complexity and quality of audio reproduction. Similarly, full motion video signals may be encoded with bit-rates ranging from less than 1 Mbit/s to hundreds of Mbit/s. Thus a network transporting both video and audio signals may have to integrate traffic with a very broad range of bit-rates.
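As a rough illustration of what "allocating transmission capacity flexibly" means, here is a toy admission sketch in which connections with widely different bit-rates share one link. The request rates and the link capacity are invented numbers, and real networks use far more sophisticated admission control:

```python
# Toy multi-rate admission: connections request very different bit-rates
# (low-rate audio through high-rate video) against one shared link.
LINK_CAPACITY_KBPS = 150_000  # an assumed 150 Mbit/s link

def admit(requests_kbps, capacity_kbps=LINK_CAPACITY_KBPS):
    """Greedily admit connection requests while capacity remains."""
    admitted, used = [], 0
    for rate in requests_kbps:
        if used + rate <= capacity_kbps:
            admitted.append(rate)
            used += rate
    return admitted, used

# A mix of audio (8-64 kbit/s) and video (Mbit/s-range) connections.
requests = [8, 64, 1_500, 25_000, 140_000, 32]
admitted, used = admit(requests)
print(admitted, used)  # the 140 Mbit/s request is refused
```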
Light tree
Definition
The concept of a light tree is introduced in a wavelength-routed optical network, which employs wavelength-division multiplexing (WDM).
Depending on the underlying physical topology, networks can be classified into three generations:
a) First Generation: these networks do not employ fiber optic technology; instead they employ copper-based or microwave technology, e.g. Ethernet.
b) Second Generation: these networks use optical fibers for data transmission but switching is performed in the electronic domain, e.g. FDDI.
c) Third Generation: in these networks both data transmission and switching are performed in the optical domain, e.g. WDM.
WDM wide area networks employ tunable lasers and filters at access nodes and optical/electronic switches at routing nodes. An access node may transmit signals on different wavelengths, which are coupled into the fiber using wavelength multiplexers. An optical signal passing through an optical wavelength-routing switch (WRS) may be routed from an input fiber to an output fiber without undergoing opto-electronic conversion.
A light path is an all-optical channel, which may be used to carry circuit switched traffic, and it may span multiple fiber links. A light path is set up by assigning a particular wavelength to it on each link along its route. In the absence of wavelength converters, a light path must occupy the same wavelength on all of the fiber links it traverses; this is known as the wavelength continuity constraint.
A light path can create logical (or virtual) neighbors out of nodes that may be geographically far apart from each other. A light path carries not only the direct traffic between the nodes it interconnects, but also traffic from nodes upstream of the source to nodes downstream of the destination. A major objective of light path communication is to reduce the number of hops a packet has to traverse.
Under light path communication, the network employs an equal number of transmitters and receivers because each light path operates on a point-to-point basis. However, this approach is not able to fully utilize all of the wavelengths on all of the fiber links in the network, nor is it able to fully exploit the switching capability of each WRS.
A light tree is a point-to-multipoint all-optical channel, which may span multiple fiber links. Hence, a light tree enables single-hop communication between a source node and a set of destination nodes. Thus, a light tree based virtual topology can significantly reduce the hop distance, thereby increasing the network throughput.
Requirements:
1. Multicast-capable wavelength routing switches (MWRS) at every node in the network.
2. More optical amplifiers in the network.
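The advantage of a light tree over a set of light paths for multicast can be sketched in a few lines. The node names and the scenario are invented for illustration; the point is simply that one point-to-multipoint channel replaces several point-to-point channels:

```python
# Contrast light paths (point-to-point) with a light tree
# (point-to-multipoint) for delivering traffic from one source
# to several destination nodes in a single optical hop.

def lightpath_channels(source, destinations):
    """One all-optical channel per destination (point-to-point)."""
    return [(source, d) for d in destinations]

def light_tree_channel(source, destinations):
    """One all-optical channel covering every destination."""
    return [(source, tuple(destinations))]

dests = ["B", "C", "D"]
print(len(lightpath_channels("A", dests)))  # 3 separate channels, 3 transmitters
print(len(light_tree_channel("A", dests)))  # 1 multicast channel, 1 transmitter
```

This is why a light tree needs multicast-capable switches (to split the signal toward several output fibers) and extra optical amplifiers (to compensate for the power lost at each split).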
LIDAR

LIDAR (Light Detection and Ranging) is an optical remote sensing technology that measures properties of scattered light to find the range and/or other information about a distant target. The prevalent method of determining distance to an object or surface is to use laser pulses. As in the similar radar technology, which uses radio waves instead of light, the range to an object is determined by measuring the time delay between transmission of a pulse and detection of the reflected signal. LIDAR technology has applications in geomatics, archaeology, geography, geology, geomorphology, seismology, remote sensing and atmospheric physics.[1] Other terms for LIDAR include ALSM (Airborne Laser Swath Mapping) and laser altimetry. The acronym LADAR (Laser Detection and Ranging) is often used in military contexts. The term laser radar is also in use but is misleading because the technique uses laser light and not the radio waves that are the basis of conventional radar.
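The time-of-flight principle can be sketched in a couple of lines: the pulse travels to the target and back, so the range is half the delay times the speed of light. The one-microsecond delay in the example is an arbitrary illustration:

```python
# Pulse time-of-flight ranging, the principle shared by radar and lidar:
# range = (speed of light * round-trip delay) / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(delay_s):
    """Range to the target from the round-trip delay of one pulse."""
    return C * delay_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m.
print(round(range_from_delay(1e-6), 1))  # 149.9
```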
General description
The primary difference between lidar and radar is that with lidar, much shorter wavelengths of the electromagnetic spectrum are used, typically in the ultraviolet, visible, or near infrared. In general it is possible to image a feature or object only about the same size as the wavelength, or larger. Thus lidar is highly sensitive to aerosols and cloud particles and has many applications in atmospheric research and meteorology.
An object needs to produce a dielectric discontinuity in order to reflect the transmitted wave. At radar (microwave or radio) frequencies, a metallic object produces a significant reflection. However non-metallic objects, such as rain and rocks, produce weaker reflections, and some materials may produce no detectable reflection at all, meaning some objects or features are effectively invisible at radar frequencies. This is especially true for very small objects (such as single molecules and aerosols).
Lasers provide one solution to these problems. The beam densities and coherency are excellent. Moreover the wavelengths are much smaller than can be achieved with radio systems, and range from about 10 micrometers to the UV (ca. 250 nm). At such wavelengths, the waves are "reflected" very well from small objects. This type of reflection is called backscattering. Different types of scattering are used for different lidar applications, most common are Rayleigh scattering, Mie scattering and Raman scattering as well as fluorescence. The wavelengths are ideal for making measurements of smoke and other airborne particles (aerosols), clouds, and air molecules.
A laser typically has a very narrow beam which allows the mapping of physical features with very high resolution compared with radar. In addition, many chemical compounds interact more strongly at visible wavelengths than at microwaves, resulting in a stronger image of these materials. Suitable combinations of lasers can allow for remote mapping of atmospheric contents by looking for wavelength-dependent changes in the intensity of the returned signal.
Lidar has been used extensively for atmospheric research and meteorology. With the deployment of GPS in the 1980s, precision positioning of aircraft became possible. GPS-based surveying technology has made airborne surveying and mapping applications possible and practical. Many such systems have been developed, using downward-looking lidar instruments mounted in aircraft or satellites. A recent example is the NASA Experimental Advanced Research Lidar.
LIDAR is an acronym for LIght Detection And Ranging.
What can you do with LIDAR?
* Measure distance
* Measure speed
* Measure rotation
* Measure chemical composition and concentration
of a remote target, where the target can be a clearly defined object, such as a vehicle, or a diffuse object such as a smoke plume or clouds.
Applications
Other than those applications mentioned above, there are a wide variety of applications of LIDAR.
Archaeology
LiDAR has many applications in the field of archaeology, including aiding in the planning of field campaigns, mapping features beneath forest canopy, and providing an overview of broad, continuous features that may be indistinguishable on the ground. LiDAR can also provide archaeologists with the ability to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. LiDAR-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation. For example, at Fort Beausejour - Fort Cumberland National Historic Site, Canada, previously undiscovered archaeological features related to the siege of the fort in 1755 have been mapped. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hillshades of the DEM created with artificial illumination from various angles. With LiDAR, the ability to produce high-resolution datasets quickly and relatively cheaply can be an advantage. Beyond efficiency, its ability to penetrate forest canopy has led to the discovery of features that were not distinguishable through traditional geo-spatial methods and are difficult to reach through field surveys.
Meteorology
The first LIDARs were used for studies of atmospheric composition, structure, clouds, and aerosols. Initially based on ruby lasers, LIDARs for meteorological applications were constructed shortly after the invention of the laser and represent one of the first applications of laser technology.
Elastic backscatter LIDAR is the simplest type of lidar and is typically used for studies of aerosols and clouds. The backscattered wavelength is identical to the transmitted wavelength, and the magnitude of the received signal at a given range depends on the backscatter coefficient of scatterers at that range and the extinction coefficients of the scatterers along the path to that range. The extinction coefficient is typically the quantity of interest.
Differential Absorption LIDAR (DIAL) is used for range-resolved measurements of a particular gas in the atmosphere, such as ozone, carbon dioxide, or water vapor. The LIDAR transmits two wavelengths: an "on-line" wavelength that is absorbed by the gas of interest and an off-line wavelength that is not absorbed. The differential absorption between the two wavelengths is a measure of the concentration of the gas as a function of range. DIAL LIDARs are essentially dual-wavelength elastic backscatter LIDARS.
Raman LIDAR is also used for measuring the concentration of atmospheric gases, but can also be used to retrieve aerosol parameters as well. Raman LIDAR exploits inelastic scattering to single out the gas of interest from all other atmospheric constituents. A small portion of the energy of the transmitted light is deposited in the gas during the scattering process, which shifts the scattered light to a longer wavelength by an amount that is unique to the species of interest. The higher the concentration of the gas, the stronger the magnitude of the backscattered signal.
Doppler LIDAR is used to measure wind speed along the beam by measuring the frequency shift of the backscattered light. Scanning LIDARs, such as NASA's HARLIE LIDAR, have been used to measure atmospheric wind velocity in a large three dimensional cone. ESA's wind mission ADM-Aeolus will be equipped with a Doppler LIDAR system in order to provide global measurements of vertical wind profiles. A doppler LIDAR system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition. Doppler LIDAR systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems using signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing.
Geology
In geology and seismology a combination of aircraft-based LIDAR and GPS have evolved into an important tool for detecting faults and measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, USA. This combination is also being used to measure uplift at Mt. St. Helens by using data from before and after the 2004 uplift. Airborne LIDAR systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite based system is NASA's ICESat which includes a LIDAR system for this purpose. NASA's Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis.
Physics and Astronomy
A world-wide network of observatories uses lidars to measure the distance to reflectors placed on the moon, allowing the moon's position to be measured with mm precision and tests of general relativity to be done. MOLA, the Mars Orbiting Laser Altimeter, used a LIDAR instrument in a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet.
In September, 2008, NASA's Phoenix Lander used LIDAR to detect snow in the atmosphere of Mars.
In atmospheric physics, LIDAR is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. LIDAR can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles.
At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, LIDAR Thomson Scattering is used to determine Electron Density and Temperature profiles of the plasma.
Biology and conservation
LIDAR has also found many applications in forestry. Canopy heights, biomass measurements, and leaf area can all be studied using airborne LIDAR systems. Similarly, LIDAR is also used by many industries, including Energy and Railroad, and the Department of Transportation as a faster way of surveying. Topographic maps can also be generated readily from LIDAR, including for recreational use such as in the production of orienteering maps.
In oceanography, LiDAR is used for estimation of phytoplankton fluorescence and generally biomass in the surface layers of the ocean. Another application is airborne lidar bathymetry of sea areas too shallow for hydrographic vessels.
Redwood ecology
The Save-the-Redwoods League is undertaking a project to map the tall redwoods on California's northern coast. LIDAR allows research scientists to not only measure the height of previously unmapped trees but to determine the biodiversity of the redwood forest. Dr. Stephen Sillett who is working with the League on the North Coast LIDAR project claims this technology will be useful in directing future efforts to preserve and protect ancient redwood trees.
Military and law enforcement
One situation where LIDAR has notable non-scientific application is in traffic speed law enforcement, for vehicle speed measurement, as a technology alternative to radar guns. The technology for this application is small enough to be mounted in a hand held camera "gun" and permits a particular vehicle's speed to be determined from a stream of traffic. Unlike RADAR which relies on doppler shifts to directly measure speed, police lidar relies on the principle of time-of-flight to calculate speed. The equivalent radar based systems are often not able to isolate particular vehicles from the traffic stream and are generally too large to be hand held. LIDAR has the distinct advantage of being able to pick out one vehicle in a cluttered traffic situation as long as the operator is aware of the limitations imposed by the range and beam divergence. Contrary to popular belief LIDAR does not suffer from “sweep” error when the operator uses the equipment correctly and when the LIDAR unit is equipped with algorithms that are able to detect when this has occurred. A combination of signal strength monitoring, receive gate timing, target position prediction and pre-filtering of the received signal wavelength prevents this from occurring. Should the beam illuminate sections of the vehicle with different reflectivity or the aspect of the vehicle changes during measurement that causes the received signal strength to be changed then the LIDAR unit will reject the measurement thereby producing speed readings of high integrity. For LIDAR units to be used in law enforcement applications a rigorous approval procedure is usually completed before deployment. Jelly-bean shaped vehicles are usually equipped with a vertical registration plate that, when illuminated causes a high integrity reflection to be returned to the LIDAR, many reflections and an averaging technique in the speed measurement process increase the integrity of the speed reading. 
In locations that do not require that a front or rear registration plate is fitted headlamps and rear-reflectors provide an almost ideal retro-reflective surface overcoming the reflections from uneven or non-compliant reflective surfaces thereby eliminating “sweep” error. It is these mechanisms which cause concern that LIDAR is somehow unreliable. Most traffic LIDAR systems send out a stream of approximately 100 pulses over the span of three-tenths of a second. A "black box," proprietary statistical algorithm picks and chooses which progressively shorter reflections to retain from the pulses over the short fraction of a second.
Military applications are not yet known to be in place and are possibly classified, but a considerable amount of research is underway in their use for imaging. Their higher resolution makes them particularly good for collecting enough detail to identify targets, such as tanks. Here the name LADAR is more common.
Five LIDAR units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge.
Vehicles
Lidar has been used to create Adaptive Cruise Control (ACC) systems for automobiles. Systems such as those by Siemens and Hella use a lidar device mounted in the front of the vehicle to monitor the distance between the vehicle and any vehicle in front of it. Often, the lasers are placed onto the bumper. In the event the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to speed up to speed preset by the driver.
Imaging
3-D imaging is done with both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser radar system that applies the so-called gated viewing technique. The gated viewing technique applies a pulsed laser and a fast gated camera. There are ongoing military research programmes in Sweden, Denmark, the USA and the UK with 3-D gated viewing imaging at several kilometers range with a range resolution and accuracy better than ten centimeters.
Coherent Imaging Lidar is possible using Synthetic Array Heterodyne Detection which is a form of Optical heterodyne detection that enables a staring single element receiver to act as though it were an imaging array.
Imaging LIDAR can also be performed using arrays of high speed detectors and modulation sensitive detectors arrays typically built on single chips using CMOS and hybrid CMOS / CCD fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed down converting the signals to video rate so that the array may be read like a camera. Using this technique many thousands of pixels / channels may be acquired simultaneously. In practical systems the limitation is light budget rather than parallel acquisition.
LIDAR has been used in the recording of a music video without cameras. The video for the song "House of Cards" by Radiohead is believed to be the first use of real-time 3D laser scanning to record a music video.
3D Mapping
Airborne LIDAR sensors are used by companies in the Remote Sensing area to create point clouds of the earth ground for further processing (e.g. used in forestry).


General description
The primary difference between lidar and radar is that lidar uses much shorter wavelengths of the electromagnetic spectrum, typically in the ultraviolet, visible, or near infrared. In general, an instrument can image only features or objects about the same size as its wavelength or larger. Lidar is therefore highly sensitive to aerosols and cloud particles and has many applications in atmospheric research and meteorology.
An object needs to produce a dielectric discontinuity in order to reflect the transmitted wave. At radar (microwave or radio) frequencies, a metallic object produces a significant reflection. However, non-metallic objects, such as rain and rocks, produce weaker reflections, and some materials may produce no detectable reflection at all, meaning some objects or features are effectively invisible at radar frequencies. This is especially true for very small objects such as single molecules and aerosols.
Lasers provide one solution to these problems. The beam densities and coherency are excellent, and the wavelengths are much smaller than can be achieved with radio systems, ranging from about 10 micrometers down to the UV (ca. 250 nm). At such wavelengths the waves are "reflected" very well from small objects; this type of reflection is called backscattering. Different types of scattering are used for different lidar applications; the most common are Rayleigh scattering, Mie scattering, and Raman scattering, as well as fluorescence. These wavelengths are ideal for making measurements of smoke and other airborne particles (aerosols), clouds, and air molecules.
A laser typically has a very narrow beam which allows the mapping of physical features with very high resolution compared with radar. In addition, many chemical compounds interact more strongly at visible wavelengths than at microwaves, resulting in a stronger image of these materials. Suitable combinations of lasers can allow for remote mapping of atmospheric contents by looking for wavelength-dependent changes in the intensity of the returned signal.
Lidar has been used extensively for atmospheric research and meteorology. With the deployment of GPS in the 1980s, precision positioning of aircraft became possible, and GPS-based surveying technology has made airborne surveying and mapping applications practical. Many such systems have been developed, using downward-looking lidar instruments mounted in aircraft or satellites. A recent example is NASA's Experimental Advanced Research Lidar.
LIDAR is an acronym for LIght Detection And Ranging.
What can you do with LIDAR?
* Measure distance
* Measure speed
* Measure rotation
* Measure chemical composition and concentration
of a remote target, where the target can be a clearly defined object, such as a vehicle, or a diffuse object, such as a smoke plume or a cloud.
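All of these measurements start from the same primitive: timing the round trip of a laser pulse. A minimal sketch of that calculation (the pulse time below is purely illustrative):

```python
# Basic LIDAR ranging: distance follows from the round-trip time of a
# laser pulse, range = c * t / 2 (the factor of 2 accounts for the
# out-and-back path).

C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a target from the pulse round-trip time."""
    return C * t_seconds / 2.0

# A pulse that returns after 1 microsecond corresponds to ~150 m.
print(range_from_round_trip(1e-6))  # ~149.9 m
```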
Applications
Beyond the basic measurements listed above, LIDAR has a wide variety of applications.
Archaeology
LiDAR has many applications in the field of archaeology, including aiding in the planning of field campaigns, mapping features beneath forest canopy, and providing an overview of broad, continuous features that may be indistinguishable on the ground. LiDAR also gives archaeologists the ability to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. LiDAR-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation. For example, at Fort Beausejour - Fort Cumberland National Historic Site, Canada, previously undiscovered archaeological features related to the siege of the fort in 1755 have been mapped. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hillshades of the DEM created with artificial illumination from various angles. LiDAR's ability to produce high-resolution datasets quickly and relatively cheaply is an advantage in itself; beyond efficiency, its ability to penetrate forest canopy has led to the discovery of features that were not distinguishable through traditional geo-spatial methods and are difficult to reach through field surveys.
Meteorology
The first LIDARs were used for studies of atmospheric composition, structure, clouds, and aerosols. Initially based on ruby lasers, LIDARs for meteorological applications were constructed shortly after the invention of the laser and represent one of the first applications of laser technology.
Elastic backscatter LIDAR is the simplest type of lidar and is typically used for studies of aerosols and clouds. The backscattered wavelength is identical to the transmitted wavelength, and the magnitude of the received signal at a given range depends on the backscatter coefficient of scatterers at that range and the extinction coefficients of the scatterers along the path to that range. The extinction coefficient is typically the quantity of interest.
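The range dependence described above is often summarized by the single-scattering lidar equation. A rough sketch, assuming a uniform atmosphere and purely illustrative coefficient values:

```python
import math

# Sketch of the single-scattering elastic lidar equation for a uniform
# atmosphere: P(r) = K * beta / r^2 * exp(-2 * alpha * r), where K lumps
# the system constants, beta is the backscatter coefficient and alpha the
# extinction coefficient. The default values are illustrative, not measured.

def received_power(r_m: float, k: float = 1.0,
                   beta: float = 1e-6, alpha: float = 1e-4) -> float:
    """Relative received power from range r (metres)."""
    return k * beta / r_m**2 * math.exp(-2.0 * alpha * r_m)

# The return falls off with both 1/r^2 geometry and two-way extinction:
print(received_power(500.0) > received_power(1000.0))  # True
```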
Differential Absorption LIDAR (DIAL) is used for range-resolved measurements of a particular gas in the atmosphere, such as ozone, carbon dioxide, or water vapor. The LIDAR transmits two wavelengths: an "on-line" wavelength that is absorbed by the gas of interest and an off-line wavelength that is not absorbed. The differential absorption between the two wavelengths is a measure of the concentration of the gas as a function of range. DIAL LIDARs are essentially dual-wavelength elastic backscatter LIDARS.
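The standard retrieval behind that statement computes the mean gas number density in a range cell from the on-line/off-line return ratios. A sketch, where the cross-section and power values used for checking are illustrative assumptions:

```python
import math

# Sketch of the DIAL retrieval: mean gas number density in the range cell
# [r1, r2] from the ratio of on-line and off-line returns. delta_sigma is
# the differential absorption cross-section (m^2), delta_r the cell width (m).

def dial_number_density(p_on_r1: float, p_on_r2: float,
                        p_off_r1: float, p_off_r2: float,
                        delta_sigma: float, delta_r: float) -> float:
    """Mean number density (molecules / m^3) in a range cell of width delta_r."""
    ratio = (p_off_r2 * p_on_r1) / (p_on_r2 * p_off_r1)
    return math.log(ratio) / (2.0 * delta_sigma * delta_r)
```

With synthetic returns constructed for a known density, the formula recovers that density exactly, since all range-independent factors cancel in the ratio.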
Raman LIDAR is also used for measuring the concentration of atmospheric gases and can retrieve aerosol parameters as well. Raman LIDAR exploits inelastic scattering to single out the gas of interest from all other atmospheric constituents: a small portion of the energy of the transmitted light is deposited in the gas during the scattering process, which shifts the scattered light to a longer wavelength by an amount that is unique to the species of interest. The higher the concentration of the gas, the stronger the magnitude of the backscattered signal.
Doppler LIDAR is used to measure wind speed along the beam by measuring the frequency shift of the backscattered light. Scanning LIDARs, such as NASA's HARLIE LIDAR, have been used to measure atmospheric wind velocity in a large three dimensional cone. ESA's wind mission ADM-Aeolus will be equipped with a Doppler LIDAR system in order to provide global measurements of vertical wind profiles. A doppler LIDAR system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition. Doppler LIDAR systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems using signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing.
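The underlying relation is simple: for backscatter, the frequency shift is twice the line-of-sight speed divided by the wavelength. A sketch, assuming a typical 1.55 µm fiber-laser wavelength (an assumption, not a spec for any particular instrument):

```python
# Line-of-sight wind speed from the Doppler shift of backscattered light.
# For backscatter the shift is delta_f = 2 * v / wavelength, so
# v = delta_f * wavelength / 2.

def los_wind_speed(delta_f_hz: float, wavelength_m: float = 1.55e-6) -> float:
    """Wind speed along the beam (m/s) from the observed frequency shift (Hz)."""
    return delta_f_hz * wavelength_m / 2.0

# A ~12.9 MHz shift at 1.55 um corresponds to ~10 m/s along the beam.
print(los_wind_speed(12.9e6))  # ~10.0
```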
Geology
In geology and seismology, the combination of aircraft-based LIDAR and GPS has evolved into an important tool for detecting faults and measuring uplift. Together the two technologies can produce extremely accurate elevation models of terrain, even measuring ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, USA, and is also being used to measure uplift at Mt. St. Helens with data from before and after the 2004 uplift. Airborne LIDAR systems monitor glaciers and can detect subtle amounts of growth or decline. One satellite-based system is NASA's ICESat, which includes a LIDAR instrument for this purpose. NASA's Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis.
Physics and Astronomy
A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the Moon, allowing the Moon's position to be measured with millimeter precision and tests of general relativity to be performed. MOLA, the Mars Orbiter Laser Altimeter, a LIDAR instrument carried on a Mars-orbiting satellite (NASA's Mars Global Surveyor), produced a spectacularly precise global topographic survey of the red planet.
In September 2008, NASA's Phoenix Lander used LIDAR to detect snow in the atmosphere of Mars.
In atmospheric physics, LIDAR is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. LIDAR can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles.
At the JET nuclear fusion research facility in the UK, near Abingdon, Oxfordshire, LIDAR Thomson scattering is used to determine electron density and temperature profiles of the plasma.
Biology and conservation
LIDAR has also found many applications in forestry: canopy heights, biomass, and leaf area can all be studied using airborne LIDAR systems. LIDAR is likewise used by many industries, including energy and rail, and by departments of transportation as a faster way of surveying. Topographic maps can also be generated readily from LIDAR data, including for recreational uses such as the production of orienteering maps.
In oceanography, LiDAR is used to estimate phytoplankton fluorescence and, more generally, biomass in the surface layers of the ocean. Another application is airborne lidar bathymetry of sea areas too shallow for hydrographic vessels.
Redwood ecology
The Save-the-Redwoods League is undertaking a project to map the tall redwoods on California's northern coast. LIDAR allows research scientists not only to measure the height of previously unmapped trees but also to assess the biodiversity of the redwood forest. Dr. Stephen Sillett, who is working with the League on the North Coast LIDAR project, claims this technology will be useful in directing future efforts to preserve and protect ancient redwood trees.
Military and law enforcement
One notable non-scientific application of LIDAR is in traffic speed law enforcement, as an alternative to radar guns. The technology is small enough to be mounted in a hand-held camera "gun" and permits a particular vehicle's speed to be determined from a stream of traffic. Unlike RADAR, which relies on Doppler shifts to measure speed directly, police lidar relies on the principle of time-of-flight to calculate speed. The equivalent radar-based systems are often unable to isolate particular vehicles from the traffic stream and are generally too large to be hand-held. LIDAR has the distinct advantage of being able to pick out one vehicle in a cluttered traffic situation, as long as the operator is aware of the limitations imposed by range and beam divergence. Contrary to popular belief, LIDAR does not suffer from “sweep” error when the operator uses the equipment correctly and the unit is equipped with algorithms able to detect when it has occurred: a combination of signal-strength monitoring, receive-gate timing, target position prediction, and pre-filtering of the received signal wavelength prevents it. Should the beam illuminate sections of the vehicle with different reflectivity, or should the aspect of the vehicle change during measurement so that the received signal strength changes, the LIDAR unit rejects the measurement, producing speed readings of high integrity. For LIDAR units to be used in law enforcement, a rigorous approval procedure is usually completed before deployment. Smooth, jelly-bean-shaped vehicles are usually equipped with a vertical registration plate that, when illuminated, returns a high-integrity reflection to the LIDAR; many reflections and an averaging technique in the speed measurement process increase the integrity of the speed reading.
In jurisdictions that do not require a front or rear registration plate, headlamps and rear reflectors provide an almost ideal retro-reflective surface, overcoming reflections from uneven or non-compliant surfaces and thereby eliminating “sweep” error. It is these mechanisms that give rise to the concern that LIDAR is somehow unreliable. Most traffic LIDAR systems send out a stream of approximately 100 pulses over three-tenths of a second; a proprietary, "black box" statistical algorithm then selects which of the progressively shorter reflections to retain from the pulses over that fraction of a second.
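The time-of-flight principle described above can be sketched as follows; the pulse count and interval are illustrative, not any particular unit's specification:

```python
# Sketch of the time-of-flight principle behind police lidar: fire pulses
# at a known interval, convert each round-trip time to a range, and take
# the rate of change of range as the vehicle's closing speed.

C = 299_792_458.0  # speed of light, m/s

def speed_from_pulses(round_trip_times_s, pulse_interval_s):
    """Average closing speed (m/s) from successive pulse round-trip times."""
    ranges = [C * t / 2.0 for t in round_trip_times_s]
    # Total change in range divided by total elapsed time between pulses.
    return (ranges[0] - ranges[-1]) / (pulse_interval_s * (len(ranges) - 1))
```

A real unit averages over many more pulses and rejects inconsistent returns, as the text describes; this sketch shows only the core arithmetic.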
Military applications are not yet known to be in place and may be classified, but a considerable amount of research is underway into the use of lidar for imaging. Its higher resolution makes it particularly good for collecting enough detail to identify targets, such as tanks. In this context the name LADAR is more common.
Five LIDAR units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge.
Vehicles
Lidar has been used to create Adaptive Cruise Control (ACC) systems for automobiles. Systems such as those by Siemens and Hella use a lidar device mounted at the front of the vehicle, often on the bumper, to monitor the distance to any vehicle ahead. If the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to speed up to the speed preset by the driver.
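The control decision described above can be sketched as a simple rule; the threshold and interface here are assumptions for illustration, not any vendor's actual system:

```python
# Minimal sketch of the ACC decision logic described in the text: brake
# when the lidar-measured gap is too small, otherwise return to the
# driver's preset speed. The 30 m minimum gap is an illustrative value.

def acc_command(gap_m: float, own_speed: float, set_speed: float,
                min_gap_m: float = 30.0) -> str:
    """Return 'brake', 'accelerate' or 'hold' from the measured gap."""
    if gap_m < min_gap_m:
        return "brake"        # vehicle ahead too close: slow down
    if own_speed < set_speed:
        return "accelerate"   # road clear: climb back to the preset speed
    return "hold"

print(acc_command(20.0, 25.0, 30.0))  # brake
print(acc_command(80.0, 25.0, 30.0))  # accelerate
```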
Imaging
3-D imaging is done with both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser radar system that applies the so-called gated viewing technique, which combines a pulsed laser with a fast gated camera. There are ongoing military research programmes in Sweden, Denmark, the USA and the UK on 3-D gated viewing imaging at ranges of several kilometers with range resolution and accuracy better than ten centimeters.
Coherent Imaging Lidar is possible using Synthetic Array Heterodyne Detection which is a form of Optical heterodyne detection that enables a staring single element receiver to act as though it were an imaging array.
Imaging LIDAR can also be performed using arrays of high-speed detectors and modulation-sensitive detector arrays, typically built on single chips using CMOS and hybrid CMOS/CCD fabrication techniques. In these devices each pixel performs some local processing, such as demodulation or gating at high speed, down-converting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels/channels may be acquired simultaneously. In practical systems the limitation is the light budget rather than parallel acquisition.
LIDAR has been used in the recording of a music video without cameras. The video for the song "House of Cards" by Radiohead is believed to be the first use of real-time 3D laser scanning to record a music video.
3D Mapping
Airborne LIDAR sensors are used by companies in the remote sensing field to create point clouds of the Earth's surface for further processing (e.g. in forestry).
RADAR
Radar is an object detection system that uses electromagnetic waves to identify the range, altitude, direction, or speed of both moving and fixed objects such as aircraft, ships, motor vehicles, weather formations, and terrain. The term RADAR was coined in 1941 as an acronym for radio detection and ranging. The term has since entered the English language as a standard word, radar, losing the capitalization. Radar was originally called RDF (Radio Direction Finder, now used as a totally different device) in the United Kingdom.
A radar system has a transmitter that emits microwaves or radio waves. These waves are in phase when emitted, and when they come into contact with an object they are scattered in all directions. The signal is thus partly reflected back, with a slight change of wavelength (and thus frequency) if the target is moving. The receiver is usually, but not always, in the same location as the transmitter. Although the returned signal is usually very weak, it can be amplified by electronic techniques in the receiver and by the antenna configuration. This enables radar to detect objects at ranges where other emissions, such as sound or visible light, would be too weak to detect. Radar uses include meteorological detection of precipitation, measuring ocean surface waves, air traffic control, police detection of speeding traffic, determining the speed of baseballs, and military applications.
RAdio Detection And Ranging, or RADAR for short, relies on sending and receiving electromagnetic radiation, usually in the form of radio waves (see Radio) or microwaves. Electromagnetic radiation is energy that moves in waves at or near the speed of light; its characteristics depend on wavelength. Gamma rays and X-rays have very short wavelengths. Visible light is a tiny slice of the electromagnetic spectrum with wavelengths longer than X-rays but shorter than microwaves. Radar systems use long-wavelength electromagnetic radiation in the microwave and radio ranges. Because of their long wavelengths, radio waves and microwaves tend to reflect better than shorter-wavelength radiation, which tends to scatter or be absorbed before it reaches the target. Radio waves at the long-wavelength end of the spectrum will even reflect off the ionosphere, a layer of electrically charged particles in the Earth's atmosphere. The main challenges for radar are stealth technology, clutter, and jamming. Its applications include road traffic control, maritime navigation, military safety, air traffic control, and meteorology, among others.
History
Several inventors, scientists, and engineers contributed to the development of radar. The first to use radio waves to detect "the presence of distant metallic objects" was Christian Hülsmeyer, who in 1904 demonstrated the feasibility of detecting the presence of a ship in dense fog, but not its distance. He received Reichspatent Nr. 165546 for his pre-radar device in April 1904, and later patent 169154 for a related amendment for ranging. He also received a patent in England for his telemobiloscope on September 22, 1904.
In August 1917 Nikola Tesla first established principles regarding frequency and power level for the first primitive radar units. He stated, "by their [standing electromagnetic waves] use we may produce at will, from a sending station, an electrical effect in any particular region of the globe; [with which] we may determine the relative position or course of a moving object, such as a vessel at sea, the distance traversed by the same, or its speed."
Before the Second World War, developments by the Americans, the Germans, the French, the Soviets, and the British led to the modern version of radar. In 1934 the Frenchman Émile Girardeau stated he was building a radar system "conceived according to the principles stated by Tesla" and obtained a patent (French Patent n° 788795, 1934) for a working dual radar system, part of which was installed on the liner Normandie in 1935. The same year, the American Dr. Robert M. Page tested the first monopulse radar, and the Soviet military engineer P. K. Oschepkov, in collaboration with the Leningrad Electrophysical Institute, produced an experimental apparatus, RAPID, capable of detecting an aircraft within 3 km of a receiver. Working along similar lines, the Hungarian Zoltán Bay produced a working model by 1936 at the Tungsram laboratory.
However, it was the British who were the first to fully exploit radar as a defence against aircraft attack, spurred on by fears that the Germans were developing death rays. Following a study of the possibility of propagating electromagnetic energy and its likely effect, the British scientists asked by the Air Ministry to investigate concluded that a death ray was impractical but that detection of aircraft appeared feasible. Robert Watson-Watt demonstrated the capabilities of a working prototype to his superiors and patented the device in 1935 (British Patent GB593017). It served as the basis for the Chain Home network of radars that defended Great Britain.
The war precipitated research to find better resolution, more portability and more features for radar. The post-war years have seen the use of radar in fields as diverse as air traffic control, weather monitoring, astrometry and road speed control.
Principles
The radar dish, or antenna, transmits pulses of radio waves or microwaves which bounce off any object in their path. The object returns a tiny part of the wave's energy to a dish or antenna which is usually located at the same site as the transmitter. The time it takes for the reflected waves to return to the dish enables a computer to calculate how far away the object is, its radial velocity and other characteristics.
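The two basic computations described above, range from the round-trip time and radial speed from the Doppler shift, can be sketched as follows (the X-band carrier frequency is a typical illustrative value, not a standard):

```python
# Sketch of the two quantities a radar receiver computes from an echo:
# range from the round-trip time (range = c * t / 2), and radial speed
# from the Doppler shift (f_d = 2 * v * f0 / c, so v = f_d * c / (2 * f0)).

C = 299_792_458.0  # speed of light, m/s

def echo_range(round_trip_s: float) -> float:
    """Target range (m) from the echo round-trip time (s)."""
    return C * round_trip_s / 2.0

def radial_speed(doppler_hz: float, carrier_hz: float = 10e9) -> float:
    """Target radial speed (m/s) from the Doppler shift (Hz)."""
    return doppler_hz * C / (2.0 * carrier_hz)

# An echo arriving after 200 microseconds puts the target ~30 km away:
print(round(echo_range(200e-6)))  # 29979
```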
Reflection
Electromagnetic waves reflect (scatter) from any large change in the dielectric or diamagnetic constants. This means that a solid object in air or a vacuum, or other significant change in atomic density between the object and what is surrounding it, will usually scatter radar (radio) waves. This is particularly true for electrically conductive materials, such as metal and carbon fiber, making radar particularly well suited to the detection of aircraft and ships. Radar absorbing material, containing resistive and sometimes magnetic substances, is used on military vehicles to reduce radar reflection. This is the radio equivalent of painting something a dark color.
Radar waves scatter in a variety of ways depending on the size (wavelength) of the radio wave and the shape of the target. If the wavelength is much shorter than the target's size, the wave will bounce off in a way similar to the way light is reflected by a mirror. If the wavelength is much longer than the size of the target, the target is polarized (positive and negative charges are separated), like a dipole antenna. This is described by Rayleigh scattering, an effect that creates the Earth's blue sky and red sunsets. When the two length scales are comparable, there may be resonances. Early radars used very long wavelengths that were larger than the targets and received a vague signal, whereas some modern systems use shorter wavelengths (a few centimeters or shorter) that can image objects as small as a loaf of bread.
Short radio waves reflect from curves and corners, in a way similar to glint from a rounded piece of glass. The most reflective targets for short wavelengths have 90° angles between the reflective surfaces. A structure consisting of three flat surfaces meeting at a single corner, like the corner on a box, will always reflect waves entering its opening directly back at the source. These so-called corner reflectors are commonly used as radar reflectors to make otherwise difficult-to-detect objects easier to detect, and are often found on boats in order to improve their detection in a rescue situation and to reduce collisions.
For similar reasons, objects attempting to avoid detection will angle their surfaces in a way to eliminate inside corners and avoid surfaces and edges perpendicular to likely detection directions, which leads to "odd" looking stealth aircraft. These precautions do not completely eliminate reflection because of diffraction, especially at longer wavelengths. Half wavelength long wires or strips of conducting material, such as chaff, are very reflective but do not direct the scattered energy back toward the source. The extent to which an object reflects or scatters radio waves is called its radar cross section.
Polarization
In the transmitted radar signal, the electric field is perpendicular to the direction of propagation, and this direction of the electric field is the polarization of the wave. Radars use horizontal, vertical, linear and circular polarization to detect different types of reflections. For example, circular polarization is used to minimize the interference caused by rain. Linear polarization returns usually indicate metal surfaces. Random polarization returns usually indicate a fractal surface, such as rocks or soil, and are used by navigation radars.
Interference
Radar systems must overcome unwanted signals in order to focus only on the actual targets of interest. These unwanted signals may originate from internal and external sources, both passive and active. The ability of the radar system to overcome these unwanted signals defines its signal-to-noise ratio (SNR). SNR is defined as the ratio of a signal power to the noise power within the desired signal.
In less technical terms, SNR compares the level of a desired signal (such as targets) to the level of background noise. The higher a system's SNR, the better it is in isolating actual targets from the surrounding noise signals.
Noise
Signal noise is an internal source of random variations in the signal, which is generated by all electronic components. Noise typically appears as random variations superimposed on the desired echo signal received in the radar receiver. The lower the power of the desired signal, the more difficult it is to discern it from the noise (similar to trying to hear a whisper while standing near a busy road). Noise figure is a measure of the noise produced by a receiver compared to an ideal receiver, and this needs to be minimized.
Noise is also generated by external sources, most importantly the natural thermal radiation of the background scene surrounding the target of interest. In modern radar systems, due to the high performance of their receivers, the internal noise is typically about equal to or lower than the external scene noise. An exception is if the radar is aimed upwards at clear sky, where the scene is so "cold" that it generates very little thermal noise.
There will be also flicker noise due to electrons transit, but depending on 1/f, will be much lower than thermal noise when the frequency is high. Hence, in pulse radar, the system will be always heterodyne. See intermediate frequency.
Clutter
Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to the radar operators. Such targets include natural objects such as ground, sea, precipitation (such as rain, snow or hail), sand storms, animals (especially birds), atmospheric turbulence, and other atmospheric effects, such as ionosphere reflections and meteor trails. Clutter may also be returned from man-made objects such as buildings and, intentionally, by radar countermeasures such as chaff.
Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the centre of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range, since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna.
While some clutter sources may be undesirable for some radar applications (such as storm clouds for air-defence radars), they may be desirable for others (meteorological radars in this example). Clutter is considered a passive interference source, since it only appears in response to radar signals sent by the radar.
There are several methods of detecting and neutralizing clutter. Many of these methods rely on the fact that clutter tends to appear static between radar scans. Therefore, when comparing subsequent scans echoes, desirable targets will appear to move and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (note that meteorological radars wish for the opposite effect, therefore using linear polarization the better to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.
Constant False Alarm Rate (CFAR, a form of Automatic Gain Control, or AGC) is a method relying on the fact that clutter returns far outnumber echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software controlled, and affected the gain with greater granularity, in specific detection cells.
Clutter may also originate from multipath echoes from valid targets due to ground reflection, atmospheric ducting or ionospheric reflection/refraction. This clutter type is especially bothersome, since it appears to move and behave like other normal (point) targets of interest, thereby creating a ghost. In a typical scenario, an aircraft echo is multipath-reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or - worse - eliminating it on the basis of jitter or a physical impossibility. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. In newer Air Traffic Control (ATC) radar equipment, algorithms are used to identify the false targets by comparing the current pulse returns, to those adjacent, as well as calculating return improbabilities due to calculated height, distance, and radar timing.
Jamming
Radar jamming refers to radio frequency signals originating from sources outside the radar, transmitting in the radar's frequency and thereby masking targets of interest. Jamming may be intentional, as with an electronic warfare (EW) tactic, or unintentional, as with friendly forces operating equipment that transmits using the same frequency range. Jamming is considered an active interference source, since it is initiated by elements outside the radar and in general unrelated to the radar signals.
Jamming is problematic to radar since the jamming signal only needs to travel one-way (from the jammer to the radar receiver) whereas the radar echoes travel two-ways (radar-target-radar) and are therefore significantly reduced in power by the time they return to the radar receiver. Jammers therefore can be much less powerful than their jammed radars and still effectively mask targets along the line of sight from the jammer to the radar (Mainlobe Jamming). Jammers have an added effect of affecting radars along other lines of sight, due to the radar receiver's sidelobes (Sidelobe Jamming).
Mainlobe jamming can generally only be reduced by narrowing the mainlobe solid angle, and can never fully be eliminated when directly facing a jammer which uses the same frequency and polarization as the radar. Sidelobe jamming can be overcome by reducing receiving sidelobes in the radar antenna design and by using an omnidirectional antenna to detect and disregard non-mainlobe signals. Other anti-jamming techniques are frequency hopping and polarization. See Electronic counter-counter-measures for details.
Interference has recently become a problem for C-band (5.66 GHz) meteorological radars with the proliferation of 5.4 GHz band WiFi equipment.
Radar engineering
A radars components are:
* A transmitter that generates the radio signal with an oscillator such as a klystron or a magnetron and controls its duration by a modulator.
* A waveguide that links the transmitter and the antenna.
* A duplexer that serves as a switch between the antenna and the transmitter or the receiver for the signal when the antenna is used in both situations.
* A receiver. Knowing the shape of the desired received signal (a pulse), an optimal receiver can be designed using a matched filter.
* An electronic section that controls all those devices and the antenna to perform the radar scan ordered by a software.
* A link to end users.
Antenna design
Radio signals broadcast from a single antenna will spread out in all directions, and likewise a single antenna will receive signals equally from all directions. This leaves the radar with the problem of deciding where the target object is located.
Early systems tended to use omni-directional broadcast antennas, with directional receiver antennas which were pointed in various directions. For instance the first system to be deployed, Chain Home, used two straight antennas at right angles for reception, each on a different display. The maximum return would be detected with an antenna at right angles to the target, and a minimum with the antenna pointed directly at it (end on). The operator could determine the direction to a target by rotating the antenna so one display showed a maximum while the other shows a minimum.
One serious limitation with this type of solution is that the broadcast is sent out in all directions, so the amount of energy in the direction being examined is a small part of that transmitted. To get a reasonable amount of power on the "target", the transmitting aerial should also be directional.
Parabolic reflector
More modern systems use a steerable parabolic "dish" to create a tight broadcast beam, typically using the same dish as the receiver. Such systems often combine two radar frequencies in the same antenna in order to allow automatic steering, or radar lock.
Parabolic reflectors can be either symmetric parabolas or spoiled parabolas:
* Symmetric parabolic antennas produce a narrow "pencil" beam in both the X and Y dimensions and consequently have a higher gain. The NEXRAD Pulse-Doppler weather radar uses a symmetric antenna to perform detailed volumetric scans of the atmosphere.
* Spoiled parabolic antennas produce a narrow beam in one dimension and a relatively wide beam in the other. This feature is useful if target detection over a wide range of angles is more important than target location in three dimensions. Most 2D surveillance radars use a spoiled parabolic antenna with a narrow azimuthal beamwidth and wide vertical beamwidth. This beam configuration allows the radar operator to detect an aircraft at a specific azimuth but at an indeterminate height. Conversely, so-called "nodder" height finding radars use a dish with a narrow vertical beamwidth and wide azimuthal beamwidth to detect an aircraft at a specific height but with low azimuthal precision.
Types of scan
* Primary Scan: A scanning technique where the main antenna aerial is moved to produce a scanning beam, examples include circular scan, sector scan etc
* Secondary Scan: A scanning technique where the antenna feed is moved to produce a scanning beam, examples include conical scan, unidirectional sector scan, lobe switching etc.
* Palmer Scan: A scanning technique that produces a scanning beam by moving the main antenna and its feed. A Palmer Scan is a combination of a Primary Scan and a Secondary Scan.

A radar system has a transmitter that emits microwaves or radio waves. These waves are in phase when emitted, and when they come into contact with an object they are scattered in all directions. The signal is thus partly reflected back, with a slight change of wavelength (and thus frequency) if the target is moving. The receiver is usually, but not always, in the same location as the transmitter. Although the returned signal is usually very weak, it can be amplified by electronic techniques in the receiver and by the antenna configuration. This enables radar to detect objects at ranges where other emissions, such as sound or visible light, would be too weak to detect. Radar uses include meteorological detection of precipitation, measuring ocean surface waves, air traffic control, police detection of speeding traffic, measuring the speed of baseballs, and a wide range of military applications.
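The frequency change mentioned above is the Doppler shift: for a monostatic radar it equals twice the target's radial velocity divided by the wavelength. A minimal sketch (the 10 GHz carrier and 30 m/s closing speed are illustrative values, not from this article):

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_velocity_ms, carrier_freq_hz):
    """Doppler shift seen by a monostatic radar: f_d = 2 * v_r / wavelength."""
    wavelength = C / carrier_freq_hz
    return 2.0 * radial_velocity_ms / wavelength

# A target closing at 30 m/s on a 10 GHz (3 cm wavelength) radar
# shifts the echo by about 2 kHz.
fd = doppler_shift_hz(30.0, 10e9)
```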
RAdio Detection And Ranging, RADAR for short, relies on sending and receiving electromagnetic radiation, usually in the form of radio waves (see Radio) or microwaves. Electromagnetic radiation is energy that moves in waves at or near the speed of light. The characteristics of electromagnetic waves depend on their wavelength. Gamma rays and X rays have very short wavelengths. Visible light is a tiny slice of the electromagnetic spectrum with wavelengths longer than X rays but shorter than microwaves. Radar systems use long-wavelength electromagnetic radiation in the microwave and radio ranges. Because of their long wavelengths, radio waves and microwaves tend to reflect better than shorter-wavelength radiation, which tends to scatter or be absorbed before it reaches the target. Radio waves at the long-wavelength end of the spectrum will even reflect off the ionosphere, a layer of electrically charged particles in the earth's atmosphere. The main challenges for radar are stealth technology, clutter, and jamming. Its applications include traffic control, maritime navigation, military surveillance, air traffic control, and meteorology.
History
Several inventors, scientists, and engineers contributed to the development of radar. The first to use radio waves to detect "the presence of distant metallic objects" was Christian Hülsmeyer, who in 1904 demonstrated the feasibility of detecting the presence of a ship in dense fog, but not its distance. He received Reichspatent Nr. 165546 for his pre-radar device in April 1904, and later patent 169154 for a related amendment for ranging. He also received a patent[9] in England for his telemobiloscope on September 22, 1904.
In August 1917 Nikola Tesla first established principles regarding frequency and power level for the first primitive radar units. He stated, " by their [standing electromagnetic waves] use we may produce at will, from a sending station, an electrical effect in any particular region of the globe; [with which] we may determine the relative position or course of a moving object, such as a vessel at sea, the distance traversed by the same, or its speed."
Before the Second World War, developments by the Americans, the Germans, the French, the Soviets, and the British led to the modern version of radar. In 1934 the French Émile Girardeau stated he was building a radar system "conceived according to the principles stated by Tesla" and obtained a patent (French Patent n° 788795 in 1934) for a working dual radar system, a part of which was installed on the Normandie liner in 1935. The same year, American Dr. Robert M. Page tested the first monopulse radar, and the Soviet military engineer P. K. Oschepkov, in collaboration with the Leningrad Electrophysical Institute, produced an experimental apparatus, RAPID, capable of detecting an aircraft within 3 km of a receiver.[16] Along the same lines, Hungarian Zoltán Bay produced a working model by 1936 at the Tungsram laboratory.
However, it was the British who were the first to fully exploit it as a defence against aircraft attack. This was spurred on by fears that the Germans were developing death rays. Following a study of the possibility of propagating electromagnetic energy and the likely effect, the British scientists asked by the Air Ministry to investigate concluded that a death ray was impractical but that detection of aircraft appeared feasible. Robert Watson-Watt demonstrated to his superiors the capabilities of a working prototype and patented the device in 1935 (British Patent GB593017). It served as the basis for the Chain Home network of radars to defend Great Britain.
The war precipitated research to find better resolution, more portability and more features for radar. The post-war years have seen the use of radar in fields as diverse as air traffic control, weather monitoring, astrometry and road speed control.
Principles
The radar dish, or antenna, transmits pulses of radio waves or microwaves which bounce off any object in their path. The object returns a tiny part of the wave's energy to a dish or antenna which is usually located at the same site as the transmitter. The time it takes for the reflected waves to return to the dish enables a computer to calculate how far away the object is, its radial velocity and other characteristics.
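The timing relation described above can be sketched numerically: range follows from half the round-trip time at the speed of light. A related standard relation (not stated in this article) is that the pulse repetition frequency caps the maximum unambiguous range, since an echo arriving after the next pulse fires cannot be attributed to the right pulse:

```python
C = 3.0e8  # speed of light, m/s

def range_from_echo_m(round_trip_seconds):
    """Target range from pulse round-trip time: R = c * t / 2."""
    return C * round_trip_seconds / 2.0

def max_unambiguous_range_m(prf_hz):
    """Echoes arriving after the next pulse fires are ambiguous: R_max = c / (2 * PRF)."""
    return C / (2.0 * prf_hz)

r = range_from_echo_m(1e-3)           # a 1 ms round trip puts the target 150 km away
rmax = max_unambiguous_range_m(1000)  # at 1 kHz PRF, ranges beyond 150 km are ambiguous
```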
Reflection
Electromagnetic waves reflect (scatter) from any large change in the dielectric or diamagnetic constants. This means that a solid object in air or a vacuum, or other significant change in atomic density between the object and what is surrounding it, will usually scatter radar (radio) waves. This is particularly true for electrically conductive materials, such as metal and carbon fiber, making radar particularly well suited to the detection of aircraft and ships. Radar absorbing material, containing resistive and sometimes magnetic substances, is used on military vehicles to reduce radar reflection. This is the radio equivalent of painting something a dark color.
Radar waves scatter in a variety of ways depending on the size (wavelength) of the radio wave and the shape of the target. If the wavelength is much shorter than the target's size, the wave will bounce off in a way similar to the way light is reflected by a mirror. If the wavelength is much longer than the size of the target, the target is polarized (positive and negative charges are separated), like a dipole antenna. This is described by Rayleigh scattering, an effect that creates the Earth's blue sky and red sunsets. When the two length scales are comparable, there may be resonances. Early radars used very long wavelengths that were larger than the targets and received a vague signal, whereas some modern systems use shorter wavelengths (a few centimeters or shorter) that can image objects as small as a loaf of bread.
Short radio waves reflect from curves and corners, in a way similar to glint from a rounded piece of glass. The most reflective targets for short wavelengths have 90° angles between the reflective surfaces. A structure consisting of three flat surfaces meeting at a single corner, like the corner on a box, will always reflect waves entering its opening directly back at the source. These so-called corner reflectors are commonly used as radar reflectors to make otherwise difficult-to-detect objects easier to detect, and are often found on boats in order to improve their detection in a rescue situation and to reduce collisions.
For similar reasons, objects attempting to avoid detection will angle their surfaces in a way to eliminate inside corners and avoid surfaces and edges perpendicular to likely detection directions, which leads to "odd" looking stealth aircraft. These precautions do not completely eliminate reflection because of diffraction, especially at longer wavelengths. Half wavelength long wires or strips of conducting material, such as chaff, are very reflective but do not direct the scattered energy back toward the source. The extent to which an object reflects or scatters radio waves is called its radar cross section.
Polarization
In the transmitted radar signal, the electric field is perpendicular to the direction of propagation, and this direction of the electric field is the polarization of the wave. Radars use horizontal, vertical, linear and circular polarization to detect different types of reflections. For example, circular polarization is used to minimize the interference caused by rain. Linear polarization returns usually indicate metal surfaces. Random polarization returns usually indicate a fractal surface, such as rocks or soil, and are used by navigation radars.
Interference
Radar systems must overcome unwanted signals in order to focus only on the actual targets of interest. These unwanted signals may originate from internal and external sources, both passive and active. The ability of the radar system to overcome these unwanted signals defines its signal-to-noise ratio (SNR). SNR is defined as the ratio of a signal power to the noise power within the desired signal.
In less technical terms, SNR compares the level of a desired signal (such as targets) to the level of background noise. The higher a system's SNR, the better it is in isolating actual targets from the surrounding noise signals.
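Since SNR is a ratio of powers, it is conventionally quoted in decibels. A minimal sketch, with illustrative power levels:

```python
import math

def snr_db(signal_power_w, noise_power_w):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)

# An echo 100 times stronger than the noise floor corresponds to 20 dB SNR.
snr = snr_db(1e-10, 1e-12)
```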
Noise
Signal noise is an internal source of random variations in the signal, which is generated by all electronic components. Noise typically appears as random variations superimposed on the desired echo signal received in the radar receiver. The lower the power of the desired signal, the more difficult it is to discern it from the noise (similar to trying to hear a whisper while standing near a busy road). Noise figure is a measure of the noise produced by a receiver compared to an ideal receiver, and this needs to be minimized.
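The ideal-receiver baseline against which noise figure is measured is the thermal noise floor kTB. A sketch using the conventional 290 K reference temperature (the 1 MHz bandwidth and 3 dB noise figure below are illustrative values, not from this article):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_power_w(bandwidth_hz, temperature_k=290.0):
    """Noise floor of an ideal receiver: N = k * T * B."""
    return K_BOLTZMANN * temperature_k * bandwidth_hz

def noise_floor_dbm(bandwidth_hz, noise_figure_db=0.0, temperature_k=290.0):
    """Noise floor in dBm, degraded by the receiver's noise figure."""
    n_w = thermal_noise_power_w(bandwidth_hz, temperature_k)
    return 10.0 * math.log10(n_w * 1000.0) + noise_figure_db

# A 1 MHz receiver at 290 K sits near -114 dBm; a 3 dB noise figure
# raises the effective floor to about -111 dBm.
nf = noise_floor_dbm(1e6, noise_figure_db=3.0)
```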
Noise is also generated by external sources, most importantly the natural thermal radiation of the background scene surrounding the target of interest. In modern radar systems, due to the high performance of their receivers, the internal noise is typically about equal to or lower than the external scene noise. An exception is if the radar is aimed upwards at clear sky, where the scene is so "cold" that it generates very little thermal noise.
Flicker noise, caused by electron transit, is also present; since it falls off as 1/f, it is much lower than thermal noise at high frequencies. Hence, pulse radar systems are almost always heterodyne, converting the received signal down to an intermediate frequency. See intermediate frequency.
Clutter
Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to the radar operators. Such targets include natural objects such as ground, sea, precipitation (such as rain, snow or hail), sand storms, animals (especially birds), atmospheric turbulence, and other atmospheric effects, such as ionosphere reflections and meteor trails. Clutter may also be returned from man-made objects such as buildings and, intentionally, by radar countermeasures such as chaff.
Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the centre of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range, since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna.
While some clutter sources may be undesirable for some radar applications (such as storm clouds for air-defence radars), they may be desirable for others (meteorological radars in this example). Clutter is considered a passive interference source, since it only appears in response to radar signals sent by the radar.
There are several methods of detecting and neutralizing clutter. Many of these rely on the fact that clutter tends to appear static between radar scans: when echoes from subsequent scans are compared, desirable targets will appear to move, and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (note that meteorological radars seek the opposite effect, and therefore use linear polarization precisely in order to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.
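The scan-to-scan comparison described above can be sketched as a crude canceller that subtracts the previous scan's echoes cell by cell, so returns that did not change drop out while movers survive. All amplitudes and the threshold are invented for illustration; real moving-target-indication filters typically operate on pulse-to-pulse phase rather than scan-to-scan amplitude:

```python
def cancel_stationary_clutter(prev_scan, curr_scan, threshold):
    """Flag range cells whose echo amplitude changed between scans.

    Stationary clutter produces near-identical returns on both scans and
    cancels out; a moving target leaves a large difference in its new cell.
    """
    return [abs(c - p) > threshold for p, c in zip(prev_scan, curr_scan)]

prev = [9.0, 9.1, 0.2, 5.0]   # strong stationary clutter in cells 0, 1 and 3
curr = [9.0, 9.1, 4.8, 5.0]   # a moving target appears in cell 2
detections = cancel_stationary_clutter(prev, curr, threshold=1.0)
# only cell 2 is flagged
```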
Constant False Alarm Rate (CFAR, a form of Automatic Gain Control, or AGC) is a method relying on the fact that clutter returns far outnumber echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software controlled, and affected the gain with greater granularity, in specific detection cells.
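A cell-averaging variant of this idea can be sketched as follows: each range cell is compared against a threshold derived from the average of its neighboring "training" cells, with a few guard cells skipped around the cell under test so the target's own energy does not raise its threshold. All parameter values here are illustrative:

```python
def ca_cfar(cells, num_train, num_guard, scale):
    """Cell-averaging CFAR sketch: declare a detection when a cell exceeds
    scale times the mean of its surrounding training cells, so the threshold
    adapts to the local clutter level."""
    detections = []
    for i in range(len(cells)):
        train = []
        for j in range(i - num_guard - num_train, i + num_guard + num_train + 1):
            # training cells lie beyond the guard band but within the window
            if abs(j - i) > num_guard and 0 <= j < len(cells):
                train.append(cells[j])
        threshold = scale * (sum(train) / len(train)) if train else float("inf")
        detections.append(cells[i] > threshold)
    return detections

# Uniform clutter near 1.0 with one strong return in the middle cell:
cells = [1.0, 1.1, 0.9, 1.0, 8.0, 1.0, 1.1, 0.9, 1.0]
hits = ca_cfar(cells, num_train=2, num_guard=1, scale=3.0)
# only the strong middle return is flagged
```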
Clutter may also originate from multipath echoes from valid targets due to ground reflection, atmospheric ducting or ionospheric reflection/refraction. This clutter type is especially bothersome, since it appears to move and behave like other normal (point) targets of interest, thereby creating a ghost. In a typical scenario, an aircraft echo is multipath-reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or, worse, eliminating it on the basis of jitter or a physical impossibility. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. In newer Air Traffic Control (ATC) radar equipment, algorithms are used to identify the false targets by comparing the current pulse returns to those adjacent, as well as calculating return improbabilities due to calculated height, distance, and radar timing.
Jamming
Radar jamming refers to radio frequency signals originating from sources outside the radar, transmitting in the radar's frequency and thereby masking targets of interest. Jamming may be intentional, as with an electronic warfare (EW) tactic, or unintentional, as with friendly forces operating equipment that transmits using the same frequency range. Jamming is considered an active interference source, since it is initiated by elements outside the radar and in general unrelated to the radar signals.
Jamming is problematic to radar since the jamming signal only needs to travel one-way (from the jammer to the radar receiver) whereas the radar echoes travel two-ways (radar-target-radar) and are therefore significantly reduced in power by the time they return to the radar receiver. Jammers therefore can be much less powerful than their jammed radars and still effectively mask targets along the line of sight from the jammer to the radar (Mainlobe Jamming). Jammers have an added effect of affecting radars along other lines of sight, due to the radar receiver's sidelobes (Sidelobe Jamming).
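The one-way versus two-way asymmetry follows from the standard radar and Friis link equations: echo power falls off as 1/R^4, while jamming power falls off only as 1/R^2. A sketch with invented powers and antenna gains:

```python
import math

def echo_power_w(pt, g, wavelength, rcs, r):
    """Two-way (monostatic) radar equation: echo power falls off as 1/R^4."""
    return pt * g**2 * wavelength**2 * rcs / ((4 * math.pi) ** 3 * r**4)

def jammer_power_w(pj, gj, gr, wavelength, r):
    """One-way (Friis) link: jamming power falls off only as 1/R^2."""
    return pj * gj * gr * wavelength**2 / ((4 * math.pi) ** 2 * r**2)

# Doubling the distance cuts the echo by 16x but the jamming by only 4x.
lam = 0.03  # 10 GHz wavelength, m
e_near = echo_power_w(1e6, 1000, lam, 10.0, 50e3)
e_far = echo_power_w(1e6, 1000, lam, 10.0, 100e3)
j_near = jammer_power_w(100, 10, 1000, lam, 50e3)
j_far = jammer_power_w(100, 10, 1000, lam, 100e3)
```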
Mainlobe jamming can generally only be reduced by narrowing the mainlobe solid angle, and can never fully be eliminated when directly facing a jammer which uses the same frequency and polarization as the radar. Sidelobe jamming can be overcome by reducing receiving sidelobes in the radar antenna design and by using an omnidirectional antenna to detect and disregard non-mainlobe signals. Other anti-jamming techniques are frequency hopping and polarization. See Electronic counter-counter-measures for details.
Interference has recently become a problem for C-band (5.66 GHz) meteorological radars with the proliferation of 5.4 GHz band WiFi equipment.
Radar engineering
A radar's components are:
* A transmitter that generates the radio signal with an oscillator such as a klystron or a magnetron and controls its duration by a modulator.
* A waveguide that links the transmitter and the antenna.
* A duplexer that serves as a switch between the antenna and the transmitter or the receiver for the signal when the antenna is used in both situations.
* A receiver. Knowing the shape of the desired received signal (a pulse), an optimal receiver can be designed using a matched filter.
* An electronic section that controls all those devices and the antenna to perform the radar scan ordered by software.
* A link to end users.
Antenna design
Radio signals broadcast from a single antenna will spread out in all directions, and likewise a single antenna will receive signals equally from all directions. This leaves the radar with the problem of deciding where the target object is located.
Early systems tended to use omni-directional broadcast antennas, with directional receiver antennas which were pointed in various directions. For instance the first system to be deployed, Chain Home, used two straight antennas at right angles for reception, each on a different display. The maximum return would be detected with an antenna at right angles to the target, and a minimum with the antenna pointed directly at it (end on). The operator could determine the direction to a target by rotating the antenna so one display showed a maximum while the other showed a minimum.
One serious limitation with this type of solution is that the broadcast is sent out in all directions, so the amount of energy in the direction being examined is a small part of that transmitted. To get a reasonable amount of power on the "target", the transmitting aerial should also be directional.
Parabolic reflector
More modern systems use a steerable parabolic "dish" to create a tight broadcast beam, typically using the same dish as the receiver. Such systems often combine two radar frequencies in the same antenna in order to allow automatic steering, or radar lock.
Parabolic reflectors can be either symmetric parabolas or spoiled parabolas:
* Symmetric parabolic antennas produce a narrow "pencil" beam in both the X and Y dimensions and consequently have a higher gain. The NEXRAD Pulse-Doppler weather radar uses a symmetric antenna to perform detailed volumetric scans of the atmosphere.
* Spoiled parabolic antennas produce a narrow beam in one dimension and a relatively wide beam in the other. This feature is useful if target detection over a wide range of angles is more important than target location in three dimensions. Most 2D surveillance radars use a spoiled parabolic antenna with a narrow azimuthal beamwidth and wide vertical beamwidth. This beam configuration allows the radar operator to detect an aircraft at a specific azimuth but at an indeterminate height. Conversely, so-called "nodder" height finding radars use a dish with a narrow vertical beamwidth and wide azimuthal beamwidth to detect an aircraft at a specific height but with low azimuthal precision.
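The gain difference between pencil and fan beams can be estimated with a common rule of thumb (an ideal, lossless approximation, not an exact design formula): gain is roughly the ~41,253 square degrees of a full sphere divided by the product of the two half-power beamwidths.

```python
import math

def approx_gain_dbi(az_beamwidth_deg: float, el_beamwidth_deg: float) -> float:
    """Rule-of-thumb antenna gain from half-power beamwidths:
    G ~ 41253 / (theta_az * theta_el), i.e. the ~41253 square degrees
    of a full sphere divided by the beam's solid angle (ideal, lossless)."""
    g = 41253.0 / (az_beamwidth_deg * el_beamwidth_deg)
    return 10.0 * math.log10(g)

# Symmetric "pencil" beam (1 x 1 degree) vs a spoiled "fan" beam (1 x 20 degrees):
print(round(approx_gain_dbi(1.0, 1.0), 1))   # ~46.2 dBi
print(round(approx_gain_dbi(1.0, 20.0), 1))  # ~33.1 dBi
```

This quantifies the trade-off in the text: spoiling the beam in elevation widens vertical coverage at the cost of roughly 13 dB of gain in this example.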
Types of scan
* Primary Scan: A scanning technique where the main antenna aerial is moved to produce a scanning beam; examples include circular scan, sector scan, etc.
* Secondary Scan: A scanning technique where the antenna feed is moved to produce a scanning beam, examples include conical scan, unidirectional sector scan, lobe switching etc.
* Palmer Scan: A scanning technique that produces a scanning beam by moving the main antenna and its feed. A Palmer Scan is a combination of a Primary Scan and a Secondary Scan.
Labels: Physics, RADAR, Science, Seminar Topics, Seminars
Synergetics
Inspired by laser theory and founded by Hermann Haken and Arne Wunderlin, Synergetics is an interdisciplinary science explaining the formation and self-organization of patterns and structures in 'open' systems far from thermodynamic equilibrium. Self-organization requires a 'macroscopic' system consisting of many nonlinearly interacting subsystems. Depending on the external control parameters (environment, energy fluxes), self-organization takes place. Essential in Synergetics is the order-parameter concept, which was originally introduced in the Ginzburg-Landau theory to describe phase transitions in thermodynamics.
Haken generalized the order-parameter concept to the 'enslaving principle', which states that the dynamics of the fast-relaxing (stable) modes is completely determined by the 'slow' dynamics of, as a rule, only a few 'order parameters' (unstable modes). The order parameters can be interpreted as the amplitudes of the unstable modes determining the macroscopic pattern. As a consequence, self-organization means an enormous reduction of the degrees of freedom (entropy) of the system, which macroscopically manifests as an increase of 'order' (pattern formation). This far-reaching macroscopic order is independent of the details of the microscopic interactions of the subsystems. This is why Synergetics explains the self-organization of patterns in so many different systems in physics, chemistry, biology and even social systems.
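The enslaving principle can be made concrete with a standard two-mode toy model (a textbook-style illustration, not taken from this post): let $u$ be a slowly growing unstable mode and $s$ a rapidly damped stable mode coupled to it,

```latex
\begin{align}
\dot{u} &= \lambda_u u - u\,s, \\
\dot{s} &= -\lambda_s s + u^2, \qquad \lambda_s \gg |\lambda_u| .
\end{align}
% Because s relaxes fast, set \dot{s} \approx 0 (adiabatic elimination):
%   s \approx u^2 / \lambda_s ,
% so the stable mode is "enslaved" by u, and substituting back gives
%   \dot{u} \approx \lambda_u u - u^3 / \lambda_s ,
% a single closed equation for the order parameter u.
```

The many fast degrees of freedom collapse onto functions of the few order parameters, which is exactly the reduction of degrees of freedom described above.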
Labels: Physics, Science, Seminar Topics, Seminars, Synergetics
Spintronics

Spintronics (a neologism meaning "spin transport electronics"), also known as magnetoelectronics, is an emerging technology that exploits the intrinsic spin of electrons and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.
History
The research field of spintronics emerged from experiments on spin-dependent electron transport phenomena in solid-state devices done in the 1980s, including the observation of spin-polarized electron injection from a ferromagnetic metal into a normal metal by Johnson and Silsbee (1985), and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origins can be traced back further to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow, and initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990.
Conventional electronic devices rely on the transport of electrical charge carriers (electrons) in a semiconductor such as silicon. Physicists are now trying to exploit the 'spin' of the electron rather than its charge to create a new generation of 'spintronic' devices which would be smaller, more versatile and more robust than those currently making up silicon chips and circuit elements. The potential market is worth hundreds of billions of dollars a year.
All spintronic devices act according to the simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. Spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared to the tens of femtoseconds during which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications, and potentially for quantum computing, where electron spin would represent a bit (called a qubit) of information.
Magnetoelectronics, spin electronics, and spintronics are different names for the same thing: the use of electrons' spins (not just their electrical charge) in information circuits.
Theory
Electrons are spin-1/2 fermions and therefore constitute a two-state system with spin "up" and spin "down". To make a spintronic device, the primary requirements are to have a system that can generate a current of spin polarized electrons comprising more of one spin species—up or down—than the other (called a spin injector), and a separate system that is sensitive to the spin polarization of the electrons (spin detector). Manipulation of the electron spin during transport between injector and detector (especially in semiconductors) via spin precession can be accomplished using real external magnetic fields or effective fields caused by spin-orbit interaction.
Spin polarization in non-magnetic materials can be achieved either through the Zeeman effect in large magnetic fields and low temperatures, or by non-equilibrium methods. In the latter case, the non-equilibrium polarization will decay over a timescale called the "spin lifetime". Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond) but in semiconductors the lifetimes can be very long (microseconds at low temperatures), especially when the electrons are isolated in local trapping potentials (for instance, at impurities, where lifetimes can be milliseconds).
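The decay of a non-equilibrium spin polarization over the spin lifetime is, to a first approximation, exponential. A small sketch (the function name and numbers are illustrative, chosen to echo the orders of magnitude quoted above):

```python
import math

def remaining_polarization(p0: float, t_ns: float, spin_lifetime_ns: float) -> float:
    """Non-equilibrium spin polarization decays roughly exponentially:
    P(t) = P0 * exp(-t / tau_s), where tau_s is the spin lifetime."""
    return p0 * math.exp(-t_ns / spin_lifetime_ns)

# After 10 ns: a metal with ~1 ns lifetime has essentially lost its
# polarization, while a semiconductor with a microsecond-scale lifetime
# has barely decayed.
print(remaining_polarization(1.0, 10.0, 1.0))     # metal: ~4.5e-05
print(remaining_polarization(1.0, 10.0, 1000.0))  # semiconductor: ~0.99
```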
Metals-based spintronic devices
The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common application of this effect is a giant magnetoresistance (GMR) device. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.
Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.
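The resistance change in a GMR trilayer is commonly explained with the two-current (Mott) model: spin-up and spin-down electrons conduct in parallel channels, and each channel sees a low or high resistance in each magnetic layer depending on alignment. A minimal sketch under that model, with arbitrary illustrative resistances:

```python
def gmr_ratio(r_low: float, r_high: float) -> float:
    """Two-current model of a GMR trilayer (two magnetic layers in series,
    two spin channels in parallel).
    Parallel magnetizations: one channel sees r_low twice, the other
    r_high twice.  Antiparallel: both channels see r_low + r_high."""
    r_p = (2 * r_low) * (2 * r_high) / (2 * r_low + 2 * r_high)
    r_ap = (r_low + r_high) / 2
    return (r_ap - r_p) / r_p  # conventional GMR ratio (R_AP - R_P) / R_P

# Arbitrary units, not measured values:
print(round(gmr_ratio(1.0, 4.0), 4))  # 0.5625, i.e. a ~56% resistance change
```

This makes the sensing principle concrete: an external field that flips one layer from antiparallel to parallel alignment produces a large, easily measured drop in resistance.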
Other metals-based spintronics devices:
* Tunnel Magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers.
* Spin Torque Transfer, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device.
Applications
The storage density of hard drives is rapidly increasing along an exponential growth curve, in part because spintronics-enabled devices like GMR and TMR sensors have increased the sensitivity of the read head which measures the magnetic state of small magnetic domains (bits) on the spinning platter. The doubling period for the areal density of information storage is twelve months, much shorter than Moore's Law, which observes that the number of transistors that can cheaply be incorporated in an integrated circuit doubles every two years.
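The gap between the two doubling periods quoted above compounds quickly, as a one-line calculation shows:

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times a quantity multiplies under exponential growth
    with the given doubling period."""
    return 2.0 ** (years / doubling_period_years)

# Over six years: areal density doubling every year vs transistor count
# doubling every two years (the figures quoted in the text).
print(growth_factor(6, 1))  # 64.0 (storage density)
print(growth_factor(6, 2))  # 8.0  (Moore's law)
```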
MRAM, or magnetic random access memory, uses a grid of magnetic storage elements called magnetic tunnel junctions (MTJs). MRAM is nonvolatile (unlike the charge-based DRAM in today's computers), so information is stored even when power is turned off, potentially providing instant-on computing. Motorola developed a first-generation 256 kb MRAM based on a single magnetic tunnel junction and a single transistor, with a read/write cycle of under 50 nanoseconds (Everspin, Motorola's spin-off, has since developed a 4 Mbit version). Two second-generation MRAM techniques are currently in development: Thermal Assisted Switching (TAS), being developed by Crocus Technology, and Spin Torque Transfer (STT), on which Crocus, Hynix, IBM, and several other companies are working.
Another design in development, called Racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic metal wire.
Semiconductor-based spintronic devices
In early efforts, spin-polarized electrons were generated via optical orientation, using circularly polarized photons at the bandgap energy incident on semiconductors with appreciable spin-orbit interaction (such as GaAs and ZnSe). Although electrical spin injection can be achieved in metallic systems by simply passing a current through a ferromagnet, the large impedance mismatch between ferromagnetic metals and semiconductors prevented efficient injection across metal-semiconductor interfaces. Solutions to this problem are to use ferromagnetic semiconductor sources (like manganese-doped gallium arsenide, GaMnAs), to increase the interface resistance with a tunnel barrier, or to use hot-electron injection.
Spin detection in semiconductors is another challenge, which has been met with the following techniques:
* Faraday/Kerr rotation of transmitted/reflected photons
* Circular polarization analysis of electroluminescence
* Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals)
* Ballistic spin filtering
The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon, the most important semiconductor for electronics.
Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is the demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation. This is called the Hanle effect.
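The Hanle measurement is often summarized by a Lorentzian curve: precession at the Larmor frequency combined with spin relaxation suppresses the steady-state spin signal as the transverse field grows. A small sketch (the 1 ns lifetime below is an assumed illustrative value, and the function name is hypothetical):

```python
import math

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def hanle_signal(b_tesla: float, tau_s: float, g_factor: float = 2.0) -> float:
    """Steady-state spin signal vs transverse field (Hanle curve):
    Larmor precession at w = g * mu_B * B / hbar plus relaxation over the
    spin lifetime tau gives the Lorentzian S(B) = 1 / (1 + (w * tau)^2)."""
    w = g_factor * MU_B * b_tesla / HBAR
    return 1.0 / (1.0 + (w * tau_s) ** 2)

# The signal is halved when w * tau = 1, so the curve's width directly
# measures the spin lifetime.
tau = 1e-9                           # assume a 1 ns spin lifetime
b_half = HBAR / (2.0 * MU_B * tau)   # field where w * tau = 1 (for g = 2)
print(round(hanle_signal(b_half, tau), 3))  # 0.5
```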
Applications
Advantages of semiconductor-based spintronics applications are potentially lower power use and a smaller footprint than electrical devices used for information processing. Also, applications such as semiconductor lasers using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope.

