Green Engine

This seminar presents the green engine effect. To increase engine efficiency and avoid excessive pollution, a new method is adopted: a ceramic (non-metallic solid) coating is applied to parts such as the piston and crown of engines used in automobiles.

Factors affecting the efficiency

Incomplete combustion
Carbon deposition
Thermal shocking
Pollution control

To avoid these factors, we adopt the method of ceramic coating on the engine.

Features of ceramic coating

Various processes are used to apply this coating:

1. Physical vapour deposition
2. Chemical vapour deposition
3. Ion plating
4. Sputtering


Advantages

1. This prevents the deposition of carbon over the cylinder head and piston

2. It acts as a thermal barrier which reduces the amount of heat leakage

3. It helps complete combustion of fuels

4. It avoids thermal shocking

5. All the factors above contribute to an efficiency increase of up to 9% and a wide reduction in pollution


Disadvantages

1. Additional reactions take place due to the coating
2. High cost of coating

Smart combustors

This seminar will review the state of the art of active control of gas turbine combustor processes. It will first discuss recently developed approaches for active control of detrimental combustion instabilities by use of 'fast' injectors that modulate the fuel injection rate at the frequency of the instability with appropriate phase and gain. Next, it discusses two additional approaches for damping combustion instabilities: active modification of the combustion process characteristics, and open-loop modulation of the fuel injection rate at frequencies that differ from the instability frequency. The second part of the seminar will discuss active control of lean blowout in combustors that burn fuel in a lean premixed mode of combustion to reduce NOx emissions. This discussion will describe recent developments in optical and acoustic sensing techniques that employ sophisticated data analysis approaches to detect the presence of lean blowout precursors in the measured data. It will be shown that this approach can be used to determine in advance the onset of lean blowout, and that the problem can be prevented by active control of the relative amounts of fuel supplied to the main, premixed, combustion region and a premixed pilot flame. The seminar will close with a discussion of research needs, with emphasis on the integration of the active control, health monitoring, and prognostication systems into a single combustor control system.
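As a rough sketch of the closed-loop idea (not the actual controller described in the seminar), the fuel-injector command can be formed by scaling the measured pressure oscillation and shifting it in phase at the instability frequency; all numbers below are illustrative:

```python
import math

def control_signal(pressure_amplitude, t, f_inst, gain, phase):
    """Fuel-injector command: modulate at the instability frequency
    f_inst (Hz) with an operator-chosen gain and phase offset."""
    return gain * pressure_amplitude * math.sin(2 * math.pi * f_inst * t + phase)

# Illustrative numbers: a 210 Hz instability, unit pressure amplitude,
# half gain, and a 180-degree phase shift to oppose the oscillation.
cmd = control_signal(1.0, t=0.0, f_inst=210.0, gain=0.5, phase=math.pi)
print(round(cmd, 6))
```

In a real system the gain and phase would be tuned (or adapted online) so the modulated heat release damps, rather than reinforces, the pressure oscillation.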


Pyrometers

The technique of measuring high temperatures is known as pyrometry, and the instrument employed is called a pyrometer. A pyrometer is a specialized type of thermometer used to measure the high temperatures encountered in the production and heat treatment of metals and alloys. Ordinary temperatures can be measured with an ordinary thermometer; a pyrometer is employed for measuring higher temperatures.
Any metallic surface, when heated, emits radiation of different wavelengths. These are not visible at low temperatures, but at about 540°C the radiation includes shorter wavelengths visible to the eye, and from the colour a judgement can be made as to the probable temperature. The colour scale is roughly as follows.

Dark red - 540°C
Red - 700°C
Bright red - 850°C
Orange - 900°C
Yellow - 1010°C
White - 1205°C and above
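In software terms, such a colour judgement is just a lookup table. The values below follow standard incandescence charts; the bright-red figure of 850°C is this sketch's assumption, since that entry is garbled in the source:

```python
# Rough colour-to-temperature lookup for a heated metallic surface.
COLOUR_SCALE_C = {
    "dark red": 540,
    "red": 700,
    "bright red": 850,   # assumed value; source text is garbled here
    "orange": 900,
    "yellow": 1010,
    "white": 1205,       # "and above"
}

def estimate_temperature(colour):
    """Return the approximate temperature (degrees C) for an observed colour."""
    return COLOUR_SCALE_C[colour.lower()]

print(estimate_temperature("Orange"))  # 900
```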


When a substance receives heat, changes in pressure, electrical resistance, radiation, thermoelectric e.m.f., or colour may take place. Any of these changes can be used for the measurement of temperature. In order to exercise precise control over heat treatment and melting operations in industry, temperature-measuring devices known as pyrometers are used. They also allow accurate measurement of the temperature of furnaces, molten metals, and other heated materials.

Smart Cameras

A smart camera performs real-time analysis to recognize scenic elements. Smart cameras are useful in a variety of scenarios: surveillance, medicine, etc. We have built a real-time system for recognizing gestures. Our smart camera uses novel algorithms to recognize gestures based on low-level analysis of body parts, as well as hidden Markov models for the moves that comprise the gestures. These algorithms run on a Trimedia processor. Our system can recognize gestures at a rate of 20 frames/second. The camera can also fuse the results from multiple cameras.
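As a rough illustration of the HMM idea, the sketch below runs the Viterbi algorithm on a toy two-state gesture model. The states, observations, and probabilities here are invented for illustration and are not the models used in the actual system:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best previous state leading into s for this observation.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical two-state gesture model: hand "raised" vs "lowered",
# with coarse observations from a low-level body-part detector.
states = ("raised", "lowered")
start = {"raised": 0.5, "lowered": 0.5}
trans = {"raised": {"raised": 0.8, "lowered": 0.2},
         "lowered": {"raised": 0.3, "lowered": 0.7}}
emit = {"raised": {"hand_high": 0.9, "hand_low": 0.1},
        "lowered": {"hand_high": 0.2, "hand_low": 0.8}}

print(viterbi(["hand_high", "hand_high", "hand_low"], states, start, trans, emit))
```

A gesture recognizer would train one such model per gesture and pick the model assigning the observed move sequence the highest likelihood.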

Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification.

Video processing has an insatiable demand for real-time performance. Fortunately, Moore's law provides an increasing pool of available computing power to apply to real-time analysis. Smart cameras leverage very large-scale integration (VLSI) to provide such analysis in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these systems run a wide range of algorithms to extract meaning from streaming video.

Because they push the design space in so many dimensions, smart cameras are a leading-edge application for embedded system research.

Open RAN

The vision of the OpenRAN architecture is to design a radio access network architecture with three key characteristics: it should be open, flexible, and distributed.


Such an architecture would be open because it defines open, standardized interfaces at key points that in past architectures were closed and proprietary. It would be flexible because it admits of several implementations, depending on the wired network resources available in the deployment situation. It would be distributed because the monolithic network elements of past architectures are broken down into their respective functional entities, and the functional entities are grouped into network elements that can be realized as a distributed system.

The architecture would define an interface with the core network that allows the core network to be designed independently from the RAN, preserving access network independence in the core. Finally, the architecture would not require changes in radio link protocols; in particular, a radio link protocol based on IP would not be necessary.

This document presents the first steps in developing the OpenRAN vision. In its first phase, the subject of this document, the OpenRAN architecture is purely concerned with distributing RAN functions to facilitate achieving open interfaces and flexible deployment. The transport substrate for implementing the architecture is assumed to be IP, but no attempt is made to optimize the use of IP protocols, nor are specific interfaces designated as open. The architecture could equally be implemented on top of existing functional architectures that maintain a strict isolation between the transport layer and the radio network layer, by splitting an existing radio network layer into control and bearer parts.

In addition, interoperation with existing core and RAN networks is supported via interworking functions. Chapters 7 through 11 in this report are exclusively concerned with this first phase of the architecture, and it is possible that the architecture may change as the actual implementation of the OpenRAN is considered and 'For Further Study' items are resolved.

Space Tethers

The pace at which space is being exploited and explored is at present slowed by the cost of launching payloads. Typically, a payload will be placed by a rocket into low-Earth orbit (around 400 km), and then be boosted higher by rocket thrusters. But just transporting a satellite from the lower altitude to its eventual destination can run to several thousand dollars per kilogram of payload. To cut expenses, space experts are reconsidering the technology used to place payloads in their final orbits.

A distinctly different type of propulsion may provide a cheap, lightweight, and reliable alternative to conventional rocket thrusters. Called an electrodynamic tether, it is a current-carrying wire that harnesses the force exerted by Earth's magnetic field.

NASA's US $7 million Propulsive Small Expendable Deployer System (ProSEDS) experiment will show that an 11-kg, 5-km-long, 1.2-mm-diameter aluminum wire can rapidly remove a rocket's upper stage from orbit. Should this and subsequent experiments succeed, they will pave the way to replacing rocket thrusters with tethers in certain space applications.
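The force such a tether produces follows from the Lorentz force on a current-carrying wire, F = B·I·L. The numbers below are illustrative only (a typical low-orbit field strength and an assumed 1 A current), not ProSEDS flight parameters:

```python
def tether_force(B_tesla, current_a, length_m):
    """Magnitude of the Lorentz force F = B * I * L on a straight
    current-carrying wire perpendicular to the magnetic field."""
    return B_tesla * current_a * length_m

# Illustrative numbers: Earth's field in low orbit ~3e-5 T,
# an assumed 1 A current in a 5 km wire (the ProSEDS length).
print(tether_force(3e-5, 1.0, 5000.0))  # force in newtons
```

Even a fraction of a newton, applied continuously over many orbits, is enough to noticeably raise or lower an orbit, which is why the technique is attractive for deorbiting spent stages.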

Magnetic Levitation Train

Transportation is one of the factors that most influence our way of life, and it has a direct impact on a country's economy. Explosive demand for fast intercity transport is strangling urban airports. It was found that trains with wheels cannot accelerate beyond a certain limit: they exert heavy dynamic loads on the track, and at high speeds the wheels begin slipping against it. This led scientists to develop maglev trains.

Magnetic levitation trains, popularly known as 'maglevs', are the latest development in the locomotive industry. Maglevs do not run on wheels, and therefore have no mechanical limitation on achieving high speeds. Two types of maglev are discussed in this seminar: the electromagnetic suspension system, which uses the magnetic attraction principle, and the electrodynamic suspension system, which uses the repulsion principle to levitate the train from the guideway.

InfiniBand

Amdahl's Law is one of the fundamental principles of computer science and basically states that efficient systems must provide a balance between CPU performance, memory bandwidth, and I/O performance. At odds with this is Moore's Law, which has accurately predicted that semiconductors double their performance roughly every 18 months. Since I/O interconnects are governed by mechanical and electrical limitations more severe than the scaling capabilities of semiconductors, these two laws lead to an eventual imbalance and limit system performance.
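The imbalance argument can be made concrete with Amdahl's formula: if a fraction p of the work is accelerated by a factor n, the overall speedup is 1 / ((1 − p) + p/n), so the unaccelerated part (here, I/O) quickly dominates:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work is accelerated n-fold."""
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative: if I/O-bound work is 20% of total time and CPUs get
# 10x faster, the system speeds up far less than 10x.
print(round(amdahl_speedup(0.8, 10), 2))  # 3.57
```

However fast the CPU and memory become, the fixed I/O fraction caps system performance, which is the pressure driving interconnects like InfiniBand.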

This would suggest that I/O interconnects need to change radically every few years in order to maintain system performance. In fact, another practical law prevents I/O interconnects from changing frequently: if it ain't broke, don't fix it.

Bus architectures have a tremendous amount of inertia because they dictate the bus interface architecture of semiconductor devices. For this reason, successful bus architectures typically enjoy a dominant position for ten years or more. The PCI bus was introduced to the standard PC architecture in the early 1990s and has maintained its dominance with only one major upgrade during that period: from 32-bit/33 MHz to 64-bit/66 MHz.

The PCI-X initiative takes this one step further to 133 MHz, which should seemingly give the PCI architecture a few more years of life. But there is a divergence between what personal computers and servers require.
Personal computers, or PCs, are not pushing the bandwidth capabilities of PCI 64/66. PCI slots offer a great way for home or business users to add networking, video decode, advanced sound, or other cards and upgrade the capabilities of their PC. On the other hand, servers today often include clustering, networking (Gigabit Ethernet), and storage (Fibre Channel) cards in a single system, and these push the roughly 1 GB/s bandwidth limit of PCI-X.
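The bandwidth figures quoted above follow from simple width × clock arithmetic (peak rates, ignoring protocol overhead):

```python
def bus_bandwidth_mb_s(width_bits, clock_mhz):
    """Peak bandwidth of a parallel bus: bytes per transfer x transfers/us."""
    return (width_bits / 8) * clock_mhz  # MB/s

print(bus_bandwidth_mb_s(32, 33))   # original PCI:  132 MB/s
print(bus_bandwidth_mb_s(64, 66))   # PCI 64/66:     528 MB/s
print(bus_bandwidth_mb_s(64, 133))  # PCI-X:        1064 MB/s, ~1 GB/s
```

A single Gigabit Ethernet card plus a couple of Fibre Channel adapters can already consume a large share of that 1 GB/s ceiling, which is the server-side pressure described above.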

Holographic Memory

Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing, and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage media meet today's storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs, and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method, called holographic memory, that goes beneath the surface and uses the volume of the recording medium for storage instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1,000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, whereas holographic data storage systems use the volume, giving them many advantages over conventional storage systems. The technology is based on the principle of holography.
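The claim that more than 1,000 CDs fit in such a system follows directly from the capacities quoted above:

```python
def discs_replaced(capacity_gb, disc_gb):
    """How many whole discs a given capacity replaces."""
    return int(capacity_gb / disc_gb)

CD_GB = 0.783        # 783 MB per CD, from the text
CRYSTAL_GB = 1000.0  # ~1 TB holographic crystal

print(discs_replaced(CRYSTAL_GB, CD_GB))  # 1277 -- "more than 1000 CDs"
```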

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.

Light Trees

Today, there is a general consensus that, in the near future, wide area networks (WANs), such as a nationwide backbone network, will be based on Wavelength Division Multiplexed (WDM) optical networks. One of the main advantages of a WDM WAN over other optical technologies, such as Time Division Multiplexed (TDM) optical networks, is that it allows us to exploit the enormous bandwidth of an optical fiber (up to 50 terabits per second) without requiring electronic devices that operate at extremely high speeds.

The concept of the light tree is introduced in a wavelength-routed optical network, which employs wavelength-division multiplexing (WDM).
Depending on the underlying physical topology, networks can be classified into three generations:

" First Generation: these networks do not employ fiber optic technology; instead they employ copper-based or microwave technology. E.g. Ethernet.
" Second Generation: these networks use optical fibers for data transmission but switching is performed in electronic domain. E.g. FDDI.
" Third Generation: in these networks both data transmission and switching is performed in optical domain. E.g. WDM.

WDM wide area networks employ tunable lasers and filters at access nodes and optical/electronic switches at routing nodes. An access node may transmit signals on different wavelengths, which are coupled into the fiber using wavelength multiplexers. An optical signal passing through an optical wavelength-routing switch (WRS) may be routed to an output fiber without undergoing opto-electronic conversion.

InfiniBand Architecture

InfiniBand is a powerful new architecture designed to support I/O connectivity for the Internet infrastructure. InfiniBand is supported by all the major OEM server vendors as a means to expand beyond and create the next-generation I/O interconnect standard in servers. For the first time, a high-volume, industry-standard I/O interconnect extends the role of traditional “in the box” busses. InfiniBand is unique in providing both an “in the box” backplane solution and an external interconnect (“bandwidth out of the box”); it thus provides connectivity in a way previously reserved only for traditional networking interconnects. This unification of I/O and system-area networking requires a new architecture that supports the needs of these two previously separate domains.

Underlying this major I/O transition is InfiniBand’s ability to support the Internet’s requirement for RAS: reliability, availability, and serviceability. This white paper discusses the features and capabilities which demonstrate InfiniBand’s superior abilities to support RAS relative to the legacy PCI bus and to other proprietary switch-fabric and I/O solutions. Further, it provides an overview of how the InfiniBand architecture supports a comprehensive silicon, software, and system solution. The comprehensive nature of the architecture is illustrated by an overview of the major sections of the InfiniBand 1.0 specification, whose scope ranges from industry-standard electrical interfaces and mechanical connectors to well-defined software and management interfaces.

Night vision technology

Night vision technology was developed by the US defense department mainly for defense purposes, but with the development of technology, night vision devices are now being used in day-to-day life. In this seminar I wish to bring out the working principles of these devices, which have changed the outlook both on the warfront and in our common lives. Night vision can work in two different ways, depending on the technology used.
1. Image enhancement - This works by collecting the tiny amounts of light, including the lower portion of the infrared light spectrum, that are present but may be imperceptible to our eyes, and amplifying it to the point that we can easily observe the image.

2. Thermal imaging - This technology operates by capturing the upper portion of the infrared light spectrum, which is emitted as heat by objects instead of simply reflected as light. Hotter objects, such as warm bodies, emit more of this light than cooler objects like trees or buildings.


Biometrics

Biometrics literally means "life measurement." Biometrics is the science and technology of measuring and statistically analyzing biological data. In information technology, biometrics usually refers to technologies for measuring and analyzing human body characteristics such as fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand measurements, especially for authenticating someone. Often seen in science-fiction action adventure movies, face pattern matchers and body scanners may emerge as replacements for computer passwords. So, biometric systems can be defined as "automated methods of verifying or recognizing the identity of a living person based on a physiological or behavioral characteristic".

Automated methods: By this we mean that the analysis of the data is done by a computer with little or no human intervention. By contrast, traditional fingerprint matching and showing your driver's license or another form of photo ID when proving your identity rely on human checking.
Verification and recognition: This sets forth the two principal applications of biometric systems. Verification is where the user lays claim to an identity and the system decides whether they are who they say they are. It is analogous to a challenge/response protocol: the system challenges the user to prove their identity, and they respond by providing the biometric to do so. Recognition is where the user presents the biometric and the system scans a database to determine the identity of the user automatically.

Living person: This points out the need to prevent attacks in which a copy of the biometric of an authorized user is presented. Biometric systems should also prevent unauthorized users from gaining access when they are in possession of the body part of an authorized user necessary for the measurement.

Physiological and behavioral characteristics: This defines the two main classes of biometrics. Physiological characteristics are physical traits, like a fingerprint or retina, that are direct parts of the body. Behavioral characteristics are those based upon what we do, such as voiceprints and typing patterns. While physiological traits are usually more stable than behavioral traits, systems using them are typically more intrusive and more expensive to implement.
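The verification (1:1) and recognition (1:N) modes described above can be sketched as follows. The similarity function and feature vectors here are toy stand-ins for real matchers (minutiae matching, iris codes, etc.), and the 0.8 threshold is an arbitrary assumption:

```python
def similarity(a, b):
    """Toy similarity: fraction of matching features."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def verify(claimed_id, sample, templates, threshold=0.8):
    """1:1 check - does the sample match the claimed identity's template?"""
    return similarity(sample, templates[claimed_id]) >= threshold

def recognize(sample, templates, threshold=0.8):
    """1:N search - whose enrolled template best matches the sample, if any?"""
    best_id = max(templates, key=lambda i: similarity(sample, templates[i]))
    return best_id if similarity(sample, templates[best_id]) >= threshold else None

# Hypothetical feature vectors standing in for fingerprint templates.
templates = {"alice": [1, 0, 1, 1], "bob": [0, 1, 0, 0]}
print(verify("alice", [1, 0, 1, 0], templates))  # 3/4 match -> False at 0.8
print(recognize([1, 0, 1, 1], templates))        # "alice"
```

The threshold choice is the classic trade-off: lowering it admits more impostors (false accepts), raising it rejects more genuine users (false rejects).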

Voice Portals

In its most generic sense a voice portal can be defined as "speech enabled access to Web based information". In other words, a voice portal provides telephone users with a natural language interface to access and retrieve Web content. An Internet browser can provide Web access from a computer but not from a telephone. A voice portal is a way to do that.

The voice portal market is exploding, with enormous opportunities for service providers to grow business and revenues. Voice-based Internet access uses rapidly advancing speech recognition technology to give users anytime, anywhere communication and access (the human voice) over an office, wireless, or home phone. Here we describe the various technology factors that are making the voice portal the next big opportunity on the web, as well as the various approaches service providers and developers of voice portal solutions can follow to maximize this exciting new market opportunity.

Why Voice?

Natural speech is the modality used when communicating with other people, which makes it easier for a user to learn the operation of voice-activated services. As an output modality, speech has several advantages. First, audio does not interfere with visual tasks, such as driving a car. Second, it allows for easy incorporation of sound-based media, such as radio broadcasts, music, and voice-mail messages. Third, advances in TTS (Text To Speech) technology mean text information can be transferred easily to the user. Natural speech also has advantages as an input modality, allowing for hands-free and eyes-free use. With proper design, voice commands can be created that are easy for a user to remember, and these commands do not have to compete for screen space. In addition, unlike keyboard-based macros (e.g., ctrl-F7), voice commands can be inherently mnemonic ("call United Airlines"), obviating the necessity for hint cards. Speech can be used to create an interface that is easy to use and requires a minimum of user attention.

VUI (Voice User Interface): For a voice portal to function, one of the most important technologies to include is a good VUI. There has been a great deal of development in the field of interaction between the human voice and computer systems, and such interfaces are being implemented in many other fields. The insurance industry, for example, has turned to interactive voice response (IVR) systems to provide telephonic customer self-service, reduce the load on call-center staff, and cut overall service costs. The promise is certainly there, but how well these systems perform (and, ultimately, whether customers leave the system satisfied or frustrated) depends in large part on the user interface.


"Firewall"... the name itself conjures up vivid images of strength and safety. What executive wouldn't want to erect a flaming bastion of steel around the corporate network to protect it from unseemly elements lurking on the public Internet? Unfortunately, this imagery no longer matches reality. In recent years, companies across all industry segments have been gradually tearing down the walls that once isolated their private networks from the outside world. Internet-based technologies have allowed significantly tighter links with customers, remote employees, suppliers, and business partners at a fraction of the cost. In many industries, it is no longer possible to remain competitive without extending the virtual corporation far beyond its previous boundaries. With so many users rapidly approaching the enterprise from different points of entry, it is no longer possible for yesterday's security technology to adequately protect private networks from unauthorized access. The vast majority of firewalls in use today serve only as a passive enforcement point, simply standing guard at the main door. They are incapable of observing suspicious activity and modifying their protection as a result. They are powerless to prevent attacks from those already inside the network and unable to communicate information directly to other components of the corporate security system without manual intervention. Recent statistics clearly indicate the danger of relying on passive security systems in today's increasingly interconnected world. According to the FBI, corporations reporting security incidents last year lost an average of $570,000 as a direct result, a 36 percent increase from the year before (1998 Computer Crime and Security Survey FBI/Computer Security Institute). And since the vast majority of security breaches are never reported, actual losses may be even higher.

In perhaps the most frightening statistic of all, it is estimated that as many as 95 percent of all computer security breaches today go completely undetected by the companies who are victimized. In a well-publicized security audit conducted recently at the Department of Defense, security consultants were asked to attack the DOD network and report back on their findings. Over a period of several months, auditors reported that fewer than 4 percent of all systems broken into were able to detect the attack. Even more disturbing, fewer than 1 percent responded in any way to the attack (Report on Information Security, GAO).

Diamond

Diamond is the hardest material known to mankind. When used on tools, diamond grinds away material at the micro (nano) level. It is given a value of 10 on the Mohs hardness scale, devised by the German mineralogist Friedrich Mohs to indicate the relative hardness of substances on a rating scale from 1 to 10. Hardness varies from diamond to diamond with the crystallographic direction; moreover, hardness on the same face or surface varies with the direction of the cut.

Diamond crystallizes in different forms; eight- and twelve-sided crystal forms are most commonly found, and cubical, rounded, and paired crystals are also common. Crystalline diamonds always separate cleanly along planes parallel to the faces. The specific gravity of pure diamond crystals is almost always 3.52. Other properties of the diamond are frequently useful in differentiating between true diamonds and imitations: because diamonds are excellent conductors of heat, they are cold to the touch; most diamonds are not good electrical conductors and become charged with positive electricity when rubbed; diamond is resistant to attack by acids or bases; and transparent diamond crystals heated in oxygen burn at about 1470°F, forming carbon dioxide.

Ball Piston machines

From the day machines with reciprocating pistons came into existence, efforts have been made to improve their efficiency. The main drawbacks of reciprocating machines are the considerably large number of moving parts (due to the presence of valves), the greater inertial loads, which reduce dynamic balance, and the leakage and friction due to piston rings. The urge to invent has therefore turned to rotary machines.

One main advantage to be gained with a rotary machine is the reduction of inertial loads and better dynamic balance. The Wankel rotary engine has been the most successful example to date, but sealing problems contributed to its decline. Thence came the idea of ball piston machines. In the compressor and pump arena, reduction of reciprocating mass in positive displacement machines has always been an objective, and has been achieved most effectively by lobe, gear, sliding vane, liquid ring, and screw compressors and pumps, but at the cost of hardware complexity or higher losses. Lobe, gear, and screw machines have relatively complex rotating element shapes and friction losses. Sliding vane machines have sealing and friction issues. Liquid ring compressors have fluid turbulence losses.

The new design concept of the ball piston engine uses a different approach that has many advantages, including low part count and simplicity of design, very low friction, low heat loss, high power-to-weight ratio, perfect dynamic balance, and cycle thermodynamic tailoring capability.

Surround sound system

We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, which lasted from Edison's invention of the phonograph in 1877 until the 1950s. During those times, the goal was simply to reproduce the timbre of the original sound. No attempts were made to reproduce directional properties or spatial realism.

The stereo era was the Second Age. Based on inventions from the 1930s, it reached the public in the mid-'50s and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage, and a set of acoustic cues that allow listeners to perceive a front-to-back dimension.

In two-channel stereo, this realism is based on fragile sonic cues. In most ordinary two-speaker stereo systems, these subtle cues can easily be lost, causing the playback to sound flat and uninvolved. Multi channel surround systems, on the other hand, can provide this involving presence in a way that is robust, reliable and consistent.

The purpose of this seminar is to explore the advances and technologies of surround sound in the consumer market.

Optical networking

Here we explain SONET (Synchronous Optical Network), which has been greeted with unparalleled enthusiasm throughout the world, how it came into existence, and in what way it differs from other standards. What does synchronous mean? "Bits from one telephone call are always in the same location inside a digital transmission frame."

The reader is assumed to be comfortable with the basic concepts of a public telecommunications network, with its separate functions of transmission and switching, and to be aware of the context for the growth of broadband traffic.

In the early 1970s, digital transmission systems began to appear, utilizing a method known as Pulse Code Modulation (PCM), first proposed by STC in 1937. As demand for voice telephony increased, and levels of traffic in the network grew ever higher, it became clear that the standard 2 Mbit/s signal was not sufficient to cope with the traffic loads occurring in the trunk network. As the need arose, further levels of multiplexing were added to the standard at much higher speeds, and thus SONET came into existence. For the first time in telecommunications history there will be a worldwide, uniform, and seamless transmission standard for service delivery. SONET provides the capability to send data at multi-gigabit rates over today's single-mode fiber-optic links.
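The 2 Mbit/s figure mentioned above comes from standard PCM arithmetic: each voice channel is sampled at 8 kHz with 8 bits per sample (64 kbit/s), and 32 such timeslots make up the 2.048 Mbit/s primary-rate (E1) signal:

```python
def pcm_channel_kbit_s(sample_rate_hz=8000, bits_per_sample=8):
    """One PCM voice channel: 8 kHz sampling x 8-bit samples = 64 kbit/s."""
    return sample_rate_hz * bits_per_sample // 1000

def e1_rate_kbit_s(timeslots=32):
    """The standard ~2 Mbit/s signal: 32 timeslots of one PCM channel each
    (30 voice channels plus framing and signalling)."""
    return timeslots * pcm_channel_kbit_s()

print(pcm_channel_kbit_s())  # 64
print(e1_rate_kbit_s())      # 2048
```

SONET repeats the same trick at higher levels, multiplexing many such tributaries into multi-gigabit synchronous frames.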

As end-users become ever more dependent on effective communications, there has been an explosion in the demand for sophisticated telecom services. Services such as videoconferencing, remote database access, and multimedia file transfer require a flexible network with the availability of virtually unlimited bandwidth. The complexity of the network means that network operators are unable to meet this demand. At present SONET is being implemented for long-haul traffic, but there is no reason it cannot be used for short distances.

Magnetic RAM

In 1984 Drs. Arthur Pohm and Jim Daughton, both employed at that time by Honeywell, conceived of a new class of magnetoresistance memory devices which offered promise for high density, random access, nonvolatile memory. In 1989 Dr. Daughton left Honeywell to form Nonvolatile Electronics, Inc. having entered into a license agreement allowing him to sublicense Honeywell MRAM technology for commercial applications. Dr. Pohm, Dr. Daughton, and others at NVE continued to improve basic MRAM technology, and innovated new techniques which take advantage of revolutionary advances in magnetoresistive devices, namely giant magnetoresistance and spin dependent tunneling.

Today there is a tremendous potential for MRAM as a nonvolatile, solid state memory to replace flash memory and EEPROM where fast writing or high write endurance is required, and in the longer term as a general purpose read/write random access memory. NVE has a substantial patent portfolio containing 10 MRAM patents, and is willing to license these, along with 12 Honeywell MRAM patents, to companies interested in manufacturing MRAM. In addition, NVE is considering internal production of certain niche MRAM products over the next several years.

Laser communication systems

Lasers have been considered for space communications since their realization in 1960. Specific advancements were needed in component performance and system engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach for inter-satellite link applications.

Information transfer needs are driving the requirements to higher data rates, spurring an explosion in laser cross-link technology, global development activity, and increased hardware and design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode as a directly modulated laser source. This technology advance offers the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight. This feature also substantially reduces blockage of the fields of view of the most desirable areas on satellites.

The smaller antennas, with diameters typically less than 30 centimeters, create less momentum disturbance to any sensitive satellite sensors. Fewer on-board consumables are required over the long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference-free and secure operation.
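The narrow beam divergence mentioned above has a simple geometric consequence: for small angles the beam footprint grows roughly linearly with range. A toy calculation (the 10-microradian divergence and 40,000 km range are illustrative assumptions, not figures from the text):

```python
# Sketch: for small divergence angles, the footprint of a laser beam
# grows roughly linearly with range: spot_diameter ~ range * divergence.
def spot_diameter_m(range_m, divergence_rad):
    return range_m * divergence_rad

# A 10-microradian beam over a 40,000 km inter-satellite link:
d = spot_diameter_m(40_000e3, 10e-6)
print(f"{d:.0f} m footprint")  # a footprint of only a few hundred metres
```

A comparable RF antenna would illuminate a footprint many kilometres across, which is why the optical link is both hard to intercept and hard to jam.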

Blue Eyes

Human cognition depends primarily on the ability to perceive, interpret, and integrate audio-visual and sensory information. Adding extraordinary perceptual abilities to computers would enable computers to work together with human beings as intimate partners.

Researchers are attempting to add more capabilities to computers that will allow them to interact like humans: recognize human presence, talk, listen, or even guess their feelings.

The BLUE EYES technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. It uses a non-obtrusive sensing method, employing modern video cameras and microphones, to identify the user's actions through these imparted sensory abilities. The machine can understand what a user wants, where he is looking, and even his physical or emotional state.

Artificial Eye

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, there are more than 10 million people worldwide affected by retinal diseases that lead to loss of vision.

The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.
At present, two general strategies have been pursued. The "Epiretinal" approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer retinal ganglion cells. The information in this approach must be captured by a camera system before transmitting data and energy to the implant.

The "Sub retinal" approach involves the electrical stimulation of the inner retina from the sub retinal space by implantation of a semiconductor-based micro photodiode array (MPA) into this location. The concept of the sub retinal approach is that electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner to produce formed images.

Some researchers have developed an implant system in which a video camera captures images, a chip processes the images, and an electrode array transmits the images to the brain. This approach is called a cortical implant.


Robocode

Robocode is an Open Source educational game by Mathew Nelson (originally provided by IBM). It is designed to help people learn to program in Java and enjoy the experience. It is very easy to start - a simple robot can be written in just a few minutes - but perfecting a bot can take months or more. Competitors write software that controls a miniature tank that fights other identically built (but differently programmed) tanks in a playing field. Robots move, shoot at each other, scan for each other, and hit the walls (or other robots) if they aren't careful. Though the idea of this 'game' may seem simple, the actual strategy needed to win is not. Good robots have hundreds of lines of code dedicated to strategy. Some of the more successful robots use techniques such as statistical analysis and attempts at neural networks in their designs. One can test a robot against many other competitors by downloading their bytecode, so design competition is fierce. Robocode provides a security sandbox (bots are restricted in what they can do on the machine they run on), which makes this a safe thing to do.


Web Services Choreography

Choreography, in a Web services context, refers to specifications for how messages should flow among diverse, interconnected components and applications to ensure optimum interoperability. The term is borrowed from the dance world, in which choreography directs the movement and interactions of dancers.
Web services choreography can be categorized as abstract, portable or concrete:
In abstract choreography, exchanged messages are defined only according to the data type and transmission sequence.
Portable choreography defines the data type, transmission sequence, structure, control methods and technical parameters.
Concrete choreography is similar to portable choreography but includes, in addition, the source and destination URLs as well as security information such as digital certificates.

Mobile agent

In computer science, a mobile agent is a composition of computer software and data that is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer. A mobile agent is a type of software agent with the features of autonomy, social ability, learning, and, most importantly, mobility. The term refers to a process that can transport its state from one environment to another, with its data intact, and still perform appropriately in the new environment. Mobile agents decide when and where to move next, a model that evolved from RPC. So how exactly does a mobile agent move? Just as a user doesn't really visit a website but only makes a copy of it, a mobile agent accomplishes the move through data duplication: when it decides to move, it saves its own state, transports this saved state to the next host, and resumes execution from the saved state.

Mobile agents are a specific form of the mobile code and software agent paradigms. However, in contrast to the remote evaluation and code-on-demand paradigms, mobile agents are active in that they may choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.

Advantages
1) Move computation to data, reducing network load.
2) Asynchronous execution on multiple heterogeneous network hosts.
3) Dynamic adaptation - actions depend on the state of the host environment.
4) Tolerance to network faults - able to operate without an active connection between client and server.
5) Flexible maintenance - to change an agent's actions, only the source (rather than the computation hosts) must be updated.

Applications
1) Resource availability, discovery, and monitoring
2) Information retrieval
3) Network management
4) Dynamic software deployment
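The save-state / transport / resume cycle described above can be sketched in a few lines. This is a toy, in-memory illustration: a real agent platform would ship the serialized bytes over the network to an agent server on the next host.

```python
import pickle

# Minimal sketch of a mobile agent's migration cycle: save state,
# "transport" it (here just a byte string), and resume elsewhere.
class CounterAgent:
    def __init__(self):
        self.visited = []          # mutable state that travels with the agent

    def run(self, host):
        self.visited.append(host)  # do some work on the current host

    def checkpoint(self):
        return pickle.dumps(self)  # save own state for transport

    @staticmethod
    def resume(blob):
        return pickle.loads(blob)  # reconstruct agent on the destination

agent = CounterAgent()
agent.run("host-A")
blob = agent.checkpoint()            # state leaves host A...
agent2 = CounterAgent.resume(blob)   # ...and execution resumes on host B
agent2.run("host-B")
print(agent2.visited)                # ['host-A', 'host-B']
```

Note that only the data state is duplicated; the agent's code must already exist (or be shipped alongside) on the destination host, which is exactly what real mobile-agent platforms arrange.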


TurboGears

TurboGears is a Python web 'megaframework' created by bringing together a number of mature components such as MochiKit, SQLObject, CherryPy and Kid, along with some TurboGears specific code to make everything work together easily.
TurboGears was created in 2005 by Kevin Dangoor as the framework behind the as-yet-unreleased Zesty News product, but when he released it as an open source framework at the end of September, the project took off, with more than 30,000 screencast downloads in the first 3 months.
As of March 2006, just six months after its initial release, there were nearly 1,500 users on the high-traffic TurboGears mailing list, a book from Prentice Hall in the works, and a number of open source TurboGears applications under development.
TurboGears is designed around the Model-view-controller architecture, much like Struts or Ruby on Rails, and aims to make rapid web application development in Python a lot easier and more maintainable.


Wi-Fi

The typical Wi-Fi setup contains one or more Access Points (APs) and one or more clients. An AP broadcasts its SSID (Service Set Identifier, or network name) via packets called beacons, which are broadcast every 100 ms. The beacons are transmitted at 1 Mbit/s and are relatively short, so they have little impact on performance. Since 1 Mbit/s is the lowest rate of Wi-Fi, a client that receives a beacon is assured of being able to communicate at at least 1 Mbit/s. Based on the settings (i.e. the SSID), the client may decide whether to connect to an AP. The firmware running on the client Wi-Fi card also has an influence: if two APs with the same SSID are in range of the client, the firmware may decide based on signal strength (signal-to-noise ratio) which of the two it will connect to. The Wi-Fi standard leaves connection criteria and roaming totally open to the client. This is a strength of Wi-Fi, but it also means that one wireless adapter may perform substantially better than another. Since Windows XP there has been a feature called Zero Configuration, which shows the user any available network and lets the user connect on the fly. In the future, wireless cards will be more and more controlled by the operating system; Microsoft's SoftMAC feature will take over functions from on-board firmware, so roaming criteria will be totally controlled by the operating system. Because Wi-Fi transmits over the air, it has the same properties as a non-switched Ethernet network; collisions can therefore occur, just as in non-switched Ethernet LANs.
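The firmware's AP-selection step described above can be sketched as a simple "strongest matching beacon wins" rule. The field names below are illustrative, not from any real driver API:

```python
# Sketch of the client-side decision: among beacons carrying the desired
# SSID, associate with the AP offering the strongest signal.
def pick_access_point(beacons, ssid):
    candidates = [b for b in beacons if b["ssid"] == ssid]
    if not candidates:
        return None                      # desired network not in range
    return max(candidates, key=lambda b: b["snr_db"])

beacons = [
    {"ssid": "office", "bssid": "aa:..:01", "snr_db": 18},
    {"ssid": "office", "bssid": "aa:..:02", "snr_db": 31},
    {"ssid": "guest",  "bssid": "aa:..:03", "snr_db": 40},
]
best = pick_access_point(beacons, "office")
print(best["bssid"])  # the stronger of the two "office" APs
```

Real adapters weigh more than raw SNR (supported rates, past roaming history, load), which is precisely why the standard's silence on selection criteria makes adapters behave so differently.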
Advantages of Wi-Fi
Unlike packet radio systems, Wi-Fi uses unlicensed radio spectrum and does not require regulatory approval for individual deployers.
Allows LANs to be deployed without cabling, potentially reducing the costs of network deployment and expansion. Spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs.
Wi-Fi products are widely available in the market. Different brands of access points and client network interfaces are interoperable at a basic level of service.
Competition amongst vendors has lowered prices considerably since their inception.
Wi-Fi networks support roaming, in which a mobile client station such as a laptop computer can move from one access point to another as the user moves around a building or area.
Many access points and network interfaces support various degrees of encryption to protect traffic from interception.
Wi-Fi is a global set of standards. Unlike cellular carriers, the same Wi-Fi client works in different countries around the world


WiFiber

A new wireless technology could beat fiber optics for speed in some applications.
Atop each of the Trump towers in New York City, there's a new type of wireless transmitter and receiver that can send and receive data at rates of more than one gigabit per second -- fast enough to stream 90 minutes of video from one tower to the next, more than one mile apart, in less than six seconds. By comparison, the same video sent over a DSL or cable Internet connection would take almost an hour to download.
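A back-of-envelope check of the figures above, assuming a roughly 750 MB file for 90 minutes of compressed video and a 1.5 Mbit/s DSL line (both assumptions for illustration; the article gives neither number):

```python
# Transfer time = size in bits / link rate in bits per second.
def transfer_seconds(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps

video = 750e6                               # assumed ~750 MB video file
print(transfer_seconds(video, 1e9))         # 1 Gbit/s link: 6.0 s
print(transfer_seconds(video, 1.5e6) / 60)  # 1.5 Mbit/s DSL: ~67 minutes
```

Under these assumptions the gigabit link finishes in about six seconds while DSL takes about an hour, matching the comparison in the text.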
This system is dubbed 'WiFiber' by its creator, GigaBeam, a Virginia-based telecommunications startup. Although the technology is wireless, the company's approach -- high-speed data transfer across a point-to-point network -- is more of an alternative to fiber optics than to Wi-Fi or WiMAX, says John Krzywicki, the company's vice president of marketing. And it is best suited for highly specific data delivery situations.
This kind of point-to-point wireless technology could be used in situations where digging fiber-optic trenches would disrupt the environment, where the cost would be prohibitive, or where the installation process would take too long, as in extending communications networks in cities, on battlefields, or after a disaster.
Blasting beams of data through free space is not a new idea; LightPointe and Proxim Wireless also provide such services. What makes GigaBeam's technology different is that it exploits a different part of the electromagnetic spectrum. Those companies' systems use a region of the spectrum near visible light, at terahertz frequencies; because of this, weather conditions in which visibility is limited, such as fog or light rain, can hamper data transmission.
GigaBeam, however, transmits at 71-76, 81-86, and 92-95 gigahertz frequencies, where these conditions generally do not cause problems. Additionally, by using this region of the spectrum, GigaBeam can outpace traditional wireless data delivery used for most wireless networks.
Because so many devices, from Wi-Fi base stations to baby monitors, use the frequencies of 2.4 and 5 gigahertz, those spectrum bands are crowded, and therefore require complex algorithms to sort and route traffic -- both data-consuming endeavors, says Jonathan Wells, GigaBeam's director of product development. With less traffic in the region between 70 to 95 gigahertz, GigaBeam can spend less time routing data, and more time delivering it. And because of the directional nature of the beam, problems of interference, which plague more spread-out signals at the traditional frequencies, are not likely; because the tight beams of data will rarely, if ever, cross each other's paths, data transmission can flow without interference, Wells says.
Correction: As a couple of readers pointed out, our title was misleading. Although the emergence of a wireless technology operating in the gigabits per second range is an advance, it does not outperform current fiber-optic lines, which can still send data much faster.
Even with its advances, though, Gigabeam faces the same problem as other point-to-point technologies: creating a network with an unbroken sight line. Still, it could offer some businesses an alternative to fiber optics. Currently, a GigaBeam link, which consists of a set of transmitting and receiving radios, costs around $30,000. But Krzywicki says that improving technology is driving down costs. In addition to outfitting the Trump towers, the company has deployed a link on the campuses of Dartmouth College and Boston University, and two links for San Francisco's Public Utility Commission


Asynchronous Transfer Mode (ATM)

Today's computers span the entire spectrum from PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, and between central servers and their associated host computer systems.
The replacement of copper with fiber and the advancements in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former has provided a huge amount of transmission bandwidth, while the latter has made possible the transmission of all information, including voice and video, through a packet-switched network.
With work increasingly shared over large distances, including international communication, systems must be interconnected via wide area networks, with increasing demands for higher bit rates.
For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.
ATM is simply a data link layer protocol. It is asynchronous in the sense that the recurrence of cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s and beyond. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.
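ATM's "common format" is its fixed 53-byte cell: a 5-byte header plus a 48-byte payload. The segmentation arithmetic is worth seeing once:

```python
import math

# ATM segments all traffic into fixed 53-byte cells:
# 5-byte header + 48-byte payload.
CELL_PAYLOAD = 48
CELL_SIZE = 53

def cells_needed(payload_bytes):
    return math.ceil(payload_bytes / CELL_PAYLOAD)

def wire_bytes(payload_bytes):
    return cells_needed(payload_bytes) * CELL_SIZE

print(cells_needed(1500))  # a 1500-byte IP packet -> 32 cells
print(wire_bytes(1500))    # 1696 bytes on the wire (~13% cell overhead)
```

The fixed cell size is what lets ATM switches give voice its steady "ebb and flow" and data its bursts on the same link: every cell takes the same, predictable time through the switch.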

Camless Engines

The cam has been an integral part of the IC engine from its invention. The cam controls the "breathing channels" of the IC engines, that is, the valves through which the fuel air mixture (in SI engines) or air (in CI engines) is supplied and exhaust driven out.
Besieged by demands for better fuel economy, more power, and less pollution, motor engineers around the world are pursuing a radical "camless" design that promises to deliver the internal-combustion engine's biggest efficiency improvement in years. The aim of all this effort is liberation from a constraint that has handcuffed performance since the birth of the internal-combustion engine more than a century ago. Camless engine technology is soon to be a reality for commercial vehicles. In the camless valvetrain, the valve motion is controlled directly by a valve actuator - there is no camshaft or connecting mechanism. Precise electronic circuits control the operation of the mechanism, bringing more flexibility and accuracy to opening and closing the valves. The seminar looks at the working of the electronically controlled camless engine, its general features, and its benefits over the conventional engine.
The engines powering today's vehicles, whether they burn gasoline or diesel fuel, rely on a system of valves to admit fuel and air to the cylinders and to let exhaust gases escape after combustion. Rotating steel camshafts with precision-machined egg-shaped lobes, or cams, are the hard-tooled "brains" of the system. They push open the valves at the proper time and guide their closure, typically through an arrangement of pushrods, rocker arms, and other hardware. Stiff springs return the valves to their closed position
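A flavour of the timing arithmetic an electronic valve controller must perform: at a given engine speed, a crank-angle window maps to a time window. The angles and speeds below are illustrative, not from the text:

```python
# Convert a crank-angle duration into milliseconds at a given engine speed.
def crank_deg_to_ms(deg, rpm):
    ms_per_rev = 60_000 / rpm      # one crankshaft revolution, in ms
    return deg / 360 * ms_per_rev

# An intake valve held open for 240 crank degrees:
print(crank_deg_to_ms(240, 1000))  # 40.0 ms at a 1000 rpm idle
print(crank_deg_to_ms(240, 6000))  # ~6.7 ms near redline
```

A cam lobe fixes the angle window mechanically for all speeds; the camless actuator can instead vary both the angle window and the lift at every operating point, which is where the efficiency gain comes from.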


Airbags

For years, trusty seatbelts provided the sole form of passive restraint in our cars. There were debates about their safety, especially in relation to children, but over time much of the country adopted mandatory seat-belt laws. Statistics have shown that seat-belts have saved thousands of lives that might have been lost in collisions. Airbags have been under development for many years; the first commercial airbags appeared in automobiles in the 1980s. They are a proven safety device that saves a growing number of lives and prevents a large number of head and chest injuries. Driver airbags reduce driver deaths by about 14 percent, and passenger bags reduce passenger deaths by about 11 percent.
People who use seat-belts may think they do not need airbags, but they do. Airbags and lap/shoulder belts work together as a system, and one without the other is not as effective. Deaths are 12 percent lower among belted drivers with airbags, and 9 percent lower among belted passengers.
Since the 1998 model year, all new cars have been required to have airbags on both the driver and passenger sides; light trucks came under the rule in 1999. Newer than steering-wheel-mounted or dashboard-mounted bags are seat-mounted, door-mounted, and window airbags. Airbags are the subject of serious government and industry research and testing.
Airbags can cause some unintended adverse effects. Nearly all of these are minor injuries, such as bruises and abrasions, that are more than offset by the lives airbags save. Much of this risk can be eliminated, and position is what counts: serious inflation injuries occur primarily because of people's position when airbags first begin inflating.
Stopping an object's momentum requires a force acting over a period of time. When a car crashes, the force required to stop an occupant is very great, because the car's momentum changes almost instantly while the passenger's does not. The goal of any supplemental restraint system is to help stop the passenger while doing as little damage to him or her as possible.
What an airbag wants to do is slow the passenger's speed to zero with little or no damage. The constraints it works under are huge: the airbag has only the space between the passenger and the steering wheel or dashboard, and a fraction of a second, to work with. Even that tiny amount of space and time is valuable.
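The physics behind the paragraph above is the impulse-momentum relation: average force times stopping time equals mass times change in velocity, so the longer the stop the airbag buys, the smaller the force on the occupant. The mass, speed and stopping times below are illustrative:

```python
# Impulse-momentum: F_avg * dt = m * dv  ->  F_avg = m * dv / dt.
def avg_force_newtons(mass_kg, delta_v_ms, stop_time_s):
    return mass_kg * delta_v_ms / stop_time_s

m, v = 75, 13.9  # a 75 kg occupant stopping from 50 km/h (~13.9 m/s)
print(avg_force_newtons(m, v, 0.010))  # stopped in 10 ms: ~104 kN
print(avg_force_newtons(m, v, 0.100))  # stretched to 100 ms: ~10.4 kN
```

Stretching the stop from 10 ms to 100 ms cuts the average force tenfold, which is exactly what the fraction of a second of controlled deflation is for.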

Stirling Engine

The Stirling engine is a heat engine of the external combustion piston engine type, whose heat-exchange process allows for near-ideal efficiency in converting heat into mechanical movement by following the Carnot cycle as closely as is practically possible with the given materials.
Its invention is credited to the Scottish clergyman Rev. Robert Stirling in 1816 who made significant improvements to earlier designs and took out the first patent. He was later assisted in its development by his engineer brother James Stirling.
The inventors sought to create a safer alternative to the steam engines of the time, whose boilers often exploded due to the high pressure of the steam and inadequate materials. A Stirling engine will convert any temperature difference directly into movement.
The Stirling engine works by the repeated heating and cooling of a usually sealed amount of working gas, usually air or other gases such as hydrogen or helium. This is accomplished by moving the gas between hot and cold heat exchangers, the hot heat exchanger being a chamber in thermal contact with an external heat source, e.g. a fuel burner, and the cold heat exchanger being a chamber in thermal contact with an external heat sink, e.g. air fins.
The gas follows the behaviour described by the gas laws which describe how a gas' pressure, temperature and volume are related. When the gas is heated, because it is in a sealed chamber, the pressure rises and this then acts on the power piston to produce a power stroke. When the gas is cooled the pressure drops and this means that less work needs to be done by the piston to recompress the gas on the return stroke, giving a net gain in power available on the shaft. The working gas flows cyclically between the hot and cold heat exchangers.

Fluorescent Multilayer Disc (FMD)

Today the need for digital storage capacity is increasing at a growth rate of 60% per annum. There is a strong requirement for more storage in facilities such as storage area networks, data warehouses, supercomputers and e-commerce related data mining, as the volume of data to be processed is ever rising. With the arrival of high-bandwidth Internet and data-intensive applications such as high-definition TV (HDTV) and video and music on demand, even smaller devices such as personal VCRs, PDAs and mobile phones will require multi-gigabyte and terabyte capacity in the next couple of years.
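Compounding the 60%-per-annum figure quoted above shows how quickly such demand snowballs: capacity needs multiply by 1.6 every year.

```python
# Compound growth: demand after `years` at annual growth `rate`.
def growth_factor(rate, years):
    return (1 + rate) ** years

print(growth_factor(0.60, 5))   # ~10.5x demand in five years
print(growth_factor(0.60, 10))  # ~110x demand in a decade
```

A storage technology whose density improves more slowly than this is therefore falling behind even while it improves, which is the pressure driving the density techniques discussed next.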
This ever-increasing capacity demand can only be managed by a steady increase in the areal density of magnetic and optical recording media. In the future, this density increase is feasible only by taking advantage of shorter-wavelength lasers, higher lens numerical aperture (NA), or near-field techniques. This increase is best achieved with optical memory technologies.
Fluorescent multilayer disc (FMD) is a three-dimensional storage medium that can store a large volume of data and is also capable of increasing the capacity of a given volume, with the aim of achieving a cubic storage element having the dimensions of the writing or reading laser wavelength. The current wavelength of 650 nm should be sufficient to store up to a terabyte of data.

Smart Materials

A new generation of materials called smart materials is changing the way a structural system is going to be designed, built and monitored. Advances in composite materials, instrumentation and sensing technology (fiber-optic sensors) in combination with a new generation of actuator systems based on Piezoelectric ceramics and shape Memory Alloys have made this possible.
Shape memory alloys have found applications in a variety of high-performance products, ranging from aircraft hydraulic couplings and electrical connectors to surgical suture anchors. Since the material can generate high actuation forces in response to temperature changes, shape memory alloys have the potential to serve as an alternative to solenoids. They have special significance in the area of smart structures because they offer significant advantages over conventional actuation technologies in a number of ways.

Large recoverable strains offer very high work densities, which is very attractive in situations where the work output to weight ratio is critical. Direct actuation without moving parts increases efficiency. Large available strains permit long strokes with constant force output. The actuation can be linear or rotary.

Brain Fingerprinting

Brain fingerprinting is a technique that measures recognition of familiar stimuli by measuring electrical brain wave responses to words, phrases, or pictures presented on a computer screen. Brain fingerprinting was invented by Dr. Lawrence A. Farwell. The theory is that the suspect's reaction to the details of an event or activity will reveal whether the suspect had prior knowledge of the event or activity. The test uses the Memory and Encoding Related Multifaceted Electroencephalographic Response (MERMER) to detect this familiarity reaction. It is hoped it might be more accurate than a polygraph (lie-detector) test, which measures physiological signals such as heart rate, sweating, and blood pressure.

The person to be tested wears a special headband with electronic sensors that measure the EEG at several locations on the scalp. To calibrate the brain fingerprinting system, the testee is first presented with a series of irrelevant stimuli (words and pictures), and then a series of relevant stimuli. The testee's brain responses to these two types of stimuli allow the tester to determine whether the measured brain responses to test stimuli, called probes, are more similar to the relevant or to the irrelevant responses.
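The final comparison step can be sketched as follows: is the response to a probe more similar to the averaged relevant responses or to the averaged irrelevant ones? Pearson correlation is used here purely as an illustrative similarity measure, and the waveforms are made-up toy data; this is not Farwell's actual algorithm.

```python
from math import sqrt

def correlation(a, b):
    """Pearson correlation coefficient, implemented inline for clarity."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

def classify_probe(probe, relevant_avg, irrelevant_avg):
    r_rel = correlation(probe, relevant_avg)
    r_irr = correlation(probe, irrelevant_avg)
    return "information present" if r_rel > r_irr else "information absent"

relevant_avg   = [0.1, 0.9, 1.8, 0.7, 0.1]   # averaged P300-like bump
irrelevant_avg = [0.2, 0.1, 0.2, 0.1, 0.2]   # averaged flat response
probe          = [0.0, 0.8, 1.6, 0.6, 0.2]   # response to the probe stimulus

print(classify_probe(probe, relevant_avg, irrelevant_avg))
```

Here the probe response tracks the relevant template, so the toy classifier reports "information present"; a flat probe response would be classified as "information absent".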

Cluster Computing

A cluster is a type of parallel or distributed processing system consisting of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource. The computers in a cluster share common network characteristics, such as the same namespace, and the cluster is available to other computers on the network as a single resource. The computers are linked together using high-speed network interfaces, and the actual binding together of all the individual computers in the cluster is performed by the operating system and the clustering software used.

Bluetooth

Bluetooth wireless technology is a cable replacement technology that provides wireless communication between portable devices, desktop devices and peripherals. It is used to swap data and synchronize files between devices without having to connect them with cables. The wireless link has a range of 10 m, which offers the user mobility. Bluetooth wireless technology is always on and runs in the background. Bluetooth devices scan for other Bluetooth devices, and when these devices come into range they start to exchange messages so they can become aware of each other's capabilities. These devices do not require a line of sight to transmit data to each other. Within a few years, about 80 percent of mobile phones are expected to carry a Bluetooth chip. The Bluetooth transceiver operates in the globally available unlicensed ISM radio band at 2.4 GHz, which does not require an operator license from a regulatory agency.
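The 2.4 GHz ISM operation mentioned above can be put in concrete numbers: classic Bluetooth hops across 79 one-MHz channels, f(k) = 2402 + k MHz for k = 0..78, changing channel 1600 times per second.

```python
# Classic Bluetooth channel plan in the 2.4 GHz ISM band.
def channel_mhz(k):
    if not 0 <= k <= 78:
        raise ValueError("classic Bluetooth has channels 0..78")
    return 2402 + k  # channel k's centre frequency in MHz

print(channel_mhz(0), channel_mhz(78))  # band edges: 2402 and 2480 MHz
print(1 / 1600 * 1e6)                   # 625 microseconds per hop slot
```

This rapid frequency hopping is what lets many Bluetooth links, Wi-Fi networks and microwave ovens coexist in the same unlicensed band.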
Initial development started in 1994 at Ericsson. Bluetooth now has a Special Interest Group (SIG) comprising 1800 companies worldwide. Bluetooth technology enables voice and data transmission over a short-range radio link. A wide range of devices can be connected easily and quickly without the need for cables. Soon people the world over will enjoy the convenience, speed and security of instant wireless connection. Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops and a whole range of other electronic devices in the next few years. This is mainly because the elimination of cables makes the work environment look and feel comfortable and inviting.

Mobile agent

In computer science, a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.Mobile Agent, namely, is a type of software agent, with the feature of autonomy, social ability, learning, and most important, mobility.When the term mobile agent is used, it refers to a process that can transport its state from one environment to another, with its data intact, and still being able to perform appropriately in the new environment. Mobile agents decide when and where to move next, which is evolved from RPC. So how exactly does a mobile agent move? Just like a user doesn t really visit a website but only make a copy of it, a mobile agent accomplishes this move through data duplication. When a mobile agent decides to move, it saves its own state and transports this saved state to next host and resume execution from the saved state.Mobile agents are a specific form of mobile code and software agents paradigms.
This makes them a powerful tool for implementing distributed applications in a computer network.
Advantages:

1. Move computation to the data, reducing network load.
2. Asynchronous execution on multiple heterogeneous network hosts.
3. Dynamic adaptation: actions depend on the state of the host.
4. Tolerance of network faults: able to operate without an active connection between client and server.
5. Flexible maintenance: to change an agent's actions, only the source (rather than the computation hosts) must be updated.

Applications:

1. Resource availability, discovery and monitoring
2. Information retrieval
3. Network management
4. Dynamic software deployment
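The save-state / transport / resume cycle described above can be sketched in a few lines of Python. This is a toy illustration only: the names (`Agent`, `HOSTS`, `resume`) are invented for this sketch, the "hosts" are just in-memory dictionaries, and real mobile agent platforms must also ship code and enforce security.

```python
import pickle

HOSTS = {"host_a": [], "host_b": []}  # each "host" holds serialized agents

class Agent:
    def __init__(self):
        self.visited = []               # state the agent carries with it

    def run(self, host_name):
        self.visited.append(host_name)  # do some work on the current host

    def migrate(self, dest):
        # 1. the agent saves its own state by serializing the whole object
        saved = pickle.dumps(self)
        # 2. the saved state is "transported" to the next host
        HOSTS[dest].append(saved)

def resume(host_name):
    # 3. the destination host deserializes the agent and resumes execution
    agent = pickle.loads(HOSTS[host_name].pop())
    agent.run(host_name)
    return agent

agent = Agent()
agent.run("host_a")
agent.migrate("host_b")
moved = resume("host_b")
print(moved.visited)   # state survived the move: ['host_a', 'host_b']
```

The key point the sketch shows is data duplication: the original object is not "sent" anywhere; a serialized copy of its state is, and execution continues from that copy on the destination.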


Nanotechnology

Nanotechnology is the development and production of artefacts in which a dimension of less than 100 nanometres (nm) is critical to functioning (1 nm = 10^-9 m, about 40 billionths of an inch). Nanotechnology is a hybrid science combining engineering and chemistry. Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. As millions of these atoms are pieced together by nanomachines, a specific product will begin to take shape. The goal of nanotechnology is to manipulate atoms individually and place them in a pattern to produce a desired structure. Nanotechnology is likely to change the way almost everything, including medicine, computers and cars, is designed and constructed. Nanotechnology holds out the promise of materials of precisely specified composition and properties, which could yield structures of unprecedented strength and computers of extraordinary compactness and power. Nanotechnology may lead to revolutionary methods of atom-by-atom manufacturing and to surgery on the cellular scale. Scientists have made some progress at building devices, including computer components, at nanoscales. Nanotechnology is anywhere from five to 15 years in the future.


Smart materials

In the field of massive and complex manufacturing we now need materials with properties that can be manipulated according to our needs. Smart materials are unique materials that can change their shape or size simply through the addition of a little heat, or change from a liquid to a solid almost instantly when near a magnet. These materials include piezoelectric materials, magneto-rheostatic materials, electro-rheostatic materials, and shape memory alloys. Shape memory alloys are metals that exhibit two very unique properties: pseudo-elasticity (an almost rubber-like flexibility under loading) and the shape memory effect (the ability to be severely deformed and then return to the original shape simply by heating). These two properties are made possible through a solid-state phase change, that is, a molecular rearrangement in which the molecules remain closely packed so that the substance remains a solid. The two phases that occur in shape memory alloys are Martensite and Austenite.

Gate valve

A gate valve is a valve that opens by lifting a round or rectangular gate out of the path of the fluid. The distinct feature of a gate valve is the sealing surfaces between the gate and seats are planar. The gate faces can form a wedge shape or they can be parallel. Gate valves are sometimes used for regulating flow, but many are not suited for that purpose, having been designed to be fully opened or closed. When fully open, the typical gate valve has no obstruction in the flow path, resulting in very low friction loss.
Bonnets provide leakproof closure for the valve body. Gate valves may have a screw-in, union, or bolted bonnet. The screw-in bonnet is the simplest, offering a durable, pressure-tight seal. The union bonnet is suitable for applications requiring frequent inspection and cleaning; it also gives the body added strength. The bolted bonnet is used for larger valves and higher-pressure applications.
Another type of bonnet construction in a gate valve is the pressure seal bonnet. This construction is adopted for valves intended for high-pressure service, typically in excess of 2250 psi. The unique feature of the pressure seal bonnet is that the body-bonnet joint seal improves as the internal pressure in the valve increases, whereas in other constructions the increase in internal pressure tends to create leaks in the body-bonnet joint.

Stream computing

The main task is to pull in streams of data, process them, and stream the results back out as a single flow, thereby analyzing multiple live data streams from many sources. Stream computing uses software algorithms that analyze the data in real time as it streams in, increasing speed and accuracy in data handling and analysis. System S, IBM's stream computing system introduced in June 2007, runs on 800 microprocessors; its software enables applications to split up tasks and then reassemble the data into an answer. ATI Technologies also announced a stream computing technology, derived from a class of applications that run on the GPU instead of the CPU, which enables graphics processors (GPUs) to work in conjunction with high-performance, low-latency CPUs to solve complex computational problems.
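The core idea, analyzing data as it arrives rather than storing it first, can be sketched with a Python generator. This is a minimal illustration (the function and variable names are invented for this sketch, and real stream processing systems distribute such operators across many machines): a moving average is computed over a sliding window while the stream is still flowing.

```python
from collections import deque

def moving_average(stream, window=3):
    """Emit a running average over the last `window` values of a stream."""
    buf = deque(maxlen=window)      # only the current window is kept in memory
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)   # a result is produced for every new value

# The consumer sees results as each value arrives, not after the stream ends.
sensor_stream = iter([10, 20, 30, 40, 50])
for avg in moving_average(sensor_stream):
    print(avg)   # 10.0, 15.0, 20.0, 30.0, 40.0
```

Because only the window is buffered, memory use stays constant no matter how long the stream runs, which is what makes this style of computation suitable for unbounded live data.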

Space Shuttle

Previously, all ventures into space were achieved with giant rockets which, after a certain amount of time, were directed back into the earth's atmosphere to be reduced to a cinder by the enormous heat of re-entry. After the crew and their capsule had been ejected, virtually all of that tremendously expensive equipment was destroyed after only one use.

NASA's Space Shuttle, officially called the Space Transportation System (STS), is the spacecraft currently used by the United States government for its human spaceflight missions. At launch, it consists of a rust-colored external tank (ET), two white, slender Solid Rocket Boosters (SRBs), and the orbiter, a winged spaceplane which is the space shuttle in the narrowest sense.

The orbiter carries astronauts and payload such as satellites or space station parts into low earth orbit, in the Earth's upper atmosphere or thermosphere.[1] Usually, five to seven crew members ride in the orbiter. The payload capacity is 22,700 kg (50,000 lb). When the orbiter's mission is complete, it fires its Orbital Maneuvering System (OMS) thrusters to drop out of orbit and re-enter the lower atmosphere. During the descent and landing, the shuttle orbiter acts as a glider and makes a completely unpowered ("deadstick") landing.

Following are the main supporting systems of a space shuttle.

1. Propulsion system
2. External fuel tank
3. Space shuttle orbiter