Nanorobotics

Nanorobotics is concerned with:
1) Design and fabrication of nanorobots with overall dimensions at or below the micrometer range, made of nanoscopic components;
2) Programming and coordination of large numbers (swarms) of such nanorobots;
3) Programmable assembly of nanometer-scale components, either by manipulation with macro- or micro-scale devices, or by self-assembly on programmed templates.
Nanorobots have overall dimensions comparable to those of biological cells and organelles. This opens a vast array of potential applications in environmental monitoring of microorganisms and in health care. For example, imagine artificial cells (nanorobots) that patrol the circulatory system, detect small concentrations of pathogens, and destroy them. This would amount to a programmable immune system and might have far-reaching implications in medicine, causing a paradigm shift from treatment to prevention. Other applications, such as cell repair, might be possible if nanorobots were small enough to penetrate cells.

Smart Dust

Advances in hardware technology and engineering design have led to dramatic reductions in the size, power consumption, and cost of digital circuitry, wireless communications, and microelectromechanical sensors (MEMS). This has enabled very compact, autonomous, mobile nodes, each containing one or more sensors. These millimeter-scale nodes are called smart dust. The concept was developed at the University of California, Berkeley, by a team led by Prof. Kristofer S. J. Pister. Each device is around the size of a grain of sand and contains sensors, computing ability, bi-directional wireless communication, and a power supply. As tiny as dust particles, smart dust motes can be spread throughout buildings or into the atmosphere to collect and monitor data. A mote as small as a grain of rice is thus able to sense, think, talk, and listen. Smart dust devices have applications in fields ranging from the military to meteorology to medicine.

Smart dust devices are broadly classified as small wireless microelectromechanical sensors (MEMS) that can detect anything from light to vibrations. With continued innovation in silicon and fabrication, these devices, which combine communication, computation, and sensing in an all-in-one package, have shrunk to the size of a sand grain.

These motes collect data, perform computation, and then pass the information between motes over two-way radio links at distances approaching 1,000 feet. Applications of smart dust devices include identifying manufacturing defects from vibrations and tracking patient movements in hospitals.


Microbotics

Microbotics (or microrobotics) is the field of miniature robotics, in particular mobile robots with characteristic dimensions less than 1 mm. The term can also be used for robots capable of handling micrometer-size components. While the 'micro' prefix has been used subjectively to mean small, standardizing on length scales avoids confusion. Thus a nanorobot would have characteristic dimensions at or below 1 micrometer, or manipulate components in the 1 to 1000 nm size range; a microrobot would have characteristic dimensions less than 1 millimeter; a millirobot less than 1 cm; a minirobot less than 10 cm; and a small robot less than 100 cm.

Due to their small size, microbots are potentially very cheap and could be used in large numbers to explore environments that are too small or too dangerous for people or larger robots. Microbots are expected to be useful in applications such as looking for survivors in collapsed buildings after an earthquake or crawling through the digestive tract. What microbots lack in brawn or computational power, they can make up for in numbers, as in swarms of microbots. Microbots were born thanks to the appearance of the microcontroller in the last decade of the 20th century and of miniature mechanical systems on silicon (MEMS), although many microbots do not use silicon for mechanical components other than sensors. One of the major challenges in developing a microrobot is achieving motion with a very limited power supply: a microrobot can use a small, light battery such as a coin cell, or can scavenge power from its surroundings in the form of vibration or light energy.
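The length-scale convention above is easy to capture in a short sketch (Python; the thresholds are exactly the ones given in the paragraph):

    def classify_robot(dim_m: float) -> str:
        """Map a characteristic dimension in meters to its size class."""
        if dim_m <= 1e-6:
            return "nanorobot"
        if dim_m < 1e-3:
            return "microrobot"
        if dim_m < 1e-2:
            return "millirobot"
        if dim_m < 0.1:
            return "minirobot"
        if dim_m < 1.0:
            return "small robot"
        return "robot"

    print(classify_robot(5e-4))   # 0.5 mm -> 'microrobot'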

Plastic electronics

Plastic electronics is a branch of electronics that deals with conductive polymers, or plastics. It is also called 'organic' electronics because the molecules in the polymer are carbon-based, like the molecules of living things. This is in contrast to traditional electronics, which relies on inorganic conductors such as copper or silicon. Technically, in addition to organic charge-transfer complexes, electrically conductive polymers are mainly derivatives of polyacetylene black (the 'simplest melanin'). Examples include PA (more specifically, iodine-doped trans-polyacetylene); polyaniline (PANI), when doped with a protonic acid; and poly(dioctyl-bithiophene) (PDOT). Conduction mechanisms involve resonance stabilization and delocalization of pi electrons along entire polymer backbones, as well as mobility gaps, tunneling, and phonon-assisted hopping.

Conductive polymers are lighter, more flexible, and less expensive than inorganic conductors. This makes them a desirable alternative in many applications and creates the possibility of new applications that would be impossible using copper or silicon, such as smart windows and electronic paper. Conductive polymers are expected to play an important role in the emerging science of molecular computing. In general, organic conductive polymers have higher resistance than inorganic conductors and therefore conduct electricity poorly and inefficiently. Researchers are currently exploring ways of 'doping' organic semiconductors, like melanin, with relatively small amounts of conductive metals to boost conductivity. However, for many applications, inorganic conductors will remain the only viable option.

Ground bounce

In electronic engineering, ground bounce is a phenomenon associated with transistor switching where the gate voltage can appear to be less than the local ground potential, causing unstable operation of a logic gate. Ground bounce is usually seen on high-density VLSI where insufficient precautions have been taken to supply a logic gate with a sufficiently low-resistance connection (or sufficiently high capacitance) to ground. In this phenomenon, when the gate is turned on, enough current flows through the emitter-collector circuit that the silicon in the immediate vicinity of the emitter is pulled high, sometimes by several volts, thus raising the local ground, as perceived by the transistor, to a value significantly above true ground. Relative to this local ground, the gate voltage can go negative, thus shutting off the transistor. As the excess local charge dissipates, the transistor turns back on, possibly causing a repeat of the phenomenon, sometimes up to a half-dozen bounces.

Ground bounce is one of the leading causes of 'hung' or metastable gates in modern digital circuit design. It happens because ground bounce effectively puts the input of a flip-flop at a voltage level that is neither a one nor a zero at clock time, or causes untoward effects in the clock itself. A similar phenomenon may be seen on the collector side, called VCC sag, where VCC is pulled unnaturally low.


Lenses of Liquid

Fluid droplets could replace plastic lenses in cell-phone cameras, banishing blurry photos.

We don't expect much from a cell-phone camera. For one thing, only a handful of camera phones have a lens system capable of automatically focusing on objects at different distances -- which is why so many snapshots come out fuzzy.

But there may be a solution to the problem of camera phone focus -- and one that could find uses in other devices as well. Saman Dharmatilleke, Isabel Rodriguez, and colleagues at the Institute of Materials Research and Engineering in Singapore have proposed replacing the stationary plastic lens in most camera phones with a drop of liquid, such as water, that could be auto-focused by varying the amount of pressure applied to the drop. The team's lens has no moving parts, making it rugged, and it uses only minimal electricity, so it would not drain a cell-phone battery.

Additionally, the optical properties of liquids can be better than standard lens material. 'Water is more transparent to light than glass or plastic,' Rodriguez says. 'Water cannot be scratched and, in principle, is defect free.'

The technology, which appeared online in the January 26 issue of Applied Physics Letters, is based on the fact that a drop of a liquid with a high surface tension has a natural curvature similar to that of a conventional lens. When the drop is placed in a small well, and pressure is applied to it, the curvature of the drop alters; more pressure increases the curvature, and less flattens out the drop. As the curvature changes, so does the lens's focal length, allowing a clear image to be captured from various distances. In most cameras, the auto-focus feature mechanically moves the solid lens forward or back in order to adjust focal length. But in a liquid lens camera, the droplet stays put and only its curvature changes.
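The curvature-to-focal-length relationship can be illustrated with the thin-lens lensmaker's equation for a plano-convex droplet, f = R / (n - 1), where R is the drop's radius of curvature and n its refractive index. A minimal sketch (Python; the radii are illustrative, not the researchers' figures):

    n_water = 1.33                  # refractive index of water
    for R_mm in (4.0, 2.0, 1.0):    # more pressure -> tighter curvature
        f_mm = R_mm / (n_water - 1.0)
        print(f"R = {R_mm:.1f} mm  ->  f = {f_mm:.1f} mm")

Squeezing the drop shrinks R and therefore shortens the focal length, which is exactly the knob an auto-focus controller needs.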

The researchers tested varying sizes of drops, from 100 microns to 3 millimeters: all responded to pressure changes within milliseconds. The bigger the lens, of course, the more light it collects, and more light produces better pictures. But when a droplet becomes too large, it is more difficult to keep stable. 'Up to two millimeters the lens stays perfectly in the aperture by surface tension,' Rodriguez says. 'You need to shake it very hard for it to move out.' She suspects that lenses one to two millimeters in diameter are ideal for most miniaturized imaging systems.

Stein Kuiper, the Philips researcher who developed the electrowetting technique for his company's liquid lenses, sees advantages in using pressure instead. 'The electrical properties of the liquid are not relevant, which allows for a wider range of liquids, and thus optical and mechanical properties of the lens.' Additionally, Kuiper says, the voltage required to change the pressure within a liquid lens system may be less than is required in a system using electrowetting. For these reasons, he says, Philips has 'built up' intellectual property rights on both types of lenses.

Tablet PC

A tablet PC is a notebook- or slate-shaped mobile computer. Its touchscreen or digitizing tablet technology allows the user to operate the computer with a stylus or digital pen instead of a keyboard or mouse.

The form factor presents an alternate method of interacting with a computer, the main intent being to increase mobility and productivity. Tablet PCs are often used in places where normal notebooks are impractical or unwieldy, or do not provide the needed functionality.

The tablet PC is a culmination of advances in miniaturization of notebook hardware and improvements in integrated digitizers as methods of input. A digitizer is typically integrated with the screen, and correlates physical touch or digital pen interaction on the screen with the virtual information portrayed on it. A tablet's digitizer is an absolute pointing device rather than a relative pointing device like a mouse or touchpad: the user can interact with a target directly at the point where it appears on the screen.
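The absolute-versus-relative distinction can be made concrete with a small sketch (Python; the event values and screen size are hypothetical, not any real driver API):

    SCREEN = (1024, 768)

    def absolute_to_screen(x_norm, y_norm):
        """Digitizer: the device reports an absolute position, which maps
        one-to-one onto screen pixels; the pen tap lands exactly there."""
        return int(x_norm * SCREEN[0]), int(y_norm * SCREEN[1])

    def relative_update(cursor, dx, dy):
        """Mouse/touchpad: the device reports deltas, which the system
        integrates into a cursor position, clamped to the screen."""
        return (min(max(cursor[0] + dx, 0), SCREEN[0] - 1),
                min(max(cursor[1] + dy, 0), SCREEN[1] - 1))

    print(absolute_to_screen(0.5, 0.5))           # tap in the middle of the tablet
    print(relative_update((512, 384), -10, 4))    # nudge the cursor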


Light Pen

A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with the computer's CRT monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. A light pen can work with any CRT-based monitor, but not with LCD screens, projectors or other display devices.

A light pen is fairly simple to implement. The light pen works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X,Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen.
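As a rough illustration of this timing scheme, here is a minimal sketch (Python; all scan timings below are assumed, not taken from any particular display): the interrupt timestamp, measured from the vertical sync, is split into a line count (Y) and a position within the line (X).

    H_TOTAL = 64e-6            # assumed duration of one scan line, seconds
    H_VISIBLE = 52e-6          # assumed visible portion of each line
    WIDTH, HEIGHT = 640, 256   # assumed visible pixel grid

    def pen_position(t_since_vsync):
        """Convert the pen interrupt time (seconds after vertical sync)
        into an approximate pixel coordinate."""
        line, t_in_line = divmod(t_since_vsync, H_TOTAL)
        y = int(line)                            # rows scanned so far
        x = int(t_in_line / H_VISIBLE * WIDTH)   # position within the row
        return min(x, WIDTH - 1), min(y, HEIGHT - 1)

    print(pen_position(0.01))   # pen fired 10 ms after vertical sync

In real hardware this division comes for free: the scan position is simply latched from a counter or register at the moment of the interrupt.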

The light pen became moderately popular during the early 1980s. It was notable for its use in the Fairlight CMI and the BBC Micro. However, because the user had to hold his or her arm in front of the screen for long periods of time, the light pen fell out of use as a general-purpose input device.


Plasma Television

Television has been around since the late 19th century, and for the past 50 years it has held a prominent place in our living rooms. Since the invention of television, engineers have been striving to produce slim, flat displays that deliver images as good as or better than those of the bulky CRT, and scores of research teams all over the world have worked toward this goal. Plasma television has achieved it. Plasma and high definition are just two of the latest technologies to hit stores. The main contenders in the flat-panel race are the PDP (plasma display panel) and the flat CRT, along with the LCD and FED (field emission display). To appreciate what makes a plasma display different, it helps to understand how a conventional TV set works. Conventional TVs use a CRT to create the images we see on the screen. The cathode is a heated filament, like the one in a light bulb, housed inside a vacuum created in a tube of thick glass; that is what makes your TV so big and heavy. The newest entrant in the field of flat-panel displays is the plasma display. Plasma display panels contain no cathode ray tubes, and their pixels are activated differently.

Astrophotography

Astrophotography is a specialised type of photography that entails making photographs of astronomical objects in the night sky such as planets, stars, and deep sky objects such as star clusters and galaxies.

Astrophotography is used to reveal objects that are too faint to observe with the naked eye, as both film and digital cameras can accumulate and sum photons over long periods of time.

Astrophotography poses challenges that are distinct from normal photography, because most subjects are usually quite faint, and are often small in angular size. Effective astrophotography requires the use of many of the following techniques:

  • Mounting the camera at the focal point of a large telescope
  • Emulsions designed for sensitivity in low light
  • Very long exposure times and/or multiple exposures (often more than 20 per image); a frame-stacking sketch follows this list.
  • Tracking the subject to compensate for the rotation of the Earth during the exposure
  • Gas hypersensitizing of emulsions to make them more sensitive (not common anymore)
  • Use of filters to reduce background fogging due to light pollution of the night sky.
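To see why stacking multiple exposures helps, consider this minimal sketch (Python with NumPy, synthetic data): averaging N frames of the same scene reduces random noise by roughly the square root of N.

    import numpy as np

    rng = np.random.default_rng(0)
    true_scene = np.full((100, 100), 50.0)    # a faint, uniform target
    frames = [true_scene + rng.normal(0, 10, true_scene.shape)
              for _ in range(20)]             # 20 noisy exposures

    single = frames[0]
    stacked = np.mean(frames, axis=0)         # stack by averaging
    print("single-frame noise :", round(float(single.std()), 2))
    print("stacked-frame noise:", round(float(stacked.std()), 2))  # ~10/sqrt(20)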

Free Space Optics

Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money. So it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics is not a new idea. It has roots that go back over 30 years--to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers.

FSO first appeared in the 1960s, for military applications. At the end of the 1980s, it appeared as a commercial option, but technological restrictions prevented its success: short-reach transmission, low capacity, and severe alignment problems, as well as vulnerability to weather interference, were the major drawbacks at that time. Optical communication without wires, however, evolved. Today, FSO systems deliver 2.5 Gb/s rates with carrier-class availability, and metropolitan, access, and LAN networks are reaping the benefits. FSO's success can be measured by its market numbers: forecasts predicted it would reach a US$ 2.5 billion market by 2006.

The use of free-space optics is particularly attractive when one considers that the majority of customers do not have access to fiber, and that fiber installation is expensive and time-consuming. Moreover, right-of-way costs and difficulties in obtaining government licenses for new fiber installation are further problems that have turned FSO into the option of choice for short-reach applications.

FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range. Air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.

An FSO system for local loop access comprises several laser terminals, each one residing at a network node to create a single, point-to-point link; an optical mesh architecture; or a star topology, which is usually point-to-multipoint. These laser terminals, or nodes, are installed on top of customers' rooftops or inside a window to complete the last-mile connection. Signals are beamed to and from hubs or central nodes throughout a city or urban area. Each node requires a Line-Of-Sight (LOS) view of the hub.
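A back-of-the-envelope link budget shows why range and weather dominate FSO design. This is a simplified sketch (Python; every parameter value is an assumption, and the geometric-loss model assumes the spread beam is larger than the receive aperture):

    import math

    P_TX_DBM = 10.0          # transmit power (10 mW)
    DIVERGENCE_MRAD = 2.0    # full-angle beam divergence
    RX_APERTURE_M = 0.08     # receiver aperture diameter
    ATTEN_DB_PER_KM = 3.0    # clear-air atmospheric attenuation

    def received_power_dbm(range_km):
        """Transmit power minus geometric spreading and air losses."""
        beam_diam_m = DIVERGENCE_MRAD * 1e-3 * range_km * 1e3
        geo_loss_db = 20 * math.log10(beam_diam_m / RX_APERTURE_M)
        return P_TX_DBM - geo_loss_db - ATTEN_DB_PER_KM * range_km

    for d_km in (0.5, 1.0, 2.0):
        print(f"{d_km} km -> {received_power_dbm(d_km):.1f} dBm")

In fog, the attenuation term can jump from a few dB/km to well over 100 dB/km, which is one reason practical FSO links stay within metropolitan distances.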

Active pixel sensor

An active pixel sensor (APS) is an image sensor consisting of an integrated circuit containing an array of pixels, each containing a photodetector as well as three or more transistors. Since it can be produced by an ordinary CMOS process, APS is emerging as an inexpensive alternative to CCDs.
Architecture
Pixel
The standard CMOS APS pixel consists of three transistors as well as a photodetector.
The photodetector is usually a photodiode, though photogate detectors are used in some devices and can offer lower noise through the use of correlated double sampling. Light causes an accumulation, or integration of charge on the 'parasitic' capacitance of the photodiode, creating a voltage change related to the incident light.
One transistor, Mrst, acts as a switch to reset the device. When this transistor is turned on, the photodiode is effectively connected to the power supply, VRST, clearing all integrated charge. Since the reset transistor is n-type, the pixel operates in soft reset.
The second transistor, Msf, acts as a buffer (specifically, a source follower), an amplifier which allows the pixel voltage to be observed without removing the accumulated charge. Its power supply, VDD, is typically tied to the power supply of the reset transistor.
The third transistor, Msel, is the row-select transistor. It is a switch that allows a single row of the pixel array to be read by the read-out electronics.
Array
A typical two-dimensional array of pixels is organized into rows and columns. Pixels in a given row share reset lines, so that a whole row is reset at a time. The row select lines of each pixel in a row are tied together as well. The outputs of each pixel in any given column are tied together. Since only one row is selected at a given time, no competition for the output line occurs. Further amplifier circuitry is typically on a column basis.
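The reset-integrate-read sequence described above can be mimicked in a small behavioral sketch (Python with NumPy; this models no device physics, and the 2.0 V swing is an arbitrary assumption - only the name VRST and the row-at-a-time readout come from the text):

    import numpy as np

    VRST = 3.3                               # reset supply voltage
    rng = np.random.default_rng(1)
    light = rng.uniform(0, 1, (4, 4))        # incident light per pixel

    # Reset: every photodiode starts at the reset voltage.
    pixel_v = np.full((4, 4), VRST)

    # Integration: photocurrent discharges the parasitic capacitance,
    # so brighter pixels end up at lower voltages.
    pixel_v -= 2.0 * light

    # Readout: one row is selected at a time; that row's source
    # followers drive the shared column lines, so rows never collide.
    for row in range(pixel_v.shape[0]):
        column_out = pixel_v[row, :]         # row-select switches closed
        print(f"row {row}:", np.round(column_out, 2))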

Wibro

WiBro was designed and integrated by the Korean telecom industry as an answer to the speed limitations of mobile phone systems such as CDMA 1x and to add mobility to broadband Internet access such as ADSL or wireless LAN. The technology uses TDD for duplexing, OFDMA for multiple access, and a channel bandwidth of 8.75 MHz.

WiBro base stations provide a data rate of 30 to 50 Mbit/s and allow the use of portable Internet within a range of 1 to 5 km. The service maintains its data flow for devices moving at speeds of up to 120 km/h, compared with wireless LANs, which are usable only at walking speed, and mobile phone networks, which remain usable at up to about 250 km/h. These figures were actually higher than the range and bandwidth offered during tests of the technology in connection with the APEC summit in Busan in 2005. The main advantage the technology has over the WiMAX standard is its quality of service (QoS), which provides greater reliability for streaming video content and other loss-sensitive data. WiBro is quite demanding in its requirements, from spectrum use to equipment design; WiMAX leaves much of this up to the equipment provider while supplying enough information to confirm interoperability between designs.

The Korean government recognized the promise of this innovative technology in 2001, allocating 100 MHz of electromagnetic spectrum in the 2.3-2.4 GHz band. By the end of 2004, WiBro Phase 1 was standardized by the TTA of Korea, and in late 2005 the ITU reflected WiBro as IEEE 802.16e. In June 2006, two major Korean telecom companies, KT and SKT, began commercial operations in the country, starting with a charge of about US$30.

Since then, telecom operators around the world, including TI (Italy), TVA (Brazil), Omnivision (Venezuela), PORTUS (Croatia), and Arialink (Michigan), have announced plans to launch commercial services based on the technology.


Hydrophone

A hydrophone is a sound-to-electricity transducer for use in water or other liquids, analogous to a microphone for air. Note that a hydrophone can sometimes also serve as a projector (emitter), but not all hydrophones have this capability, and those that lack it may be destroyed if used in such a manner. The first device to be called a 'hydrophone' was developed when the technology matured; it used ultrasonic waves, which provided higher overall acoustic output as well as improved detection. The ultrasonic waves were produced by a mosaic of thin quartz crystals glued between two steel plates, with a resonant frequency of about 150 kHz. Contemporary hydrophones more often use barium titanate, a piezoelectric ceramic material that gives higher sensitivity than quartz. Hydrophones are an important part of the SONAR systems used to detect submarines by both surface vessels and other submarines. Large numbers of hydrophones were used in building fixed-location detection networks such as SOSUS.

Wearable computers

Wearable computing facilitates a new form of human-computer interaction based on a small body-worn computer system that is always ON and always ready and accessible. In this regard, the new computational framework differs from that of handheld devices, laptop computers, and personal digital assistants (PDAs).

The "always ready" capability leads to a new form of synergy between human and computer, characterized by long-term adaptation through constancy of the user interface. This new technology has a lot in store for you: you can do amazing things like typing a document while jogging, shooting video from horseback, or filming while riding a mountain bike over railroad ties. Quite remarkably, you can even recall scenes that have ceased to exist.

A wearable computer is distributed over the body, with the main unit situated in front of the user's eye. Wearable computers find a variety of applications, from providing the user with mediated augmented reality to helping people with poor eyesight. The MediWear and ENGwear are two models that highlight the applications of wearable computers. Some disadvantages do exist, however. With the introduction of "under-wearable computers" by Covert Systems, you can surely look ahead at the future of wearable computers in an optimistic way.

Tunable lasers

Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases, so will their importance. Wavelengths in an optical network are separated by multiples of 0.8 nanometers (sometimes referred to as 100 GHz spacing). Current commercial products can cover maybe four of these wavelengths at a time. While not the ideal solution, this still cuts the required number of spare lasers down. More advanced solutions aim to cover a larger number of wavelengths, which should cut the cost of spares even further.

The devices themselves are still semiconductor-based lasers that operate on similar principles to the basic non-tunable versions. Most designs incorporate some form of grating like those in a distributed feedback laser. These gratings can be altered in order to change the wavelengths they reflect in the laser cavity, usually by running electric current through them, thereby altering their refractive index. The tuning range of such devices can be as high as 40 nm, which would cover any of 50 different wavelengths in a system with 0.8 nm wavelength spacing. Technologies based on vertical-cavity surface-emitting lasers (VCSELs) incorporate movable cavity ends that change the length of the cavity and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges.
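The channel-count arithmetic above is worth making explicit; a quick check (Python, with the figures from the text):

    tuning_range_nm = 40.0      # tuning range quoted above
    spacing_nm = 0.8            # 100 GHz channel spacing
    print(round(tuning_range_nm / spacing_nm))   # -> 50 channels

A single such laser can thus stand in as a spare for any of 50 fixed-wavelength lasers, which is where the savings on spares come from.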

Neural Networks

Neural networks have the capacity to map the complex and highly nonlinear relationship between zone load levels and system topologies, which is required for feeder reconfiguration in distribution systems.

This study proposes strategies for reconfiguring feeders using artificial neural networks with this mapping ability. The artificial neural networks determine the appropriate system topology that reduces power loss as the load pattern varies. The control strategy can be easily obtained on the basis of the system topology provided by the networks.
The artificial neural networks determine the most appropriate system topology according to the load pattern, on the basis of knowledge acquired from the training set. This is in contrast to the repetitive process of transferring load and estimating power loss in conventional algorithms.

The ANNs are organized into two groups (a sketch of the second follows this section):
1) The first group estimates the proper load data for each zone.
2) The second group determines the appropriate system topology from the input load levels.

In addition, several programs, including a training-set builder, were developed for the design, training, and accuracy testing of the ANNs. This paper presents a strategy for feeder reconfiguration to reduce power loss using ANNs. The approach developed here differs fundamentally from the methods reviewed above in that flow solutions during the search process are not required. The training set for the ANNs consists of the optimal system topologies, corresponding to various load patterns, that minimize loss under the given conditions.
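As a rough illustration of the second group, here is a minimal sketch (Python with NumPy): a small feedforward network that maps zone load levels to one of a few candidate switch topologies. The weights below are random placeholders; in the scheme described above they would be learned from a training set of optimal topologies for many load patterns.

    import numpy as np

    rng = np.random.default_rng(42)
    W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)   # 3 zone loads -> 8 hidden units
    W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)   # 8 hidden units -> 4 topologies

    def predict_topology(zone_loads):
        """Forward pass: pick the topology with the highest output score."""
        h = np.tanh(zone_loads @ W1 + b1)
        scores = h @ W2 + b2
        return int(np.argmax(scores))

    print(predict_topology(np.array([0.9, 0.4, 0.7])))   # index of chosen topology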

Sun Spot

Sun SPOT (Sun Small Programmable Object Technology) is a wireless sensor network (WSN) mote developed by Sun Microsystems. The device is built upon the IEEE 802.15.4 standard. Unlike other available mote systems, the Sun SPOT is built on a Java 2 Micro Edition (J2ME) virtual machine.
Hardware
The completely assembled device should be able to fit in the palm of your hand.
Processing
180 MHz 32-bit ARM920T core with 512 KB RAM and 4 MB flash
2.4 GHz IEEE 802.15.4 radio with integrated antenna
USB interface
Sensor Board
2G/6G 3-axis accelerometer
Temperature sensor
Light sensor
8 tri-color LEDs
6 analog inputs
2 momentary switches
5 general purpose I/O pins and 4 high current output pins
Networking
The motes communicate using the IEEE 802.15.4 standard including the base-station approach to sensor networking. This implementation of 802.15.4 is not ZigBee-compliant.
Software
The device's use of Java device drivers is particularly remarkable, as Java is known for its hardware independence. The Sun SPOT runs a small J2ME virtual machine directly on the processor without an operating system.

MEMS in space

The satellite industry could experience its biggest revolution since it joined the ranks of commerce, thanks to some of the smallest machines in existence. Researchers are performing experiments designed to convince the aerospace industry that microelectromechanical systems (MEMS) could open the door to low-cost, high-reliability, mass-produced satellites.
MEMS combine conventional semiconductor electronics with beams, gears, levers, switches, accelerometers, diaphragms, microfluidic thrusters, and heat controllers, all of them microscopic in size. "We can do a whole new array of things with MEMS that cannot be done any other way," said Henry Helvajian, a senior scientist with Aerospace Corp., a nonprofit aerospace research and development organization in El Segundo, Calif.

Microelectromechanical Systems, or MEMS, are integrated micro devices or systems combining electrical and mechanical components. They are fabricated using integrated circuit (IC) batch processing techniques and can range in size from micrometers to millimeters. These systems can sense, control and actuate on the micro scale, and function individually or in arrays to generate effects on the macro scale.

MEMS is an enabling technology, and current applications include accelerometers; pressure, chemical, and flow sensors; micro-optics; optical scanners; and fluid pumps. A satellite generally consists of batteries, internal state sensors, communication systems, and control units. All of these can be made with MEMS, so that size and cost can be considerably reduced. Small satellites can also be constructed by stacking wafers covered with MEMS and electronic components. These satellites are called 1-kg-class satellites, or picosats. Having higher resistance to radiation and vibration than conventional devices, they can be mass-produced, thereby reducing cost, and can be used for various space applications.

TECHNOLOGY

Although MEMS devices are extremely small, MEMS technology is not about size. Instead, MEMS is a manufacturing technology: a new way of making complex electromechanical systems using batch-fabrication techniques similar to those used for integrated circuits, and of making these electromechanical elements along with electronics.

Material used
The material used for manufacturing MEMS is silicon. Silicon possesses excellent material properties, making it an attractive choice for many high-performance mechanical applications; for example, the strength-to-weight ratio of silicon is higher than that of many other engineering materials, allowing very high-bandwidth mechanical devices to be realized.

Components of MEMS
Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through the utilization of microfabrication technology. MEMS is truly an enabling technology, allowing the development of smart products by augmenting the computational ability of microelectronics with the perception and control capabilities of microsensors and microactuators.

Autonomic Computing

"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead.

This quote made by the preeminent mathematician Alfred Whitehead holds both the lock and the key to the next era of computing. It implies a threshold moment surpassed only after humans have been able to automate increasingly complex tasks in order to achieve forward momentum.
We are at just such a threshold right now in computing. The millions of businesses, billions of humans that compose them, and trillions of devices that they will depend upon all require the services of the IT industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled IT workers to manage all of the systems. It's a problem that is not going away, but will grow exponentially, just as our dependence on technology has.
The solution is to build computer systems that regulate themselves much in the same way our autonomic nervous system regulates and protects our bodies. This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and to drive computing into a new era that may better exploit its potential to support higher order thinking and decision making.
Immediate benefits will include reduced dependence on human intervention to maintain complex systems accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.

Short-term IT related benefits

  • Simplified user experience through a more responsive, real-time system.
  • Cost-savings - scale to use.
  • Scaled power, storage and costs that optimize usage across both hardware and software.
  • Full use of idle processing power, including home PCs, through networked systems.
  • Natural language queries allow deeper and more accurate returns.
  • Seamless access to multiple file types. Open standards will allow users to pull data from all potential sources by re-formatting on the fly.
  • Stability. High availability. High security system. Fewer system or network errors due to self-healing.

Long-term, Higher Order Benefits

  • Realize the vision of enablement by shifting available resources to higher-order business.
  • Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself. Constructing autonomic federated systems.
  • Achieving end-to-end service level management.
  • Collaboration and global problem solving. Distributed computing allows for more immediate sharing of information and processing power to use complex mathematics to solve problems.
  • Massive simulation - weather, medical, complex calculations like protein folding - that requires processors to run 24/7 for as long as a year at a time.

Quantum dot lasers

The infrastructure of the Information Age has to date relied upon advances in microelectronics to produce integrated circuits that continually become smaller, better, and less expensive. The emergence of photonics, where light rather than electricity is manipulated, is poised to further advance the Information Age. Central to the photonic revolution is the development of miniature light sources such as quantum dots (QDs).

Today, quantum dot manufacturing has been established to serve new datacom and telecom markets. Recent progress in microcavity physics, new materials, and fabrication technologies has enabled a new generation of high-performance QDs. This presentation reviews commercial QDs and their applications and discusses recent research, including new device structures such as composite resonators and photonic crystals. Semiconductor lasers are key components in a host of widely used technological products, including compact disk players and laser printers, and they will play critical roles in optical communication schemes.

The basis of laser operation depends on the creation of non-equilibrium populations of electrons and holes, and the coupling of electrons and holes to an optical field, which stimulates radiative emission. Other benefits of quantum dot active layers include further reduction in threshold current and an increase in differential gain - that is, more efficient laser operation.

Valvetronic

The Valvetronic system is the first variable valve timing system to offer continuously variable timing (on both intake and exhaust camshafts) along with continuously variable intake valve lift, from ~0 to 10 mm, on the intake camshaft only. Valvetronic-equipped engines are unique in that they rely on the amount of valve lift to throttle the engine rather than a butterfly valve in the intake tract. In other words, in normal driving, the 'gas pedal' controls the Valvetronic hardware rather than the throttle plate.

First introduced by BMW on the 316ti compact in 2001, Valvetronic has since been added to many of BMW's engines. The Valvetronic system is coupled with BMW's proven double-VANOS, to further enhance both power and efficiency across the engine speed range. Valvetronic will not be coupled to BMW's N53 and N54, 'High Precision Injection' (gasoline direct injection) technology due to lack of room in the cylinder head.

Cylinder heads with Valvetronic use an extra set of rocker arms, called intermediate arms (lift scalers), positioned between the valve stem and the camshaft. These intermediate arms are able to pivot on a central point, by means of an extra, electronically actuated camshaft. This movement alone, without any movement of the intake camshaft, can open or close the intake valves.

Because the intake valves now have the ability to move from fully closed to fully open positions, and everywhere in between, the primary means of engine load control is transferred from the throttle plate to the intake valvetrain. By eliminating the throttle plate's 'bottleneck' in the intake tract, pumping losses are reduced, and fuel economy and responsiveness are improved.


MAP sensor

A MAP sensor (manifold absolute pressure) is one of the sensors used in an internal combustion engine's electronic control system. Engines that use a MAP sensor are typically fuel injected. The manifold absolute pressure sensor provides instantaneous pressure information to the engine's electronic control unit (ECU). This is necessary to calculate air density and determine the engine's air mass flow rate, which in turn is used to calculate the appropriate fuel flow. (See stoichiometry.)

An engine control system that uses manifold absolute pressure to calculate air mass is using the speed-density method. Engine speed (RPM) and air temperature are also necessary to complete the speed-density calculation. Not all fuel-injected engines use a MAP sensor to infer mass air flow; some use a MAF (mass air flow) sensor.
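To make the speed-density calculation concrete, here is a minimal sketch (Python; the engine parameters and the 85% volumetric efficiency are illustrative assumptions, not values from the text): air density follows from the ideal gas law, and a four-stroke engine sweeps its displacement once every two revolutions.

    R_AIR = 287.05          # J/(kg*K), specific gas constant for dry air

    def air_mass_flow_kg_s(map_kpa, rpm, displacement_l, iat_c, ve=0.85):
        """Speed-density: ideal-gas air density times the volume the
        engine inducts per second, scaled by volumetric efficiency."""
        rho = (map_kpa * 1000.0) / (R_AIR * (iat_c + 273.15))   # kg/m^3
        vol_per_s = (displacement_l / 1000.0) * (rpm / 60.0) / 2.0
        return rho * vol_per_s * ve

    # e.g. a 2.0 L engine at 3000 rpm, 80 kPa manifold pressure, 30 C air:
    print(round(air_mass_flow_kg_s(80.0, 3000, 2.0, 30.0), 4), "kg/s")

The ECU would multiply this air mass flow by the target fuel/air ratio to obtain the appropriate fuel flow.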


BlueTec

BlueTec is DaimlerChrysler's name for its two nitrogen oxide (NOx) reducing systems for use in its Diesel automobile engines. One is a urea-based catalyst system called AdBlue; the other, called DeNOx, uses an oxidizing catalytic converter and particulate filter combined with other NOx-reducing systems. Both systems were designed to slash emissions further than ever before. DaimlerChrysler introduced the systems in the Mercedes-Benz E-Class (using the 'DeNOx' system) and GL-Class (using 'AdBlue') at the 2006 North American International Auto Show as the E 320 and GL 320 Bluetec. This system makes these vehicles 45-state and 50-state legal respectively in the United States, and is expected to meet all emissions regulations through 2009. It also makes DaimlerChrysler the only car manufacturer in the US committed to selling diesel models in the 2007 model year.


Stratified charge engine

The stratified charge engine is a type of internal-combustion engine, similar in some ways to the Diesel cycle, but running on normal gasoline. The name refers to the layering of the fuel/air mixture, the 'charge', inside the cylinder.

In a traditional Otto cycle engine the fuel and air are mixed outside the cylinder and are drawn into it during the intake stroke. The air/fuel ratio is kept very close to stoichiometric, which is defined as the exact amount of air necessary for a complete combustion of the fuel. This mixture is easily ignited and burns smoothly.

The problem with this design is that after the combustion process is complete, the resulting exhaust stream contains a considerable amount of free single atoms of oxygen and nitrogen, the result of the heat of combustion splitting the O2 and N2 molecules in the air. These will readily react with each other to create NOx, a pollutant. A catalytic converter in the exhaust system re-combines the NOx back into O2 and N2 in modern vehicles.

A Diesel engine, on the other hand, injects the fuel into the cylinder directly. This has the advantage of avoiding premature spontaneous combustion—a problem known as detonation or ping that plagues Otto cycle engines—and allows the Diesel to run at much higher compression ratios. This leads to a more fuel-efficient engine. That is why they are commonly found in applications where they are being run for long periods of time, such as in trucks.


Configware

Reconfigurable and coarse-grained reconfigurable platforms, such as FPGAs (field-programmable gate arrays) and reconfigurable datapath arrays (rDPAs), are commonly termed morphware. The program source for morphware is called configware (configuration ware). The concept was introduced in the mid-1990s by Prof. Reiner Hartenstein at TU Kaiserslautern. Configware is considered the counterpart of software but differs in its method of programming: software, being instruction-based, follows a procedural style of programming, while configware follows a structural one.

Configware code files, parts of which are unique to specific producers, represent the bit-level configuration files for an FPGA (field-programmable gate array) or an rDPA (morphware). They specify the configuration of the logic blocks internal to the device, the interconnect between those blocks, and the external I/O. Configuration-management software manages these configuration code files. Once configuration is complete and the resources are available, data scheduling is initiated using flowware.

Today, configware compilers and other configware application development support tools run as software on von-Neumann-style processors, though this is likely to change for many reasons. Emerging operating systems for configware (CW-OS) have an edge over operating systems for software (SW-OS) in managing multitasking and other administrative jobs on reconfigurable platforms. A CW-OS also has the upper hand in reconfigurable systems where parts of the reconfigurable resources are in execution mode while other parts are in configuration mode, and it achieves flexibility and resource savings by swapping configware code modules.



ZFS

Today's file systems, which system administrators find to be perpetually on the verge of data corruption and extremely difficult to manage due to their slow execution, have allowed ZFS to emerge as one of the most powerful file systems. Used in Sun's Solaris 10 Operating System (Solaris OS), ZFS has an edge over other file systems through its unique features:

  • It cuts administrative difficulties by 80 percent by automating and combining complicated storage administration concepts.
  • It ensures the integrity and safety of all data with 64-bit checksums that can detect and correct silent data corruption (a checksum sketch follows this list).
  • It offers greater scalability, providing 16 billion times the storage of 32- or 64-bit systems.
  • It achieves high performance gains by using a transactional object model that removes most of the traditional constraints on the order of issuing I/Os.
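To illustrate the checksum idea, here is a minimal sketch (Python). It is not ZFS's on-disk format: ZFS keeps each block's checksum in its parent block pointer, separate from the data, and the text above mentions 64-bit checksums; this sketch simply uses SHA-256 from the standard library.

    import hashlib

    def write_block(data: bytes):
        """Store a block together with the checksum its parent keeps."""
        return data, hashlib.sha256(data).digest()

    def read_block(data: bytes, expected: bytes) -> bytes:
        """Verify on every read; a mismatch reveals silent corruption."""
        if hashlib.sha256(data).digest() != expected:
            raise IOError("silent data corruption detected")
        return data

    block, chk = write_block(b"important records")
    read_block(block, chk)                      # passes verification
    try:
        read_block(b"important recorbs", chk)   # one corrupted byte
    except IOError as err:
        print("read failed:", err)

On redundant storage, a failed check triggers self-healing: the block is re-read from a good replica and the bad copy is rewritten.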

SALT

SALT (Speech Application Language Tags)

SALT stands for Speech Application Language Tags. It consists of a small set of XML elements, with associated attributes and DOM object properties, events, and methods, that apply a speech interface to web pages. SALT allows applications to run on a wide variety of devices and to accept input through different methods.

The main design principles of SALT include reuse of existing standards for grammars and speech output, and separation of the speech interface from business logic and data. SALT is designed to run inside different Web execution environments, so it does not have a predefined execution model; instead, it uses an event-wiring model.

It contains a set of tags for inputting data as well as for storing and manipulating that data. The main elements of a SALT document are <prompt>, <listen>, and <dtmf>. Using these elements, we can specify grammars for inputting data, inspect the results of recognition, copy those results where they are needed, and provide the output the application requires. The architecture of SALT contains mainly four components.

FREENET

FREENET - A Distributed and Anonymous
Information Storage and Retrieval System

Networked computer systems are rapidly growing in importance as the medium of choice for the storage and exchange of information. However, current systems afford little privacy to their users, and typically store any given data item in only one or a few fixed places, creating a central point of failure. Because of a continued desire among individuals to protect the privacy of their authorship or readership of various types of sensitive information, and the undesirability of central points of failure which can be attacked by opponents wishing to remove data from the system or simply overloaded by too much interest, systems offering greater security and reliability are needed.

Freenet is being developed as a distributed information storage and retrieval system designed to address these concerns of privacy and availability. The system operates as a location-independent distributed file system across many individual computers that allow files to be inserted, stored, and requested anonymously. There are five main design goals:

  • Anonymity for both producers and consumers of information
  • Deniability for storers of information
  • Resistance to attempts by third parties to deny access to information
  • Efficient dynamic storage and routing of information
  • Decentralization of all network functions

The system is designed to respond adaptively to usage patterns, transparently moving, replicating, and deleting files as necessary to provide efficient service without resorting to broadcast searches or centralized location indexes. It is not intended to guarantee permanent file storage, although it is hoped that a sufficient number of nodes will join with enough storage capacity that most files will be able to remain indefinitely. In addition, the system operates at the application layer and assumes the existence of a secure transport layer, although it is transport-independent. It does not seek to provide anonymity for general network usage, only for Freenet file transactions.

Freenet is currently being developed as a free software project on http://sourceforge.net, and a preliminary implementation can be downloaded from http://www.freenetproject.org. It grew out of work originally done by the first author at the University of Edinburgh.

TigerSHARC Processor

The TigerSHARC processor is the newest and most powerful member of the SHARC family, incorporating mechanisms such as SIMD, VLIW, and short-vector memory access in a single processor. This is the first time all of these techniques have been combined in a real-time processor.
The TigerSHARC DSP is an ultra-high-performance static superscalar architecture optimized for telecommunications infrastructure and other computationally demanding applications. This unique architecture combines elements of RISC, VLIW, and standard DSP processors to provide native support for 8-, 16-, and 32-bit fixed-point as well as floating-point data types on a single chip.
Large on-chip memory, extremely high internal and external bandwidths, and dual compute blocks provide the capabilities needed to handle a vast array of computationally demanding signal processing tasks.

Globe valves


Globe valves are named for their spherical body shape. The two halves of the valve body are separated by a baffle with a disc in the center. Globe valves operate by screw action of the handwheel. They are used for applications requiring throttling and frequent operation. Since the baffle restricts flow, they're not recommended where full, unobstructed flow is required.
A bonnet provides leakproof closure for the valve body. Globe valves may have a screw-in, union, or bolted bonnet. Screw-in bonnet is the simplest bonnet, offering a durable, pressure-tight seal. Union bonnet is suitable for applications requiring frequent inspection or cleaning. It also gives the body added strength. Bolted bonnet is used for larger or higher pressure applications.
Many globe valves have a class rating that corresponds to the pressure specifications of ANSI B16.34. Other types of valves are often called globe-style valves because of the shape of the body or the way the disk closes; typical swing check valves, for example, could be called globe type.

Butterfly valve


A butterfly valve is a type of flow control device used to start or stop the flow of fluid through a section of pipe. The valve is similar in operation to a ball valve. A flat circular plate is positioned in the center of the pipe, with a rod through it connected to a handle on the outside of the valve. Rotating the handle turns the plate either parallel or perpendicular to the flow, opening or shutting off the flow. It is a very robust and reliable design. However, unlike the ball valve, the plate does not rotate out of the flow of water, so a pressure drop is induced in the flow.
There are three types of butterfly valve:
1. Resilient butterfly valve, which has a flexible rubber seat. Working pressure up to 1.6 MPa.
2. High-performance butterfly valve, which is usually double-eccentric in design. Working pressure up to 5.0 MPa.
3. Tricentric butterfly valve, which usually has a metal-seated design. Working pressure up to 10.0 MPa.
Butterfly valves are also commonly used in conjunction with carburetors to control the flow of air through the intake manifold, and hence the flow of fuel and air into an internal combustion engine. The butterfly valve in this circumstance is called a throttle, as it is 'throttling' the engine's aspiration. It is controlled via a cable or electronics by the rightmost pedal in the driver's footwell (although adaptations for hand control do exist). This is why the accelerator pedal in some countries is called a throttle pedal.

Diesel Particulate Filter


A Diesel Particulate Filter, sometimes called a DPF, is a device designed to remove Diesel particulate matter, or soot, from the exhaust gas of a Diesel engine. Most filters are rated at 85% efficiency but often attain efficiencies of over 90%. A Diesel-powered vehicle with a filter installed will emit no visible smoke from its exhaust pipe.
In addition to collecting the particulate, a method must be designed to get rid of it. Some filters are single-use (disposable), while others are designed to burn off the accumulated particulate, either through the use of a catalyst (passive), or through an active technology such as a fuel burner that heats the filter to soot combustion temperatures, or through engine modifications (the engine is set to run a certain specific way when the filter load reaches a predetermined level, either to heat the exhaust gases or to produce high amounts of NO2, which will oxidize the particulate at relatively low temperatures). This procedure is known as 'filter regeneration.' Fuel sulfur interferes with many regeneration strategies, so jurisdictions interested in reducing particulate emissions are also passing regulations governing fuel sulfur levels.

CVCC

CVCC, which stands for Compound Vortex Controlled Combustion, is a trademark of the Honda Motor Company for a device used to reduce automotive emissions. This technology allowed Honda's cars to meet 1970s US emissions requirements without a catalytic converter, and first appeared on the 1975 ED1 engine. It is a form of stratified charge engine.
Honda CVCC engines have normal inlet and exhaust valves, plus a small auxiliary inlet valve that provides a relatively rich air/fuel mixture to a volume near the spark plug. The remaining air/fuel charge, drawn into the cylinder through the main inlet valve, is leaner than normal. The volume near the spark plug is contained by a small perforated metal plate. Upon ignition, flame fronts emerge from the perforations and ignite the remainder of the air/fuel charge. The rest of the engine cycle is as per a standard four-stroke engine.
This combination of a rich mixture near the spark plug, and a lean mixture in the cylinder allowed stable running, yet complete combustion of fuel, thus reducing CO (carbon monoxide) and hydrocarbon emissions.

E85


E85 is an alcohol fuel mixture of 85% ethanol and 15% gasoline, by volume. Ethanol derived from crops (bioethanol) is a biofuel.
E85 as a fuel is widely used in Sweden and is becoming increasingly common in the United States, mainly in the Midwest where corn is a major crop and is the primary source material for ethanol fuel production.
E85 is usually used in engines modified to accept higher concentrations of ethanol. Such flexible-fuel engines are designed to run on any mixture of gasoline and ethanol with up to 85% ethanol by volume. The primary differences from non-FFVs are the elimination of bare magnesium, aluminum, and rubber parts in the fuel system; the use of fuel pumps capable of operating with electrically conductive (ethanol) rather than non-conducting dielectric (gasoline) fuel; specially coated, wear-resistant engine parts; fuel injection control systems with a wider range of pulse widths (for injecting approximately 30% more fuel); stainless steel fuel lines (sometimes lined with plastic); stainless steel fuel tanks in place of terne-plated tanks; and, in some cases, acid-neutralizing motor oil. For vehicles with fuel-tank-mounted fuel pumps, measures to prevent arcing, as well as flame arrestors positioned in the tank's fill pipe, are also sometimes used.
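A rough sketch of why more fuel must be injected (Python; the air/fuel ratios and densities are approximate textbook values, not figures from the text): the stoichiometric air/fuel ratio of an 85/15 ethanol/gasoline blend is far lower than gasoline's, so the injectors must deliver more fuel for the same air mass.

    # Approximate stoichiometric air/fuel ratios (kg air per kg fuel)
    afr = {"ethanol": 9.0, "gasoline": 14.7}
    rho = {"ethanol": 0.789, "gasoline": 0.745}   # kg per litre

    vol = {"ethanol": 0.85, "gasoline": 0.15}     # E85, by volume
    mass = {f: vol[f] * rho[f] for f in vol}      # mass of each component
    total = sum(mass.values())

    # Mass-weighted blend stoichiometry
    afr_e85 = sum(mass[f] / total * afr[f] for f in afr)
    print(round(afr_e85, 1), "vs", afr["gasoline"], "for straight gasoline")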