White LED

White LED lighting has been used for years by the RV and boating crowd, running off direct current (DC) battery systems. It then became popular in off-the-grid houses powered by photovoltaic cells. Until recently, though, the price of an LED lighting system was too high for most residential use. With sales rising and prices steadily decreasing, it has been said that whoever makes the best white LED will open a goldmine.

It used to be that white LED light was possible only with "rainbow" groups of three LEDs -- red, green, and blue -- controlling the current to each to yield an overall white light. Now a blue indium gallium nitride chip with a phosphor coating is used to create the wavelength shift necessary to emit white light from a single diode. This process is much less expensive for the amount of light generated.

Each diode is about 1/4 inch across and consumes about ten milliamps (a tenth of a watt). Lamps come in various arrangements of diodes on a circuit board. Standard arrays are three, six, 12, or 18 diodes, or custom sizes -- factories can incorporate these into custom-built downlights, sconces and surface-mounted fixtures. With an inexpensive transformer, they run on standard 120-volt alternating current (AC), albeit with a slight (about 15% to 20%) power loss. They are also available as screw-in lamps to replace incandescents. A 1.2-watt white LED light cluster is as bright as a 20-watt incandescent lamp.
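The figures above can be combined into a rough efficiency sketch. This is only an illustration built from the numbers quoted in the text (about 0.1 W per diode, a 1.2 W cluster matching a 20 W incandescent, and 15-20% transformer loss on AC); real products vary.

```python
# Rough LED-array power estimate using the figures quoted above.
# Assumptions (from the text, not measured): ~0.1 W per diode,
# 1.2 W LED ~ 20 W incandescent, ~17.5% transformer loss on 120 V AC.

def array_power_watts(n_diodes, watts_per_diode=0.1, ac_loss=0.175):
    """DC power of an n-diode array, grossed up for transformer loss."""
    dc_power = n_diodes * watts_per_diode
    return dc_power / (1 - ac_loss)

def incandescent_equivalent_watts(led_watts, ratio=20 / 1.2):
    """Brightness-equivalent incandescent wattage (20 W ~ 1.2 W LED)."""
    return led_watts * ratio

for n in (3, 6, 12, 18):  # the standard array sizes mentioned above
    led_w = n * 0.1
    print(f"{n:2d} diodes: {led_w:.1f} W LED ~ "
          f"{incandescent_equivalent_watts(led_w):.0f} W incandescent, "
          f"{array_power_watts(n):.2f} W drawn from 120 V AC")
```

The gross-up term simply models the stated 15-20% AC conversion loss as a single mid-range factor.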

Welding Robots

Welding technology has found its way into virtually every branch of manufacturing; to name a few: bridges, ships, railroad equipment, building construction, boilers, pressure vessels, pipelines, automobiles, aircraft, launch vehicles, and nuclear power plants. In India especially, welding technology needs constant upgrading, particularly in the fields of industrial and power-generation boilers, high-voltage generation equipment and transformers, and the nuclear and aerospace industries. Computers have already entered the field of welding, and the situation today is that the welding engineer who has little or no computer skills will soon be hard-pressed to meet the welding challenges of our technological times. For the computer solution to be implemented, educational institutions cannot escape their share of the responsibility.

Automation and robotics are two closely related technologies. In an industrial context, we can define automation as a technology concerned with the use of mechanical, electronic and computer-based systems in the operation and control of production. Examples of this technology include transfer lines, mechanized assembly machines, feedback control systems, numerically controlled machine tools, and robots. Accordingly, robotics is a form of industrial automation. There are three broad classes of industrial automation: fixed automation, programmable automation, and flexible automation. Fixed automation is used when the volume of production is very high and it is therefore appropriate to design specialized equipment to process the product very efficiently and at high production rates.

Voice morphing

Voice morphing means the transition of one speech signal into another. Like image morphing, speech morphing aims to preserve the shared characteristics of the starting and final signals while generating a smooth transition between them. In image morphing, the in-between images all show one face smoothly changing its shape and texture until it turns into the target face; it is this feature that a speech morph should possess. One speech signal should smoothly change into another, keeping the shared characteristics of the starting and ending signals but smoothly changing the other properties.

The major properties of concern in a speech signal are its pitch and envelope information. These two reside in a convolved form in the signal, so some efficient method for extracting each of them is necessary. We have adopted an uncomplicated approach, namely cepstral analysis, to do this: pitch and formant information in each signal is extracted using the cepstral approach. The processing needed to obtain the morphed speech signal includes cross-fading of envelope information, dynamic time warping to match the major signal features (pitch), and signal re-estimation to convert the morphed representation back into an acoustic waveform.

Speech morphing can be achieved by transforming the signal's representation from the acoustic waveform, obtained by sampling the analog signal and with which many people are familiar, to another representation. To prepare the signal for the transformation, it is split into a number of 'frames' - sections of the waveform.
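The cepstral pitch extraction described above can be sketched in a few lines: the cepstrum (inverse FFT of the log magnitude spectrum) separates the slowly varying envelope (low quefrency) from the pitch harmonics, which show up as a peak at the quefrency equal to the pitch period. The synthetic frame and sample rate below are made-up test values, not from any real morphing system.

```python
# Minimal cepstral pitch estimation on a synthetic voiced frame.
import numpy as np

def cepstral_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate pitch (Hz) of one frame via the real cepstrum."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    # Search only quefrencies corresponding to plausible pitch periods.
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    peak = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / peak

fs = 8000
t = np.arange(2048) / fs
# Synthetic 120 Hz "voiced" frame: fundamental plus a few harmonics.
frame = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))
print(cepstral_pitch(frame, fs))  # expect roughly 120 Hz
```

A full morph would run this per frame on both signals, warp the pitch tracks with dynamic time warping, and cross-fade the envelopes, as outlined in the text.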

Touch Screen

A touch screen is a type of display screen with a touch-sensitive transparent panel covering the screen. Instead of using a pointing device such as a mouse or light pen, you can use your finger to point directly to objects on the screen. Although touch screens provide a natural interface for computer novices, they are unsatisfactory for many applications because the finger is a relatively large object: it is impossible to point accurately to small areas of the screen. In addition, most users find touch screens tiring on the arms after long use.

Touch screens are typically found on larger displays and in phones with integrated PDA features. Most are designed to work with either your finger or a special stylus. Tapping a specific point on the display activates the virtual button or feature displayed at that location. Some phones with this feature can also recognize handwriting written on the screen using a stylus, as a way to quickly input lengthy or complex information.

A touchscreen is an input device that allows users to operate a PC by simply touching the display screen. Touch input is suitable for a wide variety of computing applications, and a touchscreen can be used with most PC systems as easily as other input devices such as trackballs or touch pads. The screen is sensitive to pressure or touch and can detect the position of the point of touch. The design of touch screens is best suited to inputting simple, programmable choices, and the device is very user-friendly, since it 'talks' with the user as the user picks choices on the screen. Touch technology turns a CRT, flat-panel display or flat surface into a dynamic data-entry device that replaces both the keyboard and the mouse. In addition to eliminating these separate data-entry devices, touch offers an "intuitive" interface: in public kiosks, for example, users receive no more instruction than 'touch your selection.'
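Detecting "the position of the point of touch" usually involves a calibration step: the raw coordinates reported by the touch controller rarely line up with screen pixels, so drivers solve a small affine transform from a few known calibration taps. The sketch below uses made-up raw ADC readings and screen positions, not values from any real device.

```python
# Three-point touch-screen calibration sketch.
import numpy as np

def solve_calibration(raw_pts, screen_pts):
    """Least-squares affine map: screen = [raw_x, raw_y, 1] @ A."""
    raw = np.column_stack([raw_pts, np.ones(len(raw_pts))])
    coeffs, *_ = np.linalg.lstsq(raw, np.asarray(screen_pts, float),
                                 rcond=None)
    return coeffs  # 3x2 matrix

def raw_to_screen(coeffs, raw_x, raw_y):
    """Map one raw controller reading to pixel coordinates."""
    x, y = np.array([raw_x, raw_y, 1.0]) @ coeffs
    return round(x), round(y)

# Hypothetical calibration taps: raw ADC reading -> known pixel.
raw_pts    = [(200, 3800), (3900, 3750), (2050, 250)]
screen_pts = [(20, 20), (780, 20), (400, 580)]
A = solve_calibration(raw_pts, screen_pts)
print(raw_to_screen(A, 200, 3800))  # maps back to (20, 20)
```

Three non-collinear points determine the six affine parameters exactly; commercial drivers often use more points and least squares to average out noise.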

Surround Sound System

There are many surround systems available in the market. They use different technologies to produce the surround effect. Some surround sound is based on audio compression technology (for example Dolby Pro Logic® or Dolby Digital AC-3®) to encode and deliver a multi-channel soundtrack, and audio decompression technology to decode the soundtrack for delivery on a surround-sound five-speaker setup. Additionally, virtual surround sound systems use 3D audio technology to create the illusion of five speakers emanating from a regular set of stereo speakers, thereby enabling a surround-sound listening experience without the need for a five-speaker setup.

We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, which lasted from Edison's invention of the phonograph in 1877 until the 1950s. During those times the goal was simply to reproduce the timbre of the original sound; no attempts were made to reproduce directional properties or spatial realism. The stereo era was the Second Age. Based on inventions from the 1930s, it reached the public in the mid-'50s and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage and a set of acoustic cues that allow listeners to perceive a front-to-back dimension.

In two-channel stereo, this realism is based on fragile sonic cues. In most ordinary two-speaker stereo systems these subtle cues can easily be lost, causing the playback to sound flat and uninvolved. Multichannel surround systems, on the other hand, can provide this involving presence in a way that is robust, reliable and consistent. The purpose of this seminar is to explore the advances and technologies of surround sound in the consumer market. Human hearing is binaural (based on two ears), yet we have the ability to locate sound spatially.

Space Shuttle

The successful exploration of space requires a system that will reliably transport payloads into space and return them to Earth without subjecting them to an uncomfortable or hazardous environment. In other words, the spacecraft and its payloads have to be recovered safely back on Earth. Earlier launch vehicles were not reusable, so NASA developed a reusable space shuttle that could launch like a rocket but deliver payloads and land like an airplane. NASA is now planning a series of air-breathing planes that would replace the space shuttle.

A Brief History Of The Space Shuttle

Near the end of the Apollo space program, NASA officials were looking at the future of the American space program. At that time, the rockets used to place astronauts and equipment in outer space were one-shot, disposable rockets. What they needed was a reliable but less expensive rocket, perhaps one that was reusable. The idea of a reusable "space shuttle" that could launch like a rocket but deliver and land like an airplane was appealing and would be a great technical achievement. NASA began design, cost and engineering studies on a space shuttle, and many aerospace companies also explored the concepts. In 1972 NASA announced that it would develop a reusable space shuttle, or space transportation system (STS). NASA decided that the shuttle would consist of an orbiter attached to solid rocket boosters and an external fuel tank, because this design was considered safer and more cost-effective. At that time, spacecraft used ablative heat shields that would burn away as the spacecraft re-entered the Earth's atmosphere. To be reusable, however, a different strategy was needed: the designers came up with the idea of covering the space shuttle with many insulating ceramic tiles that could absorb the heat of re-entry without harming the astronauts.

Smart Fabrics

Based on advances in computer technology, especially in the fields of miniaturization, wireless technology and worldwide networking, the vision of wearable computers has emerged. We already use many portable electronic devices such as cell phones, notebooks and organizers. The next step in mobile computing could be to create truly wearable computers that are integrated into our daily clothing and always serve as our personal assistant. This paper explores this vision from a textile point of view. Which new functions could textiles have? Is a combination of textiles and electronics possible? What sort of intelligent clothing can be realized? Necessary steps of textile research and examples of current developments are presented, as well as future challenges.

Today, the interaction of human individuals with electronic devices demands specific user skills. In the future, improved user interfaces can largely alleviate this problem and push the exploitation of microelectronics considerably. In this context the concept of smart clothes promises greater user-friendliness, user empowerment, and more efficient services support. Wearable electronics respond to the acting individual in a more or less invisible way, serving individual needs and thus making life much easier. We believe that the cost level of important microelectronic functions is now sufficiently low, and that the enabling key technologies are mature enough, to exploit this vision to the benefit of society. In the following, we present various technology components that enable the integration of electronics into textiles.

Smart Card

A smart card, simply speaking, is a credit-card-sized plastic card with an embedded computer chip and some memory. You can put it to a wide variety of uses to help simplify your daily life: shopping, identification, telephone services and licenses are just a few of them. ISO 7816 defines the smart card standard; it details the physical, electrical, mechanical and application programming interfaces for the card. Smart card technology has its historical origin in the late '60s and '70s, when inventors in Germany, Japan and France filed the original patents. However, due to several factors, not least of which was immature semiconductor technology, most work on smart cards did not come to fruition until the 1980s. After that, major rollouts such as the French national Visa debit card served as eye-openers to the potential of smart cards. The industry is now growing at a tremendous rate, shipping more than one million cards per year since 1998.

Manufacturing Technology
Manufacturing a smart card involves much more than just sticking a chip on the plastic. The plastic used is usually PVC (polyvinyl chloride), but substitutes like ABS (acrylonitrile butadiene styrene), PC (polycarbonate) and PET are also used. The chip, also known as the micro-module, is very thin and is embedded into the plastic substrate or card. To do this, a cavity is formed or milled into the plastic card; then either a cold or hot glue process bonds the micro-module to the card.

The SIM (subscriber identity module) cards in cellphones are smart cards, and act as a repository for information like owner ID, cash balance, etc. More than 300 million of these cards are in use worldwide today. Small-dish satellite TV receivers also use smart cards for storing subscription information; there are over four million in the US alone and millions more in Europe and Asia. There are plenty of other applications that smart cards can be used for.
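The ISO 7816 standard mentioned above also defines how software talks to the chip: every command a reader sends is a small byte string called an APDU (ISO 7816-4), with a fixed four-byte header followed by optional length and data fields. The sketch below assembles one; the five-byte application ID is a made-up example, not a real AID.

```python
# ISO 7816-4 command APDU sketch: header CLA INS P1 P2,
# then optional Lc + data (command payload) and Le (expected reply).

def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    """Assemble a short command APDU as raw bytes."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data   # Lc field + command data
    if le is not None:
        apdu += bytes([le])                 # Le: expected response length
    return apdu

# SELECT (INS=0xA4) by name (P1=0x04), with a hypothetical 5-byte AID.
apdu = build_apdu(0x00, 0xA4, 0x04, 0x00, data=bytes.fromhex("A000000001"))
print(apdu.hex().upper())  # 00A4040005A000000001
```

Sending such an APDU over a reader library and parsing the two status bytes of the response is how applications like SIMs and pay-TV modules are actually driven.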


Robotics

Throughout history, technologies have globally transformed life as we know it. Disruptive technologies like fire, the printing press, oil, and television have dramatically changed both the planet we live on and mankind itself, most often in extraordinary and unpredictable ways. In pre-history these disruptions took place over hundreds of years; with the time compression induced by our rapidly advancing technology, they can now take place in less than a generation. We are currently at the edge of one such event. In ten years, robotic systems will fly our planes, grow our food, explore space, discover life-saving drugs, fight our wars, sweep our homes and deliver our babies. In the process, this robotics-driven disruptive event will create a new 200-billion-dollar global industry and change life as you now know it, forever. Just as my children cannot imagine a world without electricity, your children will never know a world without robots. Come take a bold look at the future and the opportunities for mechanical engineers that await there.

The Three Laws of Robotics are:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The European Space Agency (ESA) has programmes underway to place Satellites carrying optical terminals in GEO orbit within the next decade. The first is the ARTEMIS technology demonstration satellite which carries both microwave and SILEX (Semiconductor Laser Intro satellite Link Experiment) optical interorbit communications terminal. SILEX employs direct detection and GaAIAs diode laser technology; the optical antenna is a 25cm diameter reflecting telescope. The SILEX GEO terminal is capable of receiving data modulated on to an incoming laser beam at a bit rate of 50 Mbps and is equipped with a high power beacon for initial link acquisition together with a low divergence (and unmodulated) beam which is tracked by the communicating partner. ARTEMIS will be followed by the operational European data relay system (EDRS) which is planned to have data relay Satellites (DRS). These will also carry SILEX optical data relay terminals.Once these elements of Europe's space Infrastructure are in place, these will be a need for optical communications terminals on LEO satellites which are capable of transmitting data to the GEO terminals. A wide range of LEO space craft is expected to fly within the next decade including earth observation and science, manned and military reconnaissance system. The LEO terminal is referred to as a user terminal since it enables real time transfer of LEO instrument data back to the ground to a user access to the DRS s LEO instruments generate data over a range of bit rates extending of Mbps depending upon the function of the instrument.
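A back-of-the-envelope calculation shows why a 25 cm telescope suffices for an inter-orbit link: a diffraction-limited optical beam stays extremely narrow over tens of thousands of kilometres. The wavelength and link distance below are illustrative assumptions (GaAlAs diodes emit near 850 nm; a LEO-to-GEO path is roughly 45,000 km), not SILEX design figures.

```python
# Diffraction-limited beam spread from a small optical aperture.
# Full divergence to the first Airy null: theta ~ 2.44 * lambda / D.

def beam_divergence_rad(wavelength_m, aperture_m):
    """Approximate full-angle divergence of a diffraction-limited beam."""
    return 2.44 * wavelength_m / aperture_m

def spot_diameter_m(wavelength_m, aperture_m, range_m):
    """Beam footprint diameter after propagating range_m."""
    return beam_divergence_rad(wavelength_m, aperture_m) * range_m

wavelength = 850e-9   # m, typical GaAlAs laser diode (assumed)
aperture = 0.25       # m, SILEX telescope diameter (from the text)
link = 45_000e3       # m, rough LEO-to-GEO distance (assumed)
print(f"divergence: {beam_divergence_rad(wavelength, aperture)*1e6:.1f} urad")
print(f"spot at GEO: {spot_diameter_m(wavelength, aperture, link):.0f} m")
```

A footprint of a few hundred metres after 45,000 km is why such links need the beacon-and-tracking acquisition scheme described above: the beam must be pointed to microradian accuracy.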

Optical fiber Cable

Optical fiber (or "fiber optic") refers to the medium and the technology associated with the transmission of information as light pulses along a glass or plastic wire or fiber. Optical fiber carries much more information than conventional copper wire and is in general not subject to electromagnetic interference or the need to retransmit signals; most telephone-company long-distance lines are now optical fiber. Transmission on optical fiber does, however, require repeaters at distance intervals, and the glass fiber requires more protection within an outer cable than copper. For these reasons, and because the installation of any new wiring is labor-intensive, few communities yet have optical fiber wires or cables from the phone company's branch office to local customers (known as local loops).

Optical fiber consists of a core, cladding, and a protective outer coating, which guide light along the core by total internal reflection. The core and the lower-refractive-index cladding are typically made of high-quality silica glass, though they can both be made of plastic as well. An optical fiber can break if bent too sharply, and due to the microscopic precision required to align the fiber cores, connecting two optical fibers, whether by fusion splicing or mechanical splicing, requires special skills and interconnection technology.

The two main categories of optical fiber used in fiber-optic communications are multi-mode optical fiber and single-mode optical fiber. Multimode fiber has a larger core, allowing less precise, cheaper transmitters and receivers to connect to it, as well as cheaper connectors.
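The total internal reflection mentioned above works because the core's refractive index is slightly higher than the cladding's. The sketch below computes the critical angle and the numerical aperture for typical step-index silica values; the two indices are illustrative, not from any specific fiber datasheet.

```python
# Core/cladding geometry of a step-index fiber, in two formulas.
import math

def critical_angle_deg(n_core, n_clad):
    """Rays hitting the core/cladding boundary at more than this angle
    (measured from the normal) are totally internally reflected."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core, n_clad):
    """NA = sine of the maximum acceptance half-angle at the fiber face."""
    return math.sqrt(n_core**2 - n_clad**2)

n_core, n_clad = 1.468, 1.462   # illustrative silica values
print(f"critical angle: {critical_angle_deg(n_core, n_clad):.1f} deg")
print(f"numerical aperture: {numerical_aperture(n_core, n_clad):.3f}")
```

The tiny index difference gives a critical angle close to 90°, meaning only rays travelling nearly parallel to the axis are guided; this is also why aligning two fiber cores for splicing demands such precision.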


Nanotechnology

Nanotechnology is defined as the fabrication of devices with atomic or molecular scale precision. Devices with minimum feature sizes of less than 100 nanometers (nm) are considered to be products of nanotechnology. A nanometer is one billionth of a meter (10^-9 m) and is the unit of length generally most appropriate for describing the size of single molecules. The nanoscale marks the nebulous boundary between the classical and quantum mechanical worlds; thus, the realization of nanotechnology promises to bring revolutionary capabilities. Fabrication of nanomachines, nanoelectronics and other nanodevices will undoubtedly solve a great many of the problems faced by mankind today.

Nanotechnology is currently in a very infantile stage. However, we now have the ability to organize matter on the atomic scale, and there are already numerous products available as a direct result of our rapidly increasing ability to fabricate and characterize feature sizes of less than 100 nm. Mirrors that don't fog, biomimetic paint with a contact angle near 180°, gene chips and fat-soluble vitamins in aqueous beverages are some of the first manifestations of nanotechnology. However, imminent breakthroughs in computer science and medicine will be where the real potential of nanotechnology is first achieved.

Nanoscience is an interdisciplinary field that seeks to bring about mature nanotechnology. Focusing on the nanoscale intersection of fields such as physics, biology, engineering, chemistry and computer science, nanoscience is rapidly expanding. Nanotechnology centers are popping up around the world as more funding is provided and the nanotechnology market share increases. The rapid progress is apparent from the increasing appearance of the prefix "nano" in scientific journals and the news. Thus, as we increase our ability to fabricate computer chips with smaller features and improve our ability to cure disease at the molecular level, nanotechnology is here.

Laser Communication System

Lasers have been considered for space communications since their realization in 1960. Specific advancements were needed in component performance and system engineering, particularly for space-qualified hardware, but advances in system architecture, data formatting and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach for inter-satellite link applications. Information transfer is driving the requirements to higher data rates, and the field has seen an explosion of laser cross-link technology, global development activity, and increasing hardware and design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode as a directly modulated laser source. This advance offers the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight.

This feature substantially reduces blockage of the fields of view of the most desirable areas on satellites. The smaller antennas, with diameters typically less than 30 centimeters, create less momentum disturbance to any sensitive satellite sensors, and fewer on-board consumables are required over the long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference-free and secure operation.
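The size advantage claimed above comes straight from physics: beam divergence scales with wavelength divided by aperture, so for the same antenna size an optical beam is orders of magnitude narrower than an RF one. The 850 nm laser wavelength and 32 GHz Ka-band frequency below are illustrative assumptions for the comparison.

```python
# Order-of-magnitude comparison: optical vs RF beam divergence
# for the same 30 cm antenna (divergence ~ wavelength / aperture).

def divergence_rad(wavelength_m, aperture_m):
    """Rough divergence estimate; constant factors omitted."""
    return wavelength_m / aperture_m

optical = divergence_rad(850e-9, 0.30)      # 30 cm optical antenna
rf      = divergence_rad(3e8 / 32e9, 0.30)  # same-size Ka-band dish
print(f"optical beam is ~{rf / optical:,.0f}x narrower")
```

That factor of roughly ten thousand is why a sub-30 cm optical terminal can match the link performance of a far larger and heavier RF dish, at the price of the precision pointing the text alludes to.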


Heliodisplay

The Heliodisplay is an interactive planar display. Though the image it projects appears much like a hologram, its inventors claim that it doesn't use holographic technology; it does use rear projection (not lasers, as originally reported) to project its image. It does not require any screen or substrate other than air to project its image, but it does eject a water-based vapour curtain for the image to be projected upon. The curtain is produced using ultrasonic technology similar to that used in foggers, and comprises a number of columns of fog; this curtain is sandwiched between curtains of clean air to create an acceptable screen. Inside the device, air moves through a dozen metal plates and then comes out again (the exact details of its workings are unknown, pending patent applications). It works as a kind of floating touch screen, making it possible to manipulate images projected in air with your fingers, and it can be connected to a computer using a standard VGA connection. It can also connect to a TV or DVD player by a standard RGB video cable. Due to the turbulent nature of the curtain, though, it is not currently suitable as a workstation display. The Heliodisplay is an invention by Chad Dyner, who built it as a 5-inch prototype in his apartment before founding IO2 Technologies to further develop the product.

Guided missiles

Presently there are many types of guided missiles. They can be broadly classified on the basis of features such as type of target, method of launching, range, launch platform, propulsion, guidance and type of trajectory. On the basis of target, they are classified as antitank/antiarmour, antipersonnel, antiaircraft, antiship/antisubmarine, antisatellite or antimissile.

Another classification of missiles depends upon the method of launching: surface-to-surface missiles (SSM), surface-to-air missiles (SAM), air-to-air missiles (AAM) and air-to-surface missiles (ASM). Surface-to-surface missiles are commonly ground-to-ground ones, although they may also be launched from one ship against another; underwater weapons launched from a submarine also come under this classification. Surface-to-air missiles are an essential component of modern air defence systems, along with the antiaircraft guns used against hostile aircraft. Air-to-air missiles are for airborne battle among fighter or bomber aircraft; they are usually mounted under the wings of the aircraft and fired against the target, with computer and radar networks controlling them.

On the basis of range, missiles can be broadly classified as short-range, medium-range, intermediate-range and long-range missiles. This classification is mainly used in the case of SSMs: those which travel a distance of about 50 to 100 km are designated short-range missiles, those with a range of 100 to 1,500 km are called medium-range missiles, those with a range of up to 5,000 km are said to be intermediate-range missiles, and missiles which travel a distance of 12,000 km are called long-range missiles.

On the basis of launch platform, missiles can be termed shoulder-fired, land/mobile-fired, aircraft/helicopter-borne, or ship/submarine-launched.
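The range classification above maps directly onto a small lookup. The boundary values follow the text (up to 100 km short, up to 1,500 km medium, up to 5,000 km intermediate, beyond that long range); this is just the text's scheme restated, not an official taxonomy.

```python
# Range-based classification of SSMs, per the bands quoted above.

def classify_by_range(range_km):
    """Return the range class for a given maximum range in km."""
    if range_km <= 100:
        return "short range"
    if range_km <= 1500:
        return "medium range"
    if range_km <= 5000:
        return "intermediate range"
    return "long range"

for r in (80, 1200, 4000, 12000):
    print(f"{r:>6} km -> {classify_by_range(r)}")
```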

Free Space Optics

Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money, so it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics (FSO) is not a new idea. It has roots that go back over 30 years, to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers.

FSO first appeared in the '60s for military applications. At the end of the '80s it appeared as a commercial option, but technological restrictions prevented it from succeeding: low-reach transmission, low capacity, severe alignment problems and vulnerability to weather interference were the major drawbacks at that time. Optical communication without wire has evolved, however. Today, FSO systems guarantee 2.5 Gb/s rates with carrier-class availability, and metropolitan, access and LAN networks are reaping the benefits. The use of free-space optics is particularly interesting when we consider that the majority of customers do not possess access to fiber, while fiber installation is expensive and demands a long time. Moreover, right-of-way costs and difficulties in obtaining government licenses for new fiber installation are further problems that have turned FSO into the option of choice for short-reach applications. FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range; air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.

Fractal Robot

Fractal Robot is a science that promises to revolutionize technology in a way never witnessed before. Fractal Robots are objects made from cubic bricks that can be controlled by a computer to change shape and to reconfigure themselves into objects of different shapes. These cubic motorized bricks can be programmed to move and shuffle themselves to change shape, making objects like a house potentially in a few seconds. It is exactly like kids playing with Lego bricks, making a toy house or a toy bridge by snapping the bricks together, except that here a computer does the snapping.

This technology has the potential to penetrate every field of human work, such as construction, medicine and research. Fractal robots can enable buildings to be built within a day, help perform sensitive medical operations and assist in laboratory experiments. Fractal Robots also have built-in self-repair, which means they continue to work without human intervention, and the technology brings the manufacturing price down dramatically.

A Fractal Robot resembles itself: wherever you look, any part of its body will be similar to the whole object. The robot can be animated around its joints in a uniform manner. Such robots can be straightforward geometric patterns or can look more like natural structures such as plants; this patented product, however, has a cubical structure. A fractal cube can be of any size; the smallest expected size is between 1,000 and 10,000 atoms wide. These cubes are embedded with computer chips that control their movement.


E-Intelligence

E-intelligence systems provide internal business users, trading partners, and corporate clients rapid and easy access to the e-business information, applications, and services they need in order to compete effectively and satisfy customer needs. They offer many business benefits to organizations in exploiting the power of the Internet. For example, e-intelligence systems give the organization the ability to:

1. Integrate e-business operations into the traditional business environment, giving business users a complete view of all corporate business operations and information.
2. Help business users make informed decisions based on accurate and consistent e-business information that is collected and integrated from e-business applications. This business information helps business users optimize Web-based offerings (products offered, pricing and promotions, service and support, and so on) to match marketplace requirements and analyze business performance with respect to competitors and the organization's business-performance objectives.
3. Assist e-business applications in profiling and segmenting e-business customers. Based on this information, businesses can personalize their Web pages and the products and services they offer.
4. Extend the business intelligence environment outside the corporate firewall, helping the organization share internal business information with trading partners. Sharing this information will let it optimize the product supply chain to match the demand for products sold through the Internet and minimize the costs of maintaining inventory.
5. Extend the business intelligence environment outside the corporate firewall to key corporate clients, giving them access to business information about their accounts. With this information, clients can analyze and tune their business relationships with other organizations, improving client service and satisfaction.
6. Link e-business applications with business intelligence and collaborative processing applications, allowing internal and external users to seamlessly move among different systems.

Direct to home (DTH)

Direct to home (DTH) television is a wireless system for delivering television programs directly to the viewer's house. In DTH television, the broadcast signals are transmitted from satellites orbiting the Earth to the viewer's house. Each satellite is located approximately 35,700 km above the Earth in geosynchronous orbit. These satellites receive signals from broadcast stations located on Earth and rebroadcast them to the Earth. The viewer's dish picks up the signal from the satellite and passes it on to the receiver located inside the viewer's house; the receiver processes the signal and passes it on to the television. DTH provides more than 200 television channels with excellent reception quality, along with teleshopping, fax and internet facilities, and is used in millions of homes across the United States, Europe and South East Asia.

Usually, broadcasting stations use a powerful antenna to transmit radio waves to the surrounding area, and viewers can pick up the signal with a much smaller antenna. The main limitation of broadcast television is range: the radio signals used to broadcast television shoot out from the broadcast antenna in a straight line, so in order to receive them, you have to be in the direct "line of sight" of the antenna. Small obstacles like trees or small buildings aren't a problem, but a big obstacle, such as the Earth itself, will reflect these waves. If the Earth were perfectly flat, you could pick up broadcast television thousands of miles from the source; but because the planet is curved, it eventually breaks the signal's line of sight. The other problem with broadcast television is that the signal is often distorted even in the viewing area. To get a perfectly clear signal like you find on cable, one has to be pretty close to the broadcast antenna, without too many obstacles in the way.
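The "approximately 35,700 km" figure is not arbitrary: a satellite whose orbital period matches one sidereal day stays fixed over one spot on the equator, and its altitude follows from Kepler's third law, r = (GM·T²/4π²)^(1/3). The sketch below works this out; the altitude is measured above the mean Earth radius, so it comes out slightly different from the equatorial figure usually quoted.

```python
# Derive the geostationary altitude from Kepler's third law.
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.1              # one sidereal day, s
R_EARTH = 6371e3         # mean Earth radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius
altitude_km = (r - R_EARTH) / 1000
print(f"geostationary altitude: {altitude_km:,.0f} km")  # ~35,800 km
```

Because the orbit must also lie in the equatorial plane, every DTH dish in a region can be permanently aimed at the same fixed point in the sky.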


A data logger (or datalogger) is an electronic instrument that records data over time or in relation to location. Increasingly, but not necessarily, data loggers are based on a digital processor (or computer). They may be small, battery-powered and portable, and range from general-purpose types suitable for a wide range of measurement applications to very specific devices for measuring in one environment only. It is common for general-purpose types to be programmable.
Standardisation of protocols and data formats is growing in the industry, and XML is increasingly being adopted for data exchange. The development of the Semantic Web is likely to accelerate this trend. A smart protocol, SDI-12, exists that allows some instrumentation to be connected to a variety of data loggers, though the standard has not gained much acceptance outside the environmental industry. SDI-12 also supports multi-drop instruments.

Some data-logging companies now also support the MODBUS standard. Traditionally used in industrial control, MODBUS is supported by many industrial instruments. Some data loggers utilize a flexible scripting environment to adapt themselves to various non-standard protocols.
Another multi-drop protocol that is now starting to become more widely used is based upon CANBUS (ISO 11898). This bus system was originally developed by Robert Bosch for the automotive industry. The protocol is well suited to higher-speed logging: the data is divided into small, individually addressed 64-bit packets of information with strict prioritisation. This standard from the automotive/machine area is now seeping into more traditional data-logging areas, and a number of newer players, as well as some of the more traditional ones, offer loggers supporting sensors with this communications bus.
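Whatever the wire protocol (SDI-12, MODBUS or CANBUS), the core job of a logger is the same: poll an instrument on a schedule and record timestamped readings. A minimal, protocol-agnostic sketch (the sensor callback and the CSV output format are illustrative assumptions, standing in for a real instrument driver):

```python
import csv
import io
import time

def log_readings(read_sensor, n_samples, interval_s, out):
    """Minimal data-logger loop: poll a sensor callback at a fixed
    interval and record timestamped readings as CSV rows."""
    writer = csv.writer(out)
    writer.writerow(["timestamp", "value"])
    for _ in range(n_samples):
        writer.writerow([time.time(), read_sensor()])
        time.sleep(interval_s)

# Stand-in for a real instrument driver speaking SDI-12, MODBUS, etc.
buf = io.StringIO()
log_readings(lambda: 21.5, n_samples=3, interval_s=0.01, out=buf)
print(buf.getvalue().splitlines()[0])
```

A real logger would swap the lambda for a driver that frames requests in the chosen protocol; the scheduling and storage logic is unchanged.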

CT Scanning

There are two main limitations of using conventional x-rays to examine internal structures of the body. First, the superimposition of three-dimensional information onto a single plane makes diagnosis confusing and often difficult. Second, the photographic film usually used for making radiographs has a limited dynamic range, so only objects with a large variation in x-ray absorption relative to their surroundings produce enough contrast on the film to be distinguished by the eye. Thus, while the details of bony structures can be seen, it is difficult to discern the shape and composition of soft-tissue organs accurately. CT uses special x-ray equipment to obtain image data from different angles around the body and then reconstructs a cross section of body tissues and organs; it can show several types of tissue (lung, bone, soft tissue and blood vessels) with great clarity. CT of the body is a patient-friendly exam that involves little radiation exposure.

Basic Principle
In CT scanning, the image is reconstructed from a large number of absorption profiles taken at regular angular intervals around a slice, each profile being made up from a parallel set of absorption values through the object. That is, CT also passes x-rays through the body of the patient, but the detection method is usually electronic in nature, and the data is converted from an analog signal to digital impulses in an A-D converter. This digital representation of the x-ray intensity is fed into a computer, which then reconstructs an image. The original method of tomography used an x-ray detector that translates linearly on a track across the x-ray beam; when the end of the scan is reached, the x-ray tube and the detector are rotated to a new angle and the linear motion is repeated. The latest generation of CT machines use a 'fan-beam' geometry with an array of detectors which simultaneously detect x-rays on a number of different paths through the patient.
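The reconstruction idea can be sketched numerically: take absorption profiles of a slice at different angles, then "smear" each profile back across the image plane (unfiltered backprojection). This toy version uses only two angles and a tiny synthetic phantom, both illustrative simplifications; real scanners use hundreds of angles and apply a ramp filter before backprojecting:

```python
import numpy as np

# Tiny phantom slice: a dense block (e.g. bone) in uniform soft tissue.
N = 8
phantom = np.zeros((N, N))
phantom[2:5, 2:5] = 1.0

# Two parallel-beam absorption profiles, 90 degrees apart
# (sums along rows and along columns of the slice).
profile_0 = phantom.sum(axis=0)    # vertical rays -> profile over columns
profile_90 = phantom.sum(axis=1)   # horizontal rays -> profile over rows

# Unfiltered backprojection: smear each profile back across the image.
recon = (np.tile(profile_0, (N, 1)) + np.tile(profile_90, (N, 1)).T) / 2

# Without a ramp filter the result is blurred, but it peaks at the block.
r, c = np.unravel_index(recon.argmax(), recon.shape)
print(r, c)
```

Even with just two profiles the dense region is localised; adding angles (and filtering) sharpens the blur into the crisp cross sections CT is known for.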

After more than a century of research and development, the internal combustion (IC) engine is nearing both perfection and obsolescence: engineers continue to explore the outer limits of IC efficiency and performance, but advancements in fuel economy and emissions have effectively stalled. While many IC vehicles meet Low Emissions Vehicle standards, these will give way to new, stricter government regulations in the very near future. With limited room for improvement, automobile manufacturers have begun full-scale development of alternative power vehicles. Still, manufacturers are loath to scrap a century of development and billions or possibly even trillions of dollars in IC infrastructure, especially for technologies with no history of commercial success. Thus, the ideal interim solution is to further optimize the overall efficiency of IC vehicles.

One potential solution to this fuel economy dilemma is the continuously variable transmission (CVT), an old idea that has only recently become a bastion of hope to automakers. CVTs could potentially allow IC vehicles to meet the first wave of new fuel regulations while development of hybrid electric and fuel cell vehicles continues. Rather than selecting one of four or five gears, a CVT constantly changes its gear ratio to optimize engine efficiency with a perfectly smooth torque-speed curve. This improves both gas mileage and acceleration compared to traditional transmissions.

The fundamental theory behind CVTs has undeniable potential, but lax fuel regulations and booming sales in recent years have given manufacturers a sense of complacency: if consumers are buying millions of cars with conventional transmissions, why spend billions to develop and manufacture CVTs?
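The advantage a CVT offers can be seen in a small numeric sketch: with discrete gears the engine is forced away from its most efficient speed at most road speeds, while a CVT simply solves for the exact ratio. All numbers below (sweet-spot RPM, gear ratios, wheel speed per km/h) are illustrative assumptions:

```python
BEST_RPM = 2200.0              # assumed peak-efficiency engine speed
WHEEL_RPM_PER_KMH = 10.0       # assumed wheel RPM per km/h (tire-dependent)
GEARS = [3.5, 2.1, 1.4, 1.0, 0.8]  # illustrative 5-speed overall ratios

def engine_rpm_fixed(speed_kmh):
    """Engine speed with a conventional gearbox: pick the gear whose
    resulting engine speed lands closest to the sweet spot."""
    wheel_rpm = speed_kmh * WHEEL_RPM_PER_KMH
    return min((g * wheel_rpm for g in GEARS),
               key=lambda rpm: abs(rpm - BEST_RPM))

def cvt_ratio(speed_kmh):
    """A CVT solves for the exact ratio instead of choosing from a list."""
    return BEST_RPM / (speed_kmh * WHEEL_RPM_PER_KMH)

print(engine_rpm_fixed(100))      # nearest gear leaves the engine off-peak
print(round(cvt_ratio(100), 2))   # exact ratio holds the engine at 2200 RPM
```

At 100 km/h the best fixed gear still misses the sweet spot by 100 RPM; the CVT hits it exactly, and does so at every road speed in its ratio range.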

Cellular radio

Cellular mobile radio systems aim to provide high-mobility, wide-ranging, two-way wireless voice communications. These systems accomplish their task by integrating wireless access with large-scale networks, capable of managing mobile users. Cellular radio technology generally uses transmitter power at a level around 100 times that used by a cordless telephone (approximately 2 W for cellular).

Cellular radio has evolved into digital radio technologies, using the systems standards of GSM (at 900 and 1800 MHz) in Europe, PDC in Japan, and IS-136A and IS-95A in the United States. Third-generation systems, such as wideband code division multiple access (WCDMA) and cdma2000, are currently under development.

Design Considerations
One of the most significant considerations in designing digital systems is the high cost of cell sites. This has motivated system designers to try to maximize the number of users per megahertz, and users per cell site. Another important consideration is maintaining adequate coverage in areas of varying terrain and population density. For example, in order to cover sparsely populated regions, system designers have retained the high-power transmission requirement to provide maximum range from antenna locations. Communications engineers have also been developing very small coverage areas, or microcells. Microcells provide increased capacity in areas of high user density, as well as improved coverage of shadowed areas. Some microcell base stations are installed in places of high user concentrations, such as conference center lobbies.
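The capacity argument for microcells can be put in rough numbers: with a fixed amount of spectrum and a frequency-reuse pattern, the total number of simultaneous users scales with how many cells tile the service area. The figures below (140 channels, reuse factor 7, the two cell radii) are illustrative assumptions:

```python
import math

def simultaneous_users(area_km2, cell_radius_km, channels_total, reuse_factor):
    """Rough capacity of a cellular layout: cells tiling the area, each
    getting a share of the spectrum set by the reuse factor."""
    cell_area = math.pi * cell_radius_km ** 2
    n_cells = max(1, round(area_km2 / cell_area))
    return n_cells * (channels_total // reuse_factor)

macro = simultaneous_users(100, cell_radius_km=5.0,
                           channels_total=140, reuse_factor=7)
micro = simultaneous_users(100, cell_radius_km=0.5,
                           channels_total=140, reuse_factor=7)
print(macro, micro)   # shrinking the cells multiplies capacity over the area
```

Shrinking the radius tenfold multiplies the cell count (and hence the reused channels) by roughly a hundred, which is exactly why dense user areas get microcells.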

Brain Chips

Science has long been able to listen in on the signals the brain sends, but is just now learning to turn those signals into meaningful action. The result is restoring movement and speech to the disabled. One such effort is Cyberkinetics' BrainGate Neural Interface System, now undergoing clinical trials. The tiny chip was developed by Brown University's John Donoghue, who serves as Cyberkinetics' Chief Scientific Officer.

"Our research was to investigate the electrical signals in the brain," says Donoghue, "and how they are transformed as these thoughts get changed over into actual control of your arm or your hand." "One of the big breakthroughs in neuroscience is that we can tap into signals [from the brain], and we get many complex electrical impulses from those neurons," he says. "We can read out those signals, and by some not-too-complex mathematical techniques, we can put them back together in a way that we can interpret what the brain is trying to do."

"In this trial," he explains, "we've implanted a tiny chip in the brain and that tiny chip picks up signals about moving the arm." The signal is then converted into simple commands that can be used to control computers, turn lights on and off, or control a television set. Or, as Donoghue explains, "control robotic devices like an artificial hand... or a robotic arm."
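One "not-too-complex mathematical technique" for turning firing patterns into commands is a linear decoder fitted by least squares: learn a map from recorded firing rates to intended movement, then apply it to new activity. The sketch below is a hypothetical illustration on synthetic data, not Cyberkinetics' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth": 8 neurons whose firing rates linearly
# encode a 2-D hand velocity (vx, vy).
true_map = rng.normal(size=(2, 8))
rates = rng.normal(size=(8, 200))        # 200 recorded training samples
velocity = true_map @ rates              # intended movements during training

# Fit the decoder by least squares from the training trials.
decoder, *_ = np.linalg.lstsq(rates.T, velocity.T, rcond=None)

# Decode a new pattern of firing into a movement command.
new_rates = rng.normal(size=8)
command = decoder.T @ new_rates
print(command.shape)   # a 2-D velocity command for a cursor or robot arm
```

Real decoders contend with noise, drifting electrodes and nonstationary neurons, but the core idea is this regression from neural activity to intended motion.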


The first use of Audio-Animatronics was for Walt Disney's Enchanted Tiki Room in Disneyland, which opened in June 1963. The Tiki birds were operated using digital controls; that is, something that is either on or off. Tones were recorded onto tape, which on playback would cause a metal reed to vibrate. The vibrating reed would close a circuit and thus operate a relay. The relay sent a pulse of energy (electricity) to the figure's mechanism which would cause a pneumatic valve to operate, which resulted in the action, like the opening of a bird's beak. Each action (e.g., opening of the mouth) had a neutral position, otherwise known as the "natural resting position" (e.g., in the case of the Tiki bird it would be for the mouth to be closed). When there was no pulse of energy forthcoming, the action would be in, or return to, the natural resting position. This digital/tone-reed system used pneumatic valves exclusively--that is, everything was operated by air pressure. Audio-Animatronics' movements that were operated with this system had two limitations. First, the movement had to be simple--on or off. (e.g., The open and shut beak of a Tiki bird or the blink of an eye, as compared to the many different positions of raising and lowering an arm.) Second, the movements couldn't require much force or power. (e.g., The energy needed to open a Tiki Bird's beak could easily be obtained by using air pressure, but in the case of lifting an arm, the pneumatic system didn't provide enough power to accomplish the lift.) Walt and WED knew that this pneumatic system could not sufficiently handle the more complicated shows of the World's Fair. A new system was devised. In addition to the digital programming of the Tiki show, the Fair shows required analog programming. This new "analog system" involved the use of voltage regulation.


Human beings extract a lot of information about their environment using their ears. In order to understand what information can be retrieved from sound, and how exactly it is done, we need to look at how sounds are perceived in the real world. To do so, it is useful to break the acoustics of a real world environment into three components: the sound source, the acoustic environment, and the listener:

1. The sound source: this is an object in the world that emits sound waves. Examples are anything that makes sound - cars, humans, birds, closing doors, and so on. Sound waves get created through a variety of mechanical processes. Once created, the waves usually get radiated in a certain direction. For example, a mouth radiates more sound energy in the direction that the face is pointing than to the side of the face.

2. The acoustic environment: once a sound wave has been emitted, it travels through an environment where several things can happen to it: it gets absorbed by the air (high-frequency waves more so than low ones; the amount of absorption depends on factors like wind and air humidity); it can travel directly to a listener (direct path), bounce off an object once before it reaches the listener (first-order reflected path), bounce twice (second-order reflected path), and so on; each time a sound reflects off an object, the material that the object is made of determines how much of each frequency component of the sound wave gets absorbed and how much gets reflected back into the environment; sounds can also pass through objects such as water or walls; finally, environment geometry like corners, edges, and small openings has complex effects on the physics of sound waves (refraction, scattering).

3. The listener: this is a sound-receiving object, typically a "pair of ears". The listener uses acoustic cues to interpret the sound waves that arrive at the ears, and to extract information about the sound sources and the environment.
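The three components above can be tied together in a tiny numeric sketch: each propagation path contributes an arrival at the listener whose delay comes from path length and whose amplitude falls with distance and with energy absorbed at each bounce. The specific distances and absorption value are illustrative assumptions:

```python
SPEED_OF_SOUND = 343.0   # m/s in air

def path_arrival(distance_m, reflection_absorption=0.0):
    """Delay and relative amplitude of one path from source to listener:
    inverse-distance spreading, reduced further by wall absorption."""
    delay_s = distance_m / SPEED_OF_SOUND
    amplitude = (1.0 / distance_m) * (1.0 - reflection_absorption)
    return delay_s, amplitude

direct = path_arrival(10.0)                             # direct path
echo = path_arrival(16.0, reflection_absorption=0.3)    # one wall bounce
print(direct, echo)   # the reflection arrives later and quieter
```

The listener's brain uses exactly these delay and level differences between arrivals (and between the two ears) as cues to the source's position and the room's character.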

Digital Light Processing

Large-screen, high-brightness electronic projection displays serve four broad areas of application:

(1) electronic presentations (e.g., business, education, advertising),

(2) entertainment (e.g., home theater, sports bars, theme parks, electronic cinema),

(3) status and information (e.g., military, utilities, transportation, public, sports) and

(4) simulation (e.g., training, games).

The electronic presentation market is being driven by the pervasiveness of software that has put sophisticated presentation techniques (including multimedia) into the hands of the average PC user. A survey of high-brightness (>1000 lumens) electronic projection displays was conducted to compare the three existing projection display technologies: oil film, CRT-LCD, and AM-LCD. Developed in the early 1940s at the Swiss Federal Institute of Technology and later at Gretag AG, oil film projectors (including the GE Talaria) have been the workhorse for applications that require projection displays of the highest brightness. But the oil film projector has a number of limitations, including size, weight, power, setup time, stability, and maintenance. In response to these limitations, LCD-based technologies have challenged the oil film projector. These LCD-based projectors are of two general types: (1) CRT-addressed LCD light valves and (2) active-matrix (AM) LCD panels. LCD-based projectors have not provided the perfect solution for the entire range of high-brightness applications. CRT-addressed LCD light valves have setup time and stability limitations. Most active-matrix LCDs used for high-brightness applications are transmissive and, because of this, heat generated by light absorption cannot be dissipated with a heat sink attached to the substrate. This limitation is mitigated by the use of large-area LCD panels with forced-air cooling. However, it may still be difficult to implement effective cooling at the highest brightness levels.


The heliodisplay is an interactive planar display. Though the image it projects appears much like a hologram, its inventors claim that it doesn't use holographic technology, though it does use rear projection (not lasers as originally reported) to project its image. It does not require any screen or substrate other than air to project its image, but it does eject a water-based vapour curtain for the image to be projected upon. The curtain is produced using ultrasonic technology similar to that used in foggers, and comprises a number of columns of fog. This curtain is sandwiched between curtains of clean air to create an acceptable screen. In the heliodisplay, air moves through a dozen metal plates and then comes out again. (The exact details of its workings are unknown, pending patent applications.) It works as a kind of floating touch screen, making it possible to manipulate images projected in air with your fingers, and can be connected to a computer using a standard VGA connection. It can also connect with a TV or DVD player by a standard RGB video cable. Due to the turbulent nature of the curtain, though, it is not currently suitable for use as a workstation display. The heliodisplay is an invention by Chad Dyner, who built it as a 5-inch prototype in his apartment before founding IO2 Technologies to further develop the product.

Military radar

Military radar should provide early warning and alerting along with weapon-control functions. It is specially designed to be highly mobile and deployable within minutes. Military radar minimizes mutual interference between the tasks of air defenders and friendly air-space users, resulting in increased effectiveness of combined combat operations. The command and control capabilities of the radar, in combination with an effective ground-based air defence, provide maximum operational effectiveness with safe, efficient and flexible use of the air space. The increased operational effectiveness is obtained by combining the advantages of centralized air defence management with decentralized air defence control.

Typical military radar has the following advanced features and benefits:

All-weather day and night capability.
Multiple target handling and engagement capability.
Short and fast reaction time between target detection and ready to fire moment.
Easy to operate and hence low manning requirements and stress reduction under severe conditions.
Highly mobile system, usable in all kinds of terrain.
Flexible weapon integration; an unlimited number of individual air defence weapons can be provided with target data.
High resolution, which gives excellent target discrimination and accurate tracking.

Class D amplifiers

Class D amplifiers present a revolutionary solution that eliminates the losses and distortion introduced when digital signals are converted to analog before being amplified and sent to the speakers. This still-maturing technology could prove instrumental in improving and redefining the essence of sound, taking it to a different realm. These amplifiers do not require D-A conversion, reducing the cost of developing state-of-the-art output stages: the digital output from sources such as CDs, DVDs and computers can now be sent directly for amplification without any conversion. Another important feature of this novel kind of amplifier is a typical efficiency of 90%, compared with 65-70% for conventional designs. Less dissipation means smaller heat sinks and less wasted energy, which makes Class D amplifiers especially apt for miniature and portable devices.

For years, Class D amplifiers were used only where efficiency was key; developments in the technology have now made its entry possible into less hi-fi domains, showing up in MP3 players, portable CD players, laptop computers, cell phones and even personal digital assistants. An old idea, the Class D amplifier has taken on new life as equipment manufacturers and consumers redefine the musical experience to be as likely to occur in a car, on a personal stereo, or on an airplane as in a living room. For most consumers today, portability and style outweigh other factors in the choice of new audio gear, and Class D amplifiers are ideally suited to capitalize on the trend.
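The switching principle behind that efficiency is simple enough to demonstrate numerically: compare the audio signal against a much faster triangle wave to produce an on/off pulse train (the output devices are only ever fully on or fully off, so they dissipate little), then low-pass filter to recover the audio. The sample rates, frequencies and the crude moving-average filter below are illustrative assumptions:

```python
import math

def triangle(t, freq):
    """Triangle carrier wave in [-1, 1]."""
    return 4.0 * abs((t * freq) % 1.0 - 0.5) - 1.0

FS = 200_000        # sample rate of the simulation
CARRIER = 20_000    # triangle (switching) frequency

# A 1 kHz tone at half amplitude, encoded as a fully-on/fully-off
# pulse train by comparison against the triangle carrier.
tone = [0.5 * math.sin(2 * math.pi * 1000 * n / FS) for n in range(2000)]
pwm = [1.0 if x > triangle(n / FS, CARRIER) else -1.0
       for n, x in enumerate(tone)]

# Crude low-pass filter: average over one carrier period.
win = FS // CARRIER
recovered = [sum(pwm[n - win:n]) / win for n in range(win, len(pwm))]
print(max(recovered))   # approaches the 0.5 amplitude of the input tone
```

In hardware the "comparison" is a comparator, the "filter" an LC network at the speaker terminals; the digital-friendly on/off output is also why Class D pairs so naturally with digital sources.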


AKA stands for the Authentication and Key Agreement security protocol. It is a mechanism which performs authentication and session key distribution in Universal Mobile Telecommunications System (UMTS) networks. AKA is a challenge-response based mechanism that uses symmetric cryptography. AKA is typically run in a UMTS IM Services Identity Module (ISIM), which resides on a smart-card-like device that also provides tamper-resistant storage of shared secrets.
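The flavour of such a challenge-response exchange can be sketched with ordinary HMAC standing in for the real 3GPP f1-f5 functions (an explicit simplification): both sides hold the long-term secret K, the network issues a random challenge, and each side independently derives the same response and session key:

```python
import hashlib
import hmac
import os

K = os.urandom(16)   # long-term secret shared by network and smart card

def respond(secret, challenge):
    """Derive an authentication response and a session key from the
    shared secret and the network's challenge."""
    response = hmac.new(secret, b"auth" + challenge, hashlib.sha256).digest()
    session_key = hmac.new(secret, b"key" + challenge, hashlib.sha256).digest()
    return response, session_key

challenge = os.urandom(16)                     # sent by the network
resp_card, key_card = respond(K, challenge)    # computed on the ISIM
resp_net, key_net = respond(K, challenge)      # computed by the network
print(resp_card == resp_net, key_card == key_net)
```

The secret never crosses the air interface; only the challenge and response do, and a fresh challenge yields a fresh session key each time.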

Exponential key exchange

The first publicly known public-key agreement protocol that meets the above criteria was the Diffie-Hellman exponential key exchange, in which two people jointly exponentiate a generator with random numbers, in such a way that an eavesdropper has no way of guessing what the key is. However, exponential key exchange in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol.
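The exchange itself fits in a few lines with toy numbers (real deployments use moduli thousands of bits long, or elliptic curve groups):

```python
p, g = 23, 5      # public: a small prime modulus and a generator
a, b = 6, 15      # private: Alice's and Bob's random exponents

A = pow(g, a, p)  # Alice publishes g^a mod p
B = pow(g, b, p)  # Bob publishes g^b mod p

# Each side raises the other's public value to its own secret exponent,
# arriving at the same shared key g^(a*b) mod p. An eavesdropper sees
# only p, g, A and B, and must solve a discrete logarithm to recover it.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
print(key_alice, key_bob)   # prints 2 2
```

Note that nothing here identifies Alice or Bob, which is precisely the anonymity that a man-in-the-middle can exploit.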


Anonymous key exchange, like Diffie-Hellman, does not provide authentication of the parties, and is thus vulnerable to man-in-the-middle attacks. A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement, preventing man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as:
public/private key pairs, shared secret keys, or passwords.


Network-attached storage (NAS) is a file-level computer data storage connected to a computer network providing data access to heterogeneous network clients.

Information Technology (IT) departments are looking for cost-effective storage solutions that can offer performance, scalability, and reliability. As users on the network increase and the amounts of data generated multiply, the need for an optimized storage solution becomes essential. Network Attached Storage (NAS) is becoming a critical technology in this environment.

The benefit of NAS over the older Direct Attached Storage (DAS) technology is that it separates servers and storage, resulting in reduced costs and easier implementation. As the name implies, NAS attaches directly to the LAN, providing direct access to the file system and disk storage. Unlike DAS, the application layer no longer resides on the NAS platform, but on the client itself. This frees the NAS processor from functions that would ultimately slow down its ability to provide fast responses to data requests.

In addition, this architecture gives NAS the ability to service both Network File System (NFS) and Common Internet File System (CIFS) clients. As shown in the figure below, this allows the IT manager to provide a single shared storage solution that can simultaneously support both Windows*-and UNIX*-based clients and servers. In fact, a NAS system equipped with the right file system software can support clients based on any operating system.

NAS is typically implemented as a network appliance, requiring a small form factor (both real estate and height) as well as ease of use. NAS is a solution that meets the ever-demanding needs of today's networked storage market.

A NAS unit is essentially a self-contained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide the functionality of data storage, file systems, and access to files, and the management of these functionalities. The unit is not designed to carry out general-purpose computing tasks, although it may technically be possible to run other software on it. NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often by connecting a browser program to their network address.

The alternative to NAS storage on a network is to use a computer as a file server. In its most basic form a dedicated file server is no more than a NAS unit with keyboard and display and an operating system which, while optimised for providing storage services, can run other tasks; however, file servers are increasingly used to supply other functionality, such as supplying database services, email services, and so on.

A general-purpose operating system is not needed on a NAS device, and often minimal-functionality or stripped-down operating systems are used. For example, FreeNAS, which is open-source NAS software designed for use on standard computer hardware, is just a version of FreeBSD with all functionality not related to data storage stripped out. NASLite, as the name suggests, is a highly optimized version of Linux that runs from a floppy disk for the sole purpose of serving as a NAS. Likewise, NexentaStor is based upon the core of NexentaOS, an open-source hybrid operating system with an OpenSolaris core and a Linux user environment.

NAS systems contain one or more hard disks, often arranged into logical, redundant storage containers or RAID arrays (redundant arrays of inexpensive/independent disks). NAS removes the responsibility of file serving from other servers on the network.

NAS uses file-based protocols such as NFS (popular on UNIX systems) or SMB (Server Message Block) (used with MS Windows systems). NAS units rarely limit clients to a single protocol.

Blade Servers

Blade servers are self-contained computer servers, designed for high density. Slim, hot-swappable blade servers fit in a single chassis like books in a bookshelf, and each is an independent server, with its own processors, memory, storage, network controllers, operating system and applications. The blade server simply slides into a bay in the chassis and plugs into a mid- or backplane, sharing power, fans, floppy drives, switches, and ports with other blade servers. Whereas a standard rack-mount server can exist with (at least) a power cord and network cable, blade servers have many components removed for space, power and other considerations.

A blade enclosure provides services such as power, cooling, networking, various interconnects and management - though different blade providers have differing principles around what should and should not be included in the blade itself (and sometimes in the enclosure altogether). Together these form the blade system.

In a standard server-rack configuration, 1U (one rack unit, 19 inches wide and 1.75 inches tall) is the minimum possible size of any equipment. The principal benefit of, and the reason behind the push towards, blade computing is that components are no longer restricted to these minimum size requirements. The most common computer rack form-factor being 42U high, this limits the number of discrete computer devices directly mounted in a rack to 42 components. Blades do not have this limitation; densities of 100 computers per rack and more are achievable with the current generation of blade systems. The benefits of the blade approach will be obvious to anyone tasked with running down hundreds of cables strung through racks just to add and remove servers.

In the purest definition of computing (a Turing machine, simplified here), a computer requires only memory to read and write instructions and a processor to execute them.

Today (contrast with the first general-purpose computer) these are implemented as electrical components requiring (DC) power, which produces heat. Other components such as hard drives, power supplies, storage and network connections, basic IO (such as keyboard, video and mouse, serial) and so on only support the basic computing function, yet add bulk, heat and complexity, not to mention moving parts that are more prone to failure than solid-state components.

In practice, these components are all required if the computer is to perform real-world work. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (e.g. DC power supply), virtualized (e.g. iSCSI storage, remote console over IP) or discarded entirely (e.g. serial ports). The blade itself becomes vastly simpler, hence smaller and (in theory) cheaper to manufacture.

Choke packet

A specialized packet that is used for flow control along a network. A router detects congestion by measuring the percentage of buffers in use, line utilization and average queue lengths. When it detects congestion, it sends choke packets across the network to all the data sources associated with the congestion. The sources respond by reducing the amount of data they are sending.

The picture of an irate system administrator trying to choke their router is what comes to mind when you see this term. While we think this should be used in anger management seminars for network administrators, sadly the term choke packet is already taken and being used to describe a specialized packet that is used for flow control along a network.

A choke packet may also be sent by a receiving router that has too much data to process. The sending router slows its data rate to the receiving router until it no longer receives choke packets from that router. At that point, it increases its data rate again.
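The sender's side of this scheme is a small control loop: cut the rate sharply whenever a choke packet arrives, then creep back up while none do. The exact constants below are illustrative assumptions:

```python
def adjust_rate(rate, choke_received, floor=1.0, ceiling=100.0, step=5.0):
    """React to congestion feedback: multiplicative decrease on a choke
    packet, additive increase while the path stays quiet."""
    if choke_received:
        return max(floor, rate / 2.0)
    return min(ceiling, rate + step)

rate = 80.0
for choke in [True, True, False, False, False]:
    rate = adjust_rate(rate, choke)
print(rate)   # 80 -> 40 -> 20 -> 25 -> 30 -> 35
```

Halving on congestion while probing upward only gently is the same multiplicative-decrease/additive-increase shape TCP congestion control uses, which keeps competing senders from oscillating wildly.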

Elliptic curve cryptography (ECC) is a public-key encryption technique based on elliptic curve theory. ECC can be used to create faster, smaller and more efficient cryptographic keys. It generates keys through the properties of the elliptic curve equation rather than the traditional method of generating them as the product of very large prime numbers. This technology can be used in conjunction with most public-key encryption methods, such as RSA and Diffie-Hellman.

ECC can yield a level of security with a 164-bit key compared with other systems that require a 1,024-bit key. Since ECC provides an equivalent security at a lower computing power and battery resource usage, it is widely used for mobile applications. ECC was developed by Certicom, a mobile e-business security provider, and was recently licensed by Hifn, a manufacturer of integrated circuitry and network security products. Many manufacturers, including 3COM, Cylink, Motorola, Pitney Bowes, Siemens, TRW and VeriFone have incorporated support for ECC in their products.

Public key cryptography is based on the creation of mathematical puzzles that are difficult to solve without certain knowledge about how they were created. The creator keeps that knowledge secret (the private key) and publishes the puzzle (the public key). The puzzle can then be used to scramble a message in a way that only the creator can unscramble. Early public key systems, such as the RSA algorithm, used products of two large prime numbers as the puzzle: a user picks two large random primes as his private key, and publishes their product as his public key. While finding large primes and multiplying them together is computationally easy, reversing the RSA process is thought to be hard (see RSA problem). However, due to recent progress in factoring integers (one way to solve the problem), FIPS 186-3 recommends that DSA and RSA public keys be at least 1024 bits long to provide adequate security.
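The RSA "puzzle" fits in a few lines with toy primes; anyone who can factor n back into p and q can rebuild the private key, which is why key sizes must stay ahead of progress in factoring:

```python
p, q = 61, 53             # private: two (toy-sized) random primes
n = p * q                 # public: the published puzzle
e = 17                    # public exponent
phi = (p - 1) * (q - 1)   # computable only if you can factor n
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)   # anyone can encrypt with (n, e)
plain = pow(cipher, d, n) # only the holder of d can decrypt
print(plain)   # prints 42
```

With 61 and 53 the factoring is trivial; with primes hundreds of digits long, it is (as far as anyone knows publicly) infeasible, and that gap is the entire security of the scheme.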

Another class of puzzle involves solving the equation a^b = c for b when a and c are known. Such equations involving real or complex numbers are easily solved using logarithms (i.e., b = log(c)/log(a)). However, in some large finite groups, finding solutions to such equations is quite difficult and is known as the discrete logarithm problem.

An elliptic curve is a plane curve defined by an equation of the form

y² = x³ + ax + b

The set of points on such a curve (i.e., all solutions of the equation together with a point at infinity) can be shown to form an abelian group (with the point at infinity as identity element). If the coordinates x and y are chosen from a large finite field, the solutions form a finite abelian group. The discrete logarithm problem on such elliptic curve groups is believed to be more difficult than the corresponding problem in (the multiplicative group of nonzero elements of) the underlying finite field. Thus keys in elliptic curve cryptography can be chosen to be much shorter for a comparable level of security. (See: cryptographic key length)
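The group law is concrete enough to code directly. Below is a sketch over a tiny field (p = 97, a = 2, b = 3 are illustrative choices; real curves use fields hundreds of bits wide), with None playing the point at infinity:

```python
P_MOD, A, B = 97, 2, 3    # curve y^2 = x^3 + 2x + 3 over GF(97)

def ec_add(p1, p2):
    """Group law: chord-and-tangent addition; None is the identity
    (the point at infinity)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                       # a point plus its inverse
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, point):
    """k-fold addition: easy to compute, but recovering k from the
    result is the elliptic-curve discrete logarithm problem."""
    result = None
    for _ in range(k):
        result = ec_add(result, point)
    return result

G = (3, 6)            # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(ec_mul(2, G))   # prints (80, 10)
```

A private key is just such a scalar k, and the public key is k·G; the one-way gap between multiplying and recovering k is what lets ECC keys stay so much shorter.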

As for other popular public key cryptosystems, no mathematical proof of difficulty has been published for ECC as of 2006. However, the U.S. National Security Agency has endorsed ECC technology by including it in its Suite B set of recommended algorithms. Although the RSA patent has expired, there are patents in force covering some aspects of ECC.


A cyborg is a cybernetic organism (i.e., an organism that is a self-regulating integration of artificial and natural systems). The term was coined in 1960 when Manfred Clynes and Nathan Kline used it in an article about the advantages of self-regulating human-machine systems in outer space. D. S. Halacy's Cyborg: Evolution of the Superman in 1965 featured an introduction by Manfred Clynes, who wrote of a "new frontier" that was "not merely space, but more profoundly the relationship between 'inner space' to 'outer space' -a bridge...between mind and matter." The cyborg is often seen today merely as an organism that has enhanced abilities due to technology, but this perhaps oversimplifies the category of feedback.

Fictional cyborgs are portrayed as a synthesis of organic and synthetic parts, and frequently pose the question of difference between human and machine as one concerned with morality, free will, and empathy. Fictional cyborgs may be represented as visibly mechanical (e.g. the Borg in the Star Trek franchise); or as almost indistinguishable from humans (e.g. the Cylons from the re-imagining of Battlestar Galactica). These fictional portrayals often register our society's discomfort with its seemingly increasing reliance upon technology, particularly when used for war, and when used in ways that seem to threaten free will. They also often have abilities, physical or mental, far in advance of their human counterparts (military forms may have inbuilt weapons, amongst other things). Real cyborgs are more frequently people who use cybernetic technology to repair or overcome the physical and mental constraints of their bodies. While cyborgs are commonly thought of as mammals, they can be any kind of organism.

According to some definitions of the term, the metaphysical and physical attachments humanity has with even the most basic technologies have already made them cyborgs. In a typical example, a human fitted with a heart pacemaker or an insulin pump (if the person has diabetes) might be considered a cyborg, since these mechanical parts enhance the body's "natural" mechanisms through synthetic feedback mechanisms. Some theorists cite such modifications as contact lenses, hearing aids, or intraocular lenses as examples of fitting humans with technology to enhance their biological capabilities; however, these modifications are no more cybernetic than would be a pen, a wooden leg, or the spears used by chimps to hunt vertebrates. Cochlear implants that combine mechanical modification with any kind of feedback response are more accurately cyborg enhancements.

The prefix "cyber" is also used to address human-technology mixtures in the abstract, including artifacts that may not popularly be considered technology: pen and paper, for example, as well as speech and language. Augmented with these technologies, and connected in communication with people in other times and places, a person becomes capable of much more than before, much as computers gain power by using Internet protocols to connect with other computers. Cybernetic technologies also include highways, pipes, electrical wiring, buildings, electrical plants, libraries, and other infrastructure that we hardly notice, but which are critical parts of the cybernetics that we work within.

Holographic Versatile Disc (HVD) is an optical disc technology which would hold up to 3.9 terabytes (TB) of information. It employs a technique known as collinear holography, whereby two lasers, one red and one green, are collimated in a single beam. The green laser reads data encoded as laser interference fringes from a holographic layer near the top of the disc while the red laser is used as the reference beam and to read servo information from a regular CD-style aluminum layer near the bottom. Servo information is used to monitor the position of the read head over the disc, similar to the head, track, and sector information on a conventional hard disk drive. On a CD or DVD this servo information is interspersed amongst the data.

A dichroic mirror layer between the holographic data and the servo data reflects the green laser while letting the red laser pass through. This prevents interference from refraction of the green laser off the servo data pits and is an advance over past holographic storage media, which either experienced too much interference or lacked the servo data entirely, making them incompatible with current CD and DVD drive technology. These discs have the capacity to hold up to 3.9 terabytes (TB) of information, which is approximately 5,800 times the capacity of a CD-ROM, 850 times the capacity of a DVD, 160 times the capacity of single-layer Blu-ray Discs, and about 4 times the capacity of the largest computer hard drives as of 2007. The HVD also has a transfer rate of 1 Gbit/s (128 MB/s). Optware was expected to release a 200 GB disc in early June 2006, and Maxell a 300 GB disc with a 20 MB/s transfer rate in September 2006. On June 28, 2007, HVD standards were approved and published.
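The comparison figures above can be sanity-checked with simple arithmetic; the media sizes used here are assumed nominal values (CD-ROM ~700 MB, DVD ~4.7 GB, single-layer Blu-ray 25 GB) rather than numbers from the text itself:

```python
# Rough check of the capacity multiples quoted for a 3.9 TB HVD,
# using decimal (SI) units throughout.
hvd = 3.9e12                      # 3.9 TB in bytes
media = {
    "CD-ROM": 700e6,              # ~700 MB (nominal)
    "DVD": 4.7e9,                 # 4.7 GB single-layer
    "Blu-ray (single layer)": 25e9,
}
for name, size in media.items():
    print(f"{name}: ~{hvd / size:,.0f}x")

# Transfer rate: 1 Gbit/s = 1e9 / 8 = 125 MB/s in decimal units;
# the quoted 128 MB/s figure presumably uses binary prefixes.
```

The results land close to the rounded figures in the text (roughly 5,600x, 830x, and 156x), consistent with the quoted 5,800 / 850 / 160 multiples.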

This collinear holography technology, still in the early stages of research, is expected to overtake existing optical disc systems such as Blu-ray and HD DVD in storage capacity. A blue-green laser and a red laser are collimated in a single beam: the blue-green laser reads data encoded as laser interference fringes from a holographic layer near the top of the disc, while the red laser serves as the reference beam and reads servo information from a regular CD-style aluminium layer near the bottom.

Current optical storage saves one bit per pulse, and the HVD alliance hopes to improve this to around 60,000 bits per pulse, written in an inverted, truncated cone shape that has a 200 micrometer diameter at the bottom and a 500 micrometer diameter at the top. Higher densities are possible by spacing these cones more closely along the tracks: 100 GB at 18 micrometer separation, 200 GB at 13 micrometers, 500 GB at 8 micrometers, and a demonstrated maximum of 3.9 TB at 3 micrometer separation on a 12 cm disc.
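A quick check, under the assumption (not stated in the text) that capacity scales with areal density, i.e. with the inverse square of the track separation, shows the quoted figures are roughly self-consistent:

```python
# If capacity ~ 1/separation^2, the quoted (separation, capacity) pairs
# should all track the 18 um -> 100 GB reference point.
ref_sep, ref_cap = 18, 100        # micrometers, GB (from the text)
for sep, quoted in [(13, 200), (8, 500), (3, 3900)]:
    predicted = ref_cap * (ref_sep / sep) ** 2
    print(f"{sep} um: predicted ~{predicted:.0f} GB, quoted {quoted} GB")
```

The predictions come out near the quoted values (about 192, 506, and 3600 GB), so the inverse-square model is a reasonable reading of the numbers, though only an assumption here.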

The system uses a green laser with an output power of 1 watt, a high output for a consumer-device laser. A major challenge in reaching widespread consumer markets is therefore either to improve the sensitivity of the recording polymer, or to develop and commoditize a laser capable of that power output yet suitable for a consumer unit.