Positive Extension Matrix

The extension matrix method works as follows: first, we find the differences between the positive examples and the negative examples. The extension matrix is used to represent those differences, and the examples are then induced according to them so that proper assertions are obtained. Because the extension matrix clearly reflects the differences between positive and negative examples, it is easy to derive heuristics for a problem from it.

• Several algorithms, such as AE1, AE5, AE9 and AE11, have been built on the extension matrix. All of them create heuristics starting from the nature of the paths in the matrix. Among these algorithms, AE11 produces the simplest rules and obtains simpler rules than AQ15. The algorithm AE18 we propose in this paper also belongs to the extension matrix family. It is based on the positive extension matrix (PEM) and likewise creates heuristics for induction starting from the nature of the paths. During induction, the algorithm preferentially selects the required elements.

• In order to optimize our positive extension matrix algorithm, this talk will present the algorithm AE18 and report comparisons based on our experimental results. A small illustrative sketch of the PEM idea follows.
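As a rough illustration of the idea only (hypothetical attributes and examples, and a plain greedy covering heuristic rather than the AE18 procedure), the sketch below builds a positive extension matrix for one positive example against several negative examples and selects distinguishing conditions to form a rule:

```python
# Illustrative sketch, not the AE18 algorithm: build a positive extension
# matrix (PEM) for one positive example against a set of negative examples,
# then greedily pick attribute-value conditions that "cover" every negative
# example. Attribute names and data are hypothetical; we assume no negative
# example is identical to the positive one.
positive = {"color": "red", "shape": "round", "size": "small"}
negatives = [
    {"color": "green", "shape": "round", "size": "small"},
    {"color": "red",   "shape": "square", "size": "large"},
    {"color": "blue",  "shape": "square", "size": "small"},
]

# One PEM row per negative example: the attributes on which it differs from
# the positive example (any such attribute can distinguish that negative).
pem = [
    {attr for attr, value in positive.items() if negative[attr] != value}
    for negative in negatives
]

# Greedy heuristic over the "paths" of the matrix: repeatedly pick the
# attribute that distinguishes the most not-yet-covered negative examples.
rule = {}
uncovered = list(range(len(pem)))
while uncovered:
    best = max(positive, key=lambda a: sum(a in pem[i] for i in uncovered))
    rule[best] = positive[best]
    uncovered = [i for i in uncovered if best not in pem[i]]

print("induced rule:", " AND ".join(f"{a} = {v}" for a, v in rule.items()))
```

With these toy examples the greedy pass yields the rule "color = red AND shape = round", which covers the positive example and excludes all three negatives.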

Traffic Pulse Technology

The Traffic Pulse network is the foundation for all of Mobility Technologies® applications. This network uses a process of data collection, data processing, and data distribution to generate the most unique traffic information in the industry. Digital Traffic Pulse® collects data through a sensor network, processes and stores the data in a data center, and distributes that data through a wide range of applications.

Unique among private traffic information providers in the U.S., Mobility Technologies' real-time and archived Traffic Pulse data offer valuable tools for a variety of commercial and governmental applications:

* Telematics - for mobile professionals and others, Mobility Technologies' traffic information complements in-vehicle navigation devices, informing drivers not only how to get from point A to point B but how long it will take to get there — or even direct them to an alternate route.
* Media - for radio and TV broadcasters, cable operators, and advertisers who sponsor local programming, Traffic Pulse Networks provides traffic information and advertising opportunities for a variety of broadcasting venues.
* Intelligent Transportation Systems (ITS) business solutions - for public agencies, Mobility Technologies' applications aid in infrastructure planning, safety research, and livable community efforts; integrate with existing and future ITS technologies and deployments; and provide data reporting tools.

In the age of multimedia and high-speed networks, multicast is one of the mechanisms by which the power of the Internet can be further harnessed in an efficient manner. It has been increasingly used by various continuous-media applications such as teleconferencing, distance learning, and voice and video transmission. Compared with unicast and broadcast, multicast can save network bandwidth and make transmission more efficient. In this seminar, we will review the history of multicast, present several existing multicast routing algorithms, and analyze the features of the multicast routing protocols that have been proposed for best-effort multicast. Some of the issues and open problems related to multicast implementation and deployment are discussed as well.
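For concreteness, the sketch below shows how an application can receive traffic sent to a multicast group using standard UDP sockets; a single datagram sent to the group reaches every member without the sender duplicating it. The group address and port here are arbitrary placeholders, not values from the text:

```python
# Minimal multicast receiver sketch using the standard socket module.
import socket
import struct

MCAST_GRP = "224.1.1.1"   # hypothetical multicast group address
MCAST_PORT = 5007         # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the group: the OS signals group membership so routers forward traffic.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1024)   # every group member sees each datagram
    print(addr, data)
```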

Souped-Up Mesh Networks

In an effort to build a better wireless network, the Cambridge, MA-based company BBN Technologies announced last week that it has built a mesh network that uses significantly less power than traditional wireless networks, such as cellular and Wi-Fi, while achieving comparable data-transfer rates.

The technology, which is being funded by the Defense Advanced Research Projects Agency (DARPA), was developed to create ad hoc communication and surveillance networks on battlefields. But aspects of it are applicable to emergency or remote cell-phone networks, and could potentially even help to extend the battery life of consumer wireless devices, says Jason Redi, a scientist at BBN.

Mesh networks -- collections of wireless transmitters and receivers that send data hopping from one node to another, without the need for a centralized base station or tower -- are most often found in research applications, in which scientists deploy hordes of sensors to monitor environments from volcanoes to rainforests. In this setting, mesh networks are ideal because they can be deployed without a large infrastructure. Because they do not require costly infrastructure, mesh networks can also be used to bring communication to remote areas where there isn't a reliable source of electricity. In addition, they can be established quickly, which is useful for building networks of phones or radios during a public emergency.

While mesh networks have quite a bit of flexibility in where they can be deployed and how quickly, so far they've been less than ideal for a number of applications due to their power requirements and relatively slow data-transfer rates. All radios in a mesh network need to carry an onboard battery, and in order to conserve battery power, most low-power mesh networks send and receive data slowly -- at about tens of kilobits per second. 'You get the low power,' says Redi, 'but you also get poor performance.'

Especially in military surveillance, the data rates need to be much faster. If a soldier has set up a network of cameras, for example, he or she needs to react to the video as quickly as possible. So, to keep the power consumption to a minimum and increase data-transfer rates, the BBN team modified both the hardware and software of their prototype network. The result is a mesh network that can send megabits of data per second across a network (typical rates for Wi-Fi networks, and good enough to stream video), using one-hundredth the power of traditional networks.

Sockets are one of the most basic mechanisms of computer networking. Much of today's software relies on low-level socket technology. This project involves creating a server application and a client application that use sockets for communication. In this project, Windows socket programming is implemented using Microsoft Visual C++. The Windows Sockets specification defines a binary-compatible network programming interface. The application can communicate across any network that conforms to the Windows Sockets API, and it can exchange data with other sockets in the same communication domain, which uses the Internet Protocol. The sockets used by this application operate in full-duplex mode, which improves time sharing.
Capabilities of client-server communication

• The server can connect to multiple clients.
• The server allows all-to-all communication.
• If a new client joins, all the old clients are informed of its arrival, while the new client gets the list of all the old clients.
• If a client quits, all the existing clients are informed.

If the server sends a message, all the connected clients receive it. Similarly, if a client sends a message, the server and all other clients receive it. A minimal sketch of this relay behaviour follows.
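The relay behaviour described above can be sketched in a few lines. This is a generic example using Python's standard socket module rather than the project's Visual C++ / Winsock code, and the address and port are placeholders:

```python
# Toy relay server: every message a client sends is printed on the server
# and forwarded to all other connected clients.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000   # hypothetical address for local testing
clients = []
lock = threading.Lock()

def handle(conn, addr):
    with lock:
        clients.append(conn)
    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            print(f"{addr}: {data!r}")
            with lock:
                # All-to-all communication: relay to every other client.
                for other in clients:
                    if other is not conn:
                        other.sendall(data)
    finally:
        with lock:
            clients.remove(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen()
while True:
    conn, addr = server.accept()   # the server accepts multiple clients
    threading.Thread(target=handle, args=(conn, addr), daemon=True).start()
```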

Teleportation

Teleportation is the transmission of a life-size image of a person so that it appears within a room at a distant location, where the person has a telepresence for engaging in natural face-to-face communication with people at that location. The image of the person appears within a 3D environment, can make eye-to-eye contact with individuals, and can hold true two-way conversations. This is a unique system invented by a privately held, Richardson-based firm named Teleportec Inc., which operates mainly in the USA and the UK.

Teleportation systems are better than videoconferencing. Videoconferencing has never presented itself as a realistic alternative to face-to-face meetings because of its severe limitations - only one person can speak at any one time, creating an amplified feeling of distance between participants. With videoconferencing, people feel uncomfortable being on camera and feel disconnected from the people shown on the screen. Teleportation gives a sense of presence by achieving eye-to-eye contact with a distant person who is teleported into the room.

A Management Information System (MIS) is an integrated, user-machine system for providing information to support operations, management and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database. Decision making is a pervasive function of anyone pursuing goals. In the case of managers, decision making is of particular importance because

* Managers spend a lot of time making decisions.
* Managers are evaluated on the basis of the number and importance of the decisions they make.

To facilitate scientific decision making, managers require information, external and internal, supplied to them selectively and on demand. MIS is defined as follows:
Management - it consists of the processes or activities that describe what managers do in the operation of their organization: plan, organize, initiate and control operations.
Information - data are facts and figures that are not currently being used in a decision process; they are records which need not be retrieved immediately. Information is data that have been retrieved, processed and used for decision making.
System - in the management context, it optimizes the output of the organization by connecting the operating subsystems through information exchange.

Information handling takes place in every organization. But systematic collection of the right data, its proper recording and its timely retrieval for decision making are all part of a well-designed MIS. The systems approach is best suited to providing appropriate information in the right form at the right time. Such a system will help to collect, discriminate, select, relate, classify and interpret information according to the user's needs. The systems approach was introduced to achieve synergism, that is, the simultaneous action of separate but interrelated parts together producing a total effect greater than the sum of the individual parts. In the past, the effectiveness of business organizations was less than optimum because managers failed to relate the parts or functions of the system to each other. The sales function was performed without a great deal of integration with design or production; production control was not coordinated with financial or personnel planning; and so on.

Digital Visual Interface

Digital Visual Interface adoption accelerates as the industry prepares for the next wave of DVI-compliant products. DVI is an open industry specification introduced by the DDWG, which enables high-performance, robust interfacing solutions for high-resolution digital displays. The Digital Visual Interface (DVI) is a display interface developed in response to the proliferation of digital flat-panel displays. For the most part, these displays are currently connected to an analog video graphics array (VGA) interface and thus require a double conversion. The digital signal from the computer must be converted to an analog signal for the analog VGA interface, then converted back to a digital signal for processing by the flat-panel display. This inherently inefficient process takes a toll on performance and video quality and adds cost.

In contrast, when a flat-panel display is connected to a digital interface, no digital-to-analog conversion is required. The DVI interface is becoming more prevalent and is expected to become widely used for digital display devices, including flat-panel displays and emerging digital CRTs.

Quantum teleportation

Teleportation is the name given by science fiction writers to the feat of making an object or person disintegrate in one place while a perfect replica appears somewhere else. How this is accomplished is usually not explained in detail, but the general idea seems to be that the original object is scanned in such a way as to extract all the information from it; this information is then transmitted to the receiving location and used to construct the replica, not necessarily from the actual material of the original, but perhaps from atoms of the same kinds, arranged in exactly the same pattern as the original. A teleportation machine would be like a fax machine, except that it would work on 3-dimensional objects as well as documents, it would produce an exact copy rather than an approximate facsimile, and it would destroy the original in the process of scanning it.

In 1993 an international group of six scientists, including IBM Fellow Charles H. Bennett, confirmed the intuition of the majority of science fiction writers by showing that perfect teleportation is indeed possible in principle, but only if the original is destroyed. Meanwhile, other scientists are planning experiments to demonstrate teleportation of microscopic objects, such as single atoms or photons, in the next few years. But science fiction fans will be disappointed to learn that no one expects to be able to teleport people or other macroscopic objects in the foreseeable future, for a variety of engineering reasons, even though it would not violate any fundamental law to do so.
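For readers who want the formal core of that 1993 result, the standard textbook identity behind qubit teleportation is sketched below; the notation is the usual one from quantum information texts, not quoted from the original article. The sender holds qubit 1 in the unknown state |psi> = alpha|0> + beta|1> and shares the entangled pair |Phi+> (qubits 2 and 3) with the receiver:

```latex
% Rewriting the joint state in the Bell basis of qubits 1 and 2:
\[
|\psi\rangle_1 \otimes |\Phi^+\rangle_{23}
= \tfrac{1}{2}\Big[
  |\Phi^+\rangle_{12}\,(\alpha|0\rangle + \beta|1\rangle)_3
+ |\Phi^-\rangle_{12}\,(\alpha|0\rangle - \beta|1\rangle)_3
+ |\Psi^+\rangle_{12}\,(\alpha|1\rangle + \beta|0\rangle)_3
+ |\Psi^-\rangle_{12}\,(\alpha|1\rangle - \beta|0\rangle)_3
\Big]
\]
% A Bell measurement on qubits 1 and 2 destroys the original state and yields
% two classical bits; depending on the outcome, the receiver applies I, Z, X
% or ZX to qubit 3 and recovers |psi> exactly, without the state ever being
% "measured out" as classical information.
```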

Genetic programming

Genetic programming has recently emerged as an important paradigm for the automatic generation of computer programs. GP combines metaphors drawn from biological evolution with computer science techniques in order to produce algorithms and programs automatically. From the very beginning, man has tried to develop machines that can replace the need for human beings in many applications; machines that require very little human support. Research is ongoing to develop machines that can produce results with a high degree of artificial intelligence and the least human involvement. In this context, GP assumes a special significance.
Over the past decade the artificial evolution of computer code has become a rapidly spreading technology with many ramifications. Originally conceived as a means of producing computer intelligence, it has now spread into many areas of machine learning and is starting to conquer new domains.
In the long run, genetic programming will revolutionize program development. Present methods are not mature enough for deployment as automatic programming systems. Nevertheless, GP has already made inroads into automatic programming and will continue to do so. A toy sketch of the evolutionary loop follows.
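To make the idea concrete, the following toy sketch (mutation-only, with a hypothetical target function and made-up parameters) evolves small arithmetic expression trees toward a target, illustrating the evaluate-select-vary loop at the heart of GP:

```python
# Minimal genetic-programming sketch (illustrative only): evolve arithmetic
# expression trees to approximate the hypothetical target f(x) = x^2 + x.
import random
import operator

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    # Grow a random expression tree; internal nodes are (op, symbol, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op, sym = random.choice(OPS)
    return (op, sym, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, _, left, right = tree
    return op(evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of squared errors against the target on a few sample points (lower is better).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=2):
    # Replace a random subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, sym, left, right = tree
    if random.random() < 0.5:
        return (op, sym, mutate(left, depth), right)
    return (op, sym, left, mutate(right, depth))

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    best = population[0]
    # Keep the fitter half and refill the population with mutated copies.
    survivors = population[:100]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]
print('best fitness:', fitness(best))
```

Real GP systems add crossover between trees and far richer primitive sets, but the select-and-vary structure is the same.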

Wireless Markup Language

When it's time to find out how to make content available over WAP, we need to get to grips with its markup language, i.e., WML. WML was designed from the start as a markup language to describe the display of content on small-screen devices.

It is a markup language enabling the formatting of text in the WAP environment, using a variety of markup tags to determine the display appearance of content. WML is defined using the rules of XML (the Extensible Markup Language) and is therefore an XML application. WML provides a means of allowing the user to navigate around the WAP application and supports the use of anchored links as commonly found in web pages. It also provides support for images and layout within the constraints of the device.

Plastic circuitries

As researchers work towards the creation of plastic-based alternatives in order to make technology more pervasive, silicon wafers might soon be biting the dust.

"No one would need to interact with computers any more as technology would be ingrained into everyday objects like shirts, 'driverless' cars or therapeutic dolls," predicted Nicholas Negroponte, cofounder and director of the MIT Media Laboratory, in 1998. In his columns in Wired magazine, he further claimed that not only was the Digital Age upon us, but that we were already in the final stages of the digital revolution.

A big step in this 'all-pervasive computing' direction is plastic re-engineering. Research in this field aims to create chips made of plastic wafers instead of silicon. Not only will such chips enable the products Negroponte talked about, they will also allow a hobbyist or a power user to print his own PC!

E-commerce

E-commerce is the application of information technology to support business processes and the exchange of goods and services. E-cash came into being when people began to think that if we can store, forward and manipulate information, why can't we do the same with money? Both banks and post offices centralise distribution, information and credibility. E-money makes it possible to decentralise these functions.

Electronic data interchange, which is a subset of e-commerce, is a set of data definitions that permits business forms to be exchanged electronically. Different payment schemes - E-cash, Net-cash and the PayMe system - as well as smart card technology are also covered. The foundation of all requirements for commerce over the World Wide Web is a secure system of payment, so various security measures are adopted over the Internet.
E-commerce represents a market potentially worth hundreds of billions of dollars in just a few years to come, so it provides enormous opportunities for business. It is expected that in the near future, electronic transactions will be as popular as, if not more popular than, credit card purchases are today.

Business is about information. It is about the right people having the right information at the right time. Exchanging the information efficiently and accurately will determine the success of the business.
There are three phases of implementation of E-Commerce.


" Replace manual and paper-based operations with electronic alternatives
" Rethink and simplify the information flows
" Use the information flows in new and dynamic ways


Simply replacing the existing paper-based system will reap some benefits: it may reduce administrative costs and improve the level of accuracy in exchanging data, but it does not address doing business efficiently. E-commerce applications can help to reshape the way business is done.

Voice Over Internet Protocol

VoIP, or "Voice over Internet Protocol" refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network.

Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.

Voice-over-Internet-Protocol (VOIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network. The IP network could be

" A local area network in an office
" A wide area network linking the sites of a large international organization
" A corporate intranet
" The internet
" Any combination of the above

There can be no doubt that IP is here to stay. The explosive growth of the Internet, making IP the predominant networking protocol globally, presents a huge opportunity to dispense with separate voice and data networks and use IP technology for voice traffic as well as data. As voice and data network technologies merge, massive infrastructure cost savings can be made because the need to provide separate networks for voice and data is eliminated.

Most traditional phone networks use the Public Switched Telephone Network (PSTN). This system employs circuit-switched technology that requires a dedicated voice channel to be assigned to each particular conversation. Messages are sent in analog format over this network.

Today, phone networks are on a migration path to VoIP. A VoIP system employs a packet-switched network, where the voice signal is digitized, compressed and packetized. This compressed digital message no longer requires a voice channel. Instead, a message can be sent across the same data lines that are used for the intranet or Internet, and a dedicated channel is no longer needed. The message can now share bandwidth with other messages in the network.
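A highly simplified sketch of this packetization step is shown below. It is not a real RTP/VoIP stack; the destination address, frame size and header layout are assumptions made purely for illustration of how a digitized stream becomes a series of small datagrams sharing the data network:

```python
# Simplified packetized-voice sketch: chop a digitized voice stream into
# small frames and send each as a UDP datagram with a minimal header
# carrying a sequence number and a timestamp.
import socket
import struct
import time

DEST = ("192.0.2.10", 4000)   # hypothetical receiver address
FRAME_BYTES = 160             # e.g. 20 ms of 8 kHz, 8-bit audio

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_voice(stream: bytes) -> None:
    seq = 0
    for i in range(0, len(stream), FRAME_BYTES):
        frame = stream[i:i + FRAME_BYTES]
        header = struct.pack("!HI", seq & 0xFFFF, int(time.time() * 8000) & 0xFFFFFFFF)
        sock.sendto(header + frame, DEST)   # each frame shares the data network
        seq += 1
        time.sleep(0.02)                    # pace frames at the 20 ms frame interval

send_voice(bytes(1600))  # 10 silent frames as a stand-in for real audio
```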

Normal data traffic is carried between PC's, servers, printers, and other networked devices through a company's worldwide TCP/IP network. Each device on the network has an IP address, which is attached to every packet for routing. Voice-over-IP packets are no different.
Users may use appliances such as Symbol's NetVision phone to talk to other IP phones or desktop PC-based phones located at company sites worldwide, provided that a voice-enabled network is installed at the site. Installation simply involves assigning an IP address to each wireless handset.

VoIP lets you make toll-free long-distance voice and fax calls over existing IP data networks instead of the public switched telephone network (PSTN). Today, businesses that implement their own VoIP solution can dramatically cut long-distance costs between two or more locations.

REAL TIME OPERATING SYSTEM

Within the last ten years real-time systems research has been transformed from a niche industry into a mainstream enterprise with clients in a wide variety of industries and academic disciplines. It will continue to grow in importance and affect an increasing number of industries as many of the reasons for the rise of its prominence will persist for the foreseeable future.

What is RTOS?
Real-time computing and real-time operating systems (RTOS) form an emerging discipline in software engineering. This is an embedded technology whereby the application software performs the dual function of an operating system as well. In an RTOS, the correctness of the system depends not only on the logical result but also on the time at which the results are obtained.
Real-time System

• Provides deterministic response to external events
• Has the ability to process data at its rate of occurrence
• Is deterministic in its functional and timing behavior
• Has its timing analyzed in the worst case, not in the typical or normal case, to guarantee a bounded response under any circumstances

The seminar will basically provide a practical understanding of the goals, structure and operation of a real-time operating system (RTOS). The basic concepts of real-time systems, such as the RTOS kernel, will be described in detail. The structure of the kernel is discussed, stressing the factors which affect response times and performance. Examples of RTOS functions such as scheduling, interrupt processing and intertask communication structures will also be discussed, and features of commercially available RTOS products are presented. A real-time system is one where the timeliness of the result of a calculation is important. Examples include military weapons systems, factory control systems, and Internet video and audio streaming. Different definitions of real-time systems exist. Here are just a few:


- Real-time computing is computing where system correctness depends not only on the correctness of the logical result of the computation but also on the result delivery time.
- A Real-Time System is an interactive system that maintains an on-going relationship with an asynchronous environment, i.e. an environment that progresses irrespective of the Real Time System, in an uncooperative manner.
- Real-time (software) (IEEE 610.12-1990): Pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results may be used to control, monitor, or respond in a timely manner to the external process.


From the above definitions it is understood that in real-time systems, TIME is the biggest constraint. This makes real-time systems different from ordinary systems. Thus, in an RTS, data needs to be processed at some regular and timely rate, and the system should also respond quickly to events occurring at irregular rates. In real-world systems there is some delay between the presentation of inputs and the appearance of all associated outputs, called the response time. Thus a real-time system must satisfy explicit response-time constraints or risk severe consequences, including failure.
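As one concrete example of how explicit timing constraints are checked in practice, the sketch below applies the classic Liu and Layland utilization bound for rate-monotonic scheduling to a set of hypothetical periodic tasks; this particular test is an illustration, not something prescribed by the text above:

```python
# Rate-monotonic schedulability check via the Liu & Layland bound:
# a task set is schedulable if U = sum(C_i / T_i) <= n * (2**(1/n) - 1).
def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs in the same time units."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Hypothetical tasks: 1 ms every 4 ms, 2 ms every 10 ms, 3 ms every 20 ms.
u, bound, ok = rm_schedulable([(1, 4), (2, 10), (3, 20)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable by the bound: {ok}")
```

Here U = 0.600 is below the bound of about 0.780 for three tasks, so all deadlines are guaranteed under rate-monotonic priorities; the bound is sufficient but not necessary.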

Real-Time Systems and Real-Time Operating Systems

Timeliness is the single most important aspect of a real-time system. These systems respond to a series of external inputs, which arrive in an unpredictable fashion. The real-time system processes these inputs, takes appropriate decisions and also generates the output necessary to control the peripherals connected to it. As defined by Donald Gillies, "A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time in which the result is produced. If the timing constraints are not met, system failure is said to have occurred."

It is essential that the timing constraints of the system are guaranteed to be met. Guaranteeing timing behaviour requires that the system be predictable.

The design of a real-time system must specify the timing requirements of the system and ensure that the system performance is both correct and timely. There are three types of time constraints:

• Hard: A late response is incorrect and implies a system failure. An example of such a system is medical equipment monitoring the vital functions of a human body, where a late response would be considered a failure.

• Soft: Timeliness requirements are defined by using an average response time. If a single computation is late, it is not usually significant, although repeated late computations can result in system failure. An example of such a system is an airline reservation system.

• Firm: This is a combination of both hard and soft timeliness requirements. The computation has a shorter soft requirement and a longer hard requirement. For example, a patient ventilator must mechanically ventilate the patient a certain amount in a given time period. A few seconds' delay in the initiation of a breath is allowed, but not more than that.

One needs to distinguish between on-line systems such as an airline reservation system, which operates in real time but with much less severe timeliness constraints than, say, a missile control system or a telephone switch. An interactive system with a good response time is not necessarily a real-time system. Such systems are often referred to as soft real-time systems. In a soft real-time system (such as the airline reservation system) late data is still good data. However, for hard real-time systems, late data is bad data. In this paper we concentrate on hard and firm real-time systems only.

Most real-time systems interface with and control hardware directly. The software for such systems is mostly custom-developed. Real-time applications can be either embedded applications or non-embedded (desktop) applications. Real-time systems often do not have the standard peripherals associated with a desktop computer, namely the keyboard, mouse or conventional display monitor. In most instances, real-time systems have customized versions of these devices.

Compiler writing techniques have undergone a number of major revisions over the past forty years. The introduction of object-oriented design and implementation techniques promises to improve the quality of compilers, while making large-scale compiler development more manageable.

In this seminar we want to show that a new way of thinking about a compiler's structure is required to achieve complete object-orientation. This new view of compiling can lead to alternative formulations of parsing and code generation. In practice, the object-oriented formulations have not only proven to be highly efficient, but they have also been particularly easy to teach to students. A small sketch of the idea follows.
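As a toy illustration of object-oriented code generation only (a sketch of the general idea, not the specific formulation developed in the seminar), each AST node class below emits its own stack-machine code:

```python
# Each node of the abstract syntax tree knows how to generate code for itself,
# so code generation is distributed across the class hierarchy instead of
# living in one monolithic tree-walking routine.
class Expr:
    def generate(self):
        raise NotImplementedError

class Number(Expr):
    def __init__(self, value):
        self.value = value
    def generate(self):
        return [f"PUSH {self.value}"]

class Add(Expr):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def generate(self):
        # Emit both operands, then the operator: classic postorder emission.
        return self.left.generate() + self.right.generate() + ["ADD"]

class Mul(Expr):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def generate(self):
        return self.left.generate() + self.right.generate() + ["MUL"]

# (2 + 3) * 4  ->  PUSH 2, PUSH 3, ADD, PUSH 4, MUL
tree = Mul(Add(Number(2), Number(3)), Number(4))
print("\n".join(tree.generate()))
```

Adding a new node type then means adding one class, without touching the existing generator logic.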

Millipedes

Using innovative nanotechnology, IBM scientists have demonstrated a data storage density of a trillion bits per square inch -- 20 times higher than the densest magnetic storage available today. IBM achieved this remarkable density -- enough to store 25 million printed textbook pages on a surface the size of a postage stamp -- in a research project code-named "Millipede".
Rather than using traditional magnetic or electronic means to store data, Millipede uses thousands of nano-sharp tips to punch indentations representing individual bits into a thin plastic film. The result is akin to a nanotech version of the venerable data processing 'punch card' developed more than 110 years ago, but with two crucial differences: the 'Millipede' technology is re-writeable (meaning it can be used over and over again), and may be able to store more than 3 billion bits of data in the space occupied by just one hole in a standard punch card.
Although this unique approach is smaller than today's traditional technologies and can be operated at lower power, IBM scientists believe still higher levels of storage density are possible. "Since a nanometer-scale tip can address individual atoms, further improvements far beyond even this fantastic terabit milestone can be achieved. While current storage technologies may be approaching their fundamental limits, this nanomechanical approach is potentially valid for a thousand-fold increase in data storage density."
The terabit demonstration employed a single "nano-tip" making indentations only 10 nanometers (millionths of a millimeter) in diameter -- each mark being 50,000 times smaller than the period at the end of this sentence. While the concept has been proven with an experimental setup using more than 1,000 tips, the research team is now building a prototype, due to be completed early next year, which deploys more than 4,000 tips working simultaneously over a 7 mm-square field. Such dimensions would enable a complete high-capacity data storage system to be packed into the smallest format now used for flash memory.

While flash memory is not expected to surpass 1-2 gigabytes of capacity in the near term, Millipede technology could pack 10 - 15 gigabytes of data into the same tiny format, without requiring more power for device operation.
The Millipede project could bring tremendous data capacity to mobile devices such as personal digital assistants, cellular phones, and multifunctional watches. In addition, we are also exploring the use of this concept in a variety of other applications, such as large-area microscopic imaging, nanoscale lithography or atomic and molecular manipulation.

Multiterabit networks

The explosive demand for bandwidth for data networking applications continues to drive photonics technology toward ever-increasing capacity in the backbone fiber network and toward flexible optical networking. Commercial Tb/s (per fiber) transmission systems have already been announced, and it can be expected that in the next several years we will begin to be limited by the 50 THz transmission bandwidth of silica optical fiber. Efficient bandwidth utilization will be one of the challenges of photonics research. Since communication will be dominated by data, we can expect the network of the future to consist of multiterabit packet switches to aggregate traffic at the edge of the network and cross-connects with wavelength granularity and tens of terabits of throughput in the core.
The infrastructure required to handle Internet traffic volume, which doubles every six months, consists of two complementary elements: fast point-to-point links and high-capacity switches and routers. Dense wavelength division multiplexing (DWDM) technology, which permits transmission of several wavelengths over the same optical medium, will enable optical point-to-point links to achieve an estimated 10 terabits per second by 2008. However, the rapid growth of Internet traffic coupled with the availability of fast optical links threatens to cause a bottleneck at the switches and routers.
Multiterabit packet-switched networks will require high-performance scheduling algorithms and architectures. With port densities and data rates growing at an unprecedented rate, future prioritized scheduling schemes will be necessary to pragmatically scale toward multiterabit capacities. Further, support of strict QoS requirements for the diverse traffic loads characterizing emerging multimedia Internet traffic will increase. Continuous improvements in VLSI and optical technologies will stimulate innovative solutions to the intricate packet-scheduling task.

Nanorobotics

Nanorobotics is concerned with:
1) Design and fabrication of nanorobots with overall dimensions at or below the micrometer range and made of nanoscopic components
2) Programming and coordination of large numbers (swarms) of such nanorobots
3) Programmable assembly of nanometer-scale components either by manipulation with macro or micro devices, or by self-assembly on programmed templates.
Nanorobots have overall dimensions comparable to those of biological cells and organelles. This opens a vast array of potential applications in environmental monitoring for microorganisms and in health care. For example, imagine artificial cells (nanorobots) that patrol the circulatory system, detect small concentrations of pathogens, and destroy them. This would amount to a programmable immune system, and might have far-reaching implications in medicine, causing a paradigm shift from treatment to prevention. Other applications, such as cell repair, might be possible if nanorobots were small enough to penetrate the cells.

Smart Dust

Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption and cost for digital circuitry, wireless communication and microelectromechanical sensors (MEMS). This has enabled very compact, autonomous and mobile nodes, each containing one or more sensors. These millimeter-scale nodes are called smart dust. Smart dust was developed at the University of California, Berkeley, by a team led by Prof. Kristofer S. J. Pister. Each device is around the size of a grain of sand and contains sensors, computing ability, bi-directional wireless communication and a power supply. As tiny as dust particles, smart dust motes can be spread throughout buildings or into the atmosphere to collect and monitor data. Thus smart dust, as small as a grain of rice, is able to sense, think, talk and listen. Smart dust devices have applications in everything from military to meteorological to medical fields.

Everything from light to vibrations can be sensed using small wireless microelectromechanical sensors (MEMS), which are broadly classified as smart dust devices. As innovative ideas in silicon and fabrication followed, smart dust devices, which combine communication, computation and sensing in an all-in-one package, have been reduced in size to that of a sand grain.

These motes collect data, compute, and then pass the information using two-way band radio between motes at distances approaching 1,000 feet. Some of the uses of these smart dust devices include identifying manufacturing defects using vibrations and tracking patient movements in hospitals.


Microbotics

Microbotics (or microrobotics) is the field of miniature robotics, in particular mobile robots with characteristic dimensions less than 1 mm. The term can also be used for robots capable of handling micrometer-size components. While the 'micro' prefix has been used subjectively to mean small, standardizing on length scales avoids confusion. Thus a nanorobot would have characteristic dimensions at or below 1 micrometer, or manipulate components in the 1 to 1000 nm size range. A microrobot would have characteristic dimensions less than 1 millimeter, a millirobot would have dimensions less than a cm, a minirobot would have dimensions less than 10 cm, and a small robot would have dimensions less than 100 cm.

Due to their small size, microbots are potentially very cheap, and could be used in large numbers to explore environments which are too small or too dangerous for people or larger robots. It is expected that microbots will be useful in applications such as looking for survivors in collapsed buildings after an earthquake, or crawling through the digestive tract. What microbots lack in brawn or computational power, they can make up for by using large numbers, as in swarms of microbots.

Microbots were born thanks to the appearance of the microcontroller in the last decade of the 20th century, and the appearance of miniature mechanical systems on silicon (MEMS), although many microbots do not use silicon for mechanical components other than sensors. One of the major challenges in developing a microrobot is to achieve motion using a very limited power supply. Microrobots can use a small, light battery source like a coin cell, or can scavenge power from the surroundings in the form of vibration or light energy.

Plastic electronics

Plastic electronics is a branch of electronics that deals with conductive polymers, or plastics. It is called 'organic' electronics because the molecules in the polymer are carbon-based, like the molecules of living things. This is as opposed to traditional electronics, which relies on inorganic conductors such as copper or silicon.

In addition to organic charge-transfer complexes, electrically conductive polymers are mainly derivatives of polyacetylene black (the 'simplest melanin'). Examples include PA (more specifically, iodine-doped trans-polyacetylene); polyaniline (PANI), when doped with a protonic acid; and poly(dioctyl-bithiophene) (PDOT). Conduction mechanisms involve resonance stabilization and delocalization of pi electrons along entire polymer backbones, as well as mobility gaps, tunneling, and phonon-assisted hopping.

Conductive polymers are lighter, more flexible, and less expensive than inorganic conductors. This makes them a desirable alternative in many applications. It also creates the possibility of new applications that would be impossible using copper or silicon, such as smart windows and electronic paper. Conductive polymers are expected to play an important role in the emerging science of molecular computing. In general, organic conductive polymers have a higher resistance and therefore conduct electricity poorly and inefficiently compared to inorganic conductors. Researchers are currently exploring ways of 'doping' organic semiconductors, like melanin, with relatively small amounts of conductive metals to boost conductivity. However, for many applications, inorganic conductors will remain the only viable option.

Ground bounce

In electronic engineering, ground bounce is a phenomenon associated with transistor switching where the gate voltage can appear to be less than the local ground potential, causing the unstable operation of a logic gate. Ground bounce is usually seen on high-density VLSI where insufficient precautions have been taken to supply a logic gate with a sufficiently low-resistance connection (or sufficiently high capacitance) to ground.

In this phenomenon, when the gate is turned on, enough current flows through the emitter-collector circuit that the silicon in the immediate vicinity of the emitter is pulled high, sometimes by several volts, thus raising the local ground, as perceived by the transistor, to a value significantly above true ground. Relative to this local ground, the gate voltage can go negative, thus shutting off the transistor. As the excess local charge dissipates, the transistor turns back on, possibly causing a repeat of the phenomenon, sometimes up to a half-dozen bounces.

Ground bounce is one of the leading causes of 'hung' or metastable gates in modern digital circuit design. This happens because the ground bounce puts the input of a flip-flop effectively at a voltage level that is neither a one nor a zero at clock time, or causes untoward effects in the clock itself. A similar phenomenon may be seen on the collector side, called VCC sag, where VCC is pulled unnaturally low.


Lenses of Liquid

Fluid droplets could replace plastic lenses in cell-phone cameras, banishing blurry photos.

We don't expect much from a cell-phone camera. For one thing, only a handful of camera phones have a lens system capable of automatically focusing on objects at different distances -- causing many fuzzy snapshots.

But there may be a solution to the problem of camera phone focus -- and one that could find uses in other devices as well. Saman Dharmatilleke, Isabel Rodriguez, and colleagues at the Institute of Materials Research and Engineering in Singapore have proposed replacing the stationary plastic lens in most camera phones with a drop of liquid, such as water, that could be auto-focused by varying the amount of pressure applied to the drop. The team's lens has no moving parts, making it rugged, and it uses only minimal electricity, so it would not drain a cell-phone battery.

Additionally, the optical properties of liquids can be better than standard lens material. 'Water is more transparent to light than glass or plastic,' Rodriguez says. 'Water cannot be scratched and, in principle, is defect free.'

The technology, which appeared online in the January 26 issue of Applied Physics Letters, is based on the fact that a drop of a liquid with a high surface tension has a natural curvature similar to that of a conventional lens. When the drop is placed in a small well, and pressure is applied to it, the curvature of the drop alters; more pressure increases the curvature, and less flattens out the drop. As the curvature changes, so does the lens's focal length, allowing a clear image to be captured from various distances. In most cameras, the auto-focus feature mechanically moves the solid lens forward or back in order to adjust focal length. But in a liquid lens camera, the droplet stays put and only its curvature changes.
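A rough way to see why changing the droplet's curvature changes the focus is the thin-lens (lensmaker's) relation, written here for an idealized plano-convex drop; the specific surface shape and the water-like refractive index are simplifying assumptions, not details from the article:

```latex
% Thin-lens approximation for a plano-convex liquid droplet: one curved
% surface of radius R, one flat surface, refractive index n (about 1.33 for water).
\[
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right),
\qquad R_1 = R,\; R_2 \to \infty
\;\Rightarrow\;
f \approx \frac{R}{\,n - 1\,}
\]
% Squeezing the drop reduces R (more curvature), so the focal length f
% shortens; relaxing the pressure flattens the drop and lengthens f.
```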

The researchers tested varying sizes of drops, from 100 microns to 3 millimeters: all responded to pressure changes within milliseconds. The bigger the lens, of course, the more light it collects, and more light produces better pictures. But when a droplet becomes too large, it is more difficult to keep stable. 'Up to two millimeters the lens stays perfectly in the aperture by surface tension,' Rodriguez says. 'You need to shake it very hard for it to move out.' She suspects that lenses one to two millimeters in diameter are ideal for most miniaturized imaging systems.

Stein Kuiper, the Philips researcher who developed the electrowetting technique for his company's liquid lenses, sees advantages in using pressure instead. 'The electrical properties of the liquid are not relevant, which allows for a wider range of liquids, and thus optical and mechanical properties of the lens.' Additionally, Kuiper says, the voltage required to change the pressure within a liquid lens system may be less than is required in a system using electrowetting. For these reasons, he says, Philips has 'built up' intellectual property rights on both types of lenses.

Tablet PC

A tablet PC is a notebook- or slate-shaped mobile computer. Its touchscreen or digitizing tablet technology allows the user to operate the computer with a stylus or digital pen instead of a keyboard or mouse.

The form factor presents an alternate method of interacting with a computer, the main intent being to increase mobility and productivity. Tablet PCs are often used in places where normal notebooks are impractical or unwieldy, or do not provide the needed functionality.

The tablet PC is a culmination of advances in miniaturization of notebook hardware and improvements in integrated digitizers as methods of input. A digitizer is typically integrated with the screen, and correlates physical touch or digital pen interaction on the screen with the virtual information portrayed on it. A tablet's digitizer is an absolute pointing device rather than a relative pointing device like a mouse or touchpad. A target can be virtually interacted with directly at the point it appears on the screen.


Light Pen

A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with the computer's CRT monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. A light pen can work with any CRT-based monitor, but not with LCD screens, projectors or other display devices.

A light pen is fairly simple to implement. The light pen works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X,Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen.
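As an illustrative calculation only (the timing numbers are assumptions loosely modeled on an NTSC-style raster, not taken from the text), the sketch below converts the time at which the pen detects the beam into an X,Y position, in the spirit of the counter/timer approach described above:

```python
# Recover the X,Y screen position of a light pen from the elapsed time between
# the start of the frame and the moment the pen saw the CRT beam.
LINE_TIME_US = 63.5      # time to scan one line, including horizontal blanking
ACTIVE_LINE_US = 52.6    # visible portion of each line
VISIBLE_COLUMNS = 640
VISIBLE_LINES = 480

def pen_position(microseconds_since_vsync):
    line = int(microseconds_since_vsync // LINE_TIME_US)        # which scan line
    offset_us = microseconds_since_vsync % LINE_TIME_US          # position within it
    column = int(offset_us / ACTIVE_LINE_US * VISIBLE_COLUMNS)
    return min(column, VISIBLE_COLUMNS - 1), min(line, VISIBLE_LINES - 1)

print(pen_position(1234.5))   # -> (x, y) of the spot the pen was pointing at
```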

The light pen became moderately popular during the early 1980s. It was notable for its use in the Fairlight CMI, and the BBC Micro. However, due to the fact that the user was required to hold his or her arm in front of the screen for long periods of time, the light pen fell out of use as a general purpose input device.


Plasma Television

Television has been around since the 19th century, and for the past 50 years it has held a pretty common place in our living rooms. Since the invention of television, engineers have been striving to produce slim and flat displays that deliver images as good as, or even better than, those of the bulky CRT. Scores of research teams all over the world have been working to achieve this, and plasma television has achieved this goal. Plasma and high-definition are just two of the latest technologies behind it to hit stores. The main contenders in the flat race are the PDP (plasma display panel) and the flat CRT, along with LCD and FED (field emission display). To get an idea of what makes a plasma display different, one needs to understand how a conventional TV set works. Conventional TVs use a CRT to create the images we see on the screen. The cathode is a heated filament, like the one in a light bulb. It is housed inside a vacuum created in a tube of thick glass... that is what makes your TV so big and heavy. The newest entrant in the field of flat-panel display systems is the plasma display. Plasma display panels don't contain cathode ray tubes, and pixels are activated differently.

Astrophotography

Astrophotography is a specialised type of photography that entails making photographs of astronomical objects in the night sky such as planets, stars, and deep sky objects such as star clusters and galaxies.

Astrophotography is used to reveal objects that are too faint to observe with the naked eye, as both film and digital cameras can accumulate and sum photons over long periods of time.

Astrophotography poses challenges that are distinct from normal photography, because most subjects are usually quite faint, and are often small in angular size. Effective astrophotography requires the use of many of the following techniques:

  • Mounting the camera at the focal point of a large telescope
  • Emulsions designed for low light sensitivity
  • Very long exposure times and/or multiple exposures (often more than 20 per image).
  • Tracking the subject to compensate for the rotation of the Earth during the exposure
  • Gas hypersensitizing of emulsions to make them more sensitive (not common anymore)
  • Use of filters to reduce background fogging due to light pollution of the night sky.

Free Space Optics

Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money. So it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics is not a new idea. It has roots that go back over 30 years--to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers.

FSO first appeared in the 1960s, for military applications. At the end of the 1980s, it appeared as a commercial option, but technological restrictions prevented it from succeeding. Short transmission reach, low capacity, severe alignment problems and vulnerability to weather interference were the major drawbacks at that time. Optical communication without wires, however, has evolved. Today, FSO systems guarantee rates of 2.5 Gb/s with carrier-class availability. Metropolitan, access and LAN networks are reaping the benefits. FSO's success can be measured by its market numbers: forecasts predict it will reach a US$ 2.5 billion market by 2006.

The use of free-space optics is particularly interesting when we consider that the majority of customers do not have access to fiber, and that fiber installation is expensive and takes a long time. Moreover, right-of-way costs and difficulties in obtaining government licenses for new fiber installation are further problems that have turned FSO into the option of choice for short-reach applications.

FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range. Air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.

An FSO system for local loop access comprises several laser terminals, each one residing at a network node to create a single, point-to-point link; an optical mesh architecture; or a star topology, which is usually point-to-multipoint. These laser terminals, or nodes, are installed on top of customers' rooftops or inside a window to complete the last-mile connection. Signals are beamed to and from hubs or central nodes throughout a city or urban area. Each node requires a Line-Of-Sight (LOS) view of the hub.

Active pixel sensor

An active pixel sensor (APS) is an image sensor consisting of an integrated circuit containing an array of pixels, each containing a photodetector as well as three or more transistors. Since it can be produced by an ordinary CMOS process, APS is emerging as an inexpensive alternative to CCDs.
Architecture
Pixel
The standard CMOS APS pixel consists of three transistors as well as a photodetector.
The photodetector is usually a photodiode, though photogate detectors are used in some devices and can offer lower noise through the use of correlated double sampling. Light causes an accumulation, or integration of charge on the 'parasitic' capacitance of the photodiode, creating a voltage change related to the incident light.
One transistor, Mrst, acts as a switch to reset the device. When this transistor is turned on, the photodiode is effectively connected to the power supply, VRST, clearing all integrated charge. Since the reset transistor is n-type, the pixel operates in soft reset.
The second transistor, Msf, acts as a buffer (specifically, a source follower), an amplifier which allows the pixel voltage to be observed without removing the accumulated charge. Its power supply, VDD, is typically tied to the power supply of the reset transistor.
The third transistor, Msel, is the row-select transistor. It is a switch that allows a single row of the pixel array to be read by the read-out electronics.
Array
A typical two-dimensional array of pixels is organized into rows and columns. Pixels in a given row share reset lines, so that a whole row is reset at a time. The row select lines of each pixel in a row are tied together as well. The outputs of each pixel in any given column are tied together. Since only one row is selected at a given time, no competition for the output line occurs. Further amplifier circuitry is typically on a column basis.
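The row/column organization can be illustrated with a toy readout loop; this is a behavioural sketch with made-up numbers, not a circuit model, and it simply mimics selecting one row at a time onto shared column lines and resetting a whole row at once:

```python
# Toy rolling readout of a 3T APS array.
import random

ROWS, COLS = 4, 6
# Hypothetical accumulated photo-charge for each pixel, in arbitrary units.
charge = [[random.uniform(0.0, 1.0) for _ in range(COLS)] for _ in range(ROWS)]

V_RST = 3.3   # reset voltage applied through Mrst
GAIN = 2.0    # charge-to-voltage factor of the 'parasitic' photodiode capacitance

def read_frame():
    frame = []
    for row in range(ROWS):
        # Row select: every pixel in this row drives its column output line
        # through its source follower (Msf); one row at a time, so no contention.
        row_samples = [V_RST - GAIN * charge[row][col] for col in range(COLS)]
        frame.append(row_samples)
        # Row reset: Mrst reconnects the photodiodes to V_RST, clearing charge.
        for col in range(COLS):
            charge[row][col] = 0.0
    return frame

for line in read_frame():
    print(["%.2f" % v for v in line])
```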

Wibro

Designed and integrated by the Korean telecom industry as an answer to the speed limitations of CDMA 1x mobile phones and to increase the data rates of broadband Internet access such as ADSL or wireless LAN, the technology uses TDD for duplexing, OFDMA for multiple access and a channel bandwidth of 8.75 MHz.

WiBro base stations provide an aggregate data rate of 30 to 50 Mbit/s and cover a radius of 1-5 km, allowing portable Internet use by devices moving at speeds of up to about 120 km/h - between wireless LANs, which support only low mobility, and mobile phone technology, which works at up to about 250 km/h. These figures were higher than the range and bandwidth actually observed when the technology was tested in connection with the APEC summit in Busan in 2005. The main advantage this technology has over the WiMAX standard is its quality of service (QoS), which gives more reliability for streaming video content and for other loss-sensitive data. WiBro is quite demanding in its requirements, from spectrum use to equipment design, whereas WiMAX leaves much of this up to the equipment provider while supplying enough information to confirm interoperability between designs.

In Korea, the government recognized the advent of this innovative technology by 2001, allocating 100 MHz of electromagnetic spectrum in the 2.3-2.4 GHz band. By the end of 2004, WiBro Phase 1 was standardized by the TTA of Korea, and in late 2005 the ITU reflected WiBro as IEEE 802.16e. In June 2006, two major Korean telecom companies, KT and SKT, began commercial operations in the country, starting with a charge of 30 US$.

Since then, many telecom operators around the world, namely TI (Italia), TVA (Brazil), Omnivision (Venezuela), PORTUS (Croatia), and Arialink (Michigan), have announced plans to launch commercial operations of the technology.


Hydrophone

A hydrophone is a sound-to-electricity transducer for use in water or other liquids, analogous to a microphone for air. Note that a hydrophone can sometimes also serve as a projector (emitter), but not all hydrophones have this capability, and some may be destroyed if used in such a manner. The first device to be called a 'hydrophone' was developed when the technology matured and used ultrasonic waves, which provided higher overall acoustic output as well as improved detection. The ultrasonic waves were produced by a mosaic of thin quartz crystals glued between two steel plates, having a resonant frequency of about 150 kHz. Contemporary hydrophones more often use barium titanate, a piezoelectric ceramic material, giving higher sensitivity than quartz. Hydrophones are an important part of the SONAR systems used to detect submarines by both surface vessels and other submarines. Large numbers of hydrophones were used in the building of various fixed-location detection networks such as SOSUS.

Wearable computers

Wearable computing facilitates a new form of human - computer interaction based on a small body-worn computer system that is always ON and always ready and accessible. In this regard, the new computational framework differs from that of hand held devices, laptop computers and Personal Digital Assistants (PDA's).

The "always ready" capability leads to a new form of synergy between human and computer, characterized by long-term adaptation through constancy of user-interface. This new technology has a lot in store for you. You can do a lot of amazing things like typing your document while jogging, shoot a video from a horse-back, or while riding your mountain-bike over the railroad ties. And quite amazingly, you can even recall scenes that ceased to exist.

The whole of a wearable computer is spread over the body, with the main unit situated in front of the user's eyes. Wearable computers find a variety of applications, for example by providing the user with mediated augmented reality and by helping people with poor eyesight. MediWear and ENGwear are two models that highlight the applications of wearable computers. However, some disadvantages do exist. With the introduction of 'under-wearable computers' by Covert Systems, you can surely look ahead at the future of wearable computers in an optimistic way.

Tunable lasers

Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases, so will their importance. Each wavelength in an optical network is separated from its neighbours by a multiple of 0.8 nanometers (sometimes referred to as 100 GHz spacing). Current commercial products can cover perhaps four of these wavelengths at a time. While not the ideal solution, this still cuts the required number of spare lasers. More advanced devices aim to cover a larger number of wavelengths and should cut the cost of spares even further.

The devices themselves are still semiconductor lasers that operate on principles similar to those of basic non-tunable versions. Most designs incorporate some form of grating like those in a distributed feedback laser. These gratings can be altered in order to change the wavelengths they reflect in the laser cavity, usually by running electric current through them and thereby altering their refractive index. The tuning range of such devices can be as high as 40 nm, which would cover any of 50 different wavelengths in a system with 0.8 nm wavelength spacing. Technologies based on vertical-cavity surface-emitting lasers (VCSELs) incorporate movable cavity ends that change the length of the cavity and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges.
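To make the numbers above concrete, the short Python sketch below shows how, near a 1550 nm centre wavelength, a 100 GHz frequency grid translates to roughly 0.8 nm of wavelength spacing, and how many grid channels a 40 nm tuning range can reach. The centre wavelength is an assumption chosen for illustration, not a value taken from any particular product.

```python
C = 299_792_458.0  # speed of light, m/s

def grid_spacing_nm(freq_spacing_hz, centre_wavelength_nm=1550.0):
    """Wavelength spacing of a fixed frequency grid:
    d_lambda ~= lambda^2 * d_f / c (valid when d_f << f)."""
    lam_m = centre_wavelength_nm * 1e-9
    return lam_m * lam_m * freq_spacing_hz / C * 1e9

def channels_covered(tuning_range_nm, spacing_nm):
    """Approximate number of grid channels a tunable laser can reach."""
    return round(tuning_range_nm / spacing_nm)

spacing = grid_spacing_nm(100e9)   # ~0.8 nm near 1550 nm
print(f"100 GHz grid spacing ~ {spacing:.2f} nm")
print(f"40 nm tuning range covers ~ {channels_covered(40.0, spacing)} channels")
```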

Neural Networks

Neural networks have the capacity to map the complex and highly non-linear relationship between zone load levels and system topologies, which is required for feeder reconfiguration in distribution systems.

This study proposes strategies for feeder reconfiguration using artificial neural networks with this mapping ability. The artificial neural networks determine the appropriate system topology that reduces power loss as the load pattern varies. The control strategy can then be obtained readily from the system topology provided by the networks.
The artificial neural networks determine the most appropriate system topology for a given load pattern on the basis of the knowledge captured in the training set. This is in contrast to the repetitive process of transferring load and estimating power loss used in conventional algorithms.

The ANNs are designed in two groups:
1) The first group estimates the proper load data for each zone.
2) The second group determines the appropriate system topology from the input load levels.

In addition, several programs, including a training-set builder, are developed for the design, training, and accuracy testing of the ANNs. This paper presents a strategy for feeder reconfiguration to reduce power loss using ANNs. The approach developed here differs fundamentally from the methods reviewed above in that load-flow solutions are not required during the search process. The training set of the ANNs consists of the optimal system topologies corresponding to various load patterns, that is, the topologies that minimize loss under the given conditions.
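As a rough sketch of the second group described above, the following Python example trains a small multilayer perceptron to map zone load levels to a pre-computed, loss-minimising topology label. The load patterns, network size, and topology indices are invented for illustration; they stand in for the training set that the training-set builder would generate from offline optimisation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: each row holds load levels (MW) for four zones,
# each label is the index of the topology that minimised losses for that
# pattern (labels would come from an offline optimisation, not shown here).
load_patterns = np.array([
    [2.0, 1.5, 0.8, 1.2],
    [0.9, 2.2, 1.1, 0.7],
    [1.4, 1.3, 2.5, 0.9],
    [0.8, 0.7, 1.0, 2.4],
    [2.1, 1.6, 0.9, 1.1],
    [1.0, 2.3, 1.2, 0.8],
])
optimal_topology = np.array([0, 1, 2, 3, 0, 1])

# Small MLP standing in for the "second group" ANN: load levels in,
# system topology index out.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(load_patterns, optimal_topology)

# For a new load pattern the trained network suggests a switching topology
# directly, without repeated load transfers and power-loss estimates.
new_pattern = np.array([[1.9, 1.4, 0.9, 1.0]])
print("suggested topology index:", net.predict(new_pattern)[0])
```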

Sun Spot

Sun SPOT (Sun Small Programmable Object Technology) is a wireless sensor network (WSN) mote developed by Sun Microsystems. The device is built upon the IEEE 802.15.4 standard. Unlike other available mote systems, the Sun SPOT is built on a Java 2 Micro Edition (J2ME) virtual machine.
Hardware
The completely assembled device should be able to fit in the palm of your hand.
Processing
  • 180 MHz 32-bit ARM920T core with 512K RAM and 4M Flash
  • 2.4 GHz IEEE 802.15.4 radio with integrated antenna
  • USB interface
Sensor Board
  • 2G/6G 3-axis accelerometer
  • Temperature sensor
  • Light sensor
  • 8 tri-color LEDs
  • 6 analog inputs
  • 2 momentary switches
  • 5 general-purpose I/O pins and 4 high-current output pins
Networking
The motes communicate using the IEEE 802.15.4 standard, including the base-station approach to sensor networking. This implementation of 802.15.4 is not ZigBee-compliant.
Software
The device's use of Java device drivers is particularly notable, since Java is known for its hardware independence. The Sun SPOT runs a small J2ME virtual machine directly on the processor, without an underlying operating system.

MEMS in space

The satellite industry could experience its biggest revolution since it joined the ranks of commerce, thanks to some of the smallest machines in existence. Researchers are performing experiments designed to convince the aerospace industry that microelectromechanical systems (MEMS) could open the door to low-cost, high-reliability, mass-produced satellites.
MEMS combine conventional semiconductor electronics with beams, gears, levers, switches, accelerometers, diaphragms, microfluidic thrusters, and heat controllers, all of them microscopic in size. "We can do a whole new array of things with MEMS that cannot be done any other way," said Henry Helvajian, a senior scientist with Aerospace Corp., a nonprofit aerospace research and development organization in El Segundo, Calif.

Microelectromechanical Systems, or MEMS, are integrated micro devices or systems combining electrical and mechanical components. They are fabricated using integrated circuit (IC) batch processing techniques and can range in size from micrometers to millimeters. These systems can sense, control and actuate on the micro scale, and function individually or in arrays to generate effects on the macro scale.

MEMS is an enabling technology, and current applications include accelerometers; pressure, chemical, and flow sensors; micro-optics; optical scanners; and fluid pumps. Generally, a satellite consists of batteries, internal state sensors, communication systems, and control units. All of these can be made with MEMS, so that size and cost can be considerably reduced. Small satellites can also be constructed by stacking wafers covered with MEMS and electronic components. These satellites are called 1-kg-class satellites, or picosats. Having higher resistance to radiation and vibration than conventional devices, they can be mass-produced, thereby reducing cost, and can be used for various space applications.

TECHNOLOGY

Although MEMS devices are extremely small, MEMS technology is not about size. Rather, MEMS is a manufacturing technology: a new way of making complex electromechanical systems using batch-fabrication techniques similar to those used for integrated circuits, and of making these electromechanical elements along with the electronics.

Material used
The material most commonly used for manufacturing MEMS is silicon. Silicon possesses excellent material properties, making it an attractive choice for many high-performance mechanical applications (e.g., its strength-to-weight ratio is higher than that of many other engineering materials, allowing very high-bandwidth mechanical devices to be realized).

Components of MEMS
Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through the use of microfabrication technology. MEMS is truly an enabling technology, allowing the development of smart products by augmenting the computational ability of microelectronics with the perception and control capabilities of microsensors and microactuators.

Autonomic Computing

"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead.

This quote made by the preeminent mathematician Alfred Whitehead holds both the lock and the key to the next era of computing. It implies a threshold moment surpassed only after humans have been able to automate increasingly complex tasks in order to achieve forward momentum.
We are at just such a threshold right now in computing. The millions of businesses, billions of humans that compose them, and trillions of devices that they will depend upon all require the services of the IT industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled IT workers to manage all of the systems. It's a problem that is not going away, but will grow exponentially, just as our dependence on technology has.
The solution is to build computer systems that regulate themselves much in the same way our autonomic nervous system regulates and protects our bodies. This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and to drive computing into a new era that may better exploit its potential to support higher order thinking and decision making.
Immediate benefits will include reduced dependence on human intervention to maintain complex systems accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.
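To make the idea of self-regulation slightly more concrete, here is a deliberately simplified Python sketch of a monitor-analyse-plan-execute style control loop. The metric, threshold, and corrective action are hypothetical placeholders; real autonomic managers operate on live system telemetry and far richer policies.

```python
import random
import time

THRESHOLD = 0.8  # hypothetical utilisation level that triggers self-healing

def monitor():
    """Stand-in for reading a live metric (CPU load, error rate, queue depth)."""
    return random.random()

def analyse(metric):
    """Decide whether the observed state violates policy."""
    return metric > THRESHOLD

def plan_and_execute():
    """Placeholder corrective action (restart a service, add capacity, ...)."""
    print("  -> policy violated: applying corrective action")

def autonomic_loop(cycles=5):
    """Minimal monitor -> analyse -> plan -> execute loop."""
    for _ in range(cycles):
        metric = monitor()
        print(f"observed metric: {metric:.2f}")
        if analyse(metric):
            plan_and_execute()
        time.sleep(0.1)  # pacing; a real manager would run continuously

autonomic_loop()
```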

Short-term IT related benefits

  • Simplified user experience through a more responsive, real-time system.
  • Cost-savings - scale to use.
  • Scaled power, storage and costs that optimize usage across both hardware and software.
  • Full use of idle processing power, including home PCs, through networked systems.
  • Natural language queries allow deeper and more accurate returns.
  • Seamless access to multiple file types. Open standards will allow users to pull data from all potential sources by re-formatting on the fly.
  • Stability, high availability, and high security, with fewer system or network errors due to self-healing.

Long-term, Higher Order Benefits

  • Realize the vision of enablement by shifting available resources to higher-order business.
  • Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself. Constructing autonomic federated systems.
  • Achieving end-to-end service level management.
  • Collaboration and global problem solving. Distributed computing allows for more immediate sharing of information and processing power to use complex mathematics to solve problems.
  • Massive simulations - weather, medical, and complex calculations like protein folding - that require processors to run 24/7 for as long as a year at a time.

Quantum dot lasers

The infrastructure of the Information Age has to date relied upon advances in microelectronics to produce integrated circuits that continually become smaller, better, and less expensive. The emergence of photonics, where light rather than electricity is manipulated, is poised to further advance the Information Age. Central to the photonic revolution is the development of miniature light sources such as quantum dots (QDs).

Today, quantum-dot manufacturing has been established to serve new datacom and telecom markets. Recent progress in microcavity physics, new materials, and fabrication technologies has enabled a new generation of high-performance QDs. This presentation will review commercial QDs and their applications, as well as recent research, including new device structures such as composite resonators and photonic crystals. Semiconductor lasers are key components in a host of widely used technological products, including compact disc players and laser printers, and they will play critical roles in optical communication schemes.

Laser operation depends on the creation of non-equilibrium populations of electrons and holes and on the coupling of these carriers to an optical field, which stimulates radiative emission. Other benefits of quantum-dot active layers include a further reduction in threshold current and an increase in differential gain, that is, more efficient laser operation.
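The role of the threshold current and differential efficiency mentioned above can be illustrated with the standard above-threshold light-current relation P ≈ η_d (hc/λq)(I − I_th). The sketch below uses made-up example values; they are not measurements of any particular quantum-dot device.

```python
# Simple above-threshold light-current model for a diode laser:
#   P = eta_d * (h*c / (lambda*q)) * (I - I_th)   for I > I_th
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # electron charge, C

def output_power_mw(current_ma, threshold_ma, slope_eff, wavelength_nm):
    """Optical output power in mW above threshold; zero below it."""
    if current_ma <= threshold_ma:
        return 0.0
    photon_energy_ev = H * C / (wavelength_nm * 1e-9) / Q   # ~0.95 eV at 1300 nm
    # volts * mA = mW, so the result comes out directly in milliwatts
    return slope_eff * photon_energy_ev * (current_ma - threshold_ma)

# Hypothetical example: a low-threshold laser emitting at 1300 nm.
for current in (2, 5, 10, 20):  # drive current, mA
    p = output_power_mw(current, threshold_ma=1.5, slope_eff=0.4, wavelength_nm=1300)
    print(f"I = {current:2d} mA -> P = {p:.2f} mW")
```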

Valvetronic

The Valvetronic system is the first variable valve timing system to offer continuously variable timing (on both intake and exhaust camshafts) along with continuously variable intake valve lift, from approximately 0 to 10 mm, on the intake camshaft. Valvetronic-equipped engines are unique in that they rely on the amount of valve lift, rather than a butterfly valve in the intake tract, to throttle the engine. In other words, in normal driving the 'gas pedal' controls the Valvetronic hardware rather than the throttle plate.

First introduced by BMW on the 316ti compact in 2001, Valvetronic has since been added to many of BMW's engines. The Valvetronic system is coupled with BMW's proven double-VANOS to further enhance both power and efficiency across the engine speed range. Valvetronic is not coupled with BMW's N53 and N54 'High Precision Injection' (gasoline direct injection) engines due to lack of room in the cylinder head.

Cylinder heads with Valvetronic use an extra set of rocker arms, called intermediate arms (lift scalers), positioned between the valve stem and the camshaft. These intermediate arms can pivot on a central point by means of an extra, electronically actuated camshaft. This movement alone, without any movement of the intake camshaft, can open or close the intake valves.

Because the intake valves can now move from fully closed to fully open positions, and anywhere in between, the primary means of engine load control is transferred from the throttle plate to the intake valvetrain. By eliminating the throttle plate's 'bottleneck' in the intake tract, pumping losses are reduced, and fuel economy and responsiveness are improved.