Quantum dot lasers

The infrastructure of the Information Age has to date relied upon advances in microelectronics to produce integrated circuits that continually become smaller, better, and less expensive. The emergence of photonics, where light rather than electricity is manipulated, is poised to further advance the Information Age. Central to the photonic revolution is the development of miniature light sources such as quantum dots (QDs).

Today, quantum dot manufacturing has been established to serve new datacom and telecom markets. Recent progress in microcavity physics, new materials, and fabrication technologies has enabled a new generation of high-performance QDs. This presentation will review commercial QDs and their applications, as well as discuss recent research, including new device structures such as composite resonators and photonic crystals.

Semiconductor lasers are key components in a host of widely used technological products, including compact disk players and laser printers, and they will play critical roles in optical communication schemes.

The basis of laser operation depends on the creation of non-equilibrium populations of electrons and holes, and the coupling of electrons and holes to an optical field, which stimulates radiative emission. Other benefits of quantum dot active layers include further reduction in threshold currents and an increase in differential gain, that is, more efficient laser operation.

Semiconductor Devices

Billions of chips in today's computers, automobiles, cell phones, media cards, and those clever key-chain memories are powerless when idle, yet they dispense immense amounts of data and instructions at the flick of a switch. They are flash memory chips, a type of electrically erasable and programmable read-only memory, in contrast to volatile DRAM (dynamic random access memory). Non-volatility, flash's defining property, is crucial for electronic systems like cell phones, which must hold the instructions and data needed to send and receive calls and to store phone numbers.

Electronic products of all types, from microwave ovens to industrial machinery, make use of flash memory. Its programmability is the main feature that lets users add addresses, calendar entries, and memos to personal digital assistants, and erase and reuse the media cards that store pictures taken with a digital camera. But flash technology is now being challenged by new memory technologies intent on proving their dominance, random-access memories that have little in common with flash or with one another.

Quantum cryptography

Quantum cryptography is an effort to allow two users of a common communication channel to create a body of shared and secret information. This information, which generally takes the form of a random string of bits, can then be used as a conventional secret key for secure communication. It is useful to assume that the communicating parties initially share a small amount of secret information, which is used up and then renewed in the exchange process, but even without this assumption exchanges are possible.


The advantage of quantum cryptography over traditional key exchange methods is that the exchange of information can be shown to be secure in a very strong sense, without making assumptions about the intractability of certain mathematical problems. Even when assuming hypothetical eavesdroppers with unlimited computing power, the laws of physics guarantee (probabilistically) that the secret key exchange will be secure, given a few other assumptions.
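As a rough illustration of how such a shared key can be distilled, the sketch below simulates only the sifting step of the BB84 protocol in Python; the function name and qubit count are illustrative choices, and eavesdropper detection and privacy amplification are not modelled.

    # Minimal BB84 key-sifting sketch (illustration only).
    import random

    def bb84_sift(n_qubits=32):
        # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
        alice_bits  = [random.randint(0, 1) for _ in range(n_qubits)]
        alice_bases = [random.randint(0, 1) for _ in range(n_qubits)]
        # Bob measures each qubit in a randomly chosen basis.
        bob_bases = [random.randint(0, 1) for _ in range(n_qubits)]
        # When bases match, Bob's result equals Alice's bit; otherwise it is random.
        bob_bits = [a if ab == bb else random.randint(0, 1)
                    for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        # Public discussion: keep only positions where the bases agreed.
        return [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]

    print("sifted key:", bb84_sift())

On average about half of the transmitted qubits survive sifting; a real exchange would then sacrifice some of these bits to estimate the error rate and detect eavesdropping.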

Optical networking

Here we explain SONET (Synchronous Optical Network), a standard that has been greeted with unparalleled enthusiasm throughout the world, how it came into existence, and how it differs from earlier transmission hierarchies. What does synchronous mean? "Bits from one telephone call are always in the same location inside a digital transmission frame."

The reader is assumed to be comfortable with the basic concepts of a public telecommunications network, with its separate functions of transmission and switching, and to be aware of the context for the growth of broadband traffic.

In the early 1970s digital transmission systems began to appear, utilizing a method known as Pulse Code Modulation (PCM), first proposed by STC in 1937. As demand for voice telephony increased, and levels of traffic in the network grew ever higher, it became clear that the standard 2 Mbit/s signal was not sufficient to cope with the traffic loads occurring in the trunk network. As the need arose, further levels of multiplexing were added to the standard at much higher speeds, and thus SONET came into existence. For the first time in telecommunications history there will be a worldwide, uniform, and seamless transmission standard for service delivery. SONET provides the capability to send data at multi-gigabit rates over today's single-mode fiber-optic links.
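As a small illustration of those multi-gigabit rates, the sketch below (Python; the function name is chosen here for illustration) computes SONET OC-n line rates from the 51.84 Mbit/s STS-1/OC-1 base rate:

    # SONET line rates are integer multiples of the 51.84 Mbit/s base rate.
    BASE_MBITS = 51.84  # STS-1 / OC-1

    def oc_rate(n):
        """Line rate of OC-n in Mbit/s."""
        return n * BASE_MBITS

    for n in (1, 3, 12, 48, 192):
        print(f"OC-{n}: {oc_rate(n):.2f} Mbit/s")
    # OC-48 is about 2.49 Gbit/s and OC-192 about 9.95 Gbit/s.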

As end-users become ever more dependent on effective communications, there has been an explosion in the demand for sophisticated telecom services. Services such as videoconferencing, remote database access, and multimedia file transfer require a flexible network with the availability of virtually unlimited bandwidth. The complexity of the network means that network operators are unable to meet this demand. At present SONET is being implemented for long-haul traffic, but there is no reason it cannot be used for short distances.

Open RAN

The vision of the OpenRAN architecture is to design a radio access network architecture with the following characteristics:

Open, flexible, distributed, and scalable.

Such an architecture would be open because it defines open, standardized interfaces at key points that in past architectures were closed and proprietary. It would be flexible because it admits of several implementations, depending on the wired network resources available in the deployment situation. It would be distributed because the monolithic network elements of past architectures are broken down into their respective functional entities, and the functional entities are grouped into network elements that can be realized as a distributed system.

The architecture would define an interface with the core network that allows the core network to be designed independently from the RAN, preserving access network independence in the core. Finally, the architecture would not require changes in radio link protocols; in particular, a radio link protocol based on IP would not be necessary.

This document presents the first steps in developing the OpenRAN vision. In its first phase, the subject of this document, the OpenRAN architecture is purely concerned with distributing RAN functions to facilitate achieving open interfaces and flexible deployment. The transport substrate for implementing the architecture is assumed to be IP, but no attempt is made to optimize the use of IP protocols, nor are specific interfaces designated as open. The architecture could just as well be implemented on top of existing functional architectures that maintain a strict isolation between the transport layer and radio network layer, by splitting an existing radio network layer into control and bearer parts.

In addition, interoperation with existing core and RAN networks is supported via interworking functions. Chapters 7 through 11 in this report are exclusively concerned with this first phase of the architecture, and it is possible that the architecture may change as the actual implementation of the OpenRAN is considered and For Further Study items are resolved.

Class-D Amplifiers

Class D amplifiers present a revolutionary solution that helps eliminate the loss and distortion caused by converting digital signals to analog before amplifying them and sending them to the speakers. This still-maturing technology could prove instrumental in improving and redefining the essence of sound, taking it to a different realm.

These amplifiers do not require D-A conversion and hence reduce the cost of developing state-of-the-art output stages. The digital output from sources such as CDs, DVDs, and computers can now be sent directly for amplification without the need for any conversion.
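One common way a Class D output stage works is by turning the signal into a high-frequency pulse-width-modulated switching waveform. The sketch below is a minimal illustration of that idea; the carrier frequency, sample rate, and simple comparator approach are illustrative assumptions, not a description of any particular product (real designs often use sigma-delta or more elaborate modulators).

    # Toy PWM comparator: the signal is compared against a triangle carrier,
    # producing a two-level (fully on / fully off) drive for the output switches.
    import math

    def triangle(t, freq):
        """Triangle carrier in the range [-1, 1]."""
        phase = (t * freq) % 1.0
        return 4 * abs(phase - 0.5) - 1

    def class_d_pwm(samples, carrier_freq, sample_rate):
        out = []
        for i, x in enumerate(samples):
            t = i / sample_rate
            out.append(1 if x > triangle(t, carrier_freq) else 0)  # switch state
        return out

    # 1 kHz tone sampled at 192 kHz with a 48 kHz carrier (hypothetical values).
    sr = 192_000
    tone = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(200)]
    print(class_d_pwm(tone, carrier_freq=48_000, sample_rate=sr)[:40])

Because the output transistors are either fully on or fully off, very little power is dissipated in them, which is where the high efficiency discussed below comes from.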

Another important feature of these amplifiers is that they offer a typical efficiency of about 90%, compared with the 65-70% of conventional amplifiers. Less dissipation means smaller heat sinks and less wasted energy, which makes Class D amplifiers all the more apt for miniature and portable devices.

For years, Class D amplifiers were used only where efficiency was the key requirement; developments in the technology have now made their entry possible into other, less hi-fi domains, showing up in MP3 players, portable CD players, laptop computers, cell phones, and even personal digital assistants.

WINS

Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for the borders of a country. Using this concept we can easily identify a stranger or terrorist crossing the border.

The border area is divided into a number of nodes. Each node is in contact with the others and with the main node. The noise produced by the footsteps of a stranger is picked up by a sensor. The sensed signal is converted into a power spectral density and compared with a reference value chosen in advance. The result of the comparison is processed by a microprocessor, which sends the appropriate signal to the main node; the stranger is thus identified at the main node.
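A minimal sketch of that node-level detection step might look like the following; the threshold, frequency band, and sampling rate are illustrative assumptions, not values from the source.

    # Estimate the power spectral density of the sensed signal and compare
    # the low-frequency band power (where footstep energy concentrates)
    # against a reference threshold.
    import numpy as np

    def detect_intruder(samples, sample_rate, threshold):
        spectrum = np.abs(np.fft.rfft(samples)) ** 2 / len(samples)   # periodogram
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        band_power = spectrum[freqs < 100.0].sum()                    # below 100 Hz
        return band_power > threshold   # True -> report the event to the main node

    rng = np.random.default_rng(0)
    quiet = rng.normal(0, 0.01, 1024)       # background noise only
    print(detect_intruder(quiet, sample_rate=1000, threshold=0.5))    # False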

A series of interface, signal processing, and communication systems has been implemented in micropower CMOS circuits. A micropower spectrum analyzer has been developed to enable low-power operation of the entire WINS system.

Wikipedia

Chances are that you have heard of wikis by now -- they seem to be popping up everywhere. The most famous wiki is Wikipedia, a massive online encyclopedia. Wikipedia has become so large (more than a million articles) that you run across it all the time in Google. It is so popular that it is now one of the Top 100 web sites in the world!

Despite their popularity, Wikis seem very strange to many people. Where does all the information come from? Is it reliable? What stops people from vandalizing a wiki until it dies? These questions and many others will be answered as we dive into the world of wikis...
Wikis are growing because, at their core, they are about as simple as can be. That simplicity means that people find them easy to use, just like e-mail and blogs. Like e-mail and blogs, wikis also perform a very useful service in a simple way. A wiki allows a group of people to enter and communally edit bits of text. These bits of text can be viewed and edited by anyone who visits the wiki.

That's it. What it means is that, when you come to a wiki, you are able to read what the wiki's community has written. By clicking an "edit" button on an article, you are able to edit the article's text. You can add or change anything you like in the article you are reading.

Packet Telephony

Packet telephony consists of telephony and data tightly coupled on packet-based switched multimedia networks. Packet telephony simply refers to the use of personal computers and a packet data network to carry a voice conversation. With a packet-switched fabric in both the LAN and the WAN, the vision is to drive voice and data over a single multimedia, packet-based network, allowing users to engage in media-rich communication in a natural and straightforward manner. The packet-based fabric is capable of supporting future applications such as video streaming and video conferencing. The transition to this new paradigm will take years to complete; however, as the technology matures and new applications proliferate, packet telephony will appear in broader markets. There is a major distinction between Internet telephony and VoIP: VoIP, which usually takes place in the managed networks of large corporations, is allowed in India, whereas Internet telephony, which occurs over the public network, is not.

Multicasting

Imagine a scenario where a professor wants to conduct a real-time class with 50 students participating through the network. If the multimedia application for the conferencing employs unicasting, the professor's computer repeatedly sends out 50 audio streams to the students' computers. Unicasting wastes bandwidth because it sends 50 duplicate copies over the network, and causes a significant delay before the last student hears the professor. The audio stream could also flood every corner of the network and possibly bring the network down.

Multicasting comes to the rescue by allowing the multicast host to send out only one copy of the information, and only those hosts that are part of that group receive it. In the class example, the professor's computer sends only one audio stream to the network, and only the targeted 50 students receive the stream. The information utilizes the minimum required network bandwidth and arrives at every student's computer without any noticeable delay.
This application is an example of the practical use of multicast in everyday life. The same is true for other applications like audio/video conferencing, multiplayer online gaming, online/offline video distribution, news, and so on. Even if there are only three receivers of a multimedia application, the bandwidth utilization between routers can be reduced to roughly one-third if we use multicasting.
The concept of multicast was introduced by Steve Deering in the 1980s. Adding multicast to the internet does not alter the basic model of the network. Any host can send multicast data, but with a new type of address called a host group address. IPv4 has reserved class D addresses to support multicasting. A user can dynamically subscribe to a group to receive multicast traffic by informing a local router that it is interested in a particular multicast group. However, it is not necessary to belong to a group to send multicast. The delivery of multicast traffic in the internet is accomplished by creating a multicast tree, with all of its leaf nodes as recipients.
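As a concrete illustration of subscribing to a multicast group, the sketch below uses a standard IPv4 socket option to join a class D group address and wait for datagrams; the group address and port are arbitrary examples.

    # Join an IPv4 multicast group and receive one datagram sent to it.
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5007   # example administratively scoped group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Tell the local router (via IGMP) that we want traffic for this group.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(1024)   # blocks until a multicast packet arrives
    print(sender, data)

A sender needs no group membership at all; it simply addresses its UDP datagrams to the same group address and port.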

Embedded systems

An embedded system is a combination of computer hardware, software and, perhaps, additional mechanical parts, designed to perform a specific function.

Embedded systems are usually programmed in a high-level language that is compiled (and/or assembled) into executable ("machine") code. The code is loaded into read-only memory (ROM) and called "firmware", "microcode" or a "microkernel".

The microprocessor is typically 8-bit or 16-bit; the bit size refers to the width of the data the processor handles and hence the amount of memory it can address. There is usually no operating system and perhaps 0.5 KB of RAM. The functions implemented normally have no priorities. As the need for features increases, and as the need to establish priorities arises, it becomes more important to have some sort of decision-making mechanism as part of the embedded system. The most advanced systems actually have a tiny, streamlined OS running the show, executing on a 32-bit or 64-bit processor. This is called a real-time operating system (RTOS).
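The kind of decision-making mechanism referred to above can be as simple as a cooperative loop that always runs the highest-priority task that is ready. The sketch below is illustrative only, not a real RTOS, and is written in Python rather than the C typically used on such hardware.

    # Tiny cooperative, priority-based scheduler sketch.
    def read_sensor():  print("sample sensor")     # placeholder task bodies
    def blink_led():    print("toggle LED")
    def log_data():     print("write log record")

    # (priority, ready-check, task) tuples; lower number = higher priority.
    tasks = [
        (0, lambda: True,  read_sensor),
        (1, lambda: True,  blink_led),
        (2, lambda: False, log_data),      # not ready this cycle
    ]

    def scheduler_tick():
        # Each tick, the highest-priority ready task runs to completion.
        ready = [t for t in tasks if t[1]()]
        if ready:
            _, _, task = min(ready, key=lambda t: t[0])
            task()

    for _ in range(3):
        scheduler_tick()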

Bluetooth

Bluetooth wireless technology is a cable replacement technology that provides wireless communication between portable devices, desktop devices, and peripherals. It is used to swap data and synchronize files between devices without having to connect them with cables. The wireless link has a range of about 10 m, which gives the user mobility. There is no need for the user to open an application or press a button to initiate a process. Bluetooth wireless technology is always on and runs in the background. Bluetooth devices scan for other Bluetooth devices, and when these devices are in range they start to exchange messages so they can become aware of each other's capabilities. These devices do not require a line of sight to transmit data to each other. Within a few years about 80 percent of mobile phones are expected to carry a Bluetooth chip. The Bluetooth transceiver operates in the globally available unlicensed ISM radio band at 2.4 GHz, which does not require an operator license from a regulatory agency. This means that Bluetooth technology can be used virtually anywhere in the world. Bluetooth is an economical wireless solution that is convenient, reliable, easy to use, and operates over a longer distance.
Initial development was started in 1994 by Ericsson. Bluetooth now has a Special Interest Group (SIG) with 1800 member companies worldwide. Bluetooth technology enables voice and data transmission over a short-range radio link. There is a wide range of devices that can be connected easily and quickly without the need for cables. Soon people the world over will enjoy the convenience, speed, and security of instant wireless connection. Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops, and a whole range of other electronic devices in the next few years. This is mainly because of the elimination of cables, which makes the work environment look and feel comfortable and inviting.

Robocode

Robocode is an Open Source educational game by Mathew Nelson (originally provided by IBM). It is designed to help people learn to program in Java and enjoy the experience. It is very easy to start - a simple robot can be written in just a few minutes - but perfecting a bot can take months or more. Competitors write software that controls a miniature tank that fights other identically-built (but differently programmed) tanks in a playing field. Robots move, shoot at each other, scan for each other, and hit the walls (or other robots) if they aren't careful. Though the idea of this 'game' may seem simple, the actual strategy needed to win is not. Good robots have hundreds of lines in their code dedicated to strategy. Some of the more successful robots use techniques such as statistical analysis and attempts at neural networks in their designs. One can test a robot against many other competitors by downloading their bytecode, so design competition is fierce. Robocode provides a security sandbox (bots are restricted in what they can do on the machine they run on), which makes this a safe thing to do.

Choreography

Choreography, in a Web services context, refers to specifications for how messages should flow among diverse, interconnected components and applications to ensure optimum interoperability. The term is borrowed from the dance world, in which choreography directs the movement and interactions of dancers.

Web services choreography can be categorized as abstract, portable or concrete:

  • In abstract choreography, exchanged messages are defined only according to the data type and transmission sequence.
  • Portable choreography defines the data type, transmission sequence, structure, control methods and technical parameters.
  • Concrete choreography is similar to portable choreography but includes, in addition, the source and destination URLs as well as security information such as digital certificates.

BitTorrent

BitTorrent is the name of a peer-to-peer (P2P) file distribution client application and also of its related file sharing protocol, both of which were created by programmer Bram Cohen. BitTorrent is designed to distribute large amounts of data widely without incurring the corresponding consumption in costly server and bandwidth resources. The original BitTorrent application was written in Python.
BitTorrent clients are programs which implement the BitTorrent protocol. Each BitTorrent client is capable of preparing, requesting, and transmitting any type of computer file over a network using the BitTorrent protocol. This includes text, audio, video, encrypted content, and other types of digital information.
Creating and publishing torrents
To share a file or group of files through BitTorrent, clients first create a "torrent". Each torrent contains meta-information about the file to be shared, and about the host computer that provides the initial copy of the file. The exact information contained in the torrent file depends on the version of the BitTorrent protocol; however, a torrent file always has the extension .torrent. Torrent files contain an "announce" section, which specifies the URL of the tracker, and an "info" section, which contains a suggested name for the file, the fragment (piece) length, the file length, and hash information for each fragment. A single torrent can contain information on one or more files. Clients who have finished downloading the file may also choose to act as seeders, providing a complete copy of the file. After the torrent file is created, a link to it is placed on a website, and it is registered with a tracker. BitTorrent trackers maintain lists of the clients currently downloading the file. The computer with the initial copy of the file is referred to as the initial seeder.
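One concrete part of torrent creation is hashing the file's fragments so that downloaders can later verify them. The sketch below computes the concatenated SHA-1 digests of a file's fragments; the file name and fragment size are illustrative assumptions, and a real client would also bencode the full meta-information.

    # Split a file into fixed-size fragments and hash each with SHA-1.
    import hashlib

    def piece_hashes(path, piece_length=256 * 1024):
        digests = b""
        with open(path, "rb") as f:
            while True:
                piece = f.read(piece_length)
                if not piece:
                    break
                digests += hashlib.sha1(piece).digest()   # 20 bytes per fragment
        return digests

    # hashes = piece_hashes("example.iso")    # hypothetical file
    # print(len(hashes) // 20, "fragments")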
Downloading torrents and sharing files
Using a web browser, users navigate to the site listing the torrent, download it, and open it in a BitTorrent client. After opening the torrent, the BitTorrent client connects to the tracker, which provides it with a list of clients currently downloading the file or files. A group of peers connected to one another to share a particular torrent is generally referred to as a swarm.
Initially, there may be no other peers in the swarm, in which case the client connects directly to the initial seeder and begins to request fragments. The BitTorrent protocol breaks down files into a number of small fragments, typically a quarter of a megabyte (256 KB) in size. Larger file sizes typically have larger fragments. For example, a 4.37 GB file will often have a fragment size of up to 4 MB (4096 KB). File fragments are checked as they are received using a hash algorithm to ensure that they are error free.
As peers enter the swarm, they begin sharing fragments with one another. Because clients share fragments with one another, instead of directly from the seeder, BitTorrent networks easily scale to large numbers of clients. The protocol incorporates mechanisms so that clients choose peers with the best network connections for the fragment they are requesting. One major innovation that adds to the scalability of BitTorrent is the concept of “rare fragments.” The BitTorrent protocol specifies that clients should always request fragments that are the rarest, meaning they are held by the fewest number of clients in the swarm. By requesting the rarest fragments, the BitTorrent protocol ensures that one machine will not be swamped with requests, eliminating potential network bottlenecks.
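A minimal sketch of the rarest-first selection rule, with made-up peer data for illustration, might look like this:

    # Pick the fragment held by the fewest peers among those we still need.
    from collections import Counter

    def rarest_first(peer_bitfields, have):
        """peer_bitfields: dict peer -> set of fragment indices that peer holds.
        have: set of fragment indices already downloaded."""
        counts = Counter()
        for fragments in peer_bitfields.values():
            counts.update(fragments)
        wanted = [(count, idx) for idx, count in counts.items() if idx not in have]
        return min(wanted)[1] if wanted else None

    peers = {"A": {0, 1, 2}, "B": {1, 2}, "C": {2, 3}}
    print(rarest_first(peers, have={2}))   # -> 0 (fragments 0 and 3 are rarest)

Because every downloader prefers the fragments that are scarcest in the swarm, copies of each fragment spread quickly and no single peer becomes the only source.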

SIP

Session Initiation Protocol (SIP) is a protocol developed by the IETF MMUSIC Working Group and a proposed standard for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality.
SIP clients traditionally use TCP and UDP port 5060 to connect to SIP servers and other SIP endpoints. SIP is primarily used in setting up and tearing down voice or video calls. However, it can be used in any application where session initiation is a requirement. These include event subscription and notification, terminal mobility, and so on. There are a large number of SIP-related RFCs that define behavior for such applications. All voice/video communications are done over RTP.
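For a sense of what SIP signalling looks like on the wire, the sketch below builds a minimal INVITE request as text; the addresses, tags, and Call-ID are invented examples, and a real user agent would also carry an SDP body to negotiate the RTP media.

    # Construct a minimal SIP INVITE (header values are made-up examples).
    invite = "\r\n".join([
        "INVITE sip:bob@example.com SIP/2.0",
        "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        "From: Alice <sip:alice@example.org>;tag=1928301774",
        "To: Bob <sip:bob@example.com>",
        "Call-ID: a84b4c76e66710@client.example.org",
        "CSeq: 1 INVITE",
        "Contact: <sip:alice@client.example.org:5060>",
        "Content-Length: 0",
        "", ""                      # blank line ends the header section
    ])
    print(invite)
    # A UDP client could send these bytes to port 5060 of a SIP proxy or endpoint.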
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based communications that can support a superset of the call processing functions and features present in the public switched telephone network (PSTN).
SIP-enabled telephony networks can also implement many of the more advanced call processing features present in Signalling System 7 (SS7), though the two protocols themselves are very different. SS7 is a highly centralized protocol, characterized by a highly complex central network architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer protocol.
SIP network elements
Hardware endpoints, devices with the look, feel, and shape of a traditional telephone, but that use SIP and RTP for communication, are commercially available from several vendors. Some of these can use Electronic Numbering (ENUM) or DUNDi to translate existing phone numbers to SIP addresses using DNS, so calls to other SIP users can bypass the telephone network, even though your service provider might normally act as a gateway to the PSTN network for traditional phone numbers (and charge you for it).
SIP makes use of elements called proxy servers to help route requests to the user's current location, authenticate and authorize users for services, implement provider call-routing policies, and provide features to users.
SIP also provides a registration function that allows users to upload their current locations for use by proxy servers.
Since registrations play an important role in SIP, a User Agent Server that handles a REGISTER is given the special name registrar.
It is an important concept that the distinction between types of SIP servers is logical, not physical.

Contiki

Contiki is a small open source, yet fully featured, operating system developed for use on a number of smallish systems ranging from 8-bit computers to embedded microcontrollers, including sensor network motes. The name Contiki comes from Thor Heyerdahl's famous Kon-Tiki raft.
Despite providing multitasking and a built-in TCP/IP stack, Contiki only requires a few kilobytes of code and a few hundred bytes of RAM. A fully fledged system complete with a graphical user interface (GUI) will require about 30 kilobytes of code memory.
The basic kernel and most of the core functions are developed by Adam Dunkels.
Features
A full installation of Contiki includes the following features:
Multitasking kernel
Optional pre-emptive multitasking (on a per-application basis)
Protothreads
TCP/IP networking
Windowing system and GUI
Networked remote display using Virtual Network Computing (VNC)
Web browser (claimed to be the world's smallest)
Personal webserver
Simple telnet client
Screensaver
More applications are developed constantly. Known planned developments include:
an email client
an IRC client

Fusebox

Fusebox is a popular web development framework for ColdFusion and other web development languages. Fusebox provides web application developers with a standardised, structured way of developing their applications using a relatively straightforward and easy-to-learn set of core files and encouraged conventions. In addition to the framework itself, Fusebox has become closely associated with a web application development methodology developed by its proponents known as 'FLiP'. (Many people refer to Fusebox as a 'methodology', but in fact, as stated, it is a development framework; FLiP, however, is a methodology.) Many frameworks provide comparable advantages; however, Fusebox (probably on account of both its relatively long history and the sizeable and active community that supports it) seems to be the most popular one for ColdFusion. The framework itself has also been ported to and used in ASP, JSP, and PHP.

The concepts behind Fusebox are based on the household idiom of an electrical fusebox that controls a number of circuits, each one with its own fuse. In a Fusebox web application, all requests are routed through a single point (usually index.cfm for ColdFusion) and processed by the Fusebox core files. The application is divided into a number of circuits (usually in sub-directories) which are intended to contain related functionality. Each circuit in the application is further divided into small files called fuses that should perform simple tasks. URLs within a Fusebox web application are usually of the form index.cfm?fuseaction=cname.fname, where 'cname' is the name of a circuit and 'fname' is an XML-defined 'method' within that circuit known as a fuseaction.

WDDX

WDDX (Web Distributed Data eXchange) is a programming-language-neutral data interchange mechanism used to pass data between different environments and different computers. It supports simple data types such as number, string, boolean, etc., and complex aggregates of these in forms such as structures and arrays. There are WDDX interfaces for a wide variety of languages. The data is encoded into XML using an XML 1.0 DTD, producing a platform-independent but relatively bulky representation. The XML-encoded data can then be sent to another computer using HTTP, FTP, or another transmission mechanism. The receiving computer must have WDDX-aware software to translate the encoded data into the receiver's native data representation. The WDDX protocol was developed in connection with the ColdFusion server environment. Python, PHP, Java, C++, .NET, Lisp, and Haskell, among other platforms, support it well.

Symfony

Symfony is a web application framework for PHP5 projects.

It aims to speed up the creation and maintenance of web applications, and to replace the repetitive coding tasks by power, control and pleasure.

The very small number of prerequisites makes symfony easy to install on any configuration; you just need Unix or Windows with a web server and PHP 5 installed. It is compatible with almost every database system. In addition, it has very small overhead, so the benefits of the framework don't come at the cost of increased hosting costs.

Using symfony is so natural and easy for people used to PHP and the design patterns of Internet applications that the learning curve is reduced to less than a day. The clean design and code readability will keep your delays short. Developers can apply agile development principles (such as DRY, KISS or the XP philosophy) and focus on applicative logic without losing time to write endless XML configuration files.

Symfony is aimed at building robust applications in an enterprise context. This means that you have full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match your enterprise's development guidelines, symfony is bundled with additional tools helping you to test, debug and document your project.

Last but not least, by choosing symfony you get the benefits of an active open-source community. It is entirely free and published under the MIT license.

Symfony is sponsored by Sensio, a French web agency.

Hybrid ARQ

Hybrid ARQ (HARQ) is a variation of the ARQ error control method, which gives better performance than ordinary ARQ, particularly over wireless channels, at the cost of increased implementation complexity.

The simplest version of HARQ, Type I HARQ, simply combines FEC and ARQ by encoding the data block plus error-detection information (such as CRC) with an error-correction code (such as Reed-Solomon code or Turbo code) prior to transmission. When the coded data block is received, the receiver first decodes the error-correction code. If the channel quality is good enough, all transmission errors should be correctable, and the receiver can obtain the correct data block. If the channel quality is bad and not all transmission errors can be corrected, the receiver will detect this situation using the error-detection code, then the received coded data block is discarded and a retransmission is requested by the receiver, similar to ARQ.

In practice, the incorrectly received coded data blocks are often stored at the receiver rather than discarded, and when the retransmitted coded data block is received, the information from both coded data blocks is combined (Chase combining) before being fed to the decoder of the error-correction code, which can increase the probability of successful decoding. To further improve performance, Type II/III HARQ, or incremental redundancy HARQ, has also been proposed. In this scheme, different (re)transmissions are coded differently rather than simply repeating the same coded bits as in Chase combining, which gives better performance since coding is effectively done across retransmissions. The difference between Type III HARQ and Type II HARQ is that the retransmitted packets in Type III HARQ can be decoded by themselves.
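The benefit of Chase combining can be shown with a toy simulation: averaging the soft received values of a transmission and its retransmission before the hard decision leaves fewer bit errors than deciding from a single copy. The noise level and block size below are illustrative assumptions, and no error-correction code is modelled.

    # Toy Chase-combining demonstration over a BPSK/AWGN channel.
    import random

    def awgn(bits, noise):
        # BPSK map 0 -> +1, 1 -> -1, then add Gaussian noise.
        return [(1.0 if b == 0 else -1.0) + random.gauss(0, noise) for b in bits]

    def hard_decision(soft):
        return [0 if s >= 0 else 1 for s in soft]

    random.seed(1)
    data = [random.randint(0, 1) for _ in range(1000)]
    rx1 = awgn(data, noise=1.0)            # first transmission
    rx2 = awgn(data, noise=1.0)            # retransmission of the same bits
    combined = [(a + b) / 2 for a, b in zip(rx1, rx2)]

    errs_single   = sum(d != r for d, r in zip(data, hard_decision(rx1)))
    errs_combined = sum(d != r for d, r in zip(data, hard_decision(combined)))
    print(errs_single, errs_combined)      # combining leaves noticeably fewer errors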

An example of incremental redundancy HARQ is HSDPA: the data block is first coded with a punctured 1/3 Turbo code, then during each (re)transmission the coded block is (usually) punctured further (i.e. only a fraction of the coded bits are chosen) and sent. The puncturing pattern used during each (re)transmission is different, so different coded bits are sent each time.

HARQ can be used in stop-and-wait mode or in selective repeat mode. Stop-and-wait is simpler, but waiting for the receiver's acknowledgement reduces efficiency; thus multiple stop-and-wait HARQ processes are often run in parallel in practice: when one HARQ process is waiting for an acknowledgement, another process can use the channel to send some more data.

OFDMA

Orthogonal Frequency Division Multiple Access (OFDMA) is a multiple access scheme for OFDM systems. It works by assigning a subset of subcarriers to individual users.
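A minimal sketch of that assignment step, using an illustrative round-robin allocation and made-up subcarrier and user counts, might look like this:

    # Partition the subcarriers of one OFDM symbol among several users.
    def assign_subcarriers(num_subcarriers, users):
        """Round-robin partition of subcarrier indices among users."""
        allocation = {u: [] for u in users}
        for sc in range(num_subcarriers):
            allocation[users[sc % len(users)]].append(sc)
        return allocation

    alloc = assign_subcarriers(num_subcarriers=12, users=["user1", "user2", "user3"])
    for user, subcarriers in alloc.items():
        print(user, subcarriers)    # disjoint sets -> the users stay orthogonal

Because each subcarrier belongs to exactly one user, the users remain orthogonal as long as frequency synchronization is maintained, which is the point made in the feature list below.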


OFDMA features

  • OFDMA is the 'multi-user' version of OFDM

  • Functions by partitioning the resources in the time-frequency space, by assigning units along the OFDM symbol index and OFDM sub-carrier index

  • Each OFDMA user transmits symbols using sub-carriers that remain orthogonal to those of other users

  • More than one sub-carrier can be assigned to one user to support high rate applications

  • Allows simultaneous transmission from several users ⇒ better spectral efficiency

  • Multiuser interference is introduced if there is frequency synchronization error

The term 'OFDMA' is claimed to be a registered trademark by Runcom Technologies Ltd., with various other claimants to the underlying technologies through patents.

Genetic programming

Genetic programming (GP) is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992).

Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree-structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp). Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code language to achieve better performance. Differently, the MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language.
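To make the tree-based representation concrete, the sketch below is a very small GP run in Python: expression trees over +, - and * are evolved by truncation selection, subtree crossover, and mutation toward a simple target function. All parameters and the target are illustrative choices, not taken from any of the systems named above.

    # Tiny tree-based genetic programming sketch (symbolic regression).
    import random, operator, copy

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    TERMINALS = ["x", 1.0, 2.0]

    def random_tree(depth=3):
        # A tree is either a terminal or a list [operator, left, right].
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        # Sum of squared errors against the target f(x) = x*x + x (lower is better).
        return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

    def all_paths(tree, path=()):
        # Paths to every node, so crossover and mutation can pick one uniformly.
        paths = [path]
        if isinstance(tree, list):
            paths += all_paths(tree[1], path + (1,))
            paths += all_paths(tree[2], path + (2,))
        return paths

    def get(tree, path):
        for step in path:
            tree = tree[step]
        return tree

    def replace(tree, path, subtree):
        if not path:
            return copy.deepcopy(subtree)
        new = copy.deepcopy(tree)
        node = new
        for step in path[:-1]:
            node = node[step]
        node[path[-1]] = copy.deepcopy(subtree)
        return new

    def crossover(a, b):
        # Replace a random subtree of a with a random subtree of b.
        return replace(a, random.choice(all_paths(a)), get(b, random.choice(all_paths(b))))

    def mutate(tree):
        # Replace a random subtree with a freshly generated one.
        return replace(tree, random.choice(all_paths(tree)), random_tree(2))

    random.seed(0)
    population = [random_tree() for _ in range(60)]
    for generation in range(30):
        population.sort(key=fitness)
        elite, parents = population[0], population[:20]    # truncation selection
        population = [elite] + [mutate(crossover(*random.sample(parents, 2)))
                                for _ in range(59)]
    print("best error:", fitness(min(population, key=fitness)))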

GP is very computationally intensive and so in the 1990s it was mainly used to solve relatively simple problems. However, more recently, thanks to various improvements in GP technology and to the well known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching and many more. These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions.

Developing a theory for GP has been very difficult and so in the 1990s genetic programming was considered a sort of pariah amongst the various techniques of search. However, after a series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid development. So much so that it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms.

Genetic Programming techniques have now been applied to evolvable hardware as well as computer programs.

Meta-Genetic Programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.

Air Cylinders



Air cylinders resemble the cylinders of a steam engine or of a hydraulic circuit. Air cylinders are also called actuators or air motors. An air cylinder is a device that converts pneumatic power into mechanical power by reducing the pressure of the compressed air to atmospheric pressure.

The advantage of an air cylinder is that it cannot be overloaded; it simply stalls when overloaded.
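As a back-of-the-envelope illustration, the theoretical push force of a cylinder is simply the gauge pressure times the piston area (friction and the rod-side area are ignored; the bore and pressure below are example values):

    # Theoretical push force of a pneumatic cylinder.
    import math

    def push_force_newtons(bore_mm, pressure_bar):
        area_m2 = math.pi * (bore_mm / 1000 / 2) ** 2   # piston area in m^2
        return pressure_bar * 1e5 * area_m2             # 1 bar = 1e5 Pa

    print(round(push_force_newtons(bore_mm=50, pressure_bar=6)))  # ~1178 N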
The essential parts of an air cylinder are:

1. Cylinder tube
2. Piston
3. Piston rod
4. Air inlet
5. Air outlet

Classification of Air Cylinder

1. According to motion obtained
a. Rotating b. Non rotating

2. According to operating loads
a. Light duty b. Medium duty
c. Heavy duty

3. According to number of pressure faces of piston
a. Single acting b. Double acting

4. According to the piston arrangement
a. Tandem b. Duplex
c. Double ended d. Air cushion
e. Multi position cylinder
f. Impact cylinder
g. Cable cylinder

5. According to mounting of cylinder
a. Centre line mounting
b. Rod end flange mounting
c. Trunnion mounting
d. Hinged mounting
e. Horizontal pedestal mounting
f. Rabbetted mounting
6. According to construction
a. Piston type b. Diaphragm type


Testing of Welded Joints

Welded joints in a welded structure are expected to possess certain service-related capabilities. Welded joints are generally required to carry loading of various types, in which the weld is subjected to stress of either a simple or complex character. Moreover, a finished weld is not always as good or as bad as it may appear to be on its surface. It is therefore necessary to find out how satisfactory or sound the weld is. For this purpose certain weld inspection and testing procedures have been developed and standardized to estimate the expected performance of welded structures.

The examinations and tests applied to welded joints range from relatively simple ones, such as visual inspection of the surface of the weld, which provides some information on the quality of the workmanship and the presence or absence of surface defects, to more elaborate procedures carried out for the purpose of obtaining some knowledge of the behavior of the welded joint under operating conditions.

Sensotronic brake control

Recently, the automotive industry has paid more attention to improving the safety and comfort of its vehicle models. The new Mercedes-Benz SL 500 justifies this by incorporating new technological innovations such as Active Body Control (ABC), the Electronic Stability Program (ESP), and Sensotronic Brake Control (SBC). Sensotronic Brake Control is an innovative electro-hydraulic brake system that gives maximum safety and comfort when braking.

This seminar illustrates the mechanism and performance characteristics of Sensotronic Brake Control with the aid of the theory and mechanism of a conventional hydraulic brake system. In SBC, various factors during braking, such as wheel speed, braking force at each wheel, and steering angle, are sensed by electronic means. Electronic impulses pass the driver's braking commands to a microcomputer, which processes the various sensor signals simultaneously and, depending on the particular driving situation, calculates the optimum brake pressure for each wheel. As a result, SBC offers even greater active safety than conventional braking systems when braking in a corner or on a slippery surface.

Oil Drilling

Oil and natural gas furnish about three-fourths of our energy needs, fueling our homes, workplaces, factories, and transportation systems. Using a variety of methods, on land and at sea, small crews of specialized workers search for geologic formations that are likely to contain oil and gas. Seismic prospecting, a technique based on measuring the time it takes sound waves to travel through underground formations and return to the surface, has revolutionized oil and gas exploration.

After finding oil beneath the ground, the most important conventional technique, known as rotary drilling, is employed and oil is extracted from the well. Advanced techniques like horizontal directional drilling and laser drilling can also be employed in oil drilling.

Ball Piston machines

From the day machines with reciprocating pistons came into existence, efforts have been made to improve their efficiency. The main drawbacks of reciprocating machines are the considerably large number of moving parts due to the presence of valves, the greater inertial loads, which reduce dynamic balance, and the leakage and friction due to the presence of piston rings. The urge to invent has therefore turned toward rotary machines.

One main advantage to be gained with a rotary machine is the reduction of inertial loads and better dynamic balance. The Wankel rotary engine has been the most successful example to date, but sealing problems contributed to its decline. From there came the idea of ball piston machines. In the compressor and pump arena, reduction of reciprocating mass in positive displacement machines has always been an objective, achieved most effectively by lobe, gear, sliding vane, liquid ring, and screw compressors and pumps, but at the cost of hardware complexity or higher losses. Lobe, gear, and screw machines have relatively complex rotating element shapes and friction losses. Sliding vane machines have sealing and friction issues. Liquid ring compressors have fluid turbulence losses.


The new design concept of the Ball Piston Engine uses a different approach that has many advantages, including low part count and simplicity of design, very low friction, low heat loss, high power-to-weight ratio, perfect dynamic balance, and cycle thermodynamic tailoring capability.

Atkinson cycle engine


The Atkinson cycle engine is a type of Internal combustion engine invented by James Atkinson in 1882. The Atkinson cycle is designed to provide efficiency at the expense of power.

The Atkinson cycle allows the intake, compression, power, and exhaust strokes of the four-stroke cycle to occur in a single turn of the crankshaft. Owing to the linkage, the expansion ratio is greater than the compression ratio, leading to greater efficiency than with engines using the alternative Otto cycle.

The Atkinson cycle may also refer to a four stroke engine in which the intake valve is held open longer than normal to allow a reverse flow of intake air into the intake manifold. This reduces the effective compression ratio and, when combined with an increased stroke and/or reduced combustion chamber volume, allows the expansion ratio to exceed the compression ratio while retaining a normal compression pressure. This is desirable for improved fuel economy because the compression ratio in a spark ignition engine is limited by the octane rating of the fuel used. A high expansion ratio delivers a longer power stroke, allowing more expansion of the combustion gases and reducing the amount of heat wasted in the exhaust. This makes for a more efficient engine.
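As a rough numerical illustration of why a longer expansion helps, the ideal air-standard relation eta = 1 - r^(1 - gamma) gives a higher efficiency when evaluated at a larger expansion ratio than at a smaller effective compression ratio. The ratios below are illustrative, and this is only a gesture at the thermodynamics, not a full Atkinson-cycle analysis.

    # Compare the ideal air-standard efficiency at two expansion ratios.
    GAMMA = 1.4  # ratio of specific heats for air

    def ideal_efficiency(r):
        return 1 - r ** (1 - GAMMA)

    compression_ratio, expansion_ratio = 10.0, 13.0
    print(f"ideal efficiency at r={compression_ratio}: {ideal_efficiency(compression_ratio):.3f}")
    print(f"ideal efficiency at r={expansion_ratio}: {ideal_efficiency(expansion_ratio):.3f}")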

The disadvantage of the four-stroke Atkinson cycle engine versus the more common Otto cycle engine is reduced power density. Because a smaller portion of the intake stroke is devoted to compressing the intake air, an Atkinson cycle engine does not intake as much air as would a similarly-designed and sized Otto cycle engine.

Four stroke engines of this type with this same type of intake valve motion but with forced induction (supercharging) are known as Miller cycle engines.

Multiple production vehicles use Atkinson cycle engines:

Toyota Prius hybrid electric (front-wheel-drive)

Ford Escape hybrid electric (front- and four-wheel drive)

In all of these vehicles, the lower power level of the Atkinson cycle engine is compensated for through the use of electric motors in a hybrid electric drive train. These electric motors can be used independent of, or in combination with, the Atkinson cycle engine.


Anharmonic Lattice Statics

In this topic we present a general theory of anharmonic lattice statics for the analysis of defective complex lattices. This theory differs from the classical treatments of lattice statics in that it does not rely on knowledge of force constants for a limited number of nearest-neighbor interactions. Instead, the only input needed is an interatomic potential that models the interaction of atoms. This theory takes into account the fact that close to defects the force constants are different from those in the bulk crystal. This formulation of lattice statics reduces the analysis of defective crystals to solving discrete boundary-value problems, which consist of systems of difference equations with some boundary conditions. To be able to solve the governing equations analytically, the discrete governing equations are linearized about a reference configuration that resembles a nominal defect. Fully nonlinear solutions are obtained by modified Newton-Raphson iterations of the harmonic solutions. In this theory, defective crystals are classified into three groups: defective crystals with 1-D symmetry reduction, defective crystals with 2-D symmetry reduction, and defective crystals with no symmetry reduction. Our theory systematically reduces the discrete governing equations for defective crystals with 1-D and 2-D symmetry reductions to ordinary difference equations and partial difference equations in two independent variables, respectively. Solution techniques for the discrete governing equations are demonstrated through some examples for ferroelectric domain walls. This formulation of lattice statics is very similar to continuum mechanics, and we hope that developing this theory will be one step forward toward doing lattice-scale calculations analytically.


Practical Fuel-Cell Vehicles

The future of fuel-cell vehicles is already happening in an unlikely proving ground: forklifts used in warehouses. Several manufacturers are testing forklifts powered by a combination of fuel cells and batteries -- and finding that these hybrids perform far better than the lead-acid battery systems now typically used. In some situations, in fact, they could pay for themselves in cost savings and added productivity within two or three years.

The adoption of the technology points to a promising hybrid strategy for finally making fuel cells economically practical for all sorts of vehicles. While researchers have speculated for years that hydrogen fuel cells could power clean, electric vehicles, cutting emissions and decreasing our dependence on oil, manufacturing fuel cells big enough to power a car is prohibitively expensive -- one of the main reasons they are not yet in widespread use. But by relying on batteries or ultracapacitors to deliver peak power loads, such as for acceleration, fuel cells can be sized as much as four times smaller, slashing manufacturing costs and helping to bring fuel cell-powered vehicles to market.

The forklift hybrids use ultracapacitors, devices similar to batteries but able to deliver higher bursts of power. The fuel cell powers the forklift as it drives through a warehouse, while at the same time the cell charges the ultracapacitors. The ultracapacitors kick in to lift a pallet.

'If you had to do that with just fuel-cell power, you'd need a fuel cell about four times as large, which would be too big,' says Michael Sund, spokesperson for Maxwell Technologies, an ultracapacitor manufacturer. 'It would dwarf the forklift, and it would also be very expensive. Being able to downsize the fuel cell makes it smaller, lighter, and cheaper.'

The use of the fuel-cell hybrids in forklifts could bode well for the auto industry. Cars and SUVs, like forklifts, have peak power demands. When cruising, they use less than one-quarter of an engine's maximum power, which is sized to provide acceleration and sustained power up long hills, says Brian Wicke, who's developing fuel-cell systems at GM.

Batteries and ultracapacitors could provide at least some of the accelerating power, allowing the fuel cell to be smaller. Last year, GM rolled out a concept car featuring a hybrid system, although it will be after the end of the decade before such a vehicle is available. Other major automakers are also pursuing the hybrid technology.

In addition to supplying peak power, ultracapacitors and batteries give fuel-cell vehicles the ability to recapture energy from braking, as happens now with commercial gasoline-battery hybrid vehicles. This can make the system much more efficient, especially in applications such as city driving. A vehicle powered by a fuel cell alone would not have this ability.

'You can't take energy into a fuel cell. You've got to have a battery,' says Brian Barnett at Tiax in Cambridge, MA, a company that has provided analyses of fuel cells for the U.S. Department of Energy. 'Why you would put an electric drive train system on the road, and not have the ability to accept regenerative braking is beyond me.'


The Atomic Battery

The typical future-tech scenario calls for millions of low-powered radio frequency devices scattered throughout our environment -- from factory-floor sensor arrays to medical implants to smart devices for battlefields.


Because of the short and unpredictable lifespans of chemical batteries, however, regular replacements would be required to keep these devices humming. Fuel cells and solar cells require little maintenance, but the former are too expensive for such modest, low-power applications, and the latter need plenty of sun.

A third option, though, may provide a powerful -- and safe -- alternative. It's called the Direct Energy Conversion (DEC) Cell, a betavoltaics-based 'nuclear' battery that can run for over a decade on the electrons generated by the natural decay of the radioactive isotope tritium. It's developed by researchers at the University of Rochester and a startup, BetaBatt, in a project described in the May 13 issue of Advanced Materials and funded in part by the National Science Foundation.

Because tritium's half-life is 12.3 years (the time in which half of its radioactive energy has been emitted), the DEC Cell could provide a decade's worth of power for many applications. Clearly, that would be an economic boon -- especially for applications in which the replacement of batteries is highly inconvenient, such as in medicine and oil and mining industries, which often place sensors in dangerous or hard-to-reach locations.

'One of our main markets is for remote, very difficult to replace sensors,' says Larry Gadeken, chief inventor and president of BetaBatt. 'You could place this [battery] once and leave it alone.'

Betavoltaic devices use radioisotopes that emit relatively harmless beta particles, rather than more dangerous gamma photons. They've actually been tested in labs for 50 years -- but they generate so little power that a larger commercial role for them has yet to be found. So far, tritium-powered betavoltaics, which require minimal shielding and are unable to penetrate human skin, have been used to light exit signs and glow-in-the-dark watches. A commercial version of the DEC Cell will likely not have enough juice to power a cell phone -- but plenty for a sensor or pacemaker.

The key to making the DEC Cell more viable is increasing the efficiency with which it creates power. In the past, betavoltaics researchers have used a design similar to a solar cell: a flat wafer is coated with a diode material that creates electric current when bombarded by emitted electrons. However, all but the electron particles that shoot down toward the diodes are lost in that design, says University of Rochester professor of electrical and computer engineering Phillipe Fauchet, who developed the more-efficient design based on Gadeken's concept.

The solution was to expose more of the reactive surface to the particles by creating a porous silicon diode wafer sprinkled with one-micron wide, 40 micron-deep pits. When the radioactive gas occupies these pits, it creates the maximum opportunity for harnessing the reaction.

As importantly, the process is easily reproducible and cheap, says Fauchet -- a necessity if the DEC Cell is to be commercially viable.

The fabrication techniques may be affordable, but the tritium itself -- a byproduct of nuclear power production -- is still more expensive than the lithium in your cell-phone battery. The cost is less of an issue, however, for devices designed specifically to collect hard-to-get data.

Cost is only one reason why Gadeken says he will not pursue the battery-hungry consumer electronics market. Other issues include the regulatory and marketing obstacles posed by powering mass-market devices with radioactive materials and the large battery size that would be required to generate sufficient power. Still, he says, the technology might some day be used as a trickle-recharging device for lithium-ion batteries.


Instead, his company is targeting market sectors that need long-term battery power and have a comfortable familiarity with nuclear materials.

'We're targeting applications such as medical technology, which are already using radioactivity,' says Gadeken.

For instance, many implant patients continue to outlive their batteries and require costly and risky replacement surgery.

Eventually, Gadeken hopes to serve NASA as well, if the company can find a way to extract enough energy from tritium to power a space-faring object. Space agencies are interested in safer and lighter power sources than the plutonium-powered Radioisotope Thermal Generators (RTG) used in robotic missions, such as Voyager, which has an RTG power source that is intended to run until around 2020.

Furthermore, a betavoltaics power source would likely alleviate environmental concerns, such as those voiced at the launch of the Cassini satellite mission to Saturn, when protestors feared that an explosion might lead to fallout over Florida.

For now, though, Gadeken hopes to interest the medical field and a variety of niche markets in sub-sea, sub-surface, and polar sensor applications, with a focus on the oil industry.

And the next step is to adapt the technology for use in very tiny batteries that could power micro-electro-mechanical Systems (MEMS) devices, such as those used in optical switches or the free-floating 'smart dust' sensors being developed by the military.

In fact, another betavoltaics device, under development at Cornell University, is also targeting the MEMS power market. The Radioisotope-Powered Piezoelectric Generator, due in prototype form in a few years, will combine a betavoltaics cell with a tritium-powered electromechanical cantilever device first demonstrated in 2002.

Amit Lal, one of the Cornell researchers, offers both praise and cautious skepticism about the DEC Cell. While he's impressed with the power output from the DEC Cell, he said that there are still issues with power leakage. To avoid those potential leakage problems, Cornell is using a slightly larger-scale wafer design. They're also planning to move to a porous design and either solid or liquid tritium to improve efficiency.

Lal also notes that the market for either Cornell's device or the DEC Cell might be squeezed by newer, longer-lasting lithium batteries. Still, there's a niche for very small devices, he believes, especially those that must run longer than ten years.


Semisolid Casting

Most metal parts are manufactured by either fully-liquid (e.g., casting) or fully-solid (e.g., forging) processes. Semi-Solid Metalworking (SSM) incorporates elements of both casting and forging for the manufacture of near-net-shape discrete parts. Applications include fuel rails, suspension arms, engine brackets, steering knuckles, rear axle components, and motorcycle upper fork plates.


SSM casting was selected for each of these applications for different reasons - high integrity, pressure tightness, and design simplification. In each case SSM processing provides several unique advantages over other candidates.


The process capitalises on thixotropy, a physical state wherein a solid material behaves like a fluid when a shear force is applied. The SSM process requires a nondendritic feedstock that can be produced by applying mechanical or electromechanical stirring during alloy solidification at a controlled rate, or from fine-grained materials produced by powder metallurgy or spray forming methods. This feedstock, usually in billet form, is then heated to a temperature between its solidus and liquidus and formed in dies to make near-net-shape parts.


Tyre Treads

A tyre is a cushion provided on an automobile wheel. It consists mainly of the outer tyre and the inner tube. The air inside the tube carries the entire load and provides the cushion.

The functions of a tyre are


  1. To support the vehicle load
  2. To provide cushion against shocks
  3. To transmit driving and braking forces on the road
  4. To provide cornering power for smooth steering


Tread Patterns


1. Rib Shape

2. Lug Shape

3. Rib-Lug Shape

4. Block shape

5. Asymmetric Pattern

6. Directional Pattern