Positive Extension Matrix

The extension matrix method first finds the distinctions between the positive examples and the negative examples. The extension matrix is used to represent those distinctions, and the examples are then induced from them so that proper assertions are obtained. Because the extension matrix clearly reflects the differences between positive and negative examples, it is easy to derive heuristics for a problem from it.

• The algorithms AE1, AE5, AE9 and AE11 are all built on the extension matrix. Each of them creates heuristics starting from the nature of the path. Among them, AE11 produces the simplest rules, simpler even than those of AQ15. The algorithm AE18 proposed in this paper also belongs to the extension-matrix family: it is based on the positive extension matrix (PEM) and likewise creates heuristics for induction from the nature of the path. During induction, the algorithm selects the required elements first.

• To optimize our positive extension matrix algorithm, this talk presents the algorithm AE18 and compares it with our experimental results.
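The core construction can be sketched in a few lines. This is a toy illustration over binary attributes, not the actual AE18 implementation: the helper names and the greedy column-selection heuristic are assumptions for demonstration only.

```python
# Toy illustration of the extension-matrix construction over binary
# attributes. Helper names and the greedy heuristic are hypothetical,
# not the actual AE18 algorithm.

def extension_matrix(pos, negs):
    # Row i marks the attributes on which the positive example
    # differs from negative example i.
    return [[pos[j] != neg[j] for j in range(len(pos))] for neg in negs]

def greedy_path(matrix):
    # A "path" picks one distinguishing attribute per row; greedily
    # choosing the attribute that covers the most remaining rows is
    # one simple heuristic.
    uncovered = set(range(len(matrix)))
    selected = []
    while uncovered:
        best = max(range(len(matrix[0])),
                   key=lambda j: sum(matrix[i][j] for i in uncovered))
        selected.append(best)
        uncovered -= {i for i in uncovered if matrix[i][best]}
    return selected

pos = [1, 0, 1, 1]
negs = [[0, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]]
attrs = greedy_path(extension_matrix(pos, negs))
# The induced rule is the conjunction (attribute j == pos[j]) for j in
# attrs; it accepts the positive example and rejects every negative one.
covers = all(any(neg[j] != pos[j] for j in attrs) for neg in negs)
```

The selected attributes form a conjunction that distinguishes the positive example from every negative example, which is the kind of assertion the matrix is meant to expose.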

Traffic Pulse Technology

The Traffic Pulse network is the foundation for all of Mobility Technologies® applications. This network uses a process of data collection, data processing, and data distribution to generate the industry's most distinctive traffic information. Digital Traffic Pulse® collects data through a sensor network, processes and stores the data in a data center, and distributes that data through a wide range of applications.

Unique among private traffic information providers in the U.S., Mobility Technologies' real-time and archived Traffic Pulse data offer valuable tools for a variety of commercial and governmental applications:

* Telematics - for mobile professionals and others, Mobility Technologies' traffic information complements in-vehicle navigation devices, informing drivers not only how to get from point A to point B but how long it will take to get there, or even directing them to an alternate route.
* Media - for radio and TV broadcasters, cable operators, and advertisers who sponsor local programming, Traffic Pulse Networks provides traffic information and advertising opportunities for a variety of broadcasting venues.
* Intelligent Transportation Systems (ITS) business solutions - for public agencies, Mobility Technologies' applications aid in infrastructure planning, safety research, and livable community efforts; integrate with existing and future ITS technologies and deployments; and provide data reporting tools.

In the age of multimedia and high-speed networks, multicast is one of the mechanisms by which the power of the Internet can be further harnessed in an efficient manner. It has been increasingly used by various continuous-media applications such as teleconferencing, distance learning, and voice and video transmission. Compared with unicast and broadcast, multicast can save network bandwidth and make transmission more efficient. In this seminar, we will review the history of multicast, present several existing multicast routing algorithms, and analyze the features of the existing multicast routing protocols that have been proposed for best-effort multicast. Some of the issues and open problems related to multicast implementation and deployment are discussed as well.
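The bandwidth saving over unicast can be illustrated with simple arithmetic on the sender's link; the stream rate and receiver count below are invented figures for illustration.

```python
# Back-of-the-envelope comparison of the load on the sender's link for
# unicast versus multicast delivery of one media stream (figures invented).

def source_link_kbps(stream_kbps, receivers, multicast=False):
    # Unicast pushes one copy per receiver through the sender's link;
    # multicast sends a single copy that routers replicate downstream.
    copies = 1 if multicast else receivers
    return stream_kbps * copies

stream_kbps = 256                 # one audio/video stream (assumed)
receivers = 50                    # conference participants (assumed)
unicast_load = source_link_kbps(stream_kbps, receivers)
multicast_load = source_link_kbps(stream_kbps, receivers, multicast=True)
```

For fifty receivers, the unicast source link carries fifty copies of the stream while the multicast source link carries one, which is the efficiency argument made above.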

Souped-Up Mesh Networks

In an effort to make a better wireless network, the Cambridge, MA-based company BBN Technologies announced last week that it has built a mesh network that uses significantly less power than traditional wireless networks, such as cellular and Wi-Fi, while achieving comparable data-transfer rates.

The technology, which is being funded by the Defense Advanced Research Projects Agency (DARPA), was developed to create ad hoc communication and surveillance networks on battlefields. But aspects of it are applicable to emergency or remote cell-phone networks, and could potentially even help to extend the battery life of consumer wireless devices, says Jason Redi, a scientist at BBN.

Mesh networks -- collections of wireless transmitters and receivers that send data hopping from one node to another, without the need for a centralized base station or tower -- are most often found in research applications, in which scientists deploy hordes of sensors to monitor environments from volcanoes to rainforests. In this setting, mesh networks are ideal because they can be deployed without a large infrastructure. Because they do not require costly infrastructure, mesh networks can also be used to bring communication to remote areas where there isn't a reliable supply of electricity. In addition, they can be established quickly, which is useful for building networks of phones or radios during a public emergency.

While mesh networks have quite a bit of flexibility in where they can be deployed and how quickly, so far they've been less than ideal for a number of applications due to their power requirements and relatively slow data-transfer rates. All radios in a mesh network need to carry an onboard battery, and in order to conserve battery power, most low-power mesh networks send and receive data slowly -- at about tens of kilobits per second. 'You get the low power,' says Redi, 'but you also get poor performance.'

Especially in military surveillance, the data rates need to be much faster. If a soldier has set up a network of cameras, for example, he or she needs to react to the video as quickly as possible. So, to keep the power consumption to a minimum and increase data-transfer rates, the BBN team modified both the hardware and software of their prototype network. The result is a mesh network that can send megabits of data per second across a network (typical rates for Wi-Fi networks, and good enough to stream video), using one-hundredth the power of traditional networks.

Sockets are one of the most basic mechanisms of computer networking. Much of today's software relies on low-level socket technology. This project includes creating a server application and a client application that use sockets for communication. In this project, Windows socket programming is implemented using Microsoft Visual C++. The Windows socket specification defines a binary-compatible network programming interface. The application can communicate across any network that conforms to the Windows Sockets API, and can exchange data with other sockets in the same communication domain, which uses the Internet Protocol. The sockets used by this application operate in full-duplex mode, allowing data to flow in both directions simultaneously.
Capabilities of client-server communication

* The server can connect to multiple clients.
* The server allows all-to-all communication.
* If a new client joins, all the old clients are informed of its arrival, while the new client gets a list of all the old clients.
* If a client quits, all the existing clients are informed.

If the server sends a message, all the connected clients receive it. Similarly, if a client sends a message, the server and all other clients receive it.
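The relay behaviour described above can be sketched with Python's standard socket module. This loopback demo stands in for the project's Winsock/Visual C++ code; the structure (one server thread forwarding one client's message to the others) is an illustrative simplification.

```python
# Loopback sketch of the chat relay described above: a server accepts
# two clients; when client A sends a message, the server forwards it to
# every other connected client. (Python sockets stand in for the
# project's Winsock/Visual C++ implementation.)
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]

def relay(n_clients=2):
    # Accept every client, read one message from the first client and
    # forward it to all the other connected clients.
    conns = [srv.accept()[0] for _ in range(n_clients)]
    msg = conns[0].recv(1024)
    for conn in conns[1:]:
        conn.sendall(msg)
    for conn in conns:
        conn.close()

server = threading.Thread(target=relay)
server.start()

a = socket.create_connection(("127.0.0.1", port))   # client A
b = socket.create_connection(("127.0.0.1", port))   # client B
a.sendall(b"hello from A")
relayed = b.recv(1024)        # B receives A's message via the server
server.join()
a.close()
b.close()
srv.close()
```

A full chat server would keep a persistent list of connections and relay in a loop; this sketch shows only a single message crossing the server.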


Teleportation is the transmission of a life-size image of a person to appear within a room at a distant location, where the person has a telepresence for engaging in natural face-to-face communication with people at that location. The image of the person appears within a 3D environment, can make eye-to-eye contact with individuals and can hold true two-way conversations. This is a unique system invented by a privately held, Richardson-based firm named Teleportec Inc., which operates mainly in the USA and UK.

Teleportation systems are better than videoconferencing. Videoconferencing has never presented itself as a realistic alternative to face-to-face meetings because of its severe limitations - only one person can speak at any one time, creating an amplified feeling of distance between participants. With videoconferencing, people feel uncomfortable being on camera and feel disconnected from the people shown on the screen. Teleportation gives a sense of presence by achieving eye-to-eye contact with a distant person who is teleported into the room.

Management Information System

A Management Information System (MIS) is an integrated, user-machine system for providing information to support operations, management and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database. Decision making is a pervasive activity for anyone pursuing goals. In the case of managers, decision making is of particular importance because

* Managers spend a lot of time making decisions.
* Managers are evaluated on the basis of the number and importance of the decisions they make.

To facilitate scientific decision making, managers require information, external and internal, supplied selectively and on demand. MIS is defined as follows:
Management - it consists of the processes or activities that describe what managers do in the operation of their organization: plan, organize, initiate and control operations.
Information - data are facts and figures that are not currently being used in a decision process; they are records that need not be retrieved immediately. Information is data that have been retrieved, processed and used for decision making.
System - in the management context, a system optimizes the output of the organization by connecting the operating subsystems through information exchange.

Information handling takes place in every organization. But the systematic collection of the right data, its proper recording and its timely retrieval for decision making are all part of a well-designed MIS. The systems approach is best suited to provide appropriate information in the right form at the right time. Such a system will help to collect, discriminate, select, relate, classify and interpret information according to the user's needs. The systems approach was introduced to achieve synergism, that is, the simultaneous action of separate but interrelated parts producing a total effect greater than the sum of the individual parts. In the past, the effectiveness of business organizations was less than optimum because managers failed to relate the parts or functions of their systems to each other. The sales function was performed without a great deal of integration with design or production; production control was not coordinated with financial or personnel planning; and so on.

Digital Visual Interface

Digital Visual Interface adoption accelerates as the industry prepares for the next wave of DVI-compliant products. DVI is an open industry specification introduced by the DDWG that enables high-performance, robust interfacing solutions for high-resolution digital displays. The Digital Visual Interface (DVI) is a display interface developed in response to the proliferation of digital flat-panel displays. For the most part, these displays are currently connected to an analog video graphics array (VGA) interface and, thus, require a double conversion: the digital signal from the computer must be converted to an analog signal for the analog VGA interface, then converted back to a digital signal for processing by the flat-panel display. This inherently inefficient process takes a toll on performance and video quality and adds cost.

In contrast, when a flat-panel display is connected to a digital interface, no digital-to-analog conversion is required. The DVI interface is becoming more prevalent and is expected to become widely used for digital display devices, including flat-panel displays and emerging digital CRTs.

Quantum teleportation

Teleportation is the name given by science fiction writers to the feat of making an object or person disintegrate in one place while a perfect replica appears somewhere else. How this is accomplished is usually not explained in detail, but the general idea seems to be that the original object is scanned in such a way as to extract all the information from it; this information is then transmitted to the receiving location and used to construct the replica, not necessarily from the actual material of the original, but perhaps from atoms of the same kinds, arranged in exactly the same pattern as the original. A teleportation machine would be like a fax machine, except that it would work on 3-dimensional objects as well as documents, it would produce an exact copy rather than an approximate facsimile, and it would destroy the original in the process of scanning it.

In 1993 an international group of six scientists, including IBM Fellow Charles H. Bennett, confirmed the intuition of the majority of science fiction writers by showing that perfect teleportation is indeed possible in principle, but only if the original is destroyed. Meanwhile, other scientists are planning experiments to demonstrate teleportation of microscopic objects, such as single atoms or photons, in the next few years. But science fiction fans will be disappointed to learn that no one expects to be able to teleport people or other macroscopic objects in the foreseeable future, for a variety of engineering reasons, even though doing so would not violate any fundamental law.

Genetic programming

Genetic programming has recently emerged as an important paradigm for the automatic generation of computer programs. GP combines metaphors drawn from biological evolution with computer science techniques in order to produce algorithms and programs automatically. From the very beginning, man has tried to develop machines that can replace the need for human beings in many applications; machines that require very little human support. Research is ongoing to develop machines that can produce highly intelligent results with the least human intervention. In this context, GP assumes a special significance.
Over the past decade the artificial evolution of computer code has become a rapidly spreading technology with many ramifications. Originally conceived as a means of achieving computer intelligence, it has now spread into many areas of machine learning and is being applied ever more widely.
In the long run, genetic programming will revolutionize program development. Present methods are not mature enough for deployment as automatic programming systems. Nevertheless, GP has already made inroads into automatic programming and will continue to do so.
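As a toy illustration of the paradigm (not any specific GP system), the sketch below evolves small arithmetic expression trees toward the target function f(x) = x*x + x. The function and terminal sets, population size, and mutation scheme are arbitrary choices for demonstration.

```python
# Toy genetic-programming loop: evolve arithmetic expression trees to
# approximate f(x) = x*x + x on sample points. A deliberately small
# sketch of the paradigm, not any production GP system.
import random

FUNCS = [("add", lambda a, b: a + b), ("mul", lambda a, b: a * b)]
TERMS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    # A tree is a terminal ("x" or a constant) or (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name, _ = random.choice(FUNCS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    name, left, right = tree
    return dict(FUNCS)[name](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of absolute errors against the target over a few sample points.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

random.seed(0)
pop = [random_tree() for _ in range(30)]
initial_best = min(fitness(t) for t in pop)
for _ in range(40):
    pop.sort(key=fitness)
    # Elitism: keep the best half, refill with fresh random trees as a
    # crude stand-in for mutation and crossover.
    pop = pop[:15] + [random_tree() for _ in range(15)]
best = min(pop, key=fitness)    # never worse than the initial best
```

Real GP systems use subtree crossover and mutation rather than random refills, but the overall generate-evaluate-select loop is the same.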

Wireless Markup Language

When it's time to find out how to make content available over WAP, we need to get to grips with its markup language, i.e., WML. WML was designed from the start as a markup language to describe the display of content on small-screen devices.

It is a markup language enabling the formatting of text in a WAP environment, using a variety of markup tags to determine the display appearance of content. WML is defined using the rules of XML, the eXtensible Markup Language, and is therefore an XML application. WML provides a means of allowing the user to navigate around the WAP application and supports the use of anchored links, as commonly found in web pages. It also provides support for images and layout within the constraints of the device.
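Because WML is an XML application, a deck can be checked for well-formedness with any XML parser. The two-card deck below is a minimal hypothetical example (the DOCTYPE that a real WAP gateway expects is omitted for brevity):

```python
# A WML deck is just XML, so its well-formedness can be verified with a
# standard XML parser. The deck here is a minimal hypothetical example.
from xml.dom.minidom import parseString

WML_DECK = """<?xml version="1.0"?>
<wml>
  <card id="home" title="Welcome">
    <p>Hello from WAP! <a href="#next">Continue</a></p>
  </card>
  <card id="next" title="Next">
    <p>Second card of the deck.</p>
  </card>
</wml>"""

doc = parseString(WML_DECK)      # raises ExpatError if not well-formed
titles = [c.getAttribute("title")
          for c in doc.getElementsByTagName("card")]
```

The deck/card structure and the anchored `<a href="#next">` link correspond directly to the navigation support described above.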

Plastic circuitries

As researchers work towards the creation of plastic-based alternatives in order to make technology more pervasive, silicon wafers might soon be biting the dust.

"No one would need to interact with computers any more as technology would be ingrained into everyday objects like shirts, 'driverless' cars or therapeutic dolls," predicted Nicholas Negroponte, cofounder and director of the MIT Media Laboratory, in 1998. In his columns in Wired magazine, he further claimed that not only was the Digital Age upon us, but that we were already in the final stages of the digital revolution.

A big step in this 'all-pervasive computing' direction is plastic re-engineering. Research in this field aims to create chips made of plastic wafers instead of silicon. Not only will such chips enable the products Negroponte talked about, they will also allow a hobbyist or a power user to print his own PC!


E-commerce is the application of information technology to support business processes and the exchange of goods and services. E-cash came into being when people began to think that if we can store, forward and manipulate information, why can't we do the same with money? Both banks and post offices centralise distribution, information and credibility. E-money makes it possible to decentralise these functions.

Electronic data interchange, which is a subset of e-commerce, is a set of data definitions that permits business forms to be exchanged electronically. Different payment schemes such as E-cash, NetCash and the PayMe system, as well as smart card technology, are also discussed. The foundation of all requirements for commerce over the World Wide Web is a secure system of payment, so various security measures are adopted over the Internet.
E-commerce represents a market potentially worth hundreds of billions of dollars within just a few years. So it provides enormous opportunities for business. It is expected that in the near future, electronic transactions will be as popular as, if not more popular than, credit card purchases today.

Business is about information. It is about the right people having the right information at the right time. Exchanging the information efficiently and accurately will determine the success of the business.
There are three phases of implementation of E-Commerce.

" Replace manual and paper-based operations with electronic alternatives
" Rethink and simplify the information flows
" Use the information flows in new and dynamic ways

Simply replacing the existing paper-based system will reap some benefits. It may reduce administrative costs and improve the level of accuracy in exchanging data, but it does not address doing business more efficiently. E-commerce applications can help to reshape the ways business is done.

Voice Over Internet Protocol

VoIP, or "Voice over Internet Protocol" refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network.

Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.

Voice-over-Internet-Protocol (VOIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network. The IP network could be

" A local area network in an office
" A wide area network linking the sites of a large international organization
" A corporate intranet
" The internet
" Any combination of the above

There can be no doubt that IP is here to stay. The explosive growth of the Internet, making IP the predominate networking protocol globally, presents a huge opportunity to dispense with separate voice and data networks and use IP technology for voice traffic as well as data. As voice and data network technologies merge, massive infrastructure cost savings can be made as the need to provide separate networks for voice and data can be eliminated.

Most traditional phone networks use the Public Switched Telephone Network (PSTN). This system employs circuit-switched technology that requires a dedicated voice channel to be assigned to each conversation. Messages are sent in analog format over this network.

Today, phone networks are on a migration path to VoIP. A VoIP system employs a packet-switched network, where the voice signal is digitized, compressed and packetized. This compressed digital message no longer requires a voice channel. Instead, a message can be sent across the same data lines that are used for the intranet or Internet, and a dedicated channel is no longer needed. The message can now share bandwidth with other messages in the network.

Normal data traffic is carried between PCs, servers, printers, and other networked devices through a company's worldwide TCP/IP network. Each device on the network has an IP address, which is attached to every packet for routing. Voice-over-IP packets are no different.
Users may use appliances such as Symbol's NetVision phone to talk to other IP phones or desktop PC-based phones located at company sites worldwide, provided that a voice-enabled network is installed at the site. Installation simply involves assigning an IP address to each wireless handset.

VoIP lets you make toll-free long-distance voice and fax calls over existing IP data networks instead of the public switched telephone network (PSTN). Today, businesses that implement their own VoIP solution can dramatically cut long-distance costs between two or more locations.
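The digitize/compress/packetize step can be sketched as follows. The 8 kHz rate and 20 ms frames are typical telephony figures, but the header layout here is a simplified stand-in for RTP, not the real protocol.

```python
# Sketch of digitize/compress/packetize: 8 kHz samples are grouped into
# 20 ms frames and prefixed with a minimal header (sequence number and
# timestamp). The header is a simplified stand-in for RTP.
import struct

SAMPLE_RATE = 8000                                  # telephony rate
FRAME_MS = 20                                       # speech per packet
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000  # 160 samples

def packetize(samples):
    packets = []
    for seq, start in enumerate(range(0, len(samples), SAMPLES_PER_FRAME)):
        frame = samples[start:start + SAMPLES_PER_FRAME]
        payload = bytes(s & 0xFF for s in frame)    # stand-in for a codec
        header = struct.pack("!HI", seq, start)     # 2-byte seq, 4-byte ts
        packets.append(header + payload)
    return packets

one_second = list(range(SAMPLE_RATE))   # dummy second of "audio"
packets = packetize(one_second)         # 50 packets, one per 20 ms
```

Each resulting packet can then travel independently over the IP network and share bandwidth with ordinary data traffic, as described above.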


Within the last ten years real-time systems research has been transformed from a niche industry into a mainstream enterprise with clients in a wide variety of industries and academic disciplines. It will continue to grow in importance and affect an increasing number of industries as many of the reasons for the rise of its prominence will persist for the foreseeable future.

What is RTOS?
Real Time Computing and Real Time Operating Systems (RTOS) form an emerging discipline in software engineering. This is an embedded technology whereby the application software also performs the functions of an operating system. In an RTOS, the correctness of the system depends not only on the logical result but also on the time at which the results are obtained.
Real-time System

* Provides deterministic responses to external events
* Has the ability to process data at its rate of occurrence
* Is deterministic in its functional and timing behavior
* Has its timing analyzed in the worst case, not in the typical, normal case, to guarantee a bounded response under any circumstances

The seminar will basically provide a practical understanding of the goals, structure and operation of a real-time operating system (RTOS). The basic concepts of real-time systems, such as the RTOS kernel, will be described in detail. The structure of the kernel is discussed, stressing the factors which affect response times and performance. Examples of RTOS functions such as scheduling, interrupt processing and intertask communication structures will also be discussed, and features of commercially available RTOS products are presented. A real-time system is one where the timeliness of the result of a calculation is important. Examples include military weapons systems, factory control systems, and Internet video and audio streaming. Different definitions of real-time systems exist. Here are just a few:

- Real-time computing is computing where system correctness depends not only on the correctness of the logical result of the computation but also on the result delivery time.
- A Real-Time System is an interactive system that maintains an on-going relationship with an asynchronous environment, i.e. an environment that progresses irrespective of the Real Time System, in an uncooperative manner.
- Real-time (software) (IEEE 610.12-1990): Pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results may be used to control, monitor, or respond in a timely manner to the external process.

From the above definitions it is understood that in real-time systems, time is the biggest constraint. This makes real-time systems different from ordinary systems. Thus in an RTS, data needs to be processed at some regular and timely rate. It should also respond quickly to events occurring at irregular rates. In real-world systems there is some delay between the presentation of inputs and the appearance of all associated outputs, called the response time. Thus a real-time system must satisfy explicit response-time constraints or risk severe consequences, including failure.

Real-Time Systems and Real-Time Operating Systems

Timeliness is the single most important aspect of a real-time system. These systems respond to a series of external inputs, which arrive in an unpredictable fashion. The real-time systems process these inputs, take appropriate decisions and also generate the output necessary to control the peripherals connected to them. As defined by Donald Gillies, "A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time in which the result is produced. If the timing constraints are not met, system failure is said to have occurred."

It is essential that the timing constraints of the system are guaranteed to be met. Guaranteeing timing behaviour requires that the system be predictable.

The design of a real-time system must specify the timing requirements of the system and ensure that the system performance is both correct and timely. There are three types of time constraints:

* Hard: A late response is incorrect and implies a system failure. An example of such a system is medical equipment monitoring the vital functions of a human body, where a late response would be considered a failure.

* Soft: Timeliness requirements are defined using an average response time. If a single computation is late, it is not usually significant, although repeated late computations can result in system failures. An example of such a system is an airline reservation system.

* Firm: This is a combination of both hard and soft timeliness requirements. The computation has a shorter soft requirement and a longer hard requirement. For example, a patient ventilator must mechanically ventilate the patient a certain amount in a given time period. A few seconds' delay in the initiation of a breath is allowed, but not more than that.

One needs to distinguish between on-line systems, such as an airline reservation system, which operate in real time but with much less severe timeliness constraints than, say, a missile control system or a telephone switch. An interactive system with good response time is not necessarily a real-time system. These types of systems are often referred to as soft real-time systems. In a soft real-time system (such as the airline reservation system), late data is still good data. However, for hard real-time systems, late data is bad data. In this paper we concentrate on hard and firm real-time systems only.

Most real-time systems interface with and control hardware directly. The software for such systems is mostly custom-developed. Real-time applications can be either embedded applications or non-embedded (desktop) applications. Real-time systems often do not have the standard peripherals associated with a desktop computer, namely the keyboard, mouse or conventional display monitor. In most instances, real-time systems have customized versions of these devices.
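For the hard case above, deadline guarantees must hold in the worst case. One classical schedulability check is the Liu and Layland utilization bound for rate-monotonic scheduling of periodic tasks; the task set below is invented for illustration.

```python
# Classical schedulability check for hard-real-time periodic tasks: the
# Liu & Layland utilization bound for rate-monotonic scheduling.
# C = worst-case execution time, T = period; figures are invented.

def rm_schedulable(tasks):
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    # At or below the bound every deadline is guaranteed; above it a
    # finer-grained response-time analysis is needed.
    return utilization, bound, utilization <= bound

tasks = [(1, 4), (1, 5), (2, 10)]       # (C, T) pairs in milliseconds
u, bound, ok = rm_schedulable(tasks)    # u = 0.65, bound ~ 0.78
```

This captures the worst-case (rather than typical-case) analysis the bullet list above calls for: the test is sufficient, so passing it guarantees every deadline is met.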

Compiler writing techniques have undergone a number of major revisions over the past forty years. The introduction of object-oriented design and implementation techniques promises to improve the quality of compilers, while making large-scale compiler development more manageable.

In this seminar we want to show that a new way of thinking about a compiler's structure is required to achieve complete object-orientation. This new view of compiling can lead to alternative formulations of parsing and code generation. In practice, the object-oriented formulations have not only proven to be highly efficient, but they have also been particularly easy to teach to students.


Using innovative nanotechnology, IBM scientists have demonstrated a data storage density of a trillion bits per square inch -- 20 times higher than the densest magnetic storage available today. IBM achieved this remarkable density -- enough to store 25 million printed textbook pages on a surface the size of a postage stamp -- in a research project code-named "Millipede".
Rather than using traditional magnetic or electronic means to store data, Millipede uses thousands of nano-sharp tips to punch indentations representing individual bits into a thin plastic film. The result is akin to a nanotech version of the venerable data processing 'punch card' developed more than 110 years ago, but with two crucial differences: the 'Millipede' technology is re-writeable (meaning it can be used over and over again), and may be able to store more than 3 billion bits of data in the space occupied by just one hole in a standard punch card.
Although this unique approach works at far smaller scales than today's traditional technologies and can be operated at lower power, IBM scientists believe still higher levels of storage density are possible. "Since a nanometer-scale tip can address individual atoms, further improvements far beyond even this fantastic terabit milestone can be achieved. While current storage technologies may be approaching their fundamental limits, this nanomechanical approach is potentially valid for a thousand-fold increase in data storage density."
The terabit demonstration employed a single "nano-tip" making indentations only 10 nanometers (millionths of a millimeter) in diameter -- each mark being 50,000 times smaller than the period at the end of this sentence. While the concept has been proven with an experimental setup using more than 1,000 tips, the research team is now building a prototype, due to be completed early next year, which deploys more than 4,000 tips working simultaneously over a 7 mm-square field. Such dimensions would enable a complete high-capacity data storage system to be packed into the smallest format used now for flash memory.

While flash memory is not expected to surpass 1-2 gigabytes of capacity in the near term, Millipede technology could pack 10 - 15 gigabytes of data into the same tiny format, without requiring more power for device operation.
The Millipede project could bring tremendous data capacity to mobile devices such as personal digital assistants, cellular phones, and multifunctional watches. In addition, IBM is exploring the use of this concept in a variety of other applications, such as large-area microscopic imaging, nanoscale lithography, and atomic and molecular manipulation.
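The quoted page figure follows from simple arithmetic, assuming roughly 5 KB of plain text per printed page and a stamp area of about one square inch (both assumptions for illustration):

```python
# Arithmetic behind the quoted figure: at 1 Tbit per square inch, a
# postage-stamp-sized surface holds about 25 million printed pages of
# text. Page size and stamp area are assumptions for illustration.

BITS_PER_SQ_INCH = 1e12       # demonstrated density
BYTES_PER_PAGE = 5_000        # ~5 KB of plain text per page (assumed)
STAMP_AREA_SQ_INCH = 1.0      # roughly one square inch (assumed)

capacity_bits = BITS_PER_SQ_INCH * STAMP_AREA_SQ_INCH
pages = capacity_bits / (BYTES_PER_PAGE * 8)   # 8 bits per byte
```

Under these assumptions the result is exactly 25 million pages, matching the figure quoted above.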

Multiterabit networks

The explosive demand for bandwidth for data networking applications continues to drive photonics technology toward ever-increasing capacity in the backbone fiber network and toward flexible optical networking. Already, commercial Tb/s (per fiber) transmission systems have been announced, and it can be expected that in the next several years we will begin to be limited by the 50 THz transmission bandwidth of silica optical fiber. Efficient bandwidth utilization will be one of the challenges of photonics research. Since communication will be dominated by data, we can expect the network of the future to consist of multiterabit packet switches to aggregate traffic at the edge of the network, and cross-connects with wavelength granularity and tens of terabits of throughput in the core.
The infrastructure required to govern Internet traffic volume, which doubles every six months, consists of two complementary elements: fast point-to-point links and high-capacity switches and routers. Dense wavelength division multiplexing (DWDM) technology, which permits transmission of several wavelengths over the same optical media, will enable optical point-to-point links to achieve an estimated 10 terabits per second by 2008. However, the rapid growth of Internet traffic coupled with the availability of fast optical links threatens to cause a bottleneck at the switches and routers.
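The point-to-point capacity figures follow from simple DWDM arithmetic: aggregate rate is the per-wavelength rate times the number of wavelengths on the fiber. The figures below are representative, not taken from any specific system.

```python
# DWDM arithmetic: aggregate fiber capacity is the per-wavelength rate
# times the number of wavelengths multiplexed onto the fiber.
# Representative figures, not from a specific deployed system.

def aggregate_tbps(wavelengths, gbps_per_wavelength):
    return wavelengths * gbps_per_wavelength / 1000   # Gb/s -> Tb/s

capacity = aggregate_tbps(250, 40)    # 250 lambdas at 40 Gb/s = 10 Tb/s
```

Either more wavelengths or faster per-wavelength rates reach the same aggregate, which is why both are active directions in photonics research.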
Multiterabit packet-switched networks will require high-performance scheduling algorithms and architectures. With port densities and data rates growing at an unprecedented rate, future prioritized scheduling schemes will be necessary to scale pragmatically toward multiterabit capacities. Further, the need to support strict QoS requirements for the diverse traffic loads characterizing emerging multimedia Internet traffic will increase. Continuous improvements in VLSI and optical technologies will stimulate innovative solutions to the intricate packet-scheduling task.