Water turbines

A water turbine is a rotary engine that takes energy from moving water.

Water turbines were developed in the nineteenth century and were widely used for industrial power prior to electrical grids. Now they are mostly used for electric power generation. They harness a clean and renewable energy source.

Flowing water is directed on to the blades of a turbine runner, creating a force on the blades. Since the runner is spinning, the force acts through a distance (force acting through a distance is the definition of work). In this way, energy is transferred from the water flow to the turbine.

Water turbines are divided into two groups: reaction turbines and impulse turbines.

The precise shape of water turbine blades is a function of the water supply pressure and the type of impeller selected.

Reaction turbines

Reaction turbines are acted on by water, which changes pressure as it moves through the turbine and gives up its energy. They must be encased to contain the water pressure (or suction), or they must be fully submerged in the water flow.

Newton's third law describes the transfer of energy for reaction turbines.

Most water turbines in use are reaction turbines. They are used in low and medium head applications.

Impulse turbines

Impulse turbines change the velocity of a water jet. The jet impinges on the turbine's curved blades which change the direction of the flow. The resulting change in momentum (impulse) causes a force on the turbine blades. Since the turbine is spinning, the force acts through a distance (work) and the diverted water flow is left with diminished energy.

Prior to hitting the turbine blades, the water's pressure (potential energy) is converted to kinetic energy by a nozzle and focused on the turbine. No pressure change occurs at the turbine blades, and the turbine doesn't require a housing for operation.

Newton's second law describes the transfer of energy for impulse turbines.


Large modern water turbines operate at mechanical efficiencies greater than 90% (not to be confused with thermodynamic efficiency).
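
The energy relations described above can be illustrated with a small back-of-envelope sketch: the standard hydro-power formula P = ηρgQH for a reaction turbine under a head, and the momentum-change force on an impulse runner. The heads, flows and efficiencies below are invented example values, not data for any particular machine.

```python
# Illustrative calculations for the turbine relations described above.
# All plant parameters are made-up example values.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power(head_m, flow_m3s, efficiency):
    """Shaft power of a water turbine: P = eta * rho * g * Q * H."""
    return efficiency * RHO * G * flow_m3s * head_m

def pelton_jet_force(flow_m3s, jet_speed, runner_speed):
    """Impulse-turbine force from the momentum change of the jet,
    assuming the bucket turns the flow through a full 180 degrees."""
    return 2.0 * RHO * flow_m3s * (jet_speed - runner_speed)

# A medium-head plant: 50 m head, 20 m^3/s, 92% mechanical efficiency
p = hydro_power(50.0, 20.0, 0.92)        # ~9.0 MW
# A 0.5 m^3/s jet at 100 m/s striking buckets moving at 50 m/s
f = pelton_jet_force(0.5, 100.0, 50.0)   # ~50 kN
```

The 92% figure matches the mechanical efficiency quoted above for large modern machines.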

Wind Turbines

A wind turbine is a rotating machine that converts the kinetic energy in wind into mechanical energy. If the mechanical energy is used directly by machinery, such as a pump or grinding stones, the machine is usually called a windmill. If the mechanical energy is then converted to electricity, the machine is called a wind generator, wind turbine, wind power unit (WPU) or wind energy converter (WEC).
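
The kinetic-energy conversion can be sketched with the standard wind-power relation P = ½ρAv³Cp, where Cp is capped by the Betz limit of 16/27. The wind speed and power coefficient below are illustrative values; the 23 m rotor diameter is the historical figure mentioned later in this section.

```python
# A sketch of the wind-power relation P = 0.5 * rho * A * v^3 * Cp.
# Wind speed and Cp are illustrative example values.
import math

AIR_DENSITY = 1.225       # kg/m^3 at sea level
BETZ_LIMIT = 16.0 / 27.0  # theoretical maximum Cp, ~0.593

def wind_power(rotor_diameter_m, wind_speed_ms, cp):
    """Mechanical power extracted from the wind by the rotor."""
    area = math.pi * (rotor_diameter_m / 2.0) ** 2
    return 0.5 * AIR_DENSITY * area * wind_speed_ms ** 3 * cp

# A 23 m rotor in a 10 m/s wind at a modest Cp of 0.3
p = wind_power(23.0, 10.0, 0.3)  # ~76 kW
```

The cubic dependence on wind speed is why the distribution of wind energy over time matters so much for turbine design.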


Wind machines were used for grinding grain in Persia as early as 200 B.C. This type of machine was introduced into the Roman Empire by 250 A.D. By the 14th century Dutch windmills were in use to drain areas of the Rhine River delta. In Denmark by 1900 there were about 2,500 windmills for mechanical loads such as pumps and mills, producing an estimated combined peak power of about 30 MW. The first automatically operated windmill for electricity production was built in Cleveland, Ohio by Charles F. Brush in 1888, and by 1908 there were 72 wind-driven electric generators ranging from 5 kW to 25 kW. The largest machines were on 24 m (79 ft) towers with four-bladed 23 m (75 ft) diameter rotors. Around the time of World War I, American windmill makers were producing 100,000 farm windmills each year, most for water-pumping.[1] By the 1930s windmills for electricity were common on farms, mostly in the United States, where distribution systems had not yet been installed. In this period high-tensile steel was cheap, and windmills were placed atop prefabricated open steel lattice towers.

A forerunner of modern horizontal-axis wind generators was in service at Yalta, USSR in 1931. This was a 100 kW generator on a 30 m (100 ft) tower, connected to the local 6.3 kV distribution system. It was reported to have an annual capacity factor of 32 per cent, not much different from current wind machines.

The first electricity-generating windmill operated in the UK was a battery-charging machine installed in 1887 by James Blyth in Scotland. The first utility grid-connected wind turbine operated in the UK was built by the John Brown Company in 1954 in the Orkney Islands. It had an 18-metre-diameter, three-bladed rotor and a rated output of 100 kW.

Blue Gene

The approach taken in BlueGene/L (BG/L) is substantially different. The system is built out of a very large number of nodes, each of which has a relatively modest clock rate. Those nodes present both low power consumption and low cost. The design point of BG/L utilizes IBM PowerPC embedded CMOS processors, embedded DRAM, and system-on-a-chip techniques that allow for integration of all system functions including compute processor, communications processor, 3 cache levels, and multiple high speed interconnection networks with sophisticated routing onto a single ASIC. Because of a relatively modest processor cycle time, the memory is close, in terms of cycles, to the processor. This is also advantageous for power consumption, and enables construction of denser packages in which 1024 compute nodes can be placed within a single rack. Integration of the inter-node communications network functions onto the same ASIC as the processors reduces cost, since the need for a separate, high-speed switch is eliminated.

The current design goals of BG/L aim for a scalable supercomputer with up to 65,536 compute nodes and a target peak performance of 360 teraFLOPS, with extremely cost-effective characteristics: low power (~1 MW), cooling (~300 tons) and floor space (<2,500 sq ft).

In December 1999, IBM announced a $100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project has two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. This project should enable biomolecular simulations that are orders of magnitude larger than current technology permits. Major areas of investigation include: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The design is built largely around the previous QCDSP and QCDOC supercomputers.

The Blue Gene/L supercomputer is unique in the following aspects:

  • Trading the speed of processors for lower power consumption.
  • Dual processors per node with two working modes: co-processor (1 user process/node: computation and communication work is shared by two processors) and virtual node (2 user processes/node)
  • System-on-a-chip design
  • A large number of nodes (scalable in increments of 1024 up to at least 65,536)
  • Three-dimensional torus interconnect with auxiliary networks for global communications, I/O, and management
  • Lightweight OS per node for minimum system overhead (computational noise)
Blue Gene is a computer architecture project intended to produce several supercomputers designed to reach operating speeds in the PFLOPS (petaFLOPS) range, currently reaching sustained speeds of nearly 500 TFLOPS (teraFLOPS). It is a cooperative project among IBM (particularly IBM Rochester, MN, and the Thomas J. Watson Research Center), the Lawrence Livermore National Laboratory, the United States Department of Energy (which is partially funding the project), and academia. There are four Blue Gene projects in development: BlueGene/L, BlueGene/C, BlueGene/P, and BlueGene/Q.

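
The three-dimensional torus interconnect listed above can be sketched as a simple coordinate mapping: every node gets (x, y, z) coordinates, and wraparound links give each node exactly six nearest neighbours. The 32×32×64 shape below matches the 65,536-node count but is chosen here only for illustration.

```python
# A sketch of node coordinates and nearest neighbours in a 3-D torus
# interconnect like BG/L's. The torus shape is illustrative.

DIMS = (32, 32, 64)  # x, y, z extent of the torus (32*32*64 = 65,536 nodes)

def coords(rank):
    """Map a linear node rank to (x, y, z) torus coordinates."""
    x = rank % DIMS[0]
    y = (rank // DIMS[0]) % DIMS[1]
    z = rank // (DIMS[0] * DIMS[1])
    return (x, y, z)

def neighbours(x, y, z):
    """The six nearest neighbours; the modulo wraparound is what turns
    the mesh into a torus, so edge nodes have no special cases."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [((x + dx) % DIMS[0], (y + dy) % DIMS[1], (z + dz) % DIMS[2])
            for dx, dy, dz in steps]

assert coords(0) == (0, 0, 0)
assert (31, 0, 0) in neighbours(0, 0, 0)  # wraparound link in x
```

Uniform neighbour counts are what let the auxiliary global and I/O networks be layered cleanly on top of the torus.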

The PIVOT VECTOR SPACE APPROACH is a novel audio-video mixing technique that automatically selects the best audio clip from an available database to mix with a given video shot. Until this technique was developed, audio-video mixing was a process that could be done only by professional audio-mixing artists. However, employing these artists is very expensive and is not feasible for home video mixing; the process is also time-consuming and tedious.

Significant advances are constantly being made in information technology, and development in IT-related fields such as multimedia has been extremely rapid. This is evident from the release of a variety of multimedia products such as mobile handsets, portable MP3 players, digital video camcorders and handycams. Products such as these make activities like home video production easy. No such products were on the market a decade ago, so home video production was out of reach and remained the preserve of professional video artists.

So in today's world a large number of home videos are being made, and the number of amateur and home video enthusiasts is very large. A home video artist can rarely match the aesthetic capabilities of a professional audio-mixing artist, yet employing a professional mixing artist for home video is not feasible: it is expensive, tedious and time-consuming.

The PIVOT VECTOR SPACE APPROACH is a technique that amateur and home video enthusiasts can use to create video footage with a professional look and feel. It saves cost and is fast, and since it is fully automatic, the user need not worry about his own aesthetic capabilities. The approach uses a pivot vector space mixing framework to incorporate the artistic heuristics for mixing audio with video. These heuristics use high-level perceptual descriptors of audio and video characteristics, which are computed with low-level signal-processing techniques.
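
As a rough sketch of the selection step, one can picture each shot and each audio clip as a vector of perceptual descriptors and pick the clip closest to the shot in that shared space. The descriptor names and values below are invented for illustration; the real framework derives its descriptors with low-level signal processing.

```python
# A toy model of descriptor-space matching: pick the audio clip whose
# perceptual-descriptor vector is most similar to the video shot's.
# Descriptors and values here are invented for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_audio_clip(video_vec, audio_db):
    """Return the name of the clip closest to the shot in descriptor space."""
    return max(audio_db, key=lambda name: cosine_similarity(video_vec, audio_db[name]))

video_shot = [0.8, 0.2, 0.9]            # e.g. high motion, dark, fast-paced
audio_db = {
    "calm_piano":   [0.1, 0.9, 0.2],
    "uptempo_rock": [0.9, 0.3, 0.8],
}
choice = best_audio_clip(video_shot, audio_db)  # -> "uptempo_rock"
```

The automatic choice is what spares the home user from needing the aesthetic judgment of a professional mixing artist.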

Pervasive computing

Ubiquitous computing or pervasive computing is the result of computer technology advancing at exponential speeds -- a trend toward all man-made and some natural products having hardware and software. With each day computing devices become progressively smaller and more powerful. Pervasive computing goes beyond the realm of personal computers: it is the idea that almost any device, from clothing to tools to appliances to cars to homes to the human body to your coffee mug, can be embedded with chips that connect it to a vast network of other devices.

The main aim of pervasive computing, which combines current network technologies with wireless computing, voice recognition, Internet capability and artificial intelligence, is to create an environment where the connectivity of devices is embedded in such a way that the connectivity is unobtrusive and always available.

Pervasive computing is the next generation of computing environments: information and communication technology everywhere, for everyone, at all times.

Information and communication technology will be an integrated part of our environments: from toys, milk cartons and desktops to cars, factories and whole city areas - with integrated processors, sensors, and actuators connected via high-speed networks and combined with new visualisation devices ranging from projections directly into the eye to large panorama displays.

The Centre for Pervasive Computing contributes to the development of

  • new concepts, technologies, products and services
  • innovative interaction between universities and companies
  • a strong future basis for educating IT specialists.

Pervasive computing goes beyond the traditional user interfaces, on the one hand imploding them into small devices and appliances, and on the other hand exploding them onto large scale walls, buildings and furniture.

The activities in the centre are based on competencies from a broad spectrum of Research Areas of relevance for pervasive computing.

Most of the work in the centre is organised as Research Projects involving both companies and universities.


COOPERATIVE LINUX, abbreviated as coLinux, is software that lets Microsoft Windows cooperate with the Linux kernel so that both run in parallel on the same machine. Cooperative Linux utilizes the concept of a Cooperative Virtual Machine (CVM). In contrast to traditional virtual machines (VMs), the CVM shares the resources that already exist in the host OS, whereas in a traditional (host) VM, resources are virtualized for every guest OS. The CVM gives both operating systems complete control of the host machine, while a traditional VM puts every guest OS in an unprivileged state when accessing the real machine.

The term cooperative describes two entities working in parallel. In effect, Cooperative Linux turns the two different operating system kernels into two big coroutines (program components that generalize subroutines by allowing multiple entry points and by suspending and resuming execution at certain locations). Each kernel has its own complete CPU context and address space, and each kernel decides when to give control back to its partner. However, while both kernels theoretically have full access to the real hardware, modern PC hardware is not designed to be controlled by two different operating systems at the same time. Therefore the host kernel is left in control of the real hardware, while the guest kernel contains special drivers that communicate with the host and provide various important devices to the guest OS.
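
The cooperative hand-off between the two kernels can be illustrated with generator-based coroutines. This is only a toy model of the idea: the real coLinux switch saves and restores a full CPU context and address space, not a Python frame.

```python
# Two "kernels" as coroutines: each runs until it voluntarily yields
# control back to its partner, mirroring the cooperative scheduling
# described above. A toy model, not how coLinux is implemented.

def kernel(name, log):
    for step in range(3):
        log.append(f"{name} running step {step}")
        yield  # voluntarily hand control back to the partner

log = []
host = kernel("host", log)
guest = kernel("guest", log)

# A trivial scheduler that alternates between the two contexts
for _ in zip(host, guest):
    pass
```

Note that neither side can be preempted: if one coroutine never yields, the other never runs, which is exactly why a hung guest kernel is a stability risk in this design.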

Cooperative Linux is significantly different from traditional virtualization solutions such as VMware, Plex86, Virtual PC and QEMU, and from methods such as Xen, which generally run the guest OS in a less privileged mode than the host kernel. In contrast, Cooperative Linux runs the guest kernel at the same privilege level as the host (CPL0). This approach simplified the design so much that an early beta was developed in only one month, starting from scratch by modifying the vanilla Linux 2.4.23-pre9 release until KDE could run.

The downsides of the CPL0 approach are stability and security. Because the guest kernel is fully privileged, a fault in it can crash the entire system (on earlier releases, before ioperm was disabled, attempting to start a normal X server under coLinux would crash the host). Measures can be taken, however, such as cleanly shutting the guest down on the first internal Oops or panic. The other disadvantage is security: acquiring root access on a Cooperative Linux machine can potentially lead to root on the host machine if the attacker loads a specially crafted kernel module or (if the coLinux kernel was compiled without module support) finds some other way to inject code into the running coLinux kernel.

coLinux is an extremely interesting new approach to virtualization that allows you to run Linux in parallel with your Windows platform. It appeals to the experimenting Linux novice who does not want to install the operating system on a fresh machine, and to the Linux enthusiast who wants to run Linux on a Windows machine without a standard virtualization product and its heavy demands on system resources.
coLinux seems to be a promising ongoing project and a worthy competitor to other virtualization products on the market. It may not be completely foolproof for beginners, but it is likely to become more so as it matures.

Blu-ray Disc

Blu-ray, also known as Blu-ray Disc (BD), is the name of a next-generation optical disc format jointly developed by the Blu-ray Disc Association (BDA), a group of the world's leading consumer electronics, personal computer and media manufacturers (including Apple, Dell, Hitachi, HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony, TDK and Thomson). The format was developed to enable recording, rewriting and playback of high-definition video (HD), as well as storing large amounts of data. The format offers more than five times the storage capacity of traditional DVDs and can hold up to 25 GB on a single-layer disc and 50 GB on a dual-layer disc. This extra capacity combined with the use of advanced video and audio codecs will offer consumers an unprecedented HD experience.

While current optical disc technologies such as DVD, DVD±R, DVD±RW, and DVD-RAM rely on a red laser to read and write data, the new format uses a blue-violet laser instead, hence the name Blu-ray. Despite the different type of lasers used, Blu-ray products can easily be made backwards compatible with CDs and DVDs through the use of a BD/DVD/CD compatible optical pickup unit. The benefit of using a blue-violet laser (405 nm) is that it has a shorter wavelength than a red laser (650 nm), which makes it possible to focus the laser spot with even greater precision. This allows data to be packed more tightly and stored in less space, so it's possible to fit more data on the disc even though it's the same size as a CD/DVD. This, together with the change of numerical aperture to 0.85, is what enables Blu-ray Discs to hold 25 GB/50 GB.
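
The capacity claim can be sanity-checked with the usual rule of thumb that areal density scales with (NA/λ)², since that ratio sets the size of the focused laser spot. The figures below are the nominal DVD and Blu-ray parameters quoted above.

```python
# A rough check of the "more than five times" capacity claim:
# areal density of an optical disc scales roughly with (NA / wavelength)^2.

def relative_density(wavelength_nm, na):
    """Relative areal density for a given laser wavelength and numerical aperture."""
    return (na / wavelength_nm) ** 2

# DVD: 650 nm red laser, NA 0.60; Blu-ray: 405 nm blue-violet laser, NA 0.85
factor = relative_density(405, 0.85) / relative_density(650, 0.60)
# factor ~ 5.2, consistent with 25 GB per layer vs the 4.7 GB of a DVD layer
```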

Blu-ray is currently supported by more than 180 of the world's leading consumer electronics, personal computer, recording media, video game and music companies. The format also has broad support from the major movie studios as a successor to today's DVD format. In fact, seven of the eight major movie studios (Disney, Fox, Warner, Paramount, Sony, Lionsgate and MGM) have released movies in the Blu-ray format, and six of them (Disney, Fox, Sony, Warner, Lionsgate and MGM) are releasing their movies exclusively in the Blu-ray format. Many studios have also announced that they will begin releasing new feature films on Blu-ray Disc day-and-date with DVD, as well as a continuous slate of catalog titles every month.

Optical discs account for a major share of secondary storage devices, and Blu-ray Disc is a next-generation optical disc format. The technology utilizes a blue-violet laser diode operating at a wavelength of 405 nm to read and write data; because it uses a shorter-wavelength laser, it can store far more data than was previously possible. Data is stored on Blu-ray discs in the form of tiny ridges on the surface of an opaque 1.1-millimetre-thick substrate, which lies beneath a transparent 0.1 mm protective layer. With Blu-ray recording devices it is possible to record up to 2.5 hours of very high quality audio and video on a single-layer BD. Blu-ray also promises added security, making way for copyright protection: discs can carry a unique ID so that copyright protection can be applied to the recorded streams. Blu-ray Disc takes DVD technology one step further, simply by using a laser with a shorter wavelength.

Blu-ray Disc (also known as Blu-ray or BD) is an optical disc storage media format. Its main uses are high-definition video and data storage. The disc has the same dimensions as a standard DVD or CD.

The name Blu-ray Disc is derived from the blue laser used to read and write this type of disc. Because of its shorter wavelength (405 nm), substantially more data can be stored on a Blu-ray Disc than on the DVD format, which uses a red (650 nm) laser. A dual layer Blu-ray Disc can store 50 GB, almost six times the capacity of a dual layer DVD.

Blu-ray Disc was developed by the Blu-ray Disc Association, a group of companies representing consumer electronics, computer hardware, and motion picture production. The standard is covered by several patents belonging to different companies. As of April 2008, a joint licensing agreement for all the relevant patents had not yet been finalized.

As of April 5, 2008, more than 530 Blu-ray Disc titles have been released in the United States, and more than 250 in Japan.

During the high definition optical disc format war, Blu-ray Disc competed with the HD DVD format. On February 19, 2008, Toshiba — the main company supporting HD DVD — announced it would no longer develop, manufacture and market HD DVD players and recorders, leading almost all other HD DVD supporters to follow suit, effectively naming Blu-ray the victor of the format war.

In 1998, commercial HDTV sets began to appear in the consumer market; however, there was no commonly accepted, inexpensive way to record or play HD content. In fact, there was no medium with the storage required to accommodate HD codecs, except JVC's Digital VHS and Sony's HDCAM. Nevertheless, it was well known that using lasers with shorter wavelengths would enable optical storage with higher density. When Shuji Nakamura invented practical blue laser diodes, it was a sensation, although a lengthy patent lawsuit delayed commercial introduction.

Sony started two projects applying the new diodes: UDO (Ultra Density Optical) and DVR Blue (together with Pioneer), a format of rewritable discs which would eventually become Blu-ray Disc (more specifically, BD-RE). The core technologies of the two formats are essentially similar.

The first DVR Blue prototypes were unveiled at the CEATEC exhibition in October 2000. Because the Blu-ray Disc standard places the data recording layer close to the surface of the disc, early discs were susceptible to contamination and scratches and had to be enclosed in plastic cartridges for protection. In February 2002, the project was officially announced as Blu-ray and the Blu-ray Disc Association was founded by the nine initial members.

The first consumer device arrived in stores on April 10, 2003: the Sony BDZ-S77, a BD-RE recorder made available only in Japan. The recommended price was US$3800; however, there was no standard for pre-recorded video and no movies were released for this player. The Blu-ray Disc standard was still years away, since a new and secure DRM system was needed before Hollywood studios would accept it; they did not want to repeat the failure of the Content Scramble System used for DVDs.

The Blu-ray Disc physical specifications were finished in 2004. In January 2005, TDK announced that they had developed a hard coating polymer for Blu-ray Discs. The cartridges, no longer necessary, were scrapped. The BD-ROM specifications were finalized in early 2006. AACS LA, a consortium founded in 2004, had been developing the DRM platform that could be used to securely distribute movies to consumers. However, the final AACS standard was delayed, and then delayed again when an important member of the Blu-ray Disc group voiced concerns. At the request of the initial hardware manufacturers, including Toshiba, Pioneer and Samsung, an interim standard was published which did not include some features, like managed copy.


Conventional surround sound is based on audio compression technology (for example Dolby Digital AC-3) to encode and deliver a multi-channel soundtrack, and audio decompression technology to decode the soundtrack for delivery to a five-speaker surround setup. Virtual surround sound systems instead use 3D audio technology to create the illusion of five speakers emanating from a regular pair of stereo speakers, enabling a surround listening experience without the need for a five-speaker setup. Virtual surround systems have several advantages over conventional systems. A conventional system contains five speakers, each of which must be positioned properly, wired to the main amplifier or receiver, and balanced, a daunting task for someone who just wants to watch a good movie. This seminar deals with different methods of virtualizing surround sound, especially real-time partitioned convolution for surround sound.

Ambiophonics is a method in the public domain that employs digital signal processing (DSP) and two loudspeakers directly in front of the listener in order to improve reproduction of stereophonic and 5.1 surround sound for music, movies, and games in home theaters, gaming PCs, workstations, or studio monitoring applications. It was first implemented using mechanical means in 1988 [1,2]. Ambiophonics eliminates the crosstalk inherent in the conventional “stereo triangle” speaker placement, and thereby generates a speaker-binaural soundfield that emulates headphone-binaural listening, creating for the listener an improved perception of the “reality” of recorded auditory scenes. A second speaker pair can be added behind the listener to enable 360° surround sound reproduction. Additional surround speakers may be used for hall ambience, including height, if desired.

In stereophonics, the reproduced sound is distorted by crosstalk: signals from each speaker reach not only the intended ear but also the opposite ear, causing comb filtering that distorts the timbre of central voices and creating false “early reflections” due to the delayed sound reaching the opposite ear. In addition, auditory images are bounded between the left (L) and right (R) speakers, usually positioned at ±30° with respect to the listener, and so span only 60°, 1/6 of the horizontal circle, with the listener at the center. (Human hearing, by contrast, can locate sound not only around a 360° circle, but over a full sphere.)

Ambiophonics eliminates speaker crosstalk and its deleterious effects. Using Ambiophonics, auditory images can extend in theory all the way to the sides, at ±90° left and right and including the front hemi-circle of 180°, depending on listening acoustics and to what degree the recording has captured the interaural level differences (ILD) and the interaural time differences (ITD) that characterize two-eared human hearing. Most existing two channel discs (LPs as well as CDs) include ILD and ITD data that cannot be reproduced by the stereo loudspeaker “triangle” due to inherent crosstalk. When reproduced using Ambiophonics, such existing recordings’ true qualities are revealed, with natural solo voices and wider images, up to 150° in practice.

It is also possible to make new recordings using binaurally-based main microphones, such as an Ambiophone [3], which is optimized for Ambiophonic reproduction (stereo-compatible) since it captures and preserves the same ILD and ITD that one would experience with one’s own ears at the recording session. Along with life-like spatial qualities, more correct timbre (tone color) of sounds is preserved. Use of ORTF, Jecklin Disk, and sphere microphones without pinna (outer ear) can produce similar results. (Note that microphone techniques such as these that are binaural-based but without pinna also produce compatible results using conventional speaker-stereo, 5.1 surround, and mp3 players.)

By repositioning the speakers closer together and using digital signal processing (DSP) such as the free RACE (Recursive Ambiophonic Crosstalk Elimination [5]) or similar software, Ambiophonic reproduction is able to generate wide auditory images from most ordinary CDs, LPs, DVDs or MP3s of music, movies, or games and, depending upon the recording, restore the life-like localization, spatiality, and tone color they have captured. For most test subjects the results are dramatic, suggesting that Ambiophonics has the potential to revitalize interest in high-fidelity sound reproduction, both in stereo and surround.
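
A minimal sketch of recursive crosstalk cancellation in the spirit of RACE is shown below: each output channel subtracts an attenuated, delayed copy of the opposite *output*, so the cancellation itself is cancelled in turn, producing a decaying recursive pulse train. The attenuation and delay values are invented for illustration; real implementations derive them from the speaker span and head geometry.

```python
# A toy recursive crosstalk canceller in the spirit of RACE.
# attenuation and delay (in samples) are illustrative values only.

def race(left, right, attenuation=0.8, delay=3):
    n = len(left)
    out_l = [0.0] * n
    out_r = [0.0] * n
    for i in range(n):
        # cancellation terms use earlier OUTPUT samples: the recursion
        xl = out_r[i - delay] if i >= delay else 0.0
        xr = out_l[i - delay] if i >= delay else 0.0
        out_l[i] = left[i] - attenuation * xl
        out_r[i] = right[i] - attenuation * xr
    return out_l, out_r

# An impulse on the left channel yields a decaying train of alternating
# cancellation pulses across the two outputs
l, r = race([1.0] + [0.0] * 9, [0.0] * 10)
```

Each successive pulse is scaled by the attenuation factor, which is why the recursion converges rather than ringing forever.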

LonWorks Protocol

A technology initiated by the Echelon Corporation in 1990, LonWorks provides a platform for building industrial, transportation, home automation and public utility control networks whose devices communicate with each other. Built on the Local Operating Network, it uses the LonTalk protocol for peer-to-peer communication between nodes, without requiring a gateway or other intermediary hardware.

LonWorks is a protocol developed by Echelon Corporation for manufacturers who wish to use an open protocol with off-the-shelf chips, operating systems, and parts to build products that feature improved reliability, flexibility, system cost, and performance. LonWorks technology is accelerating the trend away from proprietary control schemes and centralized systems by providing interoperability, robust technology, faster development, and scale economies.

A major goal of LonWorks is to give developers, from the same or different companies, the ability to design products that will be able to interact with one another. The LonWorks protocol provides a common applications framework that ensures interoperability using powerful concepts called network variables and Standard Network Variable Types (SNVTs).

Communication between nodes on a network takes place using the network variables that are defined in each node. The product developer defines the network variables when the application program is created as part of the Application layer of the protocol. Network variables are shared by multiple nodes.

The use of Standard Network Variable Types (SNVTs) contributes to the interoperability of LonWorks products from different manufacturers. If all manufacturers use the same variable type in their applications when a network variable for, say, continuous level is defined, any device reading a continuous level can communicate with other devices on the network that use the variable as a sensor output to initiate an actuator.
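
The network-variable binding described above can be pictured with a toy publish/subscribe model: a sensor node publishes an output network variable, and every bound input variable on other nodes receives the update. The class, node names and the nvoTemp/nviTemp identifiers below are invented for illustration and are not Echelon's actual API.

```python
# A toy model of LonWorks-style network variables: outputs on one node
# are bound to inputs on others, and publishing propagates the value.
# Names and structure are illustrative, not Echelon's API.

class Node:
    def __init__(self, name):
        self.name = name
        self.inputs = {}       # input nv name -> latest value received
        self._bindings = {}    # output nv name -> list of (node, input nv)

    def bind(self, out_nv, target, in_nv):
        """Bind an output network variable to an input on another node."""
        self._bindings.setdefault(out_nv, []).append((target, in_nv))

    def publish(self, out_nv, value):
        """Propagate an output network variable to every bound input."""
        for target, in_nv in self._bindings.get(out_nv, []):
            target.inputs[in_nv] = value

sensor = Node("temp-sensor")
actuator = Node("damper-actuator")
# Both sides agree on the same standard type (e.g. a temperature SNVT),
# which is what makes devices from different manufacturers interoperable
sensor.bind("nvoTemp", actuator, "nviTemp")
sensor.publish("nvoTemp", 21.5)
# actuator.inputs["nviTemp"] is now 21.5
```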

· LonWorks Requires an Isolated Transceiver Design. Understanding the different supported communication channels, how to use the Echelon Transceivers and creating a noise immune design can challenge the novice LonWorks designer.

· The LonWorks protocol can be complex. LonWorks network variables (SNVTs) are a unique way to transfer data among nodes. Understanding what SNVTs to support and how to implement them come with many surprises, especially if you start "cold" from the documentation.

· The Neuron Processors Support a Distinctive OS. The OS is tightly integrated into NodeBuilder, the software development tool, the Neuron Processors, the LonWorks protocol and the transfer of SNVTs around the network. Understanding all these components and how they work together requires significant effort.

Bayesian Networks

A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention. Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks offer an efficient and principled approach for avoiding the overfitting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study.

Bayesian Networks are becoming an increasingly important area for research and application in the entire field of Artificial Intelligence. This paper explores the nature and implications of Bayesian Networks, beginning with an overview and comparison of inferential statistics and Bayes' Theorem. The nature, relevance and applicability of Bayesian Network theory for issues of advanced computability forms the core of the current discussion. A number of current applications using Bayesian networks are examined. The paper concludes with a brief discussion of the appropriateness and limitations of Bayesian Networks for human-computer interaction and automated learning.

Inferential statistics is a branch of statistics that attempts to make valid predictions based on only a sample of all possible observations[1]. For example, imagine a bag of 10,000 marbles. Some are black and some white, but the exact proportion of the colours is unknown. It is unnecessary to count all the marbles in order to make some statement about this proportion. A randomly acquired sample of 1,000 marbles may be sufficient to make an inference about the proportion of black and white marbles in the entire population. If 40% of our sample are white, then we may be able to infer that about 40% of the population are also white.

To the layperson, this process seems rather straightforward. In fact, it might seem that there is no need to even acquire a sample of 1,000 marbles. A sample of 100 or even 10 marbles might do.

This assumption is not necessarily correct. As the sample size becomes smaller, the potential for error grows. For this reason, inferential statistics has developed numerous techniques for stating the level of confidence that can be placed on these inferences.
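The effect of sample size on confidence can be seen from the standard error of a proportion, which shrinks only with the square root of the sample size, so 10 marbles tell us far less than 1,000:

```python
# Standard error of an estimated proportion, and the width of a rough
# 95% confidence interval, for different sample sizes.
import math

def standard_error(p, n):
    """Standard error of an estimated proportion p from a sample of size n."""
    return math.sqrt(p * (1 - p) / n)

p = 0.40  # observed proportion of white marbles in the sample
for n in (10, 100, 1000):
    half_width = 1.96 * standard_error(p, n)   # roughly p +/- this at ~95%
    print(n, round(half_width, 3))
```

With 10 marbles the interval is roughly 0.40 ± 0.30, which says almost nothing; with 1,000 it narrows to about ±0.03.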

Classical inferential models do not permit the introduction of prior knowledge into the calculations. For the rigours of the scientific method, this is an appropriate response to prevent the introduction of extraneous data that might skew the experimental results. However, there are times when the use of prior knowledge would be a useful contribution to the evaluation process.

Assume a situation where an investor is considering purchasing some sort of exclusive franchise for a given geographic territory. Her business plan suggests that she must achieve 25% of market saturation for the enterprise to be profitable. Using some of her investment funds, she hires a polling company to conduct a randomized survey. The results conclude that from a random sample of 20 consumers, 25% of the population would indeed be prepared to purchase her services. Is this sufficient evidence to proceed with the investment?

If this is all the investor has to go on, she could find herself at her break-even point and could just as easily turn a loss instead of a profit. She may not have enough confidence in this survey or her plan to proceed.
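A Bayesian treatment of the investor's problem updates a prior belief with the survey data. The sketch below assumes a flat Beta(1, 1) prior and the survey result of 5 "yes" answers out of 20; the conjugate posterior is then Beta(6, 16), and the probability that the true market share is at least the 25% break-even point can be integrated numerically with the standard library alone:

```python
# Beta-Binomial update for the franchise survey (flat prior assumed).
import math

def prob_at_least(threshold, a, b, steps=20_000):
    """P(X >= threshold) for X ~ Beta(a, b), by trapezoidal integration."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = (1.0 - threshold) / steps
    total = 0.0
    for i in range(steps + 1):
        x = threshold + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * coef * x ** (a - 1) * (1 - x) ** (b - 1)
    return total * h

# Beta(1, 1) prior updated with 5 "yes" out of 20 surveyed consumers.
post_a, post_b = 1 + 5, 1 + 15
p_profitable = prob_at_least(0.25, post_a, post_b)
print(round(p_profitable, 2))
```

The posterior mean is 6/22, about 27%, and the probability of clearing 25% comes out only modestly above a coin flip, which matches the prose: the survey alone gives her little confidence. A more informative prior (say, sales data from comparable territories) would shift this answer.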

Limitations of Bayesian Networks

In spite of their remarkable power and potential to address inferential processes, there are some inherent limitations and liabilities to Bayesian networks.

In reviewing the Lumiere project, one potential problem that is seldom recognized is the remote possibility that a system's user might wish to violate the distribution of probabilities upon which the system is built. While an automated help desk system that is unable to embrace unusual or unanticipated requests is merely frustrating, an automated navigation system that is unable to respond to some previously unforeseen event might put an aircraft and its occupants in mortal peril. While these systems can update their goals and objectives based on prior distributions of goals and objectives among sample groups, the possibility that a user will make a novel request for information in a previously unanticipated way must also be accommodated.

Two other problems are more serious. The first is the computational difficulty of exploring a previously unknown network. To calculate the probability of any branch of the network, all branches must be calculated. While the resulting ability to describe the network can be performed in linear time, this process of network discovery is an NP-hard task which might either be too costly to perform, or impossible given the number and combination of variables.

The second problem centers on the quality and extent of the prior beliefs used in Bayesian inference processing. A Bayesian network is only as useful as this prior knowledge is reliable. Either an excessively optimistic or pessimistic expectation of the quality of these prior beliefs will distort the entire network and invalidate the results. Related to this concern is the selection of the statistical distribution assumed in modelling the data. Selecting the proper distribution model to describe the data has a notable effect on the quality of the resulting network.

A Bayesian network (or a belief network) is a probabilistic graphical model that represents a set of variables and their probabilistic independencies. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
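The disease/symptom computation can be worked through directly with Bayes' rule for a two-node network. The probabilities below are invented for illustration:

```python
# Two-node Bayesian network: Disease -> Symptom, with hypothetical numbers.
p_disease = 0.01                 # P(D): prior probability of the disease
p_symptom_given_d = 0.90         # P(S | D)
p_symptom_given_not_d = 0.05     # P(S | not D): false-positive rate

# Marginalize over the parent node to get P(S).
p_symptom = (p_symptom_given_d * p_disease +
             p_symptom_given_not_d * (1 - p_disease))

# Bayes' rule gives the diagnostic ("backward") probability P(D | S).
p_d_given_symptom = p_symptom_given_d * p_disease / p_symptom
print(round(p_d_given_symptom, 3))   # -> 0.154
```

Even with a 90% sensitive symptom, the rare disease remains unlikely given the symptom (about 15%) because the prior is so small; larger networks repeat exactly this marginalize-then-invert computation across many nodes.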

Formally, Bayesian networks are directed acyclic graphs whose nodes represent variables, and whose arcs encode conditional independencies between the variables. Nodes can represent any kind of variable, be it a measured parameter, a latent variable or a hypothesis. They are not restricted to representing random variables, which represents another "Bayesian" aspect of a Bayesian network. Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (such as speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

Bayesian networks are used for modelling knowledge in bioinformatics (gene regulatory networks, protein structure), medicine, document classification, image processing, data fusion, decision support systems, engineering, and law.


A computer that has been implanted with a daemon that puts it under the control of a malicious hacker without the knowledge of the computer owner. Zombies are used by malicious hackers to launch DoS attacks. The hacker sends commands to the zombie through an open port. On command, the zombie computer sends an enormous number of packets of useless information to a targeted Web site in order to clog the site's routers and keep legitimate users from gaining access to the site. Because the traffic sent to the Web site is confusing, the computer receiving the data spends time and resources trying to understand the influx of data that has been transmitted by the zombies. Compared to programs such as viruses or worms that can destroy or steal information, zombies are relatively benign: they temporarily cripple Web sites by flooding them with information but do not compromise the site's data. Such prominent sites as Yahoo!, Amazon and CNN.com were brought down in 2000 by zombie DoS attacks.

In the world of UNIX, a zombie refers to a 'child' program that was started by a 'parent' program but then abandoned by the parent. The same term is also applied, as above, to a computer placed under a malicious hacker's control without the owner's knowledge.

A zombie computer (often shortened as zombie) is a computer attached to the Internet that has been compromised by a hacker, a computer virus, or a trojan horse. Generally, a compromised machine is only one of many in a botnet, and will be used to perform malicious tasks of one sort or another under remote direction. Most owners of zombie computers are unaware that their system is being used in this way. Because the owner tends to be unaware, these computers are metaphorically compared to zombies.

[Diagram legend: (1) Spammer's web site (2) Spammer (3) Spamware (4) Infected computers (5) Virus or trojan (6) Mail servers (7) Users (8) Web traffic]

Zombies have been used extensively to send e-mail spam; as of 2005, an estimated 50–80% of all spam worldwide was sent by zombie computers. This allows spammers to avoid detection and presumably reduces their bandwidth costs, since the owners of zombies pay for their own bandwidth.

For similar reasons, zombies are also used to commit click fraud against sites displaying pay-per-click advertising. Zombies can also host phishing or money-mule recruiting websites.

Zombies have also conducted distributed denial of service attacks, such as the attack upon the SPEWS service in 2003, and the one against the Blue Frog service in 2006. In 2000, several prominent Web sites (Yahoo, eBay, etc.) were clogged to a standstill by a distributed denial of service attack mounted by a Canadian teenager. An attack on grc.com is discussed at length on the Gibson Research Web site, where the perpetrator, a 13-year-old probably from Kenosha, Wisconsin, is identified. Steve Gibson disassembled a 'bot', a zombie used in the attack, and traced it to its distributor. In his clearly written account of his research, he describes the operation of the 'bot'-controlling IRC channel.

Network intrusion-prevention systems (NIPS) are purpose-built hardware/software platforms designed to analyze, detect, and report on security-related events. NIPS inspect traffic and, based on their configuration or security policy, can drop malicious traffic. An ASIC-based intrusion-prevention system (IPS) can detect and block denial-of-service attacks; it has the processing power and the granularity to analyze the attacks and act like a circuit breaker in an automated way.

Two scaling problems face the Internet today. First, it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Second, the traffic distribution is not uniform worldwide: Clients in all countries of the world access content that today is chiefly produced in a few regions of the world (e.g., North America). A new generation of Internet access built around geosynchronous satellites can provide immediate relief. The satellite system can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and supplement terrestrial networks elsewhere. This new generation of satellite system manages a set of satellite links using intelligent controls at the link endpoints. The intelligence uses feedback obtained from monitoring end-user behavior to adapt the use of resources. Mechanisms controlled include caching, dynamic construction of push channels, use of multicast, and scheduling of satellite bandwidth. This paper discusses the key issues of using intelligence to control satellite links, and then presents as a case study the architecture of a specific system: the Internet Delivery System, which uses INTELSAT's satellite fleet to create Internet connections that act as wormholes between points on the globe.

Satellites have been used for years to provide communication network links. Historically, the use of satellites in the Internet can be divided into two generations. In the first generation, satellites were simply used to provide commodity links (e.g., T1) between countries. Internet Protocol (IP) routers were attached to the link endpoints to use the links as single-hop alternatives to multiple terrestrial hops. Two characteristics marked these first-generation systems: they had limited bandwidth, and they had large latencies that were due to the propagation delay to the high orbit position of a geosynchronous satellite.

In the second generation of systems now appearing, intelligence is added at the satellite link endpoints to overcome these characteristics. This intelligence is used as the basis for a system for providing Internet access engineered using a collection or fleet of satellites, rather than operating single satellite channels in isolation. Examples of intelligent control of a fleet include monitoring which documents are delivered over the system to make decisions adaptively on how to schedule satellite time; dynamically creating multicast groups based on monitored data to conserve satellite bandwidth; caching documents at all satellite channel endpoints; and anticipating user demands to hide latency.

The first question is whether it makes sense today to use geosynchronous satellite links for Internet access. Alternatives include wired terrestrial connections, low earth orbiting (LEO) satellites, and wireless wide area network technologies (such as Local Multipoint Distribution Service or 2.4-GHz radio links in the U.S.).

We see three reasons why geosynchronous satellites will be used for some years to come for international Internet connections.

The first reason is obvious: it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Geosynchronous satellites can provide immediate relief. They can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and can supplement terrestrial networks elsewhere.

Second, geosynchronous satellites allow direct single-hop access to the Internet backbone, bypassing congestion points and providing faster access time and higher net throughputs. In theory, a bit can be sent the distance of an international connection over fiber in a time on the order of tens of microseconds. In practice today, however, international connections via terrestrial links are an order of magnitude larger. For example, in experiments we performed in December 1998, the mean round trip times between the U.S. and Brazil (vt.edu to embr.net.br) over terrestrial links were 562.9 msec (via teleglobe.net) and 220.7 msec (via gzip.net) [Habib]. In contrast, the mean latency between the two routers at the two endpoints of a satellite link between Bangladesh and Singapore measured in February 1999 was 348.5 msec. Therefore, a geosynchronous satellite has a sufficiently large footprint over the earth that it can be used to create wormholes in the Internet: constant-latency transit paths between distant points on the globe [Chen]. The mean latency of an international connection via satellite is competitive with today's terrestrial-based connections, and the variance in latency can be reduced.
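These latency figures are dominated by speed-of-light propagation up to geosynchronous altitude and back. The sketch below computes the best-case delay for one satellite hop (ground station directly beneath the satellite); real paths add slant range, queuing, and processing delay:

```python
# Best-case propagation delay for one geosynchronous satellite hop
# (ground -> satellite -> ground).
C_KM_PER_S = 299_792.0        # speed of light in vacuum
GEO_ALTITUDE_KM = 35_786.0    # geosynchronous altitude above the equator

one_hop_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000
print(round(one_hop_ms, 1))   # -> 238.7
```

About 239 ms is therefore an unavoidable floor for a single hop, which is why the measured 348.5 msec satellite figure, though large, is stable, while terrestrial paths vary widely with routing and congestion.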

As quality-of-service (QoS) guarantees are introduced by carriers, the mean and variance in latency should go down for international connections, reducing the appeal of geosynchronous satellites. However, although QoS may soon be widely available within certain countries, it may be some time until it is available at low cost between most countries of the world.

A third reason for using geosynchronous satellites is that the Internet's traffic distribution is not uniform worldwide: clients in all countries of the world access content (e.g., Web pages, streaming media) that today is chiefly produced in a few regions of the world (e.g., North America). This implies that a worldwide multicast architecture that caches content on both edges of the satellite network (i.e., near the content providers as well as near the clients) could provide improved response time to clients worldwide. We use this traffic pattern in the system described in the case study (Section 3).

One final point of interest is to ask whether LEO satellites that are being deployed today will displace the need for geosynchronous satellites. The low orbital position makes the LEO footprint relatively small. Therefore, international connections through LEOs will require multiple hops in space, much as today's satellite-based wireless phone systems operate. The propagation delay will eliminate any advantage that LEOs have over geosynchronous satellites. On the other hand, LEOs have an advantage: they are not subject to the constraint in orbital positions facing geosynchronous satellite operators. So the total available LEO bandwidth could one day surpass that of geosynchronous satellites.

The overall system must achieve a balance between the throughput of the terrestrial Internet connection going into the warehouse, the throughput of the warehouse itself, the throughput of the satellite link, the throughput of each kiosk, and the throughput of the connection between a kiosk and its end users. In addition, a balance among the number of end users, the number of kiosks, and the number of warehouses is required.

Consider some examples. As the number of end users grows, so will the size of the set of popular Web pages that must be delivered, and the bandwidth required for push, real time, and timely traffic. Let's look at Web traffic in detail. Analysis of end-user traffic to proxy servers at America Online done at Virginia Tech shows that an average user requests one URL about every 50 seconds, which indicates a request rate of 0.02 URLs per second. (This does not mean that a person clicks on a link or types a new URL every 50 seconds; instead, each URL requested typically embeds other URLs, such as images. The average rate of the individual URLs requested either by a person or indirectly as an embedded object is one every 50 seconds.) Thus, a kiosk supporting 10,000 concurrent users must handle a request rate of 200 per second. The median file size from the set of traces cited above (DEC, America Online, etc.) is 2 kilobytes [Abdulla]. Thus, the kiosk Hypertext Transfer Protocol (HTTP)-level throughput to end users must be 400 kilobytes per second. At the other end, the warehouse has a connection to the Internet. The bandwidth of this connection must exceed that of the satellite connection, because the warehouse generates cache consistency traffic. The servers within the warehouse and kiosk have limited throughput, for example, the throughput at which the cache engines can serve Web pages. To do multicast transmission, a collection of content (Web pages, pushed documents) must be bundled up at the application layer at the warehouse into a unit for transmission to a multicast group, then broken down into individual objects at the kiosk. This assembly and disassembly process also limits throughput.
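The Web-traffic arithmetic can be checked in a few lines. Note that at one URL every 50 seconds per user, a kiosk handling 200 requests per second is serving about 10,000 concurrent users; the figures below follow that rate and the 2-kilobyte median object size:

```python
# Back-of-envelope kiosk sizing from the per-user request rate
# and median Web object size.
users = 10_000                 # concurrent end users at one kiosk
seconds_per_url = 50           # an average user requests one URL every 50 s
median_file_kb = 2             # median Web object size [Abdulla]

requests_per_s = users / seconds_per_url            # -> 200.0 requests/s
throughput_kb_s = requests_per_s * median_file_kb   # -> 400.0 KB/s at the HTTP level
print(requests_per_s, throughput_kb_s)
```

The same three inputs let a designer resize the system: halving the user population halves both the request rate and the required HTTP-level throughput.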

A second issue is how to handle Web page misses at kiosks. If the kiosk has no terrestrial Internet connection, then these misses obviously must be satisfied over the satellite channel. This reduces the number of kiosks that a satellite link can handle. On the other hand, if the kiosk does have a terrestrial connection, an adaptive decision might be to choose the satellite over the terrestrial link if there is unused satellite capacity and if the performance of the terrestrial link is erratic.

A third issue is how to handle Domain Name System (DNS) lookups. A DNS server is necessary at kiosks to avoid the delay of sending lookups over a satellite. However, how should misses or lookups of invalidated entries in the kiosk's DNS server be handled? One option is for the DNS traffic to go over a terrestrial link at the kiosk, if one is available. An alternative is for the warehouse to multicast DNS entries to the kiosks, based on host names encountered in the logs transmitted from the kiosks to the warehouse.
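The DNS trade-off can be made concrete with a small, entirely hypothetical resolver model. The class, method names, and addresses below are invented for illustration (192.0.2.x are reserved documentation addresses), and the upstream lookup is a placeholder for a real query:

```python
# Hypothetical kiosk resolver: serve from the local cache, else fall back to a
# terrestrial link when present, else pay the satellite round trip.
class KioskResolver:
    def __init__(self, has_terrestrial_link):
        self.cache = {}
        self.has_terrestrial_link = has_terrestrial_link

    def resolve(self, host):
        if host in self.cache:
            return self.cache[host], "cache"
        via = "terrestrial" if self.has_terrestrial_link else "satellite"
        addr = self.lookup_upstream(host)      # placeholder for a real DNS query
        self.cache[host] = addr
        return addr, via

    def lookup_upstream(self, host):
        return "192.0.2.1"                     # stub answer (documentation address)

    def push_entry(self, host, addr):
        # Entries multicast from the warehouse pre-populate the kiosk cache.
        self.cache[host] = addr

kiosk = KioskResolver(has_terrestrial_link=False)
kiosk.push_entry("example.org", "192.0.2.7")
print(kiosk.resolve("example.org"))            # served from cache, no satellite hop
```

The warehouse-multicast option corresponds to `push_entry`: names mined from kiosk logs are pushed ahead of demand, so most lookups never incur the satellite round trip.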

A fourth issue is fault tolerance. If a kiosk goes down and reboots, or a new kiosk is brought up, there must be a mechanism for that kiosk to obtain information missed during the failure.

The idea for the IDS was conceived at INTELSAT, an international organization that owns a fleet of geostationary satellites and sells space segment bandwidth to its international signatories. Work on the prototype started in February 1998. In February 1999, the prototype system stands poised for international trials involving ten signatories of INTELSAT. A commercial version of IDS will be released in May 1999.

The building blocks of IDS are warehouses and kiosks. A warehouse is a large repository (terabytes of storage) of Web content. The warehouse is connected to the content-provider edge of the Internet by a high-bandwidth link. Given the global distribution of Web content today, an excellent choice for a warehouse could be a large data-center or large-scale bandwidth reseller situated in the U.S. The warehouse will use its high-bandwidth link to the content providers to crawl and gather Web content of interest in its Web cache. The warehouse uses an adaptive refreshing technique to assure the freshness of the content stored in its Web cache. The Web content stored in the warehouse cache is continuously scheduled for transmission via a satellite and multicast to a group of kiosks that subscribe to the warehouse.

The centerpiece of the kiosk architecture is also a Web cache. Kiosks represent the service-provider edge of the Internet and can therefore reside at national service providers or ISPs. The storage size of a kiosk cache can therefore vary from a few gigabytes to terabytes. Web content multicast by the warehouse is received, is filtered for subscription, and is subsequently pushed into the kiosk cache. The kiosk Web cache also operates in the traditional pull mode. All user requests for Web content to the service provider are transparently intercepted and redirected to the kiosk Web cache. The cache serves the user request directly if it has the requested content; otherwise, it uses its link to the Internet to retrieve the content from the origin Web site. The cache stores a copy of the requested content while passing it back to the user who requested it.


High Speed OFDM Packet Access (HSOPA), also called Super 3G, is the proposed successor to the HSDPA and HSUPA technologies specified in 3GPP Releases 5 and 6. Expected to form part of 3GPP's Long Term Evolution (LTE) upgrade path for UMTS systems, it defines a completely different air interface from that of W-CDMA.

The characteristics of HSOPA include:

· Utilizes bandwidths from 1.25 MHz to 20 MHz, compared with the 5 MHz channels of W-CDMA.

· Downlink and uplink peak transfer rates of 100 Mbps and 50 Mbps respectively, with 2-4 times the spectral efficiency of 3GPP Release 6.

· Better latency than W-CDMA: a round-trip time of around 20 ms from the user terminal to the RAN, the same as a combined HSDPA/HSUPA system.
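Dividing the quoted peak rates by the widest channel gives the implied peak spectral efficiency, a rough figure of merit that ignores coding and control overhead:

```python
# Implied peak spectral efficiency for the rates and bandwidth quoted above.
downlink_bps = 100e6     # 100 Mbps downlink peak rate
uplink_bps = 50e6        # 50 Mbps uplink peak rate
bandwidth_hz = 20e6      # widest supported channel, 20 MHz

print(downlink_bps / bandwidth_hz)   # -> 5.0 bit/s/Hz
print(uplink_bps / bandwidth_hz)     # -> 2.5 bit/s/Hz
```

These peak figures assume ideal conditions; the 2-4x gain over Release 6 cited above refers to average, not peak, spectral efficiency.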

By incorporating HSOPA, the 3GPP LTE project aims to deliver quadruple-play services: voice, high-speed interactive applications including large data transfers, and feature-rich IPTV, all with full mobility. Even though UMTS with HSDPA and HSUPA already provides high data transfer rates and wireless data usage, competition from state-of-the-art technologies like WiMAX has pushed UMTS operators to strengthen their networks with HSOPA, which provides increased data speeds and spectral efficiency and thus opens the way for more functionality. Another advantage concerns cost: upgrading to HSOPA is much cheaper than setting up a new network.

64-Bit Computing

The question of why we need 64-bit computing is often asked but rarely answered in a satisfactory manner. There are good reasons for the confusion surrounding the question. So first of all, let's look through the list of users who need 64-bit addressing and 64-bit calculations today:

· Users of CAD, design systems, and simulators need more than 4 GB of RAM. Although there are ways to work around this limitation (for example, Intel PAE), they impact performance. The Xeon processors, for instance, support a 36-bit addressing mode in which they can address up to 64 GB of RAM: the RAM is divided into segments, and an address consists of a segment number and a location inside the segment. This approach causes almost 30% performance loss in operations with memory. Besides, programming is much simpler and more convenient with a flat memory model in a 64-bit address space: thanks to the large address space, a location has a simple address that is processed in one pass. Many design offices have long used quite expensive workstations built on RISC processors, where 64-bit addressing and large memory sizes have been available for years.

· Users of databases. Any big company has a huge database, and extending the maximum memory size and addressing data in the database directly is very valuable. Although in special modes the 32-bit IA32 architecture can address up to 64 GB of memory, a transition to a flat memory model in the 64-bit space is much more advantageous in terms of speed and ease of programming.

· Scientific calculations. Memory size, a flat memory model, and the absence of limits on processed data are the key factors here. Besides, some algorithms have a much simpler form in a 64-bit representation.

· Cryptography and security applications, which benefit greatly from 64-bit integer calculations.

In computer architecture, 64-bit integers, memory addresses, or other data units are those that are at most 64 bits (8 octets) wide. Also, 64-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size.

64-bit CPUs have existed in supercomputers since the 1960s and in RISC-based workstations and servers since the early 1990s. In 2003 they were introduced to the (previously 32-bit) mainstream personal computer arena, in the form of the x86-64 and 64-bit PowerPC processor architectures.

A CPU that is 64-bit internally might have external data buses or address buses with a different size, either larger or smaller; the term "64-bit" is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses (e.g. the original Pentium and later CPUs), and may occasionally be referred to as "64-bit" for this reason. Likewise, some 16-bit processors (for instance, the MC68000) were referred to as 16-/32-bit processors as they had 16-bit buses, but had some internal 32-bit capabilities. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data (e.g. 64-bit double-precision floating-point quantities are common). Without further qualification, "64-bit" computer architecture generally has integer registers that are 64 bits wide, which allows it to support (both internally and externally) 64-bit "chunks" of integer data.

Registers in a processor are generally divided into three groups: integer, floating point, and other. In all common general purpose processors, only the integer registers are capable of storing pointer values (that is, an address of some data in memory). The non-integer registers cannot be used to store pointers for the purpose of reading or writing to memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.

Nearly all common general purpose processors (with the notable exception of most ARM and 32-bit MIPS implementations) have integrated floating point hardware, which may or may not use 64-bit registers to hold data for processing. For example, the x86 architecture includes the x87 floating-point instructions, which use 8 80-bit registers in a stack configuration; later revisions of x86 also include SSE instructions, which use 8 128-bit wide registers. By contrast, the 64-bit Alpha family of processors defines 32 64-bit wide floating point registers in addition to its 32 64-bit wide integer registers.

Most CPUs are designed so that the contents of a single integer register can store the address (location) of any datum in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory – the total amount of data the computer can keep in its working area – is determined by the width of these registers. Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2^32 addresses, or 4 GB of RAM, could be referenced. At the time these architectures were devised, 4 GB of memory was so far beyond the typical quantities (0.016 GB) available in installations that this was considered to be enough "headroom" for addressing. Four billion addresses were also considered an appropriate size to work with for another important reason: 4 billion integers are enough to assign unique references to most physically countable things in applications like databases.

However, by the early 1990s, the continual reductions in the cost of memory led to installations with quantities of RAM approaching 4 GB, and the use of virtual memory spaces exceeding the 4-gigabyte ceiling became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end workstation and server machines. 64-bit computing has gradually drifted down to the personal computer desktop, with some models in Apple's Macintosh lines switching to PowerPC 970 processors (termed "G5" by Apple) in 2002 and to 64-bit x86-64 processors in 2003 (with the launch of the AMD Athlon 64), and with x86-64 processors becoming common in high-end PCs. The emergence of the 64-bit architecture effectively increases the memory ceiling to 2^64 addresses, equivalent to approximately 17.2 billion gigabytes, 16.8 million terabytes, or 16 exabytes of RAM. To put this in perspective, in the days when 4 MB of main memory was commonplace, the maximum memory ceiling of 2^32 addresses was about 1,000 times larger than typical memory configurations. Today, when 2 GB of main memory is common, the ceiling of 2^64 addresses is about ten billion times larger, i.e. ten million times more headroom than the 2^32 case.
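The address-space arithmetic behind these headroom comparisons is easy to reproduce:

```python
# Address-space ceilings for 32-bit and 64-bit integer registers.
GB = 2**30

addrs_32 = 2**32
addrs_64 = 2**64

print(addrs_32 // GB)             # -> 4, the familiar 4 GB limit
print(addrs_32 // (4 * 2**20))    # -> 1024: headroom when 4 MB of RAM was typical
print(addrs_64 // (2 * GB))       # -> 8589934592: headroom with 2 GB installed
```

The last figure, roughly ten billion, is the headroom ratio quoted in the text; the jump from 32 to 64 bits multiplies the number of addresses by 2^32, about 4.3 billion.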

Most 64-bit consumer PCs on the market today have an artificial limit on the amount of memory they can recognize, because physical constraints make it highly unlikely that one will need support for the full 16.8 million terabyte capacity. Apple's Mac Pro, for example, can be physically configured with up to 32 gigabytes of memory.

When reading about PCs and servers, you'll often see the CPU described by the number of bits (e.g., 32-bit or 64-bit). Here's a little info about what that means.

32-bit refers to the number of bits (the smallest unit of information on a machine) that can be processed or transmitted in parallel, or the number of bits used for a single element in a data format. When used in conjunction with a microprocessor, the term indicates the width of the registers, the special high-speed storage areas within the CPU. A 32-bit microprocessor can process data and memory addresses that are represented by 32 bits.
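On a running system you can see this register (pointer) width directly. A minimal probe using only Python's standard library checks the size of a native pointer in the current interpreter:

```python
import struct
import platform

# "P" is the struct format code for a native pointer; its size in bytes
# times 8 gives the pointer width the interpreter was built for.
pointer_bits = struct.calcsize("P") * 8
print(pointer_bits, platform.machine())  # e.g. 64 x86_64
assert pointer_bits in (32, 64)
```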

64-bit therefore refers to a processor with registers that store 64-bit numbers. A generalization would be to suggest that 64-bit architecture doubles the amount of data a CPU can process per clock cycle. Users would note a performance increase because a 64-bit CPU can handle more memory and larger files. One of the most attractive features of 64-bit processors is the amount of memory the system can support. 64-bit architecture will allow systems to address up to 1 terabyte (1000 GB) of memory. In today's 32-bit desktop systems, you can have up to 4 GB of RAM (provided your motherboard can handle that much RAM), which is split between the applications and the operating system (OS).
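The 4 GB wall follows from the fact that a 32-bit address simply cannot name a byte at or beyond 2^32. The sketch below (plain Python, simulating 32-bit pointer arithmetic with a mask) shows that address calculations wrap once they pass that boundary:

```python
# 32-bit pointer arithmetic: addresses are unsigned and wrap modulo 2**32,
# so no calculation can ever reach a byte at or beyond the 4 GB mark.
MASK_32 = 2**32 - 1

def add_offset_32(address, offset):
    """Add an offset to a 32-bit address, wrapping as the hardware would."""
    return (address + offset) & MASK_32

last_byte = 2**32 - 1                     # highest addressable byte: 4 GB - 1
assert add_offset_32(last_byte, 1) == 0   # one past the end wraps to zero
```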

The majority of desktop computers today don't even have 4 GB of memory installed, and most small business and home desktop software doesn't require that much memory either. As more complex software and 3D games become available, however, we could actually see this become a limitation, but for the average home user that is very far down the road indeed.

Unfortunately, most benefits of a 64-bit CPU will go unnoticed without the key components of a 64-bit operating system and 64-bit software and drivers that can take advantage of 64-bit processor features. Additionally, for the average home computer user, 32 bits is more than adequate computing power.

When making the transition from 32-bit to 64-bit desktop PCs, users won't actually see Web browsers and word processing programs run faster. Benefits of 64-bit processors would be seen with more demanding applications such as video encoding, scientific research, and searching massive databases: tasks where being able to load massive amounts of data into the system's memory is required.

While talk of 64-bit architecture may make one think this is a new technology, 64-bit computing has been used over the past ten years in supercomputing and database management systems. Many companies and organizations with the need to access huge amounts of data have already made the transition to using 64-bit servers, since a 64-bit server can support a greater number of larger files and could effectively load large enterprise databases into memory, allowing for faster searches and data retrieval. Additionally, using a 64-bit server means organizations can support more simultaneous users on each server, potentially removing the need for extra hardware, as one 64-bit server could replace several 32-bit servers on a network.

It is in the scientific and data management industries that the limitations of the 4 GB memory ceiling of a 32-bit system have been reached and the need for 64-bit processing becomes apparent. Some of the major developers in the database management systems business, such as Oracle and Microsoft (with SQL Server), to name just two, offer 64-bit versions of their database management systems.

While 64-bit servers were once used only by those organizations with massive amounts of data and big budgets, we expect to see 64-bit-enabled systems hitting the mainstream market in the near future. It is only a matter of time until 64-bit software and retail OS packages become available, thereby making 64-bit computing an attractive solution for business and home computing needs.

The essence of the move to 64-bit computing is a set of extensions to the x86 instruction set pioneered by AMD and now known as AMD64. During development, they were sensibly called x86-64, but AMD decided to rename them to AMD64, probably for marketing reasons. In fact, AMD64 is also the official name of AMD's K8 microarchitecture, just to keep things confusing. When Intel decided to play ball and make its chips compatible with the AMD64 extensions, there was little chance they would advertise their processors "now with AMD64 compatibility!" Heart attacks all around in the boardroom. And so EM64T, Intel's carbon copy of AMD64 renamed to Intel Extended Memory 64 Technology, was born.

The difference in names obscures a distinct lack of difference in functionality. Code compiled for AMD64 will run on a processor with EM64T and vice versa. They are, for our purposes, the same thing.

Whatever you call 'em, 64-bit extensions are increasingly common in newer x86-compatible processors. Right now, all Athlon 64 and Opteron processors have x86-64 capability, as do Intel's Pentium 4 600 series processors and newer Xeons. Intel has pledged to bring 64-bit capability throughout its desktop CPU line, right down into the Celeron realm. AMD hasn't committed to bringing AMD64 extensions to its Sempron lineup, but one would think they'd have to once the Celeron makes the move.

For some time now, various flavors of Linux compiled for 64-bit processors have been available, but Microsoft's version of Windows for x86-64 is still in beta. That's about to change, at long last, in April. Windows XP Professional x64 Edition, as it's called, is finally upon us, as are server versions of Windows with 64-bit support. (You'll want to note that these operating systems are distinct from Windows XP 64-bit Edition, intended for Intel Itanium processors, which is a whole different ball of wax.) Windows x64 is currently available to the public as Release Candidate 2, and judging by our experience with it, it's nearly ready to roll. Once Windows XP x64 Edition hits the stores, I expect that we'll see the 64-bit marketing push begin in earnest, and folks will want to know more about what 64-bit computing really means for them.

The immediate impact, in a positive sense, isn't much at all. Windows x64 can run current 32-bit applications transparently, with few perceptible performance differences, via a facility Microsoft has dubbed WOW64, for Windows on Windows 64-bit. WOW64 allows 32-bit programs to execute normally on a 64-bit OS. Using Windows XP Pro x64 is very much like using the 32-bit version of Windows XP Pro, with the same basic look and feel. Generally, things just work as they should.
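Whether a given process is actually running under WOW64 can be probed from a script. Here's a hedged, cross-platform sketch in Python: 64-bit Windows sets the PROCESSOR_ARCHITEW6432 environment variable only inside 32-bit (WOW64) processes, and on other systems the function simply returns False.

```python
import os
import platform
import struct

def running_under_wow64():
    """Best-effort check: True only for a 32-bit process on 64-bit Windows.

    64-bit Windows exposes PROCESSOR_ARCHITEW6432 to 32-bit (WOW64)
    processes; a native 64-bit process, or any non-Windows system,
    won't see it.
    """
    return (platform.system() == "Windows"
            and struct.calcsize("P") * 8 == 32
            and "PROCESSOR_ARCHITEW6432" in os.environ)

print(running_under_wow64())
```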

There are differences, though. Device drivers, in particular, must be recompiled for Windows x64. The 32-bit versions won't work. In many cases, Windows x64 ships with drivers for existing hardware. We were able to test on the Intel 925X and nForce4 platforms without any additional chipset drivers, for example. In other cases, we'll have to rely on hardware vendors to do the right thing and release 64-bit drivers for their products. Both RealTek and NVIDIA, for instance, supply 64-bit versions of their audio and video drivers, respectively, that share version numbers and feature sets with the 32-bit equivalents, and we were able to use them in our testing. ATI has a 64-bit beta version of its Catalyst video drivers available, as well, but not all hardware makers are so on the ball.

Some other types of programs won't make the transition to Windows x64 seamlessly, either. Microsoft ships WinXP x64 with two versions of Internet Explorer, a 32-bit version and a 64-bit version. The 32-bit version is the OS default because nearly all ActiveX controls and the like are 32-bit code, and where would we be if we couldn't execute the full range of spyware available to us? Similarly, some system-level utilities and programs that do black magic with direct hardware access are likely to break in the 64-bit version of Windows. There will no doubt be teething pains and patches required for certain types of programs, despite Microsoft's best efforts.

Of course, many applications will be recompiled as native 64-bit programs as time passes, and those 64-bit binaries will only be compatible with 64-bit processors and operating systems. Those applications should benefit in several ways from making the transition.

Microsoft 64-Bit

Today, 64-bit processors have become the standard for systems ranging from the most scalable servers to desktop PCs. The way to take full advantage of these systems is with 64-bit editions of Microsoft Windows products.

The 64-bit systems offer direct access to more virtual and physical memory than 32-bit systems and process more data per clock cycle, enabling more scalable, higher performing computing solutions. There are two 64-bit Windows platforms: x64-based and Itanium-based.

x64 solutions are the direct descendants of x86 32-bit products, and are the natural choice for most server application deployments—small or large. Itanium-based systems offer alternative system designs and a processor architecture best suited to extremely large database and custom application solutions.