TCPA / Palladium

TCPA stands for the Trusted Computing Platform Alliance, an initiative led by Intel. Its stated goal is `a new computing platform for the next century that will provide for improved trust in the PC platform.' Palladium is software that Microsoft says it plans to incorporate in future versions of Windows; it will build on the TCPA hardware and add some extra features. The Trusted Computing Group (TCG), successor to the Trusted Computing Platform Alliance (TCPA), is an initiative started by AMD, Hewlett-Packard, IBM, Infineon, Intel, Microsoft, and Sun Microsystems to implement Trusted Computing. Many others have since joined.

What does TCPA / Palladium do?
It provides a computing platform on which you can't tamper with the applications, and where these applications can communicate securely with the vendor. The obvious application is digital rights management (DRM): Disney will be able to sell you DVDs that will decrypt and run on a Palladium platform, but which you won't be able to copy. The music industry will be able to sell you music downloads that you won't be able to swap. They will be able to sell you CDs that you'll only be able to play three times, or only on your birthday. All sorts of new marketing possibilities will open up. TCPA / Palladium will also make it much harder for you to run unlicensed software. Pirate software can be detected and deleted remotely. It will also make it easier for people to rent software rather than buying it; and if you stop paying the rent, then not only does the software stop working but so may the files it created. For years, Bill Gates has dreamed of finding a way to make the Chinese pay for software: Palladium could be the answer to his prayer.
There are many other possibilities. Governments will be able to arrange things so that all Word documents created on civil servants' PCs are `born classified' and can't be leaked electronically to journalists. Auction sites might insist that you use trusted proxy software for bidding, so that you can't bid tactically at the auction. Cheating at computer games could be made more difficult. There is a downside too.

TCG's original goal was the development of a Trusted Platform Module (TPM), a semiconductor intellectual property core or integrated circuit that conforms to the trusted platform module specification put forward by the Trusted Computing Group and is to be included with computers to enable trusted computing features. TCG-compliant functionality has since been integrated directly into certain mass-market chipsets.

TCG also recently released the first version of their Trusted Network Connect ("TNC") protocol specification, based on the principles of AAA, but adding the ability to authorize network clients on the basis of hardware configuration, BIOS, kernel version, which updates have been applied to the OS and anti-virus software, and so on.

Seagate has also developed a full-disk encryption drive which can use the TPM to secure the encryption key within the hardware chip.

The owner of a TPM-enabled system has complete control over what software does and doesn't run on their system. This does include the possibility that a system owner would choose to run a version of an operating system that refuses to load unsigned or unlicensed software, but those restrictions would have to be enforced by the operating system and not by the TCG technology. What a TPM does provide in this case is the capability for the OS to lock software to specific machine configurations, meaning that "hacked" versions of the OS designed to get around these restrictions would not work. While there is legitimate concern that OS vendors could use these capabilities to restrict what software would load under their OS (hurting small software companies or open source/shareware/freeware providers, and causing vendor lock-in for some data formats), no OS vendor has yet suggested that this is planned. Furthermore, since restrictions would be a function of the operating system, TPMs could in no way restrict alternative operating systems from running, including free or open source operating systems. There are several projects experimenting with TPM support in free operating systems; examples include a TPM device driver for Linux, an open source implementation of the TCG's Trusted Software Stack called TrouSerS, a Java interface to TPM capabilities called TPM/J, and a TPM-supporting version of the GRUB bootloader called TrustedGRUB.
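As a concrete illustration of the measurement capability these projects build on, the following minimal Python sketch mimics the TPM's PCR "extend" operation, the primitive that lets an OS lock software to a specific machine configuration. The boot components and the use of SHA-1 (the TPM 1.2 hash; TPM 2.0 banks also use SHA-256 and others) are illustrative assumptions, not a description of any particular driver or software stack.

    # A minimal sketch of the PCR "extend" operation behind TPM measured boot:
    # each loaded component is hashed and folded into a Platform Configuration
    # Register, so the final PCR value commits to the exact software configuration.
    # Illustrative only; a real TPM performs this inside the hardware chip.
    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        """new_PCR = H(old_PCR || H(measurement)), TPM 1.2 style (SHA-1)."""
        digest = hashlib.sha1(measurement).digest()
        return hashlib.sha1(pcr + digest).digest()

    # Simulate measuring a boot chain: bootloader, kernel, OS loader (hypothetical blobs).
    pcr = b"\x00" * 20                      # PCRs start at all zeros after reset
    for component in (b"bootloader image", b"kernel image", b"os loader"):
        pcr = pcr_extend(pcr, component)

    print(pcr.hex())  # any change to any component yields a completely different value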

Sense-Response Applications

Sensor networks are widely used for sense-response applications. The role of the sensor nodes in such applications is to monitor an area for events of interest and report the occurrence of the event to the base-station. The receipt of the event at the base-station is followed by a prompt physical response. An example of a sense-response application is the detection of fires in a forest. The sensor nodes report the occurrence of a fire, upon which fire trucks are immediately dispatched to the location of the fire. Other examples of sense-response applications are intruder detection and apprehension, natural disaster monitoring, structural integrity monitoring, and bio/chemical spill monitoring and containment.

Sensor nodes in sense-response applications are deployed with overlapping sensing regions to avoid holes in the coverage area. Thus an event is detected by more than one sensor node in the neighborhood of its occurrence. The base-station exploits this redundancy by responding only to events reported by multiple nodes in the network. This is mainly done to avoid false positives in the event generation process, i.e., an event being reported though it never occurred. However, this requires every sensor node to transmit a message to the base-station for every event that is detected, which expends a lot of energy. An alternative (that is often used in practice) is to have all the sensor nodes in the neighborhood of an event reach a consensus and have only one of the nodes transmit an event detection message to the base-station that implicitly contains the testimony of every node that detected the event. Sensor networks are often deployed in public and untrustworthy places. In some cases, they are also deployed in hostile areas.

The wireless medium of communication in sensor networks prevents any form of access control mechanism at the physical layer. The adversary can very easily introduce spurious messages in the network containing a false event report. This leads to energy wastage of the nodes in the network and also wastage of resources due to the physical response initiated by the base station in response to the false event report. A simple solution to thwart such attacks is to use a system wide secret key coupled with explicit authentication mechanisms. However, this solution fails to protect against internal attacks where the adversary has compromised a subset of sensor nodes.

Sensor nodes are designed to be cheap and cannot be equipped with expensive tamper-proof hardware. This, coupled with the unmanned operation of the network, leaves the nodes at the mercy of an adversary who can potentially steal some nodes, recover their cryptographic material, and pass them off as authorized nodes in the network. We refer to such nodes as internal adversaries. Internal adversaries are capable of launching more sophisticated attacks: by posing as real, authenticated nodes, they can also suppress the generation of a message for any real event that is detected. This effectively renders the entire system useless.
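To make the "implicit testimony" idea above concrete, the following Python sketch shows one way a base-station could require endorsements from several distinct nodes before accepting an event report. The message format, per-node keys, and threshold are hypothetical; the point is only that a single compromised (internal) node cannot forge a report on its own, although a coalition of compromised nodes at or above the threshold still could.

    # A minimal sketch (hypothetical message format): every node that detects the
    # event MACs the report with a key it shares with the base station, one elected
    # reporter forwards the report plus the endorsements, and the base station
    # accepts only if at least THRESHOLD distinct nodes vouch for it.
    import hmac, hashlib

    NODE_KEYS = {1: b"k1", 2: b"k2", 3: b"k3", 4: b"k4"}   # pairwise keys (illustrative)
    THRESHOLD = 3

    def endorse(node_id: int, event: bytes) -> bytes:
        return hmac.new(NODE_KEYS[node_id], event, hashlib.sha256).digest()

    def base_station_accepts(event: bytes, endorsements: dict) -> bool:
        valid = [nid for nid, tag in endorsements.items()
                 if nid in NODE_KEYS and hmac.compare_digest(tag, endorse(nid, event))]
        return len(set(valid)) >= THRESHOLD

    event = b"fire at cell (12, 7)"
    report = {nid: endorse(nid, event) for nid in (1, 2, 3)}       # three real detectors
    print(base_station_accepts(event, report))                     # True
    forged = {1: endorse(1, b"fire at cell (0, 0)")}               # lone internal adversary
    print(base_station_accepts(b"fire at cell (0, 0)", forged))    # False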

Cable Modems



A cable modem is a type of modem that provides access to a data signal sent over the cable television infrastructure. Cable modems are primarily used to deliver broadband Internet access in the form of cable internet, taking advantage of the high bandwidth of a cable television network. They are commonly found in Australia, New Zealand, Canada, Europe, Costa Rica, and the United States. In the USA alone there were 22.5 million cable modem users during the first quarter of 2005, up from 17.4 million in the first quarter of 2004.

In network topology, a cable modem is a network bridge that conforms to IEEE 802.1D for Ethernet networking (with some modifications). The cable modem bridges Ethernet frames between a customer LAN and the coax cable network.

With respect to the OSI model, a cable modem is a data link layer (or layer 2) forwarder, rather than simply a modem.

A cable modem does support functionalities at other layers. In the physical layer (or layer 1), the cable modem supports the Ethernet PHY on its LAN interface, and a DOCSIS-defined cable-specific PHY on its HFC cable interface. It is to this cable-specific PHY that the name cable modem refers. In the network layer (or layer 3), the cable modem is an IP host in that it has its own IP address used by the network operator to manage and troubleshoot the device. In the transport layer (or layer 4), the cable modem supports UDP in association with its own IP address, and it supports filtering based on TCP and UDP port numbers to, for example, block forwarding of NetBIOS traffic out of the customer's LAN. In the application layer (layer 5 or layer 7), the cable modem supports certain protocols that are used for management and maintenance, notably DHCP, SNMP, and TFTP.
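The port-based filtering mentioned above can be pictured with a small sketch; the rule set and function names below are illustrative only and do not reflect any actual DOCSIS configuration syntax.

    # A small sketch of transport-layer filtering: the modem bridges frames but
    # drops traffic whose TCP/UDP destination port matches a blocked service
    # (here the classic NetBIOS ports), keeping that traffic inside the customer LAN.
    BLOCKED_PORTS = {137, 138, 139}   # NetBIOS name, datagram, and session services

    def forward_upstream(protocol: str, dst_port: int) -> bool:
        """Return True if the frame should be bridged out of the customer LAN."""
        if protocol in ("tcp", "udp") and dst_port in BLOCKED_PORTS:
            return False
        return True

    print(forward_upstream("udp", 137))   # False: NetBIOS stays inside the LAN
    print(forward_upstream("tcp", 80))    # True: ordinary web traffic is bridged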

Some cable modem devices may incorporate a router along with the cable modem functionality, to provide the LAN with its own IP network addressing. From a data forwarding and network topology perspective, this router functionality is typically kept distinct from the cable modem functionality (at least logically) even though the two may share a single enclosure and appear as one unit. So, the cable modem function will have its own IP address and MAC address as will the router.

A cable modem is a modem designed to operate over cable TV lines. Because the coaxial cable used by cable TV provides much greater bandwidth than telephone lines, a cable modem can be used to achieve extremely fast access to the World Wide Web. This, combined with the fact that millions of homes are already wired for cable TV, has made the cable modem something of a holy grail for Internet and cable TV companies.

There are a number of technical difficulties, however. One is that the cable TV infrastructure is designed to broadcast TV signals in just one direction - from the cable TV company to people's homes. The Internet, however, is a two-way system where data also needs to flow from the client to the server. In addition, it is still unknown whether the cable TV networks can handle the traffic that would ensue if millions of users began using the system for Internet access.




Wireless Application Protocol (WAP)

WAP is an open international standard for application-layer network communications in a wireless communication environment. Its main use is to enable access to the Internet (HTTP) from a mobile phone or PDA.

A WAP browser provides all of the basic services of a computer based web browser but simplified to operate within the restrictions of a mobile phone, such as its smaller view screen. WAP sites are websites written in, or dynamically converted to, WML (Wireless Markup Language) and accessed via the WAP browser.

Before the introduction of WAP, service providers had extremely limited opportunities to offer interactive data services. Interactive data applications are required to support now commonplace activities such as:

  • Email by mobile phone
  • Tracking of stock market prices
  • Sports results
  • News headlines
  • Music downloads

The Japanese i-mode system is another major competing wireless data protocol.

Protocol design lessons from WAP

The original WAP was a simple platform for access to web-like WML services and e-mail using mobile phones in Europe and Southeast Asia, and it continues today with a considerable user base. The later versions of WAP, aimed primarily at the United States market, were designed for a different requirement: to enable full web XHTML access using mobile devices with a higher specification and cost, and with a higher degree of software complexity.

There has been considerable discussion about whether the WAP protocol design was appropriate. Some have suggested that the bandwidth-sparing simple interface of Gopher would be a better match for mobile phones and Personal digital assistants (PDAs).

The initial design of WAP specifically aimed at protocol independence across a range of different bearers (SMS, IP over PPP over a circuit-switched bearer, IP over GPRS, etc.). This has led to a protocol considerably more complex than an approach directly over IP would have produced.

Most controversial, especially for many from the IP side, was the design of WAP over IP. WAP's transmission layer protocol, WTP, uses its own retransmission mechanisms over UDP to attempt to solve the problem of the inadequacy of TCP over high packet-loss networks.

The Wireless Application Protocol is a standard developed by the WAP Forum, a group founded by Nokia, Ericsson, Phone.com (formerly Unwired Planet), and Motorola. The WAP Forum’s membership roster now includes computer industry heavyweights such as Microsoft, Oracle, IBM, and Intel along with several hundred other companies. According to the WAP Forum, the goals of WAP are to be:

  • Independent of wireless network standard.
  • Open to all.
  • Proposed to the appropriate standards bodies.
  • Scalable across transport options.
  • Scalable across device types.
  • Extensible over time to new networks and transports.

As part of the Forum’s goals, WAP will also be accessible to (but not limited to) the following:

  • GSM-900, GSM-1800, GSM-1900
  • CDMA IS-95
  • TDMA IS-136
  • 3G systems - IMT-2000, UMTS, W-CDMA, Wideband IS-95

WAP defines a communications protocol as well as an application environment. In essence, it is a standardized technology for cross-platform, distributed computing. Sound similar to the World Wide Web? If you think so, you’re on the right track! WAP is very similar to the combination of HTML and HTTP except that it adds in one very important feature: optimization for low-bandwidth, low-memory, and low-display capability environments. These types of environments include PDAs, wireless phones, pagers, and virtually any other communications device.

The remainder of this overview will concentrate on presenting WAP from a software developer's perspective so that other software developers can be quickly brought up to speed. Other documents on this site go into much greater detail on development specifics, including in-depth reviews and demonstrations using a variety of vendor packages.

How Does It Work?

WAP uses some new technologies and terminology which may be foreign to the software developer; however, the overall concepts should be very familiar. WAP client applications make requests in a way very similar in concept to URL requests on the Web. As a general example, consider the following explanation (exact details may vary on a vendor-to-vendor basis).

A WAP request is routed through a WAP gateway which acts as an intermediary between the "bearer" used by the client (GSM, CDMA, TDMA, etc.) and the computing network that the WAP gateway resides on (TCP/IP in most cases). The gateway then processes the request, retrieves content or calls CGI scripts, Java servlets, or some other dynamic mechanism, then formats data for return to the client. This data is formatted as WML (Wireless Markup Language), a markup language based directly on XML. Once the WML has been prepared (known as a deck), the gateway then sends the completed request back (in binary form due to bandwidth restrictions) to the client for display and/or processing. The client retrieves the first card off the deck and displays it on its screen.

The deck of cards metaphor is designed specifically to take advantage of small display areas on handheld devices. Instead of continually requesting and retrieving cards (the WAP equivalent of HTML pages), each client request results in the retrieval of a deck of one or more cards. The client device can employ logic via embedded WMLScript (the WAP equivalent of client-side JavaScript) for intelligently processing these cards and the resultant user inputs.
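The deck-and-card structure can be illustrated with a short sketch of the kind of server-side script a gateway might call; the card ids, link, and text are invented, and the WML DOCTYPE declaration is omitted for brevity.

    # A minimal sketch of the "deck of cards" idea: a server-side script assembles a
    # WML deck with two cards; the gateway compiles it to binary WML before sending
    # it to the handset, which displays the first card. Content here is invented.
    def build_deck(headline: str, body: str) -> str:
        return f"""<?xml version="1.0"?>
    <wml>
      <card id="index" title="News">
        <p>{headline}</p>
        <p><a href="#detail">More</a></p>
      </card>
      <card id="detail" title="Detail">
        <p>{body}</p>
      </card>
    </wml>"""

    print(build_deck("Markets rally", "Full story text would go here."))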

To sum up, the client makes a request. This request is received by a WAP gateway that then processes the request and formulates a reply using WML. When ready, the WML is sent back to the client for display. As mentioned earlier, this is very similar in concept to the standard stateless HTTP transaction involving client Web browsers.

Virtual Instrumentation


Virtual Instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments.

Traditional hardware instrumentation systems are made up of pre-defined hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; for example, an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
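The following minimal Python sketch illustrates that division of labour: a simulated digitizer stands in for generic measurement hardware, and a thin software layer turns the PC into a simple virtual voltmeter by computing the DC level and AC RMS value. The function names and signal parameters are invented for illustration; a real system would read samples from a DAQ or ADC driver instead.

    # Sketch of the software half of a virtual instrument. The "hardware" is a
    # simulated digitizer producing raw samples; software does the measurement.
    import math

    def simulated_adc(n=1000, fs=10_000.0, freq=50.0, amp=1.0, offset=0.5):
        """Stand-in for a generic digitizer: a 50 Hz sine with a DC offset."""
        return [offset + amp * math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

    def virtual_voltmeter(samples):
        dc = sum(samples) / len(samples)
        rms = math.sqrt(sum((s - dc) ** 2 for s in samples) / len(samples))
        return dc, rms

    dc, rms = virtual_voltmeter(simulated_adc())
    print(f"DC = {dc:.3f} V, AC RMS = {rms:.3f} V")   # ~0.500 V and ~0.707 V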

The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular.

Leveraging commercially available technologies, such as the PC and the analog to digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.

Simplifying the development process
Virtual instrumentation has led to a simpler way of looking at measurement systems. Instead of using several stand-alone instruments for multiple measurement types and performing rudimentary analysis by hand, engineers now can quickly and cost-effectively create a system equipped with analysis software and a single measurement device that has the capabilities of a multitude of instruments.

Powerful off-the-shelf software, such as National Instruments LabVIEW, automates the entire process, delivering an easy way to acquire, analyse, and present data from a personal computer without sacrificing performance or functionality. The software integrates tightly with hardware, making it easy to automate measurements and control, while taking advantage of the personal computer for processing, display, and networking capabilities.

The expectations of performance and flexibility in measurement and control applications continue to rise in the industry, growing the importance of software design. By investing in intuitive engineering software tools that run at best possible performance, companies can dramatically reduce development time and increase individual productivity, giving themselves a powerful weapon to wield in competitive situations.

Preparing investments for the future
Measurement systems have historically been 'islands of automation', in which you design a system to meet the needs of a specific application. With virtual instrumentation, modular hardware components and open engineering software make it easy to adapt a single system to a variety of measurement requirements.

To meet the changing needs of your testing system, open platforms such as PXI (PCI eXtensions for Instrumentation) make it simple to integrate measurement devices from different vendors into a single system that is easy to modify or expand, as new technologies emerge or your application needs change. With a PXI system, you can quickly integrate common measurements such as machine vision, motion control, and data acquisition to create multifunction systems without spending valuable engineering hours making the hardware work together. The open PXI platform combines industry-standard technologies, such as CompactPCI and Windows operating systems, with built-in triggering to provide a rugged, more deterministic system than desktop PCs.


Reconfigurability

Reconfigurability denotes the reconfigurable computing capability of a system, so that its behavior can be changed by reconfiguration, i.e. by loading different configware code. This static reconfigurability distinguishes between reconfiguration time and run time. Dynamic reconfigurability denotes the capability of a dynamically reconfigurable system that can change its behavior during run time, usually in response to dynamic changes in its environment.

In the context of wireless communication dynamic reconfigurability tackles the changeable behavior of wireless networks and associated equipment, specifically in the fields of radio spectrum, radio access technologies, protocol stacks, and application services.

In the context of Control reconfiguration, a field of fault-tolerant control within control engineering, reconfigurability is a property of faulty systems meaning that the original control goals specified for the fault-free system can be reached after suitable control reconfiguration.

Research regarding the (dynamic) reconfigurability of wireless communication systems is ongoing for example in working group 6 of the Wireless World Research Forum (WWRF), in the Software Defined Radio Forum (SDRF), and in the European FP6 project End-to-End Reconfigurability (E²R). Recently, E²R initiated a related standardization effort on the cohabitation of heterogeneous wireless radio systems in the framework of the IEEE P1900.4 Working Group.

Inverse Multiplexing



An inverse multiplexer (often abbreviated to "inverse mux" or "imux") allows a data stream to be broken into multiple lower data rate communications links. An inverse multiplexer differs from a demultiplexer in that each of the low rate links coming from it is related to the other ones and they all work together to carry the same data. By contrast, the output streams from a demultiplexer may each be completely independent from each other and the demultiplexer does not have to understand them in any way.

A technique that is the inverse, or opposite, of multiplexing. Traditional multiplexing folds together multiple low-speed channels onto a high-speed circuit. Inverse multiplexing spreads a high-speed channel across multiple low-speed circuits. Inverse multiplexing is used where an appropriately high-speed circuit is not available. A 6-Mbps data stream, for example, might be inverse multiplexed across four (4) T1 circuits, each running at 1.544 Mbps. Inverse multiplexing over ATM (IMA) fans out an ATM cell stream across multiple circuits between the user premises and the edge of the carrier network. In such a circumstance, multiple physical T1 circuits can be used as a single, logical ATM pipe. The IMA-compliant ATM concentrator at the user premises spreads the ATM cells across the T1 circuits in a round-robin fashion, and the ATM switch at the edge of the carrier network scans the T1 circuits in the same fashion in order to reconstitute the cell stream. There is a similar implementation agreement (IA) for Frame Relay. Multilink point-to-point protocol (PPP) serves much the same purpose in the Internet domain.
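The round-robin behaviour described above is easy to sketch; the cell granularity and link count below are arbitrary, and real IMA adds framing and differential-delay compensation that this toy version ignores.

    # A sketch of round-robin inverse multiplexing: a single high-rate stream is
    # dealt out cell by cell across several low-rate links, and the far end scans
    # the links in the same order to reconstitute the stream.
    def inverse_mux(cells, n_links):
        """Spread a sequence of cells across n_links in round-robin order."""
        links = [[] for _ in range(n_links)]
        for i, cell in enumerate(cells):
            links[i % n_links].append(cell)
        return links

    def inverse_demux(links):
        """Reconstitute the original stream by scanning the links in the same order."""
        out, i = [], 0
        while any(links):
            if links[i % len(links)]:
                out.append(links[i % len(links)].pop(0))
            i += 1
        return out

    stream = [f"cell{i}" for i in range(10)]
    links = inverse_mux(stream, 4)            # e.g. four T1 circuits
    assert inverse_demux(links) == stream     # stream arrives intact and in order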

Note that this is the opposite of a multiplexer which creates one high speed link from multiple low speed ones.

Combining, say, three low-rate links in this way provides an end-to-end connection with three times the data rate available on each of the low-rate data links. Note that, as with multiplexers, links are almost always bi-directional, and an inverse mux will practically always be combined with its reverse and still be called an inverse mux. This means that the "de-inverse mux" will actually be an inverse mux.

Inverse muxes are used, for example, to combine a number of ISDN channels together into one high rate circuit, where the DTE needs a higher rate connection than is available from a single ISDN connection. This is typically useful in areas where higher rate circuits are not available.

An alternative to an inverse mux is to use three separate links and load sharing of data between them. In the case of IP, network packets could be sent in round robin mode between each separate link. Advantages of using an inverse mux over separate links include

  • lower link latency (one single packet can be spread across all links)
  • fairer load sharing
  • network simplicity (no router needed between boxes with high speed interfaces)

EDGE

Enhanced Data rates for GSM Evolution (EDGE), Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC) is a backward-compatible digital mobile phone technology that allows improved data transmission rates, as an extension on top of standard GSM. EDGE can be considered a 3G radio technology and is part of ITU's 3G definition, but is most frequently referred to as 2.75G. EDGE was deployed on GSM networks beginning in 2003, initially by Cingular (now AT&T) in the United States.

EDGE is standardized by 3GPP as part of the GSM family, and it is an upgrade that provides a potential three-fold increase in capacity of GSM/GPRS networks. The specification achieves higher data-rates by switching to more sophisticated methods of coding, within existing GSM timeslots. Introducing 8PSK encoding, EDGE is capable of delivering higher bit-rates per radio channel in good conditions.

EDGE can be used for any packet switched application, such as an Internet connection. High-speed data applications such as video services and other multimedia benefit from EGPRS' increased data capacity. EDGE Circuit Switched is a possible future development.

Evolved EDGE was added in Release 7 of the 3GPP standard. This is a further extension on top of EDGE, providing reduced latency and potential speeds of 1Mbit/s by using even more complex coding functions than the 8PSK originally introduced with EDGE.

Technology

EDGE/EGPRS is implemented as a bolt-on enhancement for 2G and 2.5G GSM and GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE/EGPRS is a superset to GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade.

Although EDGE requires no hardware or software changes to be made in GSM core networks, base stations must be modified. EDGE compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for GSM. New mobile terminal hardware and software is also required to decode/encode the new modulation and coding schemes and carry the higher user data rates to implement new services.

Transmission techniques

In addition to Gaussian minimum-shift keying (GMSK), EDGE uses higher-order 8-phase-shift keying (8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It introduces a new technology not found in GPRS, Incremental Redundancy, which, instead of retransmitting disturbed packets, sends more redundancy information to be combined in the receiver. This increases the probability of correct decoding.
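The "3-bit word per phase change" statement can be illustrated with a small modulator sketch. The Gray bit-to-phase mapping and the omission of EDGE's 3π/8 symbol rotation are simplifications, not the exact 3GPP-specified constellation.

    # Each group of three bits selects one of eight carrier phases, which is why
    # 8PSK carries three times as many bits per symbol as a binary modulation.
    import cmath, math

    GRAY_3BIT = ["000", "001", "011", "010", "110", "111", "101", "100"]

    def modulate_8psk(bits: str):
        """Map a bit string (length a multiple of 3) to complex 8PSK symbols."""
        symbols = []
        for i in range(0, len(bits), 3):
            k = GRAY_3BIT.index(bits[i:i + 3])            # phase index 0..7
            symbols.append(cmath.exp(1j * 2 * math.pi * k / 8))
        return symbols

    syms = modulate_8psk("000111101")                      # 9 bits -> 3 symbols
    print([f"{math.degrees(cmath.phase(s)):.0f} deg" for s in syms])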

EDGE can carry data speeds up to 236.8 kbit/s (with end-to-end latency of less than 150 ms) for 4 timeslots (the theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE therefore meets the International Telecommunication Union's requirement for a 3G network, and has been accepted by the ITU as part of the IMT-2000 family of 3G standards. It also enhances the circuit data mode called HSCSD, increasing the data rate of this service.

EDGE Evolution

EDGE Evolution improves on EDGE in a number of ways. Latencies are reduced by halving the Transmission Time Interval (from 20 ms to 10 ms). Bit rates are increased up to a 1 Mbit/s peak and latencies brought down to around 100 ms using dual carriers, a higher symbol rate and higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to improve error correction. Finally, signal quality is improved by using dual antennas, improving average bit rates and spectral efficiency. EDGE Evolution can be introduced gradually as software upgrades, taking advantage of the installed base. With EDGE Evolution, end users will be able to experience mobile internet connections corresponding to a 500 kbit/s ADSL service.

Holographic Data Storage



Holographic data storage is a potential replacement technology in the area of high-capacity data storage currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium and is capable of recording multiple images in the same area utilizing light at different angles.

Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by optical storage.

Holographic data storage captures information using an optical interference pattern within a thick, photosensitive optical material. Light from a single laser beam is divided into two separate beams, a reference beam and an object or signal beam; a spatial light modulator is used to encode the object beam with the data for storage. An optical interference pattern results from the crossing of the beams' paths, creating a chemical and/or physical change in the photosensitive medium; the resulting data is represented in an optical pattern of dark and light pixels. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. The theoretical limit for the storage density of this technique is approximately tens of terabits (1 terabit = 1024 gigabits, 8 gigabits = 1 gigabyte) per cubic centimeter. In 2006, InPhase Technologies published a white paper reporting an achievement of 500 Gb/in².
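A quick back-of-the-envelope conversion of the quoted areal density, using the same 8-bits-per-byte convention as the text; raw user capacity would be lower once channel coding and formatting overhead are included.

    # Convert the reported 500 gigabits per square inch into byte-oriented figures.
    density_gbit_per_in2 = 500.0
    gbyte_per_in2 = density_gbit_per_in2 / 8          # 62.5 GB per square inch
    gbyte_per_cm2 = gbyte_per_in2 / 6.4516            # 1 in^2 = 6.4516 cm^2 -> ~9.7 GB/cm^2
    print(f"{gbyte_per_in2:.1f} GB/in^2, {gbyte_per_cm2:.1f} GB/cm^2")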

For two-color holographic recording, the reference and signal beams are fixed to a particular wavelength (green, red or IR) and the sensitizing/gating beam is a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams. It is shone intermittently on the crystal during the recording process for measuring the diffracted beam intensity. Readout is achieved by illumination with the reference beam alone. Hence the readout beam with a longer wavelength would not be able to excite the recombined electrons from the deep trap centers during readout, as they need the sensitizing light with shorter wavelength to erase them.

Usually, for two-color holographic recording, two different dopants are required to promote trap centers, which belong to transition metal and rare earth elements and are sensitive to certain wavelengths. By using two dopants, more trap centers would be created in the Lithium niobate crystal. Namely a shallow and a deep trap would be created. The concept now is to use the sensitizing light to excite electrons from the deep trap farther from the valence band to the conduction band and then to recombine at the shallow traps nearer to the conduction band. The reference and signal beam would then be used to excite the electrons from the shallow traps back to the deep traps. The information would hence be stored in the deep traps. Reading would be done with the reference beam since the electrons can no longer be excited out of the deep traps by the long wavelength beam.

With its omnipresent computers, all connected via the Internet, the Information Age has led to an explosion of information available to users. The decreasing cost of storing data, and the increasing storage capacities of the same small device footprint, have been key enablers of this revolution. While current storage needs are being met, storage technologies must continue to improve in order to keep pace with the rapidly increasing demand.

However, both magnetic and conventional optical data storage technologies, where individual bits are stored as distinct magnetic or optical changes on the surface of a recording medium, are approaching physical limits beyond which individual bits may be too small or too difficult to store. Storing information throughout the volume of a medium—not just on its surface—offers an intriguing high-capacity alternative. Holographic data storage is a volumetric approach which, although conceived decades ago, has made recent progress toward practicality with the appearance of lower-cost enabling technologies, significant results from longstanding research efforts, and progress in holographic recording materials.

In addition to high storage density, holographic data storage promises fast access times, because the laser beams can be moved rapidly without inertia, unlike the actuators in disk drives. With the inherent parallelism of its pagewise storage and retrieval, a very large compound data rate can be reached by having a large number of relatively slow, and therefore low-cost, parallel channels.

Because of all of these advantages and capabilities, holographic storage has provided an intriguing alternative to conventional data storage techniques for three decades. However, it is the recent availability of relatively low-cost components, such as liquid crystal displays for SLMs and solid-state camera chips from video camcorders for detector arrays, which has led to the current interest in creating practical holographic storage devices. Recent reviews of holographic storage can be found in the literature. A team of scientists from the IBM Research Division has been involved in exploring holographic data storage, partially as a partner in the DARPA-initiated consortia on holographic data storage systems (HDSS) and on photorefractive information storage materials (PRISM). In this paper, we describe the current status of our effort.

The overall theme of our research is the evaluation of the engineering tradeoffs between the performance specifications of a practical system, as affected by the fundamental material, device, and optical physics. Desirable performance specifications include data fidelity as quantified by bit-error rate (BER), total system capacity, storage density, readout rate, and the lifetime of stored data. This paper begins by describing the hardware aspects of holographic storage, including the test platforms we have built to evaluate materials and systems tradeoffs experimentally, and the hardware innovations developed during this process. Phase-conjugate readout, which eases the demands on both hardware design and material quality, is experimentally demonstrated. The second section of the paper describes our work in coding and signal processing, including modulation codes, novel preprocessing techniques, the storage of more than one bit per pixel, and techniques for quantifying coding tradeoffs. Then we discuss associative retrieval, which introduces parallel search capabilities offered by no other storage technology. The fourth section describes our work in testing and evaluating materials, including permanent or write-once read-many-times (WORM) materials, read-write materials, and photon-gated storage materials offering reversible storage without sacrificing the lifetime of stored data. The paper concludes with a discussion of applications for holographic data storage.

Integer Fast Fourier Transform (IntFFT)

A concept of integer fast Fourier transform (IntFFT) for approximating the discrete Fourier transform is introduced. Unlike the fixed-point fast Fourier transform (FxpFFT), the new transform has the properties that it is an integer-to-integer mapping, is power adaptable, and is reversible. The lifting scheme is used to approximate complex multiplications appearing in the FFT lattice structures, where the dynamic range of the lifting coefficients can be controlled by proper choices of lifting factorizations. The split-radix FFT is used to illustrate the approach for the case of the 2^N-point FFT, in which case an upper bound on the minimal dynamic range of the internal nodes, which is required by the reversibility of the transform, is presented and confirmed by simulation. The transform can be implemented by using only bit shifts and additions but no multiplication. A method for minimizing the number of additions required is presented. While preserving the reversibility, the IntFFT is shown experimentally to yield the same accuracy as the FxpFFT when their coefficients are quantized to a certain number of bits. Complexity of the IntFFT is shown to be much lower than that of the FxpFFT in terms of the numbers of additions and shifts. Finally, they are applied to noise reduction applications, where the IntFFT provides significant improvement over the FxpFFT at low power and maintains similar results at high power.
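The core trick of the abstract, replacing each complex multiplication (a plane rotation in the FFT butterfly) by three lifting steps with rounding, can be sketched as follows. The simple round-to-nearest quantizer stands in for the paper's optimized shift-and-add factorizations, but it already shows the key property: the rounded map is integer-to-integer yet exactly invertible.

    # Three lifting steps realise a rotation by theta; rounding after each step
    # keeps the values integer, and each step can be undone by subtracting the
    # same rounded term, so the transform stays perfectly reversible.
    import math

    def lift_rotate(x: int, y: int, theta: float):
        """Integer approximation of (x, y) rotated by theta (theta not a multiple of pi)."""
        p = (math.cos(theta) - 1) / math.sin(theta)   # = -tan(theta/2)
        u = math.sin(theta)
        x = x + round(p * y)
        y = y + round(u * x)
        x = x + round(p * y)
        return x, y

    def lift_unrotate(x: int, y: int, theta: float):
        """Exact inverse: undo the three lifting steps in reverse order."""
        p = (math.cos(theta) - 1) / math.sin(theta)
        u = math.sin(theta)
        x = x - round(p * y)
        y = y - round(u * x)
        x = x - round(p * y)
        return x, y

    theta = math.pi / 7
    xr, yr = lift_rotate(1000, -250, theta)
    print((xr, yr))                                      # close to the true rotation
    print(lift_unrotate(xr, yr, theta) == (1000, -250))  # True: perfectly reversible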

NRAM Nano Ram



Nano-RAM (NRAM) is a proprietary computer memory technology from the company Nantero. It is a type of nonvolatile random access memory based on the mechanical position of carbon nanotubes deposited on a chip-like substrate. In theory the small size of the nanotubes allows for very high density memories. Nantero also refers to it as NRAM for short.

Nantero's technology is based on a well-known effect in carbon nanotubes where crossed nanotubes on a flat surface can either be touching or slightly separated in the vertical direction (normal to the substrate) due to van der Waals interactions. In Nantero's technology, each NRAM "cell" consists of a number of nanotubes suspended on insulating "lands" over a metal electrode. At rest the nanotubes lie above the electrode "in the air", about 13 nm above it in the current versions, stretched between the two lands. A small dot of gold is deposited on top of the nanotubes on one of the lands, providing an electrical connection, or terminal. A second electrode lies below the surface, about 100 nm away.

Normally, with the nanotubes suspended above the electrode, a small voltage applied between the terminal and upper electrode will result in no current flowing. This represents a "0" state. However if a larger voltage is applied between the two electrodes, the nanotubes will be pulled towards the upper electrode until they touch it. At this point a small voltage applied between the terminal and upper electrode will allow current to flow (nanotubes are conductors), representing a "1" state. The state can be changed by reversing the polarity of the charge applied to the two electrodes.
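A toy behavioural model of the cell just described may help; the voltage thresholds and the class interface are invented for illustration and are not Nantero figures.

    # A write voltage above a threshold snaps the suspended nanotubes onto the
    # electrode ("1"); reversing the polarity releases them ("0"); a small read
    # voltage merely senses whether current flows without disturbing the state.
    WRITE_THRESHOLD = 3.0   # volts, hypothetical
    READ_VOLTAGE    = 0.5

    class NramCell:
        def __init__(self):
            self.in_contact = False          # tubes suspended -> "0"

        def apply(self, volts: float):
            if volts >= WRITE_THRESHOLD:
                self.in_contact = True       # van der Waals attraction holds the tubes down
            elif volts <= -WRITE_THRESHOLD:
                self.in_contact = False      # tubes spring back under mechanical strain
            # small voltages (e.g. a read pulse) leave the mechanical state unchanged

        def read(self) -> int:
            return 1 if self.in_contact else 0   # current flows only when tubes touch

    cell = NramCell()
    cell.apply(READ_VOLTAGE);  print(cell.read())   # 0: reading does not disturb the cell
    cell.apply(4.0);           print(cell.read())   # 1: written, retained without power
    cell.apply(-4.0);          print(cell.read())   # 0: erased by reversed polarity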

What causes this to act as a memory is that the two positions of the nanotubes are both stable. In the off position the mechanical strain on the tubes is low, so they will naturally remain in this position and continue to read "0". When the tubes are pulled into contact with the upper electrode a new force, the tiny Van der Waals force, comes into play and attracts the tubes enough to overcome the mechanical strain. Once in this position the tubes will again happily remain there and continue to read "1". These positions are fairly resistant to outside interference like radiation that can erase or flip memory in a conventional DRAM.

NRAMs are built by depositing masses of nanotubes on a pre-fabricated chip containing rows of bar-shaped electrodes with the slightly taller insulating layers between them. Tubes in the "wrong" location are then removed, and the gold terminals deposited on top. Any number of methods can be used to select a single cell for writing, for instance the second set of electrodes can be run in the opposite direction, forming a grid, or they can be selected by adding voltage to the terminals as well, meaning that only those selected cells have a total voltage high enough to cause the flip.

Currently the method of removing the unwanted nanotubes makes the system impractical. The accuracy and size of the epitaxy machinery is considerably "larger" than the cell size otherwise possible. Existing experimental cells have very low densities compared to existing systems; some new method of construction will have to be introduced in order to make the system practical.

Advantages

NRAM has a density, at least in theory, similar to that of DRAM. DRAM consists of a number of capacitors, which are essentially two small metal plates with a thin insulator between them. NRAM is similar, with the terminals and electrodes being roughly the same size as the plates in a DRAM, the nanotubes between them being so much smaller they add nothing to the overall size. However it seems there is a minimum size at which a DRAM can be built, below which there is simply not enough charge being stored to be able to effectively read it. NRAM appears to be limited only by the current state of the art in lithography. This means that NRAM may be able to become much denser than DRAM, meaning that it will also be less expensive, if it becomes possible to control the locations of carbon nanotubes at the scale the semiconductor industry can control the placement of devices on silicon.

Additionally, unlike DRAM, NRAM does not require power to "refresh" it, and will retain its memory even after the power is removed. Additionally the power needed to write to the device is much lower than a DRAM, which has to build up charge on the plates. This means that NRAM will not only compete with DRAM in terms of cost, but will require much less power to run, and as a result also be much faster (write performance is largely determined by the total charge needed). NRAM can theoretically reach performance similar to SRAM, which is faster than DRAM but much less dense, and thus much more expensive.

In comparison with other NVRAM technologies, NRAM has the potential to be even more advantageous. The most common form of NVRAM today is Flash memory, which stores a bit as charge trapped on a floating gate surrounded by a high-performance insulator. After being written to, the insulator traps electrons on the gate, locking the cell into the "1" state. However, in order to change that bit the insulator has to be "overcharged" to erase any charge already stored in it. This requires high voltage, about 10 volts, much more than a battery can provide. Flash systems thus have to include a "charge pump" that slowly builds up power and then releases it at higher voltage. This process is not only very slow, but degrades the insulators as well. For this reason Flash has a limited lifetime, between 10,000 and 1,000,000 "writes" before the device will no longer operate effectively.

NRAM potentially avoids all of these issues. The read and write process are both "low energy" in comparison to Flash (or DRAM for that matter), meaning that NRAM can result in longer battery life in conventional devices. It may also be much faster to write than either, meaning it may be used to replace both. A modern cell phone will often include Flash memory for storing phone numbers and such, DRAM for higher performance working memory because flash is too slow, and additionally some SRAM in the CPU because DRAM is too slow for its own use. With NRAM all of these may be replaced, with some NRAM placed on the CPU to act as the CPU cache, and more in other chips replacing both the DRAM and Flash.

Note that Nantero's NRAM acronym is also commonly used as a synonym for the more generic NVRAM, which refers to all nonvolatile RAM memories. A related nanotechnology development is the nanomotor invented at the University of Bologna and the California NanoSystems Institute: a molecular motor which works continuously without the consumption of fuel, powered instead by sunlight. The research is federally funded by the National Science Foundation and the National Academy of Sciences.

Carbon Nanotubes
Carbon nanotubes (CNTs) are a recently discovered allotrope of carbon. They take the form of cylindrical carbon molecules and have novel properties that make them potentially useful in a wide variety of applications in nanotechnology, electronics, optics, and other fields of materials science. They exhibit extraordinary strength and unique electrical properties, and are efficient conductors of heat. Inorganic nanotubes have also been synthesized.
A nanotube is a member of the fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to several millimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).

Manufacturing a nanotube is dependent on applied quantum chemistry, specifically, orbital hybridization. Nanotubes are composed entirely of sp2 bonds, similar to those of graphite. This bonding structure, stronger than the sp3 bonds found in diamond, provides the molecules with their unique strength. Nanotubes naturally align themselves into "ropes" held together by Van der Waals forces. Under high pressure, nanotubes can merge together, trading some sp2 bonds for sp3 bonds, giving great possibility for producing strong, unlimited-length wires through high-pressure nanotube linking.

Fabrication Of NRAM
This nano-electromechanical memory, called NRAM, is a memory with actual moving parts, with dimensions measured in nanometers. Its carbon nanotube based technology takes advantage of van der Waals forces to create the basic on/off junctions of a bit. Van der Waals forces are interactions between atoms that enable non-covalent binding; they rely on electron attractions that arise only at the nanoscale as a force to be reckoned with. The company is using this property in its design to integrate nanoscale material properties with established CMOS fabrication techniques.

Storage In NRAM
NRAM works by balancing carbon nanotubes on ridges of silicon. Under differing electric charges, the tubes can be physically swung into one of two positions representing ones and zeros. Because the tubes are so small, this movement is very fast and needs very little power, and because the tubes are a thousand times as conductive as copper, it is very easy to sense their position when reading back the data. Once in position the tubes stay there until a signal resets them.
The bit itself is not stored in the nanotubes, but rather is stored as the position of the nanotube: up is bit 0 and down is bit 1. Bits are switched between the states by the application of an electric field.

The technology works by changing the charge placed on a latticework of crossed nanotubes. By altering the charges, engineers can cause the tubes to bind together or separate, creating the ones and zeros that form the basis of computer memory. If two nanotubes perpendicular to each other carry opposite charges, they will bend together and touch; if they carry similar charges they will repel. These two positions are used to store one and zero. The cell stays in the same state until another change is made in the electric field, so when you turn the computer off, the memory is not erased. All the data can be kept in the NRAM, giving the computer an instant boot.

Ovonic Unified Memory



Ovonyx is developing a microelectronics memory technology called Ovonic Unified Memory (OUM). The technology was originally developed by Stanford Ovshinsky and is exclusively licensed from Energy Conversion Devices (ECD) Inc.; the name "ovonic" is derived from "Ovshinsky" and "electronic". OUM is also known as phase-change memory because it uses a unique thin-film phase-change material to store information economically and with excellent solid-state memory properties. It is positioned as a replacement for conventional memories such as Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FeRAM or FRAM), Dynamic Random Access Memory (DRAM), and Static Random Access Memory (SRAM).

The same ovonic phase-change technology allows the rewriting of CDs and DVDs. CD and DVD drives read and write the ovonic material with a laser, but OUM uses electric current to change the phase of the material. The thin-film material is a phase-change chalcogenide alloy similar to the film used to store information on commercial CD-RW and DVD-RAM optical disks, based on proprietary technology originally developed by and exclusively licensed from Energy Conversion Devices.

Evolution Of OUM
Magnetic Random Access Memory (MRAM), a technology first developed in the 1970s but rarely commercialized, has attracted the backing of IBM, Motorola, and others. MRAM stores information by flip-flopping two layers of magnetic material in and out of alignment with an electric current. For reading and writing data, MRAM can be as fast as a few nanoseconds, or billionths of a second, the best among the three next-generation memory candidates, and it promises to integrate easily with the industry's existing chip manufacturing process, since MRAM is built on top of silicon circuitry. The biggest problem with MRAM is the relatively small, difficult-to-detect difference between its ON and OFF states.

The second potential successor to flash, Ferroelectric Random Access Memory (FeRAM or FRAM), has actually been commercially available for nearly 15 years and has attracted the backing of Fujitsu, Matsushita, IBM, and Ramtron. FRAM relies on the polarization of what amount to tiny magnets inside certain materials, such as perovskite, found in basaltic rocks. FRAM memory cells do not wear out until they have been read or written to billions of times. And while MRAM and OUM would require the addition of six to eight "masking" layers in the chip manufacturing process, just like Flash, FRAM might require as little as two extra layers.

OUM is based on the information storage technology developed by Ovshinsky that allows the rewriting of CDs and DVDs. While CD and DVD drives read and write the ovonic material with lasers, OUM uses electric current to change the phase of memory cells. These cells are either in a crystalline state, where electrical resistance is low, or in an amorphous state, where resistance is high. OUM cells can be read and written trillions of times, making their use essentially nondestructive, unlike MRAM or FRAM. OUM's dynamic range, the difference between the electrical resistance in the crystalline state and in the amorphous state, is wide enough to allow more than one set of ON and OFF values in a cell, dividing it into several bits and multiplying memory density by two, four, or potentially even 16 times. OUM is not as fast as MRAM. The OUM solid-state memory has cost advantages over conventional solid-state memories such as DRAM or Flash due to its thin-film nature, very small active storage media, and simple device structure. OUM requires fewer steps in an IC manufacturing process, resulting in reduced cycle times, fewer defects, and greater manufacturing flexibility.


4G Wireless Systems


A fourth generation (4G) wireless system is a packet-switched wireless system with wide-area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wideband (UWB) radio, and millimeter-wave wireless. Data rates of around 20 Mbps are targeted, with support for mobile speeds of up to 200 km/h. The high performance is achieved by the use of long-term channel prediction, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz. 4G aims to give the ability for worldwide roaming, with network access available anywhere.

Wireless mobile communications systems are commonly identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in-between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, making their appearance in late 2002 and in 2003, are designed for voice and paging services, as well as interactive media use such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wideband (UWB) radio, millimeter-wave wireless, and smart antennas. Data rates of around 20 Mbps are targeted, with mobile speeds of up to 200 km/h, and the ability for worldwide roaming with network access anywhere.
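Since OFDM is named above as one of 4G's enabling techniques, here is a minimal numpy sketch of an OFDM transmit/receive chain over an ideal channel; the subcarrier count, cyclic-prefix length, and QPSK mapping are arbitrary choices rather than parameters of any particular 4G proposal.

    # Data symbols are placed on orthogonal subcarriers, an inverse FFT turns them
    # into one time-domain OFDM symbol, a cyclic prefix is prepended, and the
    # receiver undoes both steps.
    import numpy as np

    N_SC, CP = 64, 16
    rng = np.random.default_rng(0)

    bits = rng.integers(0, 2, size=2 * N_SC)
    qpsk = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)  # one symbol per subcarrier

    tx = np.fft.ifft(qpsk)                    # time-domain OFDM symbol
    tx_cp = np.concatenate([tx[-CP:], tx])    # cyclic prefix guards against multipath

    rx = tx_cp[CP:]                           # receiver: strip the prefix...
    recovered = np.fft.fft(rx)                # ...and FFT back onto the subcarriers

    print(np.allclose(recovered, qpsk))       # True over this ideal, noiseless channel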

Features:
  • Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
  • IP-based mobile system
  • High speed, high capacity, and low cost per bit
  • Global access, service portability, and scalable mobile services
  • Seamless switching, and a variety of Quality of Service driven services
  • Better scheduling and call admission control techniques
  • Ad hoc and multi-hop networks (the strict delay requirements of voice make multi-hop network service a difficult problem)
  • Better spectral efficiency
  • Seamless networking of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN)
  • An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development


AC Performance Of Nanoelectronics

Nanoelectronic devices fall into two classes: tunnel devices and ballistic transport devices. In tunnel devices, single-electron effects occur if the tunnel resistance is larger than h/e² ≈ 25 kΩ. In ballistic devices with cross-sectional dimensions in the range of the quantum mechanical wavelength of electrons, the resistance is of order h/e² ≈ 25 kΩ. This high resistance may seem to restrict the operational speed of nanoelectronics in general. However, the capacitance values and drain-source spacing are typically small, which gives rise to very small RC times and transit times of the order of picoseconds or less. Thus the speed may be very large, up to the THz range. The goal of this seminar is to present models and performance predictions about the effects that set the speed limit in carbon nanotube transistors, which form the ideal test bed for understanding the high-frequency properties of nanoelectronics because they may behave as ideal ballistic 1-D transistors.

Ballistic Transport- An Outline
When carriers travel through a semiconductor material, they are likely to be scattered by any number of possible sources, including acoustic and optical phonons, ionized impurities, defects, interfaces, and other carriers. If, however, the distance traveled by the carrier is smaller than the mean free path, it is likely not to encounter any scattering events; it can, as a result, move ballistically through the channel. To the first order, the existence of ballistic transport in a MOSFET depends on the value of the characteristic scattering length (i.e. mean free path) in relation to channel length of the transistor.

This scattering length, l, can be estimated from the measured carrier mobility as l = v_th·τ, with the average scattering time given by τ = μ·m*/q, where μ is the mobility, m* is the carrier effective mass, q is the elementary charge, and v_th is the thermal velocity. Because scattering mechanisms determine the extent of ballistic transport, it is important to understand how these depend upon operating conditions such as the normal electric field and ambient temperature.
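As a worked example of this estimate, the sketch below (Python; the mobility value and effective-mass ratio are illustrative assumptions, not figures quoted in this paper) computes τ = μ·m*/q and l = v_th·τ for electrons in a silicon channel.

```python
import math

Q  = 1.602e-19      # elementary charge, C
KB = 1.381e-23      # Boltzmann constant, J/K
M0 = 9.109e-31      # free-electron mass, kg

def scattering_length(mobility_cm2, m_eff_ratio, temperature_k):
    """Estimate the mean free path l = v_th * tau, with tau = mu * m* / q."""
    mu = mobility_cm2 * 1e-4                            # cm^2/Vs -> m^2/Vs
    m_eff = m_eff_ratio * M0
    tau = mu * m_eff / Q                                # average scattering time, s
    v_th = math.sqrt(3 * KB * temperature_k / m_eff)    # thermal velocity, m/s
    return v_th * tau

# Illustrative values: electron mobility ~400 cm^2/Vs in a strongly inverted
# Si channel at 300 K, effective-mass ratio ~0.26 (assumed for this sketch)
l = scattering_length(mobility_cm2=400, m_eff_ratio=0.26, temperature_k=300)
print(f"mean free path ~ {l * 1e9:.1f} nm")
```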

Dependence On Normal Electric Field
In state-of-the-art MOSFET inversion layers, carrier scattering is dominated by phonons, impurities (Coulomb interaction), and surface roughness scattering at the Si-SiO2 interface. The relative importance of each scattering mechanism depends on the effective electric field component normal to the conduction channel. At low fields, impurity scattering dominates due to strong Coulombic interactions between the carriers and the impurity centers. As the electric field is increased, acoustic phonons begin to dominate the scattering process. At very high fields, carriers are pulled closer to the Si-SiO2 gate oxide interface, so surface roughness scattering degrades carrier mobility. A universal mobility model has been developed that relates the effective normal field to the carrier mobility limited by phonon and surface roughness scattering.
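The specific model equation is not reproduced in this text; as a hedged illustration of how such field-dependent mobility components are commonly combined, the sketch below uses Matthiessen's rule with purely illustrative power-law dependences and prefactors (none of these numbers come from the universal mobility model itself).

```python
def effective_mobility(e_eff_mv_per_cm,
                       mu_ph0=500.0, ph_exp=0.3,
                       mu_sr0=2000.0, sr_exp=2.0):
    """Combine phonon- and surface-roughness-limited mobilities (cm^2/Vs)
    via Matthiessen's rule; all prefactors and exponents are illustrative."""
    e = e_eff_mv_per_cm                      # effective normal field, MV/cm
    mu_phonon = mu_ph0 / e**ph_exp           # phonon term weakens slowly with field
    mu_surface = mu_sr0 / e**sr_exp          # surface roughness dominates at high field
    return 1.0 / (1.0 / mu_phonon + 1.0 / mu_surface)

for field in (0.2, 0.5, 1.0, 1.5):           # MV/cm
    print(f"E_eff = {field:.1f} MV/cm -> mu_eff ~ {effective_mobility(field):.0f} cm^2/Vs")
```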

Dependence On Temperature
When the temperature is changed, the relative importance of each of the aforementioned scattering mechanisms is altered. Phonon scattering becomes less important at very low temperatures. Impurity scattering, on the other hand, becomes more significant because carriers move more slowly (the thermal velocity is decreased) and thus have more time to interact with impurity centers. Surface roughness scattering remains the same because it does not depend on temperature. At liquid nitrogen temperature (77 K) and an effective electric field of 1 MV/cm, the electron and hole mobilities are ~700 cm²/V·s and ~100 cm²/V·s, respectively. Using the above expressions, the corresponding scattering lengths are approximately 17 nm and 3.6 nm. These scattering lengths can be assumed to be worst-case scenarios, as large operating voltages (1 V) and aggressively scaled gate oxides (10 Å) are assumed. Thus, actual scattering lengths will likely be larger than the calculated values.

Further device design considerations for maximizing this scattering length will be discussed in the last section of this paper. Still, the values calculated above are certainly in the range of transistor gate lengths currently being studied in advanced MOSFET research (<50 nm).

To accurately determine the extent of ballistic transport evident in a particular transistor structure, Monte Carlo simulation methods must be employed. Only by modeling the random trajectory of each carrier traveling through the channel can we truly assess the extent of ballistic transport in a MOSFET.
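As a toy illustration of the idea (not the full Monte Carlo device simulation described above), the sketch below draws exponentially distributed free-flight distances and counts how many carriers cross the channel without scattering; the channel length and mean free path are illustrative assumptions.

```python
import random

def ballistic_fraction(channel_nm, mean_free_path_nm, n_carriers=100_000, seed=1):
    """Toy Monte Carlo: a carrier counts as 'ballistic' if its first free-flight
    distance (exponentially distributed with the given mean free path) exceeds
    the channel length. Analytically this fraction is exp(-L / l)."""
    rng = random.Random(seed)
    ballistic = sum(
        1 for _ in range(n_carriers)
        if rng.expovariate(1.0 / mean_free_path_nm) > channel_nm
    )
    return ballistic / n_carriers

# Illustrative: 25 nm channel, 17 nm mean free path (the electron value quoted above)
print(f"ballistic fraction ~ {ballistic_fraction(25, 17):.2f}")   # ~0.23
```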



Digital Signal Processing is carried out by mathematical operations. Digital Signal Processors are microprocessors specifically designed to handle Digital Signal Processing tasks. These devices have seen tremendous growth in the last decade, finding use in everything from cellular telephones to advanced scientific instruments. In fact, hardware engineers use "DSP" to mean Digital Signal Processor, just as algorithm developers use "DSP" to mean Digital Signal Processing. DSP has become a key component in many consumer, communications, medical, and industrial products. These products use a variety of hardware approaches to implement DSP, ranging from the use of off-the-shelf microprocessors to field-programmable gate arrays (FPGAs) to custom integrated circuits (ICs).

Programmable "DSP processors," a class of microprocessors optimized for DSP, are a popular solution for several reasons. In comparison to fixed-function solutions, they have the advantage of potentially being reprogrammed in the field, allowing product upgrades or fixes. They are often more cost-effective than custom hardware, particularly for low-volume applications, where the development cost of custom ICs may be prohibitive. And in comparison to general-purpose microprocessors, DSP processors often have an advantage in terms of speed, cost, and energy efficiency.

DSP Algorithms Mould DSP Architectures
From the outset, DSP algorithms have moulded DSP processor architectures. For nearly every feature found in a DSP processor, there are associated DSP algorithms whose computation is in some way eased by inclusion of this feature. Therefore, perhaps the best way to understand the evolution of DSP architectures is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors.

Fast Multipliers
The FIR filter is mathematically expressed as a dot product of a vector of input data and a vector of filter coefficients. For each "tap" of the filter, a data sample is multiplied by a filter coefficient, with the result added to a running sum over all of the taps. Hence, the main component of the FIR filter algorithm is the dot product: multiply and add, multiply and add. These operations are not unique to the FIR filter algorithm; multiplication is one of the most common operations performed in signal processing, and convolution, IIR filtering, and Fourier transforms all involve heavy use of multiply-accumulate operations. Originally, microprocessors implemented multiplications by a series of shift and add operations, each of which consumed one or more clock cycles. As might be expected, faster multiplication hardware yields faster performance in many DSP algorithms, and for this reason all modern DSP processors include at least one dedicated single-cycle multiplier or combined multiply-accumulate (MAC) unit.
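The multiply-accumulate pattern described above is easiest to see in a direct-form FIR filter; the sketch below (Python, with illustrative moving-average coefficients) shows the per-sample loop of multiplies and running-sum additions that a single-cycle MAC unit is built to accelerate.

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR: for each output sample, multiply-accumulate over all taps."""
    n_taps = len(coeffs)
    history = [0.0] * n_taps          # most recent inputs, newest first
    output = []
    for x in samples:
        history = [x] + history[:-1]  # shift the new sample into the delay line
        acc = 0.0
        for h, c in zip(history, coeffs):
            acc += h * c              # one multiply-accumulate per tap
        output.append(acc)
    return output

# Illustrative 4-tap moving-average coefficients
print(fir_filter([1.0, 2.0, 3.0, 4.0, 5.0], [0.25, 0.25, 0.25, 0.25]))
# -> [0.25, 0.75, 1.5, 2.5, 3.5]
```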

Multiple Execution Units
DSP applications typically have very high computational requirements in comparison to other types of computing tasks, since they often must execute DSP algorithms in real time on lengthy segments of signals sampled at 10-100 kHz or higher. Hence, DSP processors often include several independent execution units that are capable of operating in parallel; for example, in addition to the MAC unit, they typically contain an arithmetic-logic unit (ALU) and a shifter.

Efficient Memory Accesses
Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. To address the need for increased memory bandwidth, early DSP processors developed memory architectures that could support multiple memory accesses per cycle. Often, instructions were stored in one memory bank, while data was stored in another. With this arrangement, the processor could fetch an instruction and a data operand in parallel in every cycle.
Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches, thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address generation units operate in parallel with the DSP processor's main execution units, enabling it to access data at new locations in memory without pausing to calculate the new address. Memory accesses in DSP algorithms tend to exhibit very predictable patterns; for example, in an FIR filter the coefficients are accessed sequentially from start to finish for each sample, and the accesses then start over from the beginning of the coefficient vector when processing the next input sample.
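Dedicated address generation units typically exploit exactly this predictability, for example with modulo (circular) addressing. The sketch below (Python; the buffer size, base address, and update rule are illustrative, not a description of any particular processor's AGU) shows how an address register can wrap around a coefficient buffer automatically, without an explicit end-of-buffer test in the inner loop.

```python
class ModuloAddressGenerator:
    """Toy model of an AGU register with circular (modulo) post-increment:
    each access returns the current address, then advances and wraps
    within a fixed-length buffer."""

    def __init__(self, base, length):
        self.base = base
        self.length = length
        self.offset = 0

    def next_address(self):
        addr = self.base + self.offset
        self.offset = (self.offset + 1) % self.length   # wrap; real AGUs do this without a branch
        return addr

# Illustrative: a 4-entry coefficient buffer starting at address 0x100
agu = ModuloAddressGenerator(base=0x100, length=4)
print([hex(agu.next_address()) for _ in range(6)])
# -> ['0x100', '0x101', '0x102', '0x103', '0x100', '0x101']
```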




In telecommunications, Free Space Optics (FSO) is an optical communication technology that uses light propagating in free space to transmit data between two points. The technology is useful where a physical connection by means of fibre optic cable is impractical due to high costs or other considerations. Free Space Optics is also used for communications between spacecraft. The optical links are usually implemented with infrared laser light, although low-data-rate communication over short distances is possible using LEDs. The maximum range for terrestrial links is on the order of 10 km, but the stability and quality of the link are highly dependent on atmospheric factors such as rain, fog, dust and heat. In space, the range is on the order of several thousand kilometres. IrDA is a very simple form of free-space optical communication.

Optical communications, in various forms, have been used for thousands of years. The Ancient Greeks polished their shields to send signals during battle. Later on, a wireless solar telegraph called the heliograph was developed, which signals using Morse-code flashes of sunlight. Alexander Graham Bell developed a light-based telephone, the photophone.

The invention of the laser in the 1960s revolutionized free space optics. Military organizations were particularly interested and boosted development. The technology lost market momentum when the installation of optical fiber networks for civilian use was at its peak.

Applications

Typical scenarios for use are:

  • LAN-to-LAN connections on campuses at Fast Ethernet or Gigabit Ethernet speeds.
  • LAN-to-LAN connections in a city; for example, a metropolitan area network.
  • To cross a public road or other barriers which the sender and receiver do not own.
  • Speedy service delivery of high-bandwidth access to optical fiber networks.
  • Converged Voice-Data-Connection.
  • Temporary network installation (for events or other purposes).
  • Reestablish high-speed connection quickly (disaster recovery).
  • As an alternative or upgrade add-on to existing wireless technologies.
  • As a safety add-on for important fiber connections (redundancy).
  • For communications between spacecraft, including elements of a satellite constellation.

The light beam can be very narrow, which makes FSO hard to intercept, improving security. In any case, it is comparatively easy to encrypt any data traveling across the FSO connection for additional security. FSO provides vastly improved EMI behavior using light instead of microwaves.

Advantages

  • Ease of deployment
  • License-free operation
  • High bit rates
  • Low bit error rates
  • Immunity to electromagnetic interference
  • Full duplex operation
  • Protocol transparency
  • Very secure due to the high directionality and narrowness of the beam(s)
  • No Fresnel zone necessary

Disadvantages

When used in a vacuum, for example for inter-spacecraft communication, FSO may provide performance similar to that of fibre-optic systems. However, for terrestrial applications, the principal limiting factors are:

  • Beam dispersion
  • Atmospheric absorption
  • Rain
  • Fog (attenuation of roughly 10-100 dB/km)
  • Snow
  • Scintillation
  • Background light
  • Shadowing
  • Pointing stability in wind
  • Pollution / smog
  • If the sun goes exactly behind the transmitter, it can swamp the signal.

These factors cause an attenuated receiver signal and lead to a higher bit error ratio (BER). To overcome these issues, vendors have developed solutions such as multi-beam or multi-path architectures, which use more than one transmitter and more than one receiver. Some state-of-the-art devices also have a larger fade margin (extra power reserved for rain, smog, and fog). To maintain an eye-safe environment, good FSO systems limit laser power density and support laser classes 1 or 1M. Atmospheric and fog attenuation, which are exponential in nature, limit the practical range of FSO devices to a few kilometres.
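To see why a dB-per-km attenuation caps the range, the following link-budget sketch (Python; the transmit power, receiver sensitivity, fixed losses, fade margin, and attenuation figures are illustrative assumptions, not data from any specific product) solves for the distance at which the received power drops below the receiver sensitivity.

```python
def max_range_km(tx_power_dbm, rx_sensitivity_dbm, fixed_losses_db,
                 atten_db_per_km, fade_margin_db):
    """Distance at which the received power hits the receiver sensitivity,
    assuming a fixed per-kilometre atmospheric attenuation (geometric
    beam-spreading loss is lumped into fixed_losses_db for simplicity)."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm - fixed_losses_db - fade_margin_db
    return max(budget_db, 0.0) / atten_db_per_km

# Illustrative 20 dBm transmitter, -30 dBm receiver sensitivity,
# 10 dB of optical/geometric losses, 3 dB fade margin
for label, atten in [("light haze", 4.0), ("light fog", 10.0), ("moderate fog", 40.0)]:
    print(f"{label:12s}: ~{max_range_km(20, -30, 10, atten, 3):.1f} km")
```

Even with a generous 37 dB budget in this toy example, attenuation of tens of dB per kilometre in fog pulls the usable range down to roughly a kilometre, which is consistent with the range limits noted above.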