
Spintronics

Spintronics (a neologism meaning "spin transport electronics"), also known as magnetoelectronics, is an emerging technology that exploits the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.

History

The research field of spintronics emerged from experiments on spin-dependent electron transport phenomena in solid-state devices done in the 1980s, including the observation of spin-polarized electron injection from a ferromagnetic metal into a normal metal by Johnson and Silsbee (1985), and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origins can be traced back further to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow, and to initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990.

Conventional electronic devices rely on the transport of electrical charge carriers - electrons - in a semiconductor such as silicon. Now, however, physicists are trying to exploit the 'spin' of the electron rather than its charge to create a remarkable new generation of 'spintronic' devices which will be smaller, more versatile and more robust than the silicon chips and circuit elements in use today. The potential market is worth hundreds of billions of dollars a year.

All spintronic devices act according to a simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. The spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared to the tens of femtoseconds over which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications and, potentially, for quantum computing, where the electron spin would represent a quantum bit (qubit) of information.

Magnetoelectronics, spin electronics, and spintronics are different names for the same thing: the use of electrons' spins (not just their electrical charge) in information circuits.

Theory

Electrons are spin-1/2 fermions and therefore constitute a two-state system with spin "up" and spin "down". To make a spintronic device, the primary requirements are to have a system that can generate a current of spin polarized electrons comprising more of one spin species—up or down—than the other (called a spin injector), and a separate system that is sensitive to the spin polarization of the electrons (spin detector). Manipulation of the electron spin during transport between injector and detector (especially in semiconductors) via spin precession can be accomplished using real external magnetic fields or effective fields caused by spin-orbit interaction.
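As a rough illustration of manipulation by precession, the short Python sketch below evaluates the textbook Larmor relation omega = g * mu_B * B / hbar for an assumed field and transit time; both numbers are invented for the example and are not taken from the text.

    import math

    g, mu_B, hbar = 2.0, 9.274e-24, 1.055e-34   # free-electron g-factor, Bohr magneton (J/T), hbar (J*s)
    B = 0.1                                      # applied magnetic field in tesla (assumed)
    transit_time = 50e-12                        # 50 ps transit through the channel (assumed)

    omega = g * mu_B * B / hbar                  # Larmor precession frequency in rad/s
    angle = math.degrees(omega * transit_time)   # spin rotation accumulated during transit
    print(f"omega = {omega:.2e} rad/s, precession angle = {angle:.0f} degrees")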

Spin polarization in non-magnetic materials can be achieved either through the Zeeman effect in large magnetic fields and low temperatures, or by non-equilibrium methods. In the latter case, the non-equilibrium polarization will decay over a timescale called the "spin lifetime". Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond) but in semiconductors the lifetimes can be very long (microseconds at low temperatures), especially when the electrons are isolated in local trapping potentials (for instance, at impurities, where lifetimes can be milliseconds).

Metals-based spintronic devices

The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common application of this effect is a giant magnetoresistance (GMR) device. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.
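A common back-of-envelope picture of this effect is the two-current (Mott) model, in which each spin channel is treated as a pair of series resistances. The Python sketch below uses invented interface resistances (r for the majority spin, R for the minority spin) to show why the anti-aligned configuration has the higher resistance; it illustrates the standard model, not any particular device.

    # Two-current model: each spin channel crosses two ferromagnetic layers.
    r, R = 1.0, 5.0                                        # illustrative channel resistances

    R_parallel = (2 * r) * (2 * R) / (2 * r + 2 * R)       # aligned layers: r+r and R+R channels in parallel
    R_antiparallel = (r + R) / 2                           # anti-aligned layers: each channel sees r+R
    gmr_ratio = (R_antiparallel - R_parallel) / R_parallel

    print(f"R_P = {R_parallel:.2f}, R_AP = {R_antiparallel:.2f}, GMR ratio = {gmr_ratio:.0%}")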

Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.

Other metals-based spintronics devices:

* Tunnel magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating the ferromagnetic layers (a common rule-of-thumb estimate of the effect is sketched after this list).
* Spin Torque Transfer, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device.
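For TMR, a widely used rule-of-thumb estimate is Julliere's model (a standard textbook relation, not something stated in the list above), which ties the resistance change to the spin polarizations P1 and P2 of the two ferromagnetic electrodes; the polarization values below are purely illustrative.

    # Julliere's model: TMR = (R_AP - R_P) / R_P = 2*P1*P2 / (1 - P1*P2)
    P1, P2 = 0.5, 0.5                          # assumed electrode spin polarizations
    tmr = 2 * P1 * P2 / (1 - P1 * P2)
    print(f"estimated TMR ratio: {tmr:.0%}")   # about 67% for these values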

Applications

The storage density of hard drives is rapidly increasing along an exponential growth curve, in part because spintronics-enabled devices like GMR and TMR sensors have increased the sensitivity of the read head which measures the magnetic state of small magnetic domains (bits) on the spinning platter. The doubling period for the areal density of information storage is twelve months, much shorter than Moore's Law, which observes that the number of transistors that can cheaply be incorporated in an integrated circuit doubles every two years.

MRAM, or magnetic random access memory, uses a grid of magnetic storage elements called magnetic tunnel junctions (MTJs). MRAM is nonvolatile (unlike charge-based DRAM in today's computers), so information is stored even when power is turned off, potentially providing instant-on computing. Motorola has developed a first-generation 256 kbit MRAM based on a single magnetic tunnel junction and a single transistor, with a read/write cycle of under 50 nanoseconds (Everspin, Motorola's spin-off, has since developed a 4 Mbit version). There are two second-generation MRAM techniques currently in development: Thermal Assisted Switching (TAS), which is being developed by Crocus Technology, and Spin Torque Transfer (STT), on which Crocus, Hynix, IBM, and several other companies are working.

Another design in development, called Racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic metal wire.

Semiconductor-based spintronic devices

In early efforts, spin-polarized electrons were generated via optical orientation, using circularly polarized photons at the bandgap energy incident on semiconductors with appreciable spin-orbit interaction (like GaAs and ZnSe). Although electrical spin injection can be achieved in metallic systems by simply passing a current through a ferromagnet, the large impedance mismatch between ferromagnetic metals and semiconductors prevented efficient injection across metal-semiconductor interfaces. Solutions to this problem are to use ferromagnetic semiconductor sources (like manganese-doped gallium arsenide, GaMnAs), to increase the interface resistance with a tunnel barrier, or to use hot-electron injection.

Spin detection in semiconductors is another challenge, which has been met with the following techniques:

* Faraday/Kerr rotation of transmitted/reflected photons
* Circular polarization analysis of electroluminescence
* Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals)
* Ballistic spin filtering

The latter technique was used to overcome silicon's lack of spin-orbit interaction and its materials issues, achieving spin transport in silicon, the most important semiconductor for electronics.

Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is the demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation. This is called the Hanle effect.

Applications

Advantages of semiconductor-based spintronics applications are potentially lower power use and a smaller footprint than electrical devices used for information processing. Also, applications such as semiconductor lasers using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope.



VHDL

VHDL stands for VHSIC (Very High Speed Integrated Circuit) Hardware Description Language. In the mid-1980s the U.S. Department of Defense and the IEEE sponsored the development of this hardware description language with the goal of developing very-high-speed integrated circuits. It has now become one of the industry's standard languages used to describe digital systems. The other widely used hardware description language is Verilog. Both are powerful languages that allow you to describe and simulate complex digital systems. A third HDL is ABEL (Advanced Boolean Equation Language), which was specifically designed for Programmable Logic Devices (PLDs). ABEL is less powerful than the other two languages and is less popular in industry. This tutorial deals with VHDL, as described by the IEEE standard 1076-1993.

Although these languages look similar to conventional programming languages, there are some important differences. A hardware description language is inherently parallel: commands, which correspond to logic gates, are executed (computed) in parallel as soon as a new input arrives. An HDL program mimics the behavior of a physical, usually digital, system. It also allows the incorporation of timing specifications (gate delays) and the description of a system as an interconnection of different components.
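To make this event-driven, parallel behavior concrete, here is a deliberately tiny Python sketch (illustrative only; it is not VHDL and not how production simulators are built) of two concurrent "processes": a NAND gate with a 2 ns delay and an inverter with a 1 ns delay, each re-evaluated whenever a signal it is sensitive to changes.

    import heapq

    # Models roughly: x <= a nand b after 2 ns;  y <= not x after 1 ns
    signals = {"a": 0, "b": 1, "x": 1, "y": 0}   # initial, consistent signal values
    events = []                                  # scheduled (time_ns, signal, new_value)

    heapq.heappush(events, (0, "a", 1))          # stimulus: a <= '1' at t = 0

    while events:
        t, sig, val = heapq.heappop(events)
        if signals[sig] == val:
            continue                             # no value change, nothing to re-evaluate
        signals[sig] = val
        print(f"t = {t} ns: {sig} -> {val}")
        if sig in ("a", "b"):                    # NAND process is sensitive to a and b
            heapq.heappush(events, (t + 2, "x", 0 if (signals["a"] and signals["b"]) else 1))
        if sig == "x":                           # inverter process is sensitive to x
            heapq.heappush(events, (t + 1, "y", 1 - signals["x"]))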

VHDL (VHSIC hardware description language) is commonly used as a design-entry language for field-programmable gate arrays and application-specific integrated circuits in electronic design automation of digital circuits.

VHDL was originally developed at the behest of the US Department of Defense in order to document the behavior of the ASICs that supplier companies were including in equipment. That is to say, VHDL was developed as an alternative to huge, complex manuals which were subject to implementation-specific details.

The idea of being able to simulate this documentation was so obviously attractive that logic simulators were developed that could read the VHDL files. The next step was the development of logic synthesis tools that read the VHDL, and output a definition of the physical implementation of the circuit. Modern synthesis tools can extract RAM, counter, and arithmetic blocks out of the code, and implement them according to what the user specifies. Thus, the same VHDL code could be synthesized differently for lowest area, lowest power consumption, highest clock speed, or other requirements.

VHDL borrows heavily from the Ada programming language in both concepts (for example, the slice notation for indexing part of a one-dimensional array) and syntax. VHDL has constructs to handle the parallelism inherent in hardware designs, but these constructs (processes) differ in syntax from the parallel constructs in Ada (tasks). Like Ada, VHDL is strongly-typed and is not case sensitive. There are many features of VHDL which are not found in Ada, such as an extended set of Boolean operators including nand and nor, in order to represent directly operations which are common in hardware. VHDL also allows arrays to be indexed in either direction (ascending or descending) because both conventions are used in hardware, whereas Ada (like most programming languages) provides ascending indexing only. The reason for the similarity between the two languages is that the Department of Defense required as much of the syntax as possible to be based on Ada, in order to avoid re-inventing concepts that had already been thoroughly tested in the development of Ada.

The initial version of VHDL, designed to IEEE standard 1076-1987, included a wide range of data types, including numerical (integer and real), logical (bit and boolean), character and time, plus arrays of bit called bit_vector and of character called string.

A problem not solved by this edition, however, was "multi-valued logic", where a signal's drive strength (none, weak or strong) and unknown values are also considered. This required IEEE standard 1164, which defined the 9-value logic types: scalar std_ulogic and its vector version std_ulogic_vector.
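For reference, the nine std_ulogic values defined by IEEE 1164 are listed below; the Python mapping is merely a convenient way to annotate them and is not part of any standard API.

    # The nine-valued logic of IEEE 1164 (value -> meaning)
    STD_ULOGIC = {
        "U": "uninitialized",
        "X": "forcing unknown",
        "0": "forcing 0",
        "1": "forcing 1",
        "Z": "high impedance",
        "W": "weak unknown",
        "L": "weak 0",
        "H": "weak 1",
        "-": "don't care",
    }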

The second issue of IEEE 1076, in 1993, made the syntax more consistent, allowed more flexibility in naming, extended the character type to allow ISO-8859-1 printable characters, added the xnor operator, etc.

Minor changes in the standard (2000 and 2002) added the idea of protected types (similar to the concept of class in C++) and removed some restrictions from port mapping rules.

In addition to IEEE standard 1164, several child standards were introduced to extend functionality of the language. IEEE standard 1076.2 added better handling of real and complex data types. IEEE standard 1076.3 introduced signed and unsigned types to facilitate arithmetical operations on vectors. IEEE standard 1076.1 (known as VHDL-AMS) provided analog and mixed-signal circuit design extensions.

Some other standards support wider use of VHDL, notably VITAL (VHDL Initiative Towards ASIC Libraries) and microwave circuit design extensions.

In June 2006, the VHDL Technical Committee of Accellera (delegated by the IEEE to work on the next update of the standard) approved the so-called Draft 3.0 of VHDL-2006. While maintaining full compatibility with older versions, this proposed standard provides numerous extensions that make writing and managing VHDL code easier. Key changes include incorporation of child standards (1164, 1076.2, 1076.3) into the main 1076 standard, an extended set of operators, more flexible syntax for 'case' and 'generate' statements, incorporation of VHPI (an interface to the C/C++ languages) and a subset of PSL (the Property Specification Language). These changes should improve the quality of synthesizable VHDL code, make testbenches more flexible, and allow wider use of VHDL for system-level descriptions.

In February 2008, Accellera approved VHDL 4.0, also informally known as VHDL 2008, which addressed more than 90 issues discovered during the trial period for version 3.0 and includes enhanced generic types. In 2008, Accellera plans to release VHDL 4.0 to the IEEE for balloting for inclusion in IEEE 1076-2008.

Tele-immersion


Tele-immersion is a technology, to be implemented over the Internet, that will enable users in different geographic locations to come together in a simulated environment to interact. Users will feel as if they are actually looking at, talking to, and meeting with each other face-to-face in the same room.

This is achieved using computers that recognize the presence and movements of individuals and objects, tracking those individuals and images, and reconstructing them onto one stereo-immersive surface.

3D reconstruction for tele-immersion is performed using stereo: two or more cameras take rapid sequential shots of the same object, distance calculations are performed continuously, and the results are projected into the computer-simulated environment so as to replicate real-time movement.
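A simplified version of the distance calculation is the classic pinhole-stereo relation, depth = focal length x baseline / disparity; the numbers in the sketch below are invented for illustration, and real systems must also calibrate the cameras and match points between views.

    focal_length_px = 800.0     # camera focal length, in pixels (assumed)
    baseline_m = 0.30           # separation between the two cameras, in metres (assumed)
    disparity_px = 12.0         # shift of the same point between the two images (assumed)

    depth_m = focal_length_px * baseline_m / disparity_px
    print(f"estimated distance to the point: {depth_m:.1f} m")   # 20.0 m here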

Tele-immersion presents the greatest technological challenge for Internet2.

Optical Computer

An optical computer is a computer that uses light instead of electricity (i.e. photons rather than electrons) to manipulate, store and transmit data. Photons have fundamentally different physical properties than electrons, and researchers have attempted to make use of these properties, mostly using the basic principles of optics, to produce computers with performance and/or capabilities greater than those of electronic computers. Optical computer technology is still in the early stages: functional optical computers have been built in the laboratory, but none have progressed past the prototype stage.

Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical/electronic hybrid. Other research projects take a non-traditional approach, attempting to develop entirely new methods of computing that are not physically possible with electronics.

Optical components for binary digital computer

The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent "optical transistor" is required. This is achieved using materials with a non-linear refractive index. In particular, materials exist where the intensity of incoming light affects the intensity of the light transmitted through the material in a similar manner to the voltage response of an electronic transistor. This "optical transistor" effect is used to create logic gates, which in turn are assembled into the higher level components of the computer's CPU.
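A deliberately crude toy model of such an intensity-dependent element, and of how two beams might combine into an AND-like gate, is sketched below; the threshold, gain, and leakage numbers are invented, and real nonlinear optical devices are far more subtle.

    # Toy "optical transistor": transmission switches on above an intensity threshold.
    def optical_transistor(intensity, threshold=1.5, gain=0.9, leakage=0.05):
        return gain * intensity if intensity >= threshold else leakage * intensity

    # Two beams of unit intensity form an AND-like gate: only both together
    # exceed the threshold and switch the output "on".
    for a, b in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
        out = optical_transistor(a + b)
        print(f"inputs ({a}, {b}) -> output intensity {out:.2f}")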

Graph Separators

Graph Separators with Applications is devoted to techniques for obtaining upper and lower bounds on the sizes of graph separators -- upper bounds being obtained via decomposition algorithms. The book surveys the main approaches to obtaining good graph separations, while the main focus of the book is on techniques for deriving lower bounds on the sizes of graph separators. This asymmetry in focus reflects our perception that the work on upper bounds, or algorithms, for graph separation is much better represented in the standard theory literature than is the work on lower bounds, which we perceive as being much more scattered throughout the literature on application areas. Given the multitude of notions of graph separator that have been developed and studied over the past (roughly) three decades, there is a need for a central, theory-oriented repository for the mass of results. The need is absolutely critical in the area of lower-bound techniques for graph separators, since these techniques have virtually never appeared in articles having the word `separator' or any of its near-synonyms in the title. Graph Separators with Applications fills this need.

Extensible Markup Language

XML (Extensible Markup Language): XML is an HTML-like markup language. Misconception: the recommended advice when developing Internet protocols has been to be conservative in what you send and liberal in what you receive. While in the early days this philosophy fostered interoperability, any economist could tell you its long-term effect: it leads to a large number of slightly incompatible implementations of protocols, all of which mostly work, but none of which interoperate well. For example, FTP is one of the oldest Internet protocols, and there are essentially no fully compliant clients or servers, and building a fully interoperable client/server is extremely difficult, requiring knowledge of all the quirks of all the popular implementations. XML has therefore chosen the opposite approach: compliant implementations are supposed to reject all input that isn't "well-formed", even when the intent is clear. This not only avoids interoperability errors in the long run, it also dramatically improves security by guaranteeing that there is only one way to interpret things.

The Extensible Markup Language (XML) is a general-purpose specification for creating custom markup languages. It is classified as an extensible language, because it allows the user to define the mark-up elements. XML's purpose is to aid information systems in sharing structured data, especially via the Internet, to encode documents, and to serialize data; in the last context, it compares with text-based serialization languages such as JSON and YAML.

XML began as a simplified subset of the Standard Generalized Markup Language (SGML), meant to be readable by people via semantic constraints; application languages can be implemented in XML. These include XHTML, RSS, MathML, GraphML, Scalable Vector Graphics, MusicXML, and others. Moreover, XML is sometimes used as the specification language for such application languages.

XML is recommended by the World Wide Web Consortium (W3C). It is a fee-free open standard. The recommendation specifies lexical grammar and parsing requirements.

An XML document has two correctness levels:

  • Well-formed. A well-formed document conforms to the XML syntax rules; e.g. if a start-tag appears without a corresponding end-tag, the document is not well-formed (illustrated below). A document that is not well-formed is not XML, and a conforming parser is disallowed from processing it.
  • Valid. A valid document additionally conforms to semantic rules, either user-defined or defined in an XML schema (especially a DTD); e.g. if a document contains an undefined element, then it is not valid, and a validating parser is disallowed from processing it.
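To make the well-formedness rule concrete, the Python sketch below uses the standard library's xml.etree.ElementTree parser; the two document strings are invented examples.

    import xml.etree.ElementTree as ET

    well_formed = "<note><to>Alice</to></note>"
    broken = "<note><to>Alice</note>"        # start-tag <to> has no matching end-tag

    ET.fromstring(well_formed)               # parses without complaint
    try:
        ET.fromstring(broken)                # a conforming parser must reject this
    except ET.ParseError as err:
        print("not well-formed:", err)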

TCPA / Palladium

TCPA stands for the Trusted Computing Platform Alliance, an initiative led by Intel. Their stated goal is `a new computing platform for the next century that will provide for improved trust in the PC platform.' Palladium is software that Microsoft says it plans to incorporate in future versions of Windows; it will build on the TCPA hardware, and will add some extra features. The Trusted Computing Group (TCG), successor to the Trusted Computing Platform Alliance (TCPA), is an initiative started by AMD, Hewlett-Packard, IBM, Infineon, Intel, Microsoft, and Sun Microsystems to implement Trusted Computing. Many others have since joined.

What does TCPA / Palladium do?
It provides a computing platform on which you can't tamper with the applications, and where these applications can communicate securely with the vendor. The obvious application is digital rights management (DRM): Disney will be able to sell you DVDs that will decrypt and run on a Palladium platform, but which you won't be able to copy. The music industry will be able to sell you music downloads that you won't be able to swap. They will be able to sell you CDs that you'll only be able to play three times, or only on your birthday. All sorts of new marketing possibilities will open up. TCPA / Palladium will also make it much harder for you to run unlicensed software. Pirate software can be detected and deleted remotely. It will also make it easier for people to rent software rather than buying it; and if you stop paying the rent, then not only does the software stop working but so may the files it created. For years, Bill Gates has dreamed of finding a way to make the Chinese pay for software: Palladium could be the answer to his prayer.
There are many other possibilities. Governments will be able to arrange things so that all Word documents created on civil servants' PCs are `born classified' and can't be leaked electronically to journalists. Auction sites might insist that you use trusted proxy software for bidding, so that you can't bid tactically at the auction. Cheating at computer games could be made more difficult. There is a downside too.

TCG's original goal was the development of a Trusted Platform Module (TPM), a semiconductor intellectual property core or integrated circuit that conforms to the trusted platform module specification put forward by the Trusted Computing Group and is to be included with computers to enable trusted computing features. TCG-compliant functionality has since been integrated directly into certain mass-market chipsets.

TCG also recently released the first version of their Trusted Network Connect ("TNC") protocol specification, based on the principles of AAA, but adding the ability to authorize network clients on the basis of hardware configuration, BIOS, kernel version, which updates have been applied to the OS, anti-virus software, etc.

Seagate has also developed a Full Disk encryption drive which can use the ability of the TPM to secure the key within the hardware chip.

The owner of a TPM-enabled system has complete control over what software does and doesn't run on their system. This does include the possibility that a system owner would choose to run a version of an operating system that refuses to load unsigned or unlicensed software, but those restrictions would have to be enforced by the operating system and not by the TCG technology. What a TPM does provide in this case is the capability for the OS to lock software to specific machine configurations, meaning that "hacked" versions of the OS designed to get around these restrictions would not work. While there is legitimate concern that OS vendors could use these capabilities to restrict what software would load under their OS (hurting small software companies or open source/shareware/freeware providers, and causing vendor lock-in for some data formats), no OS vendor has yet suggested that this is planned. Furthermore, since any restrictions would be a function of the operating system, TPMs can in no way restrict alternative operating systems from running, including free or open source operating systems. There are several projects which are experimenting with TPM support in free operating systems; examples include a TPM device driver for Linux, an open source implementation of the TCG's Trusted Software Stack called TrouSerS, a Java interface to TPM capabilities called TPM/J, and a TPM-supporting version of the GRUB bootloader called TrustedGRUB.

Sense-Response Applications

Sensor networks are widely used for sense-response applications. The role of the sensor nodes in such applications is to monitor an area for events of interest and report the occurrence of an event to the base-station. The receipt of the event at the base-station is followed by a prompt physical response. An example of a sense-response application is the detection of fires in a forest: the sensor nodes report the occurrence of a fire, upon which fire trucks are immediately dispatched to its location. Other examples of sense-response applications are intruder detection and apprehension, natural disaster monitoring, structural integrity monitoring, and bio/chemical spill monitoring and containment.

Sensor nodes in sense-response applications are deployed with overlapping sensing regions to avoid holes in the coverage area. Thus an event is detected by more than one sensor node in the neighborhood of its occurrence. The base-station exploits this redundancy by responding only to events corroborated by multiple nodes in the network. This is mainly done in order to avoid any false positives in the event generation process, i.e., an event being reported even though it never occurred. However, this requires every sensor node to transmit a message to the base-station for every event that is detected, which expends a lot of energy. An alternative (that is often used in practice) is to have all the sensor nodes in the neighborhood of an event reach a consensus and have only one of the nodes transmit an event detection message to the base-station that implicitly contains the testimony of every node that detected the event. Sensor networks are often deployed in public and untrustworthy places. In some cases, they are also deployed in hostile areas.
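The consensus idea can be sketched as follows; this is a minimal illustration only, and the detection threshold, the election rule, and the report format are all assumptions, since the text does not specify them.

    # Nodes that sensed the same event reach a simple threshold-based consensus,
    # and only one elected node reports to the base-station for the whole group.
    def report_event(detections, threshold=3):
        """detections: ids of the nodes that sensed the event."""
        if len(detections) < threshold:
            return None                       # too few witnesses: likely a false positive
        reporter = min(detections)            # e.g. the lowest node id acts as spokesman
        return {"reporter": reporter, "witnesses": sorted(detections)}

    print(report_event([7, 2, 9]))            # enough witnesses -> a single message is sent
    print(report_event([4]))                  # a lone report is suppressed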

The wireless medium of communication in sensor networks prevents any form of access control mechanism at the physical layer. The adversary can very easily introduce spurious messages in the network containing a false event report. This leads to energy wastage of the nodes in the network and also wastage of resources due to the physical response initiated by the base station in response to the false event report. A simple solution to thwart such attacks is to use a system wide secret key coupled with explicit authentication mechanisms. However, this solution fails to protect against internal attacks where the adversary has compromised a subset of sensor nodes.

Sensor nodes are designed to be cheap and cannot be equipped with expensive tamper-proof hardware. This, coupled with the unmanned operation of the network, leaves the nodes at the mercy of an adversary who can potentially steal some nodes, recover their cryptographic material, and use them to pose as authorized nodes in the network. We hereby refer to such nodes as internal adversaries. Internal adversaries are capable of launching more sophisticated attacks: by posing as real, authenticated nodes, they can also suppress the generation of a message for a real event that is detected. This effectively renders the entire system useless.

Cable Modems



A cable modem is a type of modem that provides access to a data signal sent over the cable television infrastructure. Cable modems are primarily used to deliver broadband Internet access in the form of cable internet, taking advantage of the high bandwidth of a cable television network. They are commonly found in Australia, New Zealand, Canada, Europe, Costa Rica, and the United States. In the USA alone there were 22.5 million cable modem users during the first quarter of 2005, up from 17.4 million in the first quarter of 2004.

In network topology, a cable modem is a network bridge that conforms to IEEE 802.1D for Ethernet networking (with some modifications). The cable modem bridges Ethernet frames between a customer LAN and the coax cable network.

With respect to the OSI model, a cable modem is a data link layer (or layer 2) forwarder, rather than simply a modem.

A cable modem does, however, support functionality at other layers. In the physical layer (layer 1), the cable modem supports the Ethernet PHY on its LAN interface, and a DOCSIS-defined cable-specific PHY on its HFC cable interface. It is to this cable-specific PHY that the name cable modem refers. In the network layer (layer 3), the cable modem is an IP host in that it has its own IP address used by the network operator to manage and troubleshoot the device. In the transport layer (layer 4), the cable modem supports UDP in association with its own IP address, and it supports filtering based on TCP and UDP port numbers to, for example, block forwarding of NetBIOS traffic out of the customer's LAN. In the application layer (layer 5 or layer 7), the cable modem supports certain protocols that are used for management and maintenance, notably DHCP, SNMP, and TFTP.

Some cable modem devices may incorporate a router along with the cable modem functionality, to provide the LAN with its own IP network addressing. From a data forwarding and network topology perspective, this router functionality is typically kept distinct from the cable modem functionality (at least logically) even though the two may share a single enclosure and appear as one unit. So, the cable modem function will have its own IP address and MAC address as will the router.

A modem designed to operate over cable TV lines. Because the coaxial cable used by cable TV provides much greater bandwidth than telephone lines, a cable modem can be used to achieve extremely fast access to the World Wide Web. This, combined with the fact that millions of homes are already wired for cable TV, has made the cable modem something of a holy grail for Internet and cable TV companies.

There are a number of technical difficulties, however. One is that the cable TV infrastructure is designed to broadcast TV signals in just one direction - from the cable TV company to people's homes. The Internet, however, is a two-way system where data also needs to flow from the client to the server. In addition, it is still unknown whether the cable TV networks can handle the traffic that would ensue if millions of users began using the system for Internet access.


Virtual Instrumentation


Virtual Instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments.

Traditional hardware instrumentation systems are made up of pre-defined hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e.g. an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
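As a minimal illustration of software taking over the measurement role, the sketch below treats a list of samples (standing in for data from an ADC card; the signal, the sample rate, and the measurements chosen are invented for the example) as the input to a toy "virtual oscilloscope" that reports amplitude and frequency.

    import math

    fs = 10_000.0                                   # sample rate in Hz (assumed)
    samples = [2.5 * math.sin(2 * math.pi * 50 * n / fs) for n in range(2000)]

    peak_to_peak = max(samples) - min(samples)      # amplitude measurement done in software
    rising = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    frequency = rising * fs / len(samples)          # rising zero-crossings per second

    print(f"Vpp ~ {peak_to_peak:.2f} V, f ~ {frequency:.1f} Hz")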

The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular.

Leveraging commercially available technologies, such as the PC and the analog to digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.

Simplifying the development process
Virtual instrumentation has led to a simpler way of looking at measurement systems. Instead of using several stand-alone instruments for multiple measurement types and performing rudimentary analysis by hand, engineers now can quickly and cost-effectively create a system equipped with analysis software and a single measurement device that has the capabilities of a multitude of instruments.

Powerful off-the-shelf software, such as National Instruments' LabVIEW, automates the entire process, delivering an easy way to acquire, analyse, and present data from a personal computer without sacrificing performance or functionality. The software integrates tightly with hardware, making it easy to automate measurements and control while taking advantage of the personal computer for processing, display, and networking capabilities.

The expectations of performance and flexibility in measurement and control applications continue to rise in the industry, growing the importance of software design. By investing in intuitive engineering software tools that run at best possible performance, companies can dramatically reduce development time and increase individual productivity, giving themselves a powerful weapon to wield in competitive situations.

Preparing investments for the future
Measurement systems have historically been 'islands of automation', in which you design a system to meet the needs of a specific application. With virtual instrumentation, modular hardware components and open engineering software make it easy to adapt a single system to a variety of measurement requirements.

To meet the changing needs of your testing system, open platforms such as PXI (PCI eXtensions for Instrumentation) make it simple to integrate measurement devices from different vendors into a single system that is easy to modify or expand, as new technologies emerge or your application needs change. With a PXI system, you can quickly integrate common measurements such as machine vision, motion control, and data acquisition to create multifunction systems without spending valuable engineering hours making the hardware work together. The open PXI platform combines industry-standard technologies, such as CompactPCI and Windows operating systems, with built-in triggering to provide a rugged, more deterministic system than desktop PCs.


Reconfigurability

Reconfigurability denotes the reconfigurable-computing capability of a system, so that its behavior can be changed by reconfiguration, i.e. by loading different configware code. This static reconfigurability distinguishes between reconfiguration time and run time. Dynamic reconfigurability denotes the capability of a dynamically reconfigurable system that can change its behavior during run time, usually in response to dynamic changes in its environment.

In the context of wireless communication dynamic reconfigurability tackles the changeable behavior of wireless networks and associated equipment, specifically in the fields of radio spectrum, radio access technologies, protocol stacks, and application services.

In the context of Control reconfiguration, a field of fault-tolerant control within control engineering, reconfigurability is a property of faulty systems meaning that the original control goals specified for the fault-free system can be reached after suitable control reconfiguration.

Research regarding the (dynamic) reconfigurability of wireless communication systems is ongoing for example in working group 6 of the Wireless World Research Forum (WWRF), in the Software Defined Radio Forum (SDRF), and in the European FP6 project End-to-End Reconfigurability (E²R). Recently, E²R initiated a related standardization effort on the cohabitation of heterogeneous wireless radio systems in the framework of the IEEE P1900.4 Working Group.

Inverse Multiplexing



An inverse multiplexer (often abbreviated to "inverse mux" or "imux") allows a data stream to be broken into multiple lower data rate communications links. An inverse multiplexer differs from a demultiplexer in that each of the low rate links coming from it is related to the other ones and they all work together to carry the same data. By contrast, the output streams from a demultiplexer may each be completely independent from each other and the demultiplexer does not have to understand them in any way.

A technique that is the inverse, or opposite, of multiplexing. Traditional multiplexing folds together multiple low-speed channels onto a high-speed circuit. Inverse multiplexing spreads a high-speed channel across multiple low-speed circuits. Inverse multiplexing is used where an appropriately high-speed circuit is not available. A 6-Mbps data stream, for example, might be inverse multiplexed across four (4) T1 circuits, each running at 1.544 Mbps. Inverse multiplexing over ATM (IMA) fans out an ATM cell stream across multiple circuits between the user premises and the edge of the carrier network. In such a circumstance, multiple physical T1 circuits can be used as a single, logical ATM pipe. The IMA-compliant ATM concentrator at the user premises spreads the ATM cells across the T1 circuits in a round-robin fashion, and the ATM switch at the edge of the carrier network scans the T1 circuits in the same fashion in order to reconstitute the cell stream. There is a similar implementation agreement (IA) for Frame Relay. Multilink point-to-point protocol (PPP) serves much the same purpose in the Internet domain.
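A toy round-robin sketch of the idea is shown below; it is purely illustrative (the cell contents and link count are invented), and a real IMA implementation adds framing, differential-delay compensation, and link management on top of this.

    # Spread one fast cell stream across several slow links and reassemble it in order.
    def inverse_mux(cells, n_links):
        links = [[] for _ in range(n_links)]
        for i, cell in enumerate(cells):
            links[i % n_links].append(cell)        # round-robin distribution
        return links

    def reassemble(links):
        out, i = [], 0
        while any(links):
            link = links[i % len(links)]
            if link:
                out.append(link.pop(0))            # scan the links in the same order
            i += 1
        return out

    cells = [f"cell{i}" for i in range(10)]
    links = inverse_mux(cells, 4)                  # e.g. four T1 circuits
    assert reassemble(links) == cells              # original cell order is restored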

Note that this is the opposite of a multiplexer which creates one high speed link from multiple low speed ones.

This provides an end-to-end connection with a data rate equal to the sum of the rates available on the individual low-rate data links (three times the single-link rate when three links are combined, for example). Note that, as with multiplexers, links are almost always bi-directional, and an inverse mux will practically always be combined with its reverse and still be called an inverse mux. This means that the "de-inverse mux" will actually be an inverse mux.

Inverse muxes are used, for example, to combine a number of ISDN channels together into one high rate circuit, where the DTE needs a higher rate connection than is available from a single ISDN connection. This is typically useful in areas where higher rate circuits are not available.

An alternative to an inverse mux is to use three separate links and load sharing of data between them. In the case of IP, network packets could be sent in round-robin mode between each separate link. Advantages of using an inverse mux over separate links include:

  • lower link latency (one single packet can be spread across all links)
  • fairer load sharing
  • network simplicity (no router needed between boxes with high speed interfaces)

EDGE

Enhanced Data rates for GSM Evolution (EDGE), Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC) is a backward-compatible digital mobile phone technology that allows improved data transmission rates as an extension on top of standard GSM. EDGE can be considered a 3G radio technology and is part of the ITU's 3G definition, but it is most frequently referred to as 2.75G. EDGE was deployed on GSM networks beginning in 2003, initially by Cingular (now AT&T) in the United States.

EDGE is standardized by 3GPP as part of the GSM family, and it is an upgrade that provides a potential three-fold increase in capacity of GSM/GPRS networks. The specification achieves higher data-rates by switching to more sophisticated methods of coding, within existing GSM timeslots. Introducing 8PSK encoding, EDGE is capable of delivering higher bit-rates per radio channel in good conditions.

EDGE can be used for any packet switched application, such as an Internet connection. High-speed data applications such as video services and other multimedia benefit from EGPRS' increased data capacity. EDGE Circuit Switched is a possible future development.

Evolved EDGE was added in Release 7 of the 3GPP standard. This is a further extension on top of EDGE, providing reduced latency and potential speeds of 1 Mbit/s by using even more complex coding functions than the 8PSK originally introduced with EDGE.

Technology

EDGE/EGPRS is implemented as a bolt-on enhancement for 2G and 2.5G GSM and GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE/EGPRS is a superset to GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade.

Although EDGE requires no hardware or software changes to be made in GSM core networks, base stations must be modified. EDGE compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for GSM. New mobile terminal hardware and software is also required to decode/encode the new modulation and coding schemes and carry the higher user data rates to implement new services.

Transmission techniques

In addition to Gaussian minimum-shift keying (GMSK), EDGE uses higher-order 8-phase shift keying (8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It introduces a new technology not found in GPRS, Incremental Redundancy, which, instead of retransmitting disturbed packets, sends more redundancy information to be combined in the receiver. This increases the probability of correct decoding.

EDGE can carry data speeds up to 236.8 kbit/s (with end-to-end latency of less than 150 ms) using 4 timeslots (the theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE therefore meets the International Telecommunication Union's requirement for a 3G network, and has been accepted by the ITU as part of the IMT-2000 family of 3G standards. It also enhances the circuit data mode called HSCSD, increasing the data rate of this service.
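As a quick arithmetic check of the figures above, assuming the highest EDGE coding scheme (MCS-9, about 59.2 kbit/s per timeslot, a figure not stated in the text):

    per_timeslot_kbps = 59.2          # assumed MCS-9 user rate per timeslot
    print(per_timeslot_kbps * 4)      # 236.8 kbit/s with 4 timeslots
    print(per_timeslot_kbps * 8)      # 473.6 kbit/s theoretical maximum with 8 timeslots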

EDGE Evolution

EDGE Evolution improves on EDGE in a number of ways. Latencies are reduced by halving the Transmission Time Interval (from 20 ms to 10 ms). Bit rates are increased up to a 1 Mbit/s peak and latencies reduced down to 100 ms by using dual carriers, a higher symbol rate and higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to improve error correction. Finally, signal quality is improved using dual antennas, improving average bit rates and spectrum efficiency. EDGE Evolution can be introduced gradually as software upgrades, taking advantage of the installed base. With EDGE Evolution, end users will be able to experience mobile internet connections corresponding to a 500 kbit/s ADSL service.

Holographic Data Storage



Holographic data storage is a potential replacement technology in the area of high-capacity data storage currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium and is capable of recording multiple images in the same area utilizing light at different angles.

Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by optical storage.

Holographic data storage captures information using an optical interference pattern within a thick, photosensitive optical material. Light from a single laser beam is divided into two separate beams, a reference beam and an object or signal beam; a spatial light modulator is used to encode the object beam with the data for storage. An optical interference pattern results from the crossing of the beams' paths, creating a chemical and/or physical change in the photosensitive medium; the resulting data is represented in an optical pattern of dark and light pixels. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. The theoretical limit for the storage density of this technique is approximately tens of terabits (1 terabit = 1024 gigabits, 8 gigabits = 1 gigabyte) per cubic centimeter. In 2006, InPhase Technologies published a white paper reporting an achievement of 500 Gb/in².

For two-color holographic recording, the reference and signal beams are fixed to a particular wavelength (green, red or IR) and the sensitizing/gating beam is a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams. It is shone intermittently on the crystal during the recording process for measuring the diffracted beam intensity. Readout is achieved by illumination with the reference beam alone. Hence the readout beam with a longer wavelength would not be able to excite the recombined electrons from the deep trap centers during readout, as they need the sensitizing light with shorter wavelength to erase them.

Usually, for two-color holographic recording, two different dopants are required to promote trap centers, which belong to transition metal and rare earth elements and are sensitive to certain wavelengths. By using two dopants, more trap centers would be created in the Lithium niobate crystal. Namely a shallow and a deep trap would be created. The concept now is to use the sensitizing light to excite electrons from the deep trap farther from the valence band to the conduction band and then to recombine at the shallow traps nearer to the conduction band. The reference and signal beam would then be used to excite the electrons from the shallow traps back to the deep traps. The information would hence be stored in the deep traps. Reading would be done with the reference beam since the electrons can no longer be excited out of the deep traps by the long wavelength beam.

With its omnipresent computers, all connected via the Internet, the Information Age has led to an explosion of information available to users. The decreasing cost of storing data, and the increasing storage capacities of the same small device footprint, have been key enablers of this revolution. While current storage needs are being met, storage technologies must continue to improve in order to keep pace with the rapidly increasing demand.

However, both magnetic and conventional optical data storage technologies, where individual bits are stored as distinct magnetic or optical changes on the surface of a recording medium, are approaching physical limits beyond which individual bits may be too small or too difficult to store. Storing information throughout the volume of a medium—not just on its surface—offers an intriguing high-capacity alternative. Holographic data storage is a volumetric approach which, although conceived decades ago, has made recent progress toward practicality with the appearance of lower-cost enabling technologies, significant results from longstanding research efforts, and progress in holographic recording materials.

In addition to high storage density, holographic data storage promises fast access times, because the laser beams can be moved rapidly without inertia, unlike the actuators in disk drives. With the inherent parallelism of its pagewise storage and retrieval, a very large compound data rate can be reached by having a large number of relatively slow, and therefore low-cost, parallel channels.

Because of all of these advantages and capabilities, holographic storage has provided an intriguing alternative to conventional data storage techniques for three decades. However, it is the recent availability of relatively low-cost components, such as liquid crystal displays for SLMs and solid-state camera chips from video camcorders for detector arrays, which has led to the current interest in creating practical holographic storage devices. Recent reviews of holographic storage can be found in the literature. A team of scientists from the IBM Research Division have been involved in exploring holographic data storage, partially as a partner in the DARPA-initiated consortia on holographic data storage systems (HDSS) and on photorefractive information storage materials (PRISM). In this paper, we describe the current status of our effort.

The overall theme of our research is the evaluation of the engineering tradeoffs between the performance specifications of a practical system, as affected by the fundamental material, device, and optical physics. Desirable performance specifications include data fidelity as quantified by bit-error rate (BER), total system capacity, storage density, readout rate, and the lifetime of stored data. This paper begins by describing the hardware aspects of holographic storage, including the test platforms we have built to evaluate materials and systems tradeoffs experimentally, and the hardware innovations developed during this process. Phase-conjugate readout, which eases the demands on both hardware design and material quality, is experimentally demonstrated. The second section of the paper describes our work in coding and signal processing, including modulation codes, novel preprocessing techniques, the storage of more than one bit per pixel, and techniques for quantifying coding tradeoffs. Then we discuss associative retrieval, which introduces parallel search capabilities offered by no other storage technology. The fourth section describes our work in testing and evaluating materials, including permanent or write-once read-many-times (WORM) materials, read-write materials, and photon-gated storage materials offering reversible storage without sacrificing the lifetime of stored data. The paper concludes with a discussion of applications for holographic data storage.

Integer Fast Fourier Transform

A concept of integer fast Fourier transform (IntFFT) for approximating the discrete Fourier transform is introduced. Unlike the fixed-point fast Fourier transform (FxpFFT), the new transform has the properties that it is an integer-to-integer mapping, is power adaptable and is reversible. The lifting scheme is used to approximate complex multiplications appearing in the FFT lattice structures, where the dynamic range of the lifting coefficients can be controlled by proper choices of lifting factorizations. The split-radix FFT is used to illustrate the approach for the case of a 2^N-point FFT, in which case an upper bound on the minimal dynamic range of the internal nodes, which is required by the reversibility of the transform, is presented and confirmed by a simulation. The transform can be implemented by using only bit shifts and additions but no multiplication. A method for minimizing the number of additions required is presented. While preserving the reversibility, the IntFFT is shown experimentally to yield the same accuracy as the FxpFFT when their coefficients are quantized to a certain number of bits. The complexity of the IntFFT is shown to be much lower than that of the FxpFFT in terms of the numbers of additions and shifts. Finally, they are applied to noise reduction applications, where the IntFFT provides significant improvement over the FxpFFT at low power and maintains similar results at high power.
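The core trick can be illustrated with a single lifted rotation: a plane rotation (the complex twiddle-factor multiplication inside a butterfly) is factored into three lifting steps, and rounding after each step keeps the mapping integer-to-integer while remaining exactly invertible. The Python sketch below is a generic illustration of that lifting idea under these assumptions, not the paper's actual factorization or code.

    import math

    def lift_rotate(x, y, theta):
        """Integer-to-integer approximation of rotating (x, y) by theta."""
        a = (math.cos(theta) - 1) / math.sin(theta)
        b = math.sin(theta)
        x = x + round(a * y)          # lifting step 1
        y = y + round(b * x)          # lifting step 2
        x = x + round(a * y)          # lifting step 3
        return x, y

    def lift_rotate_inv(x, y, theta):
        """Exact inverse: undo the lifting steps in reverse order."""
        a = (math.cos(theta) - 1) / math.sin(theta)
        b = math.sin(theta)
        x = x - round(a * y)
        y = y - round(b * x)
        x = x - round(a * y)
        return x, y

    x, y = 97, -45
    u, v = lift_rotate(x, y, math.pi / 5)
    assert lift_rotate_inv(u, v, math.pi / 5) == (x, y)   # exactly reversible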

NRAM Nano Ram



Nano-RAM (NRAM) is a proprietary computer memory technology from the company Nantero. It is a type of nonvolatile random access memory based on the mechanical position of carbon nanotubes deposited on a chip-like substrate. In theory the small size of the nanotubes allows for very high density memories. Nantero also refers to it as NRAM for short.

Nantero's technology is based on a well-known effect in carbon nanotubes where crossed nanotubes on a flat surface can either be touching or slightly separated in the vertical direction (normal to the substrate) due to van der Waals interactions. In Nantero's technology, each NRAM "cell" consists of a number of nanotubes suspended on insulating "lands" over a metal electrode. At rest the nanotubes lie above the electrode "in the air", about 13 nm above it in the current versions, stretched between the two lands. A small dot of gold is deposited on top of the nanotubes on one of the lands, providing an electrical connection, or terminal. A second electrode lies below the surface, about 100 nm away.

Normally, with the nanotubes suspended above the electrode, a small voltage applied between the terminal and upper electrode will result in no current flowing. This represents a "0" state. However if a larger voltage is applied between the two electrodes, the nanotubes will be pulled towards the upper electrode until they touch it. At this point a small voltage applied between the terminal and upper electrode will allow current to flow (nanotubes are conductors), representing a "1" state. The state can be changed by reversing the polarity of the charge applied to the two electrodes.

What causes this to act as a memory is that the two positions of the nanotubes are both stable. In the off position the mechanical strain on the tubes is low, so they will naturally remain in this position and continue to read "0". When the tubes are pulled into contact with the upper electrode a new force, the tiny van der Waals force, comes into play and attracts the tubes enough to overcome the mechanical strain. Once in this position the tubes will again happily remain there and continue to read "1". These positions are fairly resistant to outside interference like radiation that can erase or flip memory in a conventional DRAM.

NRAMs are built by depositing masses of nanotubes on a pre-fabricated chip containing rows of bar-shaped electrodes with the slightly taller insulating layers between them. Tubes in the "wrong" location are then removed, and the gold terminals deposited on top. Any number of methods can be used to select a single cell for writing, for instance the second set of electrodes can be run in the opposite direction, forming a grid, or they can be selected by adding voltage to the terminals as well, meaning that only those selected cells have a total voltage high enough to cause the flip.

Currently the method of removing the unwanted nanotubes makes the system impractical: the accuracy and size of the epitaxy machinery is considerably "larger" than the cell size otherwise possible. Existing experimental cells therefore have very low densities compared to existing systems, and some new method of construction will have to be introduced in order to make the system practical.

Advantages

NRAM has a density, at least in theory, similar to that of DRAM. DRAM consists of a number of capacitors, which are essentially two small metal plates with a thin insulator between them. NRAM is similar, with the terminals and electrodes being roughly the same size as the plates in a DRAM; the nanotubes between them are so much smaller that they add nothing to the overall size. However, there appears to be a minimum size at which a DRAM cell can be built, below which there is simply not enough charge stored to be read effectively. NRAM, by contrast, appears to be limited only by the current state of the art in lithography. This means that NRAM may become much denser, and therefore less expensive, than DRAM, provided it becomes possible to control the locations of carbon nanotubes as precisely as the semiconductor industry can control the placement of devices on silicon.

Additionally, unlike DRAM, NRAM does not require power to "refresh" it, and it retains its memory even after the power is removed. The power needed to write to the device is also much lower than for a DRAM, which has to build up charge on its plates. This means that NRAM will not only compete with DRAM in terms of cost, but will require much less power to run and, as a result, will also be much faster (write performance is largely determined by the total charge needed). NRAM can theoretically reach performance similar to SRAM, which is faster than DRAM but much less dense, and thus much more expensive.

In comparison with other NVRAM technologies, NRAM has the potential to be even more advantageous. The most common form of NVRAM today is Flash memory, which stores each bit as charge trapped behind a high-performance insulator on an isolated "floating" gate of a transistor. After being written, the trapped electrons lock the cell into the "1" state. In order to change that bit, however, the insulator has to be "overcharged" to erase any charge already stored in it. This requires a high voltage, about 10 volts, much more than a battery can provide, so Flash systems have to include a "charge pump" that slowly builds up power and then releases it at the higher voltage. This process is not only very slow, but it degrades the insulator as well. For this reason Flash has a limited lifetime, between 10,000 and 1,000,000 "writes" before the device will no longer operate effectively.

NRAM potentially avoids all of these issues. The read and write processes are both "low energy" in comparison to Flash (or DRAM for that matter), meaning that NRAM can provide longer battery life in conventional devices. It may also be much faster to write than either, meaning it could replace both. A modern cell phone will often include Flash memory for storing phone numbers and the like, DRAM for higher-performance working memory because Flash is too slow, and additionally some SRAM in the CPU because DRAM is too slow for its own use. With NRAM all of these may be replaced, with some NRAM placed on the CPU to act as the CPU cache, and more in other chips replacing both the DRAM and the Flash.

The acronym NRAM is also sometimes used loosely as a synonym for the more general NVRAM, which refers to all nonvolatile RAM memories. A related nanotechnology development is the nanomotor, invented at the University of Bologna and California NanoSystems: a molecular motor that works continuously without consuming fuel, powered instead by sunlight. That research is federally funded by the National Science Foundation and the National Academy of Sciences.

Carbon Nanotubes
Carbon nanotubes (CNTs) are a recently discovered allotrope of carbon. They take the form of cylindrical carbon molecules and have novel properties that make them potentially useful in a wide variety of applications in nanotechnology, electronics, optics, and other fields of materials science. They exhibit extraordinary strength and unique electrical properties, and are efficient conductors of heat. Inorganic nanotubes have also been synthesized.
A nanotube is a member of the fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to several millimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).

Manufacturing a nanotube depends on applied quantum chemistry, specifically orbital hybridization. Nanotubes are composed entirely of sp2 bonds, similar to those of graphite. This bonding structure, stronger than the sp3 bonds found in diamond, provides the molecules with their unique strength. Nanotubes naturally align themselves into "ropes" held together by van der Waals forces. Under high pressure, nanotubes can merge, trading some sp2 bonds for sp3 bonds, which opens the possibility of producing strong wires of effectively unlimited length through high-pressure nanotube linking.

Fabrication Of NRAM
This nanoelectromechanical memory, called NRAM, is a memory with actual moving parts, with dimensions measured in nanometers. Its carbon-nanotube-based technology takes advantage of van der Waals forces to create the basic on/off junctions of a bit. Van der Waals forces are interactions between atoms that enable noncovalent binding; they rely on electron attractions that only become significant at the nanoscale. The company uses this property in its design to integrate nanoscale material properties with established CMOS fabrication techniques.

Storage In NRAM
NRAM works by balancing carbon nanotubes on ridges of silicon. Under differing electric charges, the tubes can be physically swung into one of two positions, representing ones and zeros. Because the tubes are so small, this movement is very fast and needs very little power, and because the tubes are roughly a thousand times as conductive as copper, the stored data is very easy to sense and read back. Once in position the tubes stay there until a signal resets them.
The bit itself is not stored in the nanotubes, but rather as the position of the nanotube: up is bit 0 and down is bit 1. Bits are switched between the states by the application of an electric field.

The technology works by changing the charge placed on a latticework of crossed nanotubes. By altering the charges, engineers can cause the tubes to bind together or separate, creating the ones and zeros that form the basis of computer memory. If two nanotubes cross perpendicular to each other and one is charged positively and the other negatively, they bend together and touch; if they carry like charges, they repel. These two positions are used to store one and zero. The chip stays in whatever state it is in until another change is made in the electric field, so turning the computer off does not erase the memory. All data can be kept in the NRAM, giving the computer an instant boot.