A surface-conduction electron-emitter display (SED) is a flat panel display technology that uses surface conduction electron emitters for every individual display pixel. The surface conduction emitter emits electrons that excite a phosphor coating on the display panel, the same basic concept found in traditional cathode ray tube (CRT) televisions. This means that SEDs use tiny cathode ray tubes behind every single pixel (instead of one tube for the whole display) and can combine the slim form factor of LCDs and plasma displays with the superior viewing angles, contrast, black levels, color definition and pixel response time of CRTs. Canon also claims that SEDs consume less power than LCD displays.

The surface conduction electron emitter apparatus consists of a thin slit across which electrons tunnel when excited by moderate voltages (tens of volts). When the electrons cross electric poles across the thin slit, some are scattered at the receiving pole and are accelerated toward the display surface by a large voltage gradient (tens of thousands of volts) between the display panel and the surface conduction electron emitter apparatus. Canon Inc., working with Toshiba, uses inkjet printing technology to spray phosphors onto the glass. The technology has been in development since 1986.
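
As a rough back-of-the-envelope illustration of those two voltage scales, here is a small Python sketch; the exact drive and acceleration voltages are assumed round numbers in the "tens of volts" and "tens of kV" ranges quoted above:

import math

# Illustrative electron energetics for an SED pixel (assumed round numbers).
q = 1.602e-19       # electron charge, C
m_e = 9.109e-31     # electron mass, kg
V_accel = 10_000.0  # acceleration voltage toward the panel, V (assumed 10 kV)

E = q * V_accel                  # kinetic energy gained, in joules
v = math.sqrt(2 * E / m_e)       # non-relativistic speed estimate
print(f"Impact energy: {V_accel/1e3:.0f} keV ({E:.2e} J)")
print(f"Speed: {v:.2e} m/s (~{v/3e8:.2f}c, so this classical estimate is rough)")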

How it Works
SED technology works much like a traditional CRT except that, instead of one large electron gun firing at all the screen phosphors that light up to create the image you see, SED has millions of tiny electron guns known as "emitters", one for each phosphor sub-pixel. Remember, a sub-pixel is just one of the three colors (red, green, blue) that make up a pixel. So it takes three emitters to create one pixel on the screen and over 6 million SED emitters to produce a true high definition (HDTV) image! It's sort of like an electron Gatling gun with a barrel for every target positioned at point-blank range. An army of electron guns, if you will.
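
The "over 6 million" figure is simply the sub-pixel count of a full-HD panel; the arithmetic in Python:

# One emitter per sub-pixel, three sub-pixels (R, G, B) per pixel.
width, height, subpixels = 1920, 1080, 3
emitters = width * height * subpixels
print(f"{emitters:,}")   # 6,220,800 -> "over 6 million" emitters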


This may bode well for video purists who feel that CRTs offer the best picture quality, bar none. One prototype has even attained a contrast ratio of 100,000:1. Its brightness of 400 cd/m² is a tad on the low side for an LCD TV and nowhere close to a plasma. This is expected to increase in the future, but still works out to about 116 ftL (foot-lamberts), or more than twice that of a regular TV. To put this in perspective, a movie theater shows a film at about 15 ftL.
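
The cd/m²-to-foot-lambert conversion is easy to verify (1 ftL ≈ 3.4263 cd/m²):

# Luminance unit check for the quoted prototype brightness.
CD_M2_PER_FTL = 3.4263
brightness_cd_m2 = 400.0
print(f"{brightness_cd_m2 / CD_M2_PER_FTL:.0f} ftL")   # ~117 ftL, matching the figure above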

Life Expectancy
It does look like SED TVs will last a good while, as emitter output has reportedly been shown to drop only 10% after 60,000 hours in an "accelerated" test. This means that it is likely the unit will keep working as long as the phosphors continue to emit light. That can be a while. Maybe yours will even show up on the Antiques Roadshow in working condition in the far distant future. Time will tell, but "accelerated" testing results should always be taken with a grain of salt, as they only imitate wear and tear over time.
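
For scale, here is what 60,000 hours means in calendar time; the six-hours-a-day viewing figure is purely an assumption for the example:

# Rated emitter life expressed in years of (assumed) daily viewing.
rated_hours = 60_000
hours_per_day = 6
print(f"{rated_hours / (hours_per_day * 365):.0f} years")   # ~27 years to the 10% drop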

SED TV Compared to CRT
SED is flat. A traditional CRT has one electron gun that scans side to side and from top to bottom by being deflected by an electromagnet or "yoke". This has meant that the gun has had to be set back far enough to target the complete screen area and, well, it starts to get ridiculously large and heavy around 36". CRTs are typically as wide as they are deep. They need to be built like this or else the screen would need to be curved too severely for viewing. Not so with SED, where you supposedly get all the advantages of a CRT display but need only a few inches of thickness to do it in. Screen size can be made as large as the manufacturer dares. Also, CRTs can have image challenges around the far edges of the picture tube, which is a non-issue for SED.

SED TV Compared to Plasma TV
Compared to plasma the future looks black indeed. As in someone wearing a black suit and you actually being able to tell it's a black suit with all those tricky, close to black, gray levels actually showing up. This has been a major source of distraction for this writer for most display technologies other than CRT. Watching the all-pervasive low-key (dark) lighting in movies, it can be hard to tell what you're actually looking at without the shadow detail being viewable. Think Blade Runner or Alien. SED's black detail should be better, as plasma cells must be left partially on in order to reduce latency. This means they are actually dark gray – not black. Plasma has been getting better in this regard but still has a way to go to match a CRT. Hopefully, SED will solve this and it's likely to. Also, SED is expected to use only half the power that a plasma does at a given screen size although this will vary depending on screen content.

SED TV Compared to LCD
LCDs have had a couple of challenges in creating great pictures, but they are getting better. Firstly, latency has been a problem for television pictures, with an actual 16 ms response needed in order to keep up with a 60 Hz screen update. That needs to happen all the way through the grayscale, not just where the manufacturers decide to test. Also, due to LCD's highly directional light, it has a limited angle of view and tends to become too dim to view off axis, which can limit seating arrangements. This will not be an issue for SED's self-illuminated phosphors. However, LCD does have the advantage of not being susceptible to burn-in, as any device using phosphors will be, including SED. SED is likely to use about two-thirds the power of a similarly sized LCD. Finally, LCD generally suffers from the same black level issues and solarization, otherwise known as false contouring, that plasma does. SED does not.

SED TV Compared to RPTV
SED is flat and RPTVs aren't. RPTV also has limitations as to where it can be viewed from, particularly being vertically challenged with regard to viewing angles. A particular RPTV's image quality is driven by its imaging technology such as DLP, LCoS, 3LCD or, more rarely recently, CRT. With the exception of CRT, these units need to have their lamps changed at various times but usually at around 6,000 hours, costing an average of $250.

Pricing
The cost of flat panels is largely dependent on production yields of saleable product. Nobody really knows for sure what this will be until real production starts, but new technology is always expensive in early production. If it works, the use of inkjet technology to make SED displays rather than the more expensive photolithography process used in LCD panels should help cost management. The first product release will be a 55" version at full HD resolution (1920x1080) priced comparably to today's plasma display panel (PDP) of similar size. That could be a big dollar difference by early 2007, as the price of plasma displays is expected to continue to drop.

Taking a look at the current crop of display technologies, one reality is hard to escape: we haven't drastically improved on the nearly antique Cathode Ray Tube (CRT) televisions of years past. Sure, we now have flat panels that can display resolutions of up to 1920×1080 pixels, or higher in rare instances, but the often-shunned CRT technology is capable of resolutions of 2560×1920 and higher, comfortably beyond the future-proof 1080p spec.

OK, so flat panels don't beat CRTs on resolution, and to be honest, at comparable resolutions they don't really look better. In addition, both plasma and LCD displays often fall short of CRT black levels, so why all the fuss? The flat screen, of course, specifically screens less than 3 inches in depth.

What if a new display technology could combine the best attributes of both CRTs and flat panel displays? Well, I haven't written this far just to say "wouldn't that be nice": enter SED (Surface-Conduction Electron-Emitter Display). Spearheaded by Canon and Toshiba since the mid-eighties, SED appears to offer an excellent balance between cost, resolution and screen depth.

The inner workings of SED borrow from both LCD and Plasma technologies; a glass plate is embedded with electron emitters, one for each pixel on the display. The emitters on this plate face a fluorescent coating on a second plate. Between the two plates is a vacuum, and an ultra-fine particle film that forms a slit several nanometers wide. By applying voltage to this slit, the sets can produce a tunneling effect that generates electron emission. The panel emits light as the voltage accelerates some of the electrons toward the fluorescent coating.

SED displays offer brightness, color performance, and viewing angles on par with CRTs. However, they do not require a deflection system for the electron beam. As a result, engineers can create a display that is just a few inches thick, while still light enough for wall-hanging designs. The manufacturer can enlarge the panel merely by increasing the number of electron emitters relative to the necessary number of pixels. Canon and Toshiba believe their SEDs will be cost-competitive with other flat panel displays.

Technology Overview & Description

SED, or Surface-conduction Electron-emitter Displays are a new, emerging technology co-developed by Canon and Toshiba Corporation. The hope for this technology is a display which reproduces vivid color, deep blacks, fast response times and almost limitless contrast. In fact, if you take all of the claims made by the backers of SED you would think that there should be no reason to buy any other type of display. A long life filled with bitter disappointments and lengthy product-to-market times have increased my skepticism and lowered my tendency to act as a cheerleader until products start to hit the market. As far as the specs go, this is one hot technology.

An SED display is very similar to a CRT (and now we come full circle) in that it utilizes an electron emitter which activates phosphors on a screen. The electron emission element is made from an ultra-thin electron emission film that is just a few nanometers thick. Unlike a CRT, which has a single electron emitter that is steered, SEDs utilize a separate emitter for each color phosphor (3 per pixel, or 1 per sub-pixel) and therefore do not require an electron beam deflector (which also makes screen sizes of over 42" possible). Just for clarity, that means a 1920 x 1080 panel has 6.2 million electron "guns". The emitter takes roughly 10 V to fire, and the emitted electrons are accelerated through about 10 kV before they hit the phosphor-lined glass panel. Sound like a lot of power? It's all relative, as a typical SED display is expected to use about 2/3 the power of a typical plasma panel (and less than CRTs and LCD displays).

OK, here's the real interesting news. SED display electron emitters are supposed to be printable using inkjet printing technology from Canon while the matrix wiring can be created with a special screen printing method. The obvious result is the potential for extremely low production costs at high volumes once the technology is perfected.

What's Next?

Canon debuted an SED display prototype at La Défense in Paris in October 2005.

SED Display Advantages

  • CRT-matching black levels
  • Excellent color and contrast potential
  • Relatively inexpensive production cost
  • Wide viewing angle

SED Display Disadvantages

  • Unknown (though optimistic) life expectancy
  • Potential for screen burn-in
  • Currently prototype only

Optical Burst Switching


Optical burst switching (OBS) is a switching concept which lies between optical circuit switching and optical packet switching. Firstly, a dynamic optical network is provided by the interconnection of optical cross-connects. These optical cross-connects (OXCs) usually consist of switches based on 2D or 3D micro-electro-mechanical systems (MEMS) mirrors, which reflect light coming into the switch at an incoming port to a particular outgoing port. The granularity of this type of switching is at a fibre, waveband (a band of wavelengths) or wavelength level; the finest granularity offered by an OXC is the wavelength. This type of switching is therefore appropriate for provisioning lightpaths from one node to another for different clients/services, e.g. SDH (Synchronous Digital Hierarchy) circuits.

Optical switching enables routing of optical data signals without the need for conversion to electrical signals and, therefore, is independent of data rate and data protocol. Optical Burst Switching (OBS) is an attempt at a new synthesis of optical and electronic technologies that seeks to exploit the tremendous bandwidth of optical technology, while using electronics for management and control.

In an OBS network the incoming IP traffic is first assembled into bigger entities called bursts. Bursts, being substantially bigger than IP packets, are easier to switch with relatively small overhead. When a burst is ready, a reservation request is sent to the core network. Transmission and switching resources for each burst are reserved according to a one-pass reservation scheme, i.e. data is sent shortly after the reservation request without receiving an acknowledgement of successful reservation.

The reservation request (control packet) is sent on a dedicated wavelength some offset time prior to the transmission of the data burst. This basic offset has to be large enough to electronically process the control packet and set up the switching matrix for the data burst in all nodes. When a data burst arrives at a node, the switching matrix has already been set up, i.e. the burst is kept in the optical domain. The reservation request is analysed in each core node, the routing decision is made, and the request is forwarded to the next node. When the burst reaches its destination node it is disassembled, and the resulting IP packets are sent to their respective destinations.
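
A minimal sketch of how that base offset might be budgeted; the hop count and per-node timings are invented placeholders, since real values depend on the hardware:

# One-pass ("tell-and-go") reservation: the offset covers control
# processing at every node plus switch setup. All timings are assumptions.
hops = 5          # core nodes on the path
t_proc = 50e-6    # control-packet processing per node, s (assumed)
t_setup = 1e-3    # switching-matrix setup time, s (assumed)
offset = hops * t_proc + t_setup
print(f"Base offset: {offset * 1e3:.2f} ms")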

The benefit of OBS over circuit switching is that there is no need to dedicate a wavelength for each end-to-end connection. OBS is more viable than optical packet switching because the burst data does not need to be buffered or processed at the cross-connect.

Advantages

* Greater transport channel capacity

* No O-E-O conversion

* Cost effective

Disadvantages

* Burst dropped in case of contention

* Lack of effective technology

Optical Burst Switching operates at the sub-wavelength level and is designed to improve the utilisation of wavelengths by rapid setup and teardown of the wavelength/lightpath for incoming bursts. In OBS, incoming traffic from clients at the edge of the network is aggregated at the ingress of the network according to a particular parameter, commonly destination, type of service (ToS bytes), class of service or quality of service (e.g. profiled DiffServ code points). At the OBS edge router, different queues therefore represent the various destinations or classes of service. Based on the assembly/aggregation algorithm, packets are assembled into bursts using either a time-based or threshold-based aggregation algorithm; in some implementations, aggregation is based on a hybrid of timer and threshold. From the aggregation of packets a burst is created, and this is the granularity that is handled in OBS.
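
The sketch below shows one way such a hybrid timer/threshold assembler could look in Python; the queue structure, byte threshold and timeout are illustrative assumptions, not values from any standard:

import time

class BurstAssembler:
    """Hybrid assembler for one destination/class queue (illustrative)."""
    def __init__(self, max_bytes=64_000, max_wait_s=0.005):
        self.max_bytes = max_bytes    # threshold trigger (assumed)
        self.max_wait_s = max_wait_s  # timer trigger (assumed)
        self.queue, self.size, self.first_arrival = [], 0, None

    def packet_in(self, packet: bytes):
        if self.first_arrival is None:
            self.first_arrival = time.monotonic()
        self.queue.append(packet)
        self.size += len(packet)
        if self.size >= self.max_bytes:   # threshold fired
            return self._emit()
        return None

    def timer_tick(self):
        if (self.first_arrival is not None and
                time.monotonic() - self.first_arrival >= self.max_wait_s):
            return self._emit()           # timer fired
        return None

    def _emit(self):
        # Hand the burst off: send the control packet first, then the
        # burst itself after the offset time.
        burst = b"".join(self.queue)
        self.queue, self.size, self.first_arrival = [], 0, None
        return burst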

Also important about OBS is the fact that the required electrical processing is decoupled from the optical data path. The burst header generated at the edge of the network is sent on a separate control channel, which could be a designated out-of-band control wavelength. At each switch the control channel is converted to the electrical domain for the electrical processing of the header information. The header information precedes the burst by a set amount known as an offset time, giving the switch enough time to make its resources available prior to the arrival of the burst. Different reservation protocols have been proposed and their efficacy studied and published in numerous research publications. Obviously the signalling and reservation protocols depend on the network architecture, node capability, network topology and level of network connectivity. The reservation process has implications for the performance of OBS due to the buffering requirements at the edge. The one-way signalling paradigm obviously introduces a higher level of blocking in the network, as connections are not usually guaranteed prior to burst release. Again, numerous proposals have sought to improve on these issues.

Optical burst switching has many flavours, determined by currently available technologies such as the switching speed of available core optical switches. Most optical cross-connects have switching times of the order of milliseconds but require tens of milliseconds to set up the switch and perform switching. New switch architectures and faster switches, with switching times of the order of micro- and nanoseconds, can help to reduce the path setup overhead. Similarly, control plane signalling and reservation protocols implemented in hardware can help to speed up processing times by several clock cycles.

The initial phase of introducing optical burst switching would be based on an acknowledged reservation protocol, i.e. two-way signalling: after the burstification process, bursts for a particular destination are mapped to a wavelength based on a forwarding table. As the burst requests a path across the network, the request is sent on the control channel; at each switch, if the wavelength can be switched, the path is set up and an acknowledgement signal is sent back to the ingress. The burst is then transmitted. Under this concept, the burst is held electronically at the edge and the bandwidth and path are guaranteed prior to transmission. This reduces the number of bursts dropped. The effects of dropping bursts can be detrimental to a network, as each burst is an amalgamation of IP packets which could be carrying keepalive messages between IP routers. If these are lost, the IP routers would be forced to retransmit and reconverge.

Under the GMPLS control plane, forwarding tables are used to map the bursts, and the RSVP-based 'Path' and 'Resv' messages of MPLS (Multiprotocol Label Switching) signalling are used for requesting a path and confirming setup respectively. This is a two-way signalling process which can be inefficient in terms of network utilisation. However, for increasingly bursty traffic, conventional OBS is the preferred choice.

Under conventional OBS, a one-way signalling concept, as mentioned previously, is used. The idea is to hold the burst at the edge for an offset period while the control header traverses the network setting up the switches; the burst follows immediately, without confirmation of path setup. There is an increased likelihood of bursts being dropped, but contention resolution mechanisms can be used to ensure alternative resources are made available to the burst if the switch is blocked (being used by another burst for the incoming or outgoing switch port). An example contention resolution mechanism is deflection routing, where blocked bursts are routed to an alternative port until the required port becomes available. This requires optical buffering, which is implemented mainly by fibre delay lines.

One-way signalling makes more efficient use of the network, and the burst blocking probability can be reduced by increasing the offset time, thereby increasing the likelihood of switch resources being available for the burst.

A potential disadvantage of lambda switching is that, once a wavelength has been assigned, it is used exclusively by its “owner.” If 100 percent of its capacity is not in use for 100 percent of the time, then clearly there is an inefficiency in the network.

One solution to this problem is to allocate the wavelength for the duration of the data burst being sent. Historically, this has always been recognized as a challenge because the amount of time used in setting up and tearing down connections is typically very large compared to the amount of time the wavelength is "occupied" by the data burst. This is because traditional signaling techniques (e.g. ATM, RSVP, X.25, ISDN) have tended to use a multi-way handshaking process to ensure that the channel really is established before data is sent. These techniques could not be applied to optical burst switching because they take far too long.

For this reason, a simplex “on the fly” signaling mechanism is the current favorite for optical burst switching, and there is no explicit confirmation that the connection is in place before the data burst is sent. Given that, at the time of writing, most optical burst research has been confined to computer simulation, it’s still not totally clear what the impact of this unreliable signaling will be on real network performance.

Here's a more detailed comparison of lambda switching and optical burst switching (OBS):

In a lambda switch, which we can also describe as an LSC interface with a GMPLS control plane, the goal is to reduce the time taken to establish optical paths from months to minutes. Once established, the wavelengths will remain in place for a relatively long time – perhaps months or even years. In this timescale, it’s quite acceptable to use traditional, reliable signaling techniques – notably RSVP (resource reservation protocol) and CR-LDP (constraint-based routing-label distribution protocol), which are being extended for use in GMPLS. Signaling can be out of band, using a low-speed overlay such as fast Ethernet.

In OBS, the goal is to set up lambdas so that a single burst of data can be transmitted. As noted previously, a 1-Mbyte file transmitted at 10 Gbit/s only requires a lambda for about 1 ms. The burst has to be buffered by the OEO edge device while the lambda is being set up, so the signaling has to be very fast indeed, and it looks as though we won't have time for traditional handshakes.
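
That holding time is a one-line calculation:

# 1-Mbyte file at 10 Gbit/s: how long the lambda is actually occupied.
file_bits = 1e6 * 8
line_rate = 10e9
print(f"{file_bits / line_rate * 1e3:.1f} ms")   # 0.8 ms, i.e. roughly the 1 ms quoted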

The signaling itself can be out of band, but it must follow the topology required by the lambda path. If this seems confusing, think of a primary rate ISDN line. In this technology we use a single D-channel (a signaling channel) to control up to 30 B-channels (the channels carrying payload). The B and D channels share the same physical cable, and therefore the same topology. In the optical context we could use a dedicated signaling wavelength on a given fiber, and run this wavelength at speeds where economic network processors are available (e.g., Gigabit Ethernet).

Pixie dust

Pixie dust is the informal name that IBM is using for its antiferromagnetically-coupled (AFC) media technology, which can increase the data capacity of hard drives to up to four times the density possible with current drives. AFC overcomes limits of current hard drives caused by a phenomenon called the superparamagnet effect.

AFC allows more data to be packed onto a disk. The "pixie dust" used is a three-atom-thick magnetic coating composed of the element ruthenium sandwiched between two magnetic layers. The technology is expected to yield 400 GB hard drives for desktop computers, and 200 GB hard drives for laptops.

In information technology, the term "pixie dust" is often used to refer to a technology that seemingly does the impossible. IBM's use of AFC for hard drives overcomes what was considered an insuperable problem for storage: the physical limit for data stored on hard drives. Hard drive capacities have more or less doubled in each of the last five years, and it was assumed in the storage industry that the upper limit would soon be reached. The superparamagnetic effect has long been predicted to appear when densities reached 20 to 40 gigabits per square inch - close to the data density of current products. AFC increases possible data density, so that capacity is increased without using either more disks or more heads to read the data. Current hard drives can store 20 gigabits of data per square inch. IBM began shipping Travelstar hard drives in May 2001 that are capable of storing 25.7 gigabits per square inch. Drives shipped later in the year are expected to be capable of 33% greater density. Because smaller drives will be able to store more data and use less power, the new technology may also lead to smaller and quieter devices.
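
The density figures quoted above are easy to cross-check:

# Sanity-checking the quoted areal densities (Gbit per square inch).
current = 20.0                      # "current" drives
travelstar = 25.7                   # May 2001 Travelstar
print(f"{travelstar * 1.33:.1f}")   # "33% greater density" -> ~34.2
print(f"{current * 4:.0f}")         # "up to four times" current -> 80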

IBM discovered a means of adding AFC to their standard production methods so that the increased capacity costs little or nothing. The company, which plans to implement the process across their entire line of products, chose not to publicize the technology in advance. Many companies have focused research on the use of AFC in hard drives; a number of vendors, such as Seagate Technology and Fujitsu, are expected to follow IBM's lead.

Genomic Signal Processing


Genomic Signal Processing (GSP) is the engineering discipline that studies the processing of genomic signals. The theory of signal processing is utilized in both structural and functional understanding. The aim of GSP is to integrate the theory and methods of signal processing with the global understanding of functional genomics, with special emphasis on genomic regulation.

Gene prediction typically refers to the area of computational biology concerned with algorithmically identifying stretches of sequence, usually genomic DNA, that are biologically functional. This especially includes protein-coding genes, but may also include other functional elements such as RNA genes and regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced.

Owing to the major role played in genomics by transcriptional signaling and the related pathway modeling, it is only natural that the theory of signal processing should be utilized in both structural and functional understanding. Hence, GSP encompasses various methodologies concerning expression profiles: detection, prediction, classification, control, and statistical and dynamical modeling of gene networks. GSP is a fundamental discipline that brings to genomics the structural model-based analysis and synthesis that form the basis of mathematically rigorous engineering.

Application is generally directed towards tissue classification and the discovery of signaling pathways, both based on the expressed macromolecule phenotype of the cell. Accomplishment of these aims requires a host of signal processing approaches. These include signal representation relevant to transcription, such as wavelet decomposition and more general decompositions of stochastic time series, and system modeling using nonlinear dynamical systems. The kind of correlation-based analysis commonly used for understanding pairwise relations between genes or cellular effects cannot capture the complex network of nonlinear information processing based upon multivariate inputs from inside and outside the genome. Regulatory models require the kind of nonlinear dynamics studied in signal processing and control, and in particular the use of stochastic dataflow networks common to distributed computer systems with stochastic inputs. This is not to say that existing model systems suffice. Genomics requires its own model systems, not simply straightforward adaptations of currently formulated models. New systems must capture the specific biological mechanisms of operation and distributed regulation at work within the genome. It is necessary to develop appropriate mathematical theory, including optimization, for the kinds of external controls required for therapeutic intervention, as well as approximation theory to arrive at nonlinear dynamical models that are sufficiently complex to adequately represent genomic regulation for diagnosis and therapy while not being overly complex for the amounts of data experimentally feasible or for the computational limits of existing computer hardware.
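
To give the flavor of such nonlinear, stochastic regulatory models, here is a toy Boolean gene-network update in Python; the genes, wiring and noise probability are invented purely for illustration, not a model from the literature:

import random

state = {"geneA": 1, "geneB": 0, "geneC": 0}

def step(s, p_noise=0.01):
    nxt = {
        "geneA": s["geneA"],                          # self-sustaining input
        "geneB": int(s["geneA"] and not s["geneC"]),  # A activates B, C represses it
        "geneC": int(s["geneB"]),                     # B activates C (feedback loop)
    }
    for g in nxt:                 # stochastic perturbation: rare random flips
        if random.random() < p_noise:
            nxt[g] ^= 1
    return nxt

for t in range(8):
    print(t, state)
    state = step(state)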

A cell relies on its protein components for a wide variety of its functions, including energy production, biosynthesis of component macromolecules, maintenance of cellular architecture, and the ability to act upon intra- and extra-cellular stimuli. Each cell in an organism contains the information necessary to produce the entire repertoire of proteins the organism can specify. Since a cell's specific functionality is largely determined by the genes it is expressing, it is logical that transcription, the first step in the process of converting the genetic information stored in an organism's genome into protein, would be highly regulated by the control network that coordinates and directs cellular activity. A primary means for regulating cellular activity is the control of protein production via the amounts of mRNA expressed by individual genes. The tools to build an understanding of genomic regulation of expression will involve the characterization of these expression levels. Microarray technology, both cDNA and oligonucleotide, provides a powerful analytic tool for genetic research. Since our concern in this paper is to articulate the salient issues for GSP, and not to delve deeply into microarray technology, we confine our brief discussion to cDNA microarrays.

Complementary DNA microarray technology combines robotic spotting of small amounts of individual, pure nucleic acid species on a glass surface, hybridization to this array with multiple fluorescently labeled nucleic acids, and detection and quantitation of the resulting fluor-tagged hybrids by a scanning confocal microscope. A basic application is quantitative analysis of fluorescence signals representing the relative abundance of mRNA from distinct tissue samples. Complementary DNA microarrays are prepared by printing thousands of cDNAs in an array format on glass microscope slides, which provide gene-specific hybridization targets. Distinct mRNA samples can be labeled with different fluors and then co-hybridized onto each arrayed gene. Ratios (or sometimes the direct intensity measurements) of gene expression levels between the samples can be used to detect meaningfully different expression levels between the samples for a given gene. Given an experimental design with multiple tissue samples, microarray data can be used to cluster genes based on expression profiles, to characterize and classify disease based on the expression levels of gene sets, and for other signal processing tasks.
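
As a concrete taste of that ratio analysis, here is a minimal two-channel log-ratio computation; the intensities and the two-fold-change threshold are made-up example values:

import math

# (gene, Cy3 intensity, Cy5 intensity) -- invented example spots
spots = [("gene1", 1200.0, 1150.0),
         ("gene2",  300.0, 1400.0),
         ("gene3", 2500.0,  600.0)]

for gene, cy3, cy5 in spots:
    log_ratio = math.log2(cy5 / cy3)
    flagged = abs(log_ratio) >= 1.0   # |log2 ratio| >= 1 means a 2-fold change
    print(f"{gene}: log2(Cy5/Cy3) = {log_ratio:+.2f}"
          + ("  <-- differentially expressed" if flagged else ""))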

Microvia Technology


Microvias are small holes in the range of 50–100 µm. In most cases they are blind vias from the outer layers to the first inner layer.
The development of very complex Integrated Circuits (ICs) with extremely high input/output counts, coupled with steadily increasing clock rates, has forced electronics manufacturers to develop new packaging and assembly techniques. Components with pitches less than 0.30 mm, chip-scale packages, and flip-chip technology underline this trend and highlight the importance of new printed wiring board technologies able to cope with the requirements of modern electronics. In addition, more and more electronic devices have to be portable, and consequently systems integration, volume and weight considerations are gaining importance. These portables are usually battery powered, resulting in a trend towards lower-voltage power supplies, with their implications for PCB (Printed Circuit Board) complexity. As a result of the above considerations, the future PCB will be characterized by very high interconnection density with finer lines and spaces, smaller holes and decreasing thickness. To gain more landing pads for small-footprint components, the use of microvias becomes a must.

These components are essential in setting the pace of electronic development, and the circuit board is under pressure to keep up.

From the automobile industry through to industrial electronics, the importance of microelectronics has risen enormously in recent years. At the same time, it has developed into an essential feature of intelligent devices and systems. The demand for reduced volume and weight, enhanced system performance with shorter signal transit times, increased reliability and minimised system costs have become progressively more important. And as a consequence, this means that heightened demands are placed on developers and layout engineers.

While microvia technology has long since become a telecommunications manufacturing standard, it is now penetrating other market segments. Here microvia technology offers the potential to completely fulfil the demands for technically perfect solutions and rational production. In other words - this technology unites modern technology and economics. Looking at the circuit board industry in the cold light of day, it is apparent that it has a cost-efficient, safe and proven technology at its disposal. With the aid of microvias, the integration of modern components on the boards requires only minor modifications to the multi-layer architecture. Many of the requirements placed on electronic products can be realised without problems as a result. HDI (High Density Interconnect) involves using microvias for high density interconnection of numerous components and functions within a confined space. Microvia circuit boards manage without conventional mechanically drilled through contacts and use the appropriate laser drilling machines as drilling tools.

The drivers for HDI microvia technology are the various component formats, such as COB (Chip on Board), Flip Chip, CSP (Chip Scale Packaging) and BGA (Ball Grid Array), which are described in terms of "footprint" or pitch. The footprint characterises the overall solder surface, connection surface or landing sites for SMD components. Pitch denotes the separation between the midpoints of the individual solder surfaces. Many new components arrive on the market with a large number of connections and a low pitch, which demand a further increase in wiring density on the circuit board. This demonstrates why the technical knowledge of the circuit board designer and the implementation options are so important for new components, because even at this early stage the profitability, as well as the rational technical feasibility and process compatibility of the boards, is decided. This highlights how strongly circuit board development is influenced by the development of components and their geometric design.

In the past, microvias were still staggered relative to one another as a means of achieving contact over several layers. New techniques, with which microvias generate connections across two layers, have become established as particularly cost-effective and efficient in their manufacturing technology. These holes can be produced in one program starting from the outer layer.

Cu-filled microvias represent the latest development ready for series production. The special feature of this technology: the vias can be set directly on top of one another. With this method it is possible to lay out components even in very confined geometries.

When are microvias worthwhile?

No textbook specifies where the transition between mechanical drill holes and laser holes is to be found. After all, the application of microvias is not only determined by the technology or the geometry of the components and consequently the circuit board geometry. However, questions concerning profitability can be clearly answered through the application of microvias. In the light of Würth Elektronik's experience from today's perspective, a clear technical boundary can be drawn at a BGA pitch of 0.8 mm. Here conventional technology, with mechanically drilled vias, meets its limitations and the use of microvias (laser drilled blind vias) is necessary.

Naturally, however, economic considerations for or against play a significant role. A comparison of variable drilling costs reveals the superiority of microvia technology over mechanical drilling (Ø 0.3 mm) even with a relatively small number of holes. The roughly 100x faster drilling speed and tool costs approaching zero make laser drilling extremely fast and cheap. This effect becomes more pronounced as the number of drill holes increases. The comparison clearly illustrates the cost-saving potential microvia technology has to offer. Experience at Würth Elektronik shows that the proper application of this technology results in savings of between eight and ten percent of the overall costs of "conventional circuits". The advantage of microvia technology grows beyond measure if smaller drills have to be used for geometrical reasons: the drill unit costs rise dramatically, and the service life of the drills plummets. The cost differential opens up enormously for Ø 0.1 mm mechanically drilled vias compared with Ø 0.1 mm laser-drilled microvias. Here the variable costs are in a ratio of around 500:1.
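
To make that comparison concrete, here is a toy per-panel cost calculation; the per-hole costs are invented placeholders, with only the 500:1 ratio taken from the text above:

holes = 10_000             # drill holes per panel run (assumed)
laser_01 = 0.0002          # cost/hole, 0.1 mm laser (assumed)
mech_03 = 0.002            # cost/hole, 0.3 mm mechanical (assumed)
mech_01 = laser_01 * 500   # 0.1 mm mechanical, using the quoted 500:1 ratio

for name, c in [("laser 0.1 mm", laser_01),
                ("mechanical 0.3 mm", mech_03),
                ("mechanical 0.1 mm", mech_01)]:
    print(f"{name}: {holes * c:,.2f} per run")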

As the need for high-density, handheld products increases, the electronic packaging industry has been developing new technologies, such as chip-scale packaging and flip-chip assembly, to pack more information-processing functions per unit volume. Many system designers, however, believe that the circuit board technology to accommodate packages with high I/O densities has not kept pace. Even though printed wiring board fabricators have been developing new, higher-density circuit fabrication methods, the system designers perceive today's advanced technology as unproven, low reliability, and high cost. IBIS Associates applies Technical Cost Modeling in this paper to examine the cost issues of implementing microvia technology.

CSPs and microvias go hand-in-hand: What is the value of high-I/O-density, chip-scale packaging without a high- density substrate to connect these chips? Alternatively, why have a circuit board with ultrafine features if coarse-pitch devices will be used?

Yet, many system designers believe that either CSPs or microvia technology (or both) mean higher system costs. Certainly, it is wise to be cautious about employing new technologies. But if there are proven technologies that can offer system cost reduction as well as system performance improvements and size reduction, what are you waiting for?

IBIS Associates has studied the cost impact of microvia technologies on circuit board fabrication [1], and of CSP technologies on IC packaging [2]. This paper shows some of these cost analyses, revealing the cost savings possible through the use of these advanced technologies.


Methodology-Technical Cost Modeling

Technical Cost Modeling (TCM), a methodology pioneered by IBIS Associates, provides the method for analyzing cost [1]. The goal of TCM is to understand the costs of a product and how these costs are likely to change with alterations to the product and process.

Specifically, TCM includes the breakdown of cost into its constituent elements (listed below), and ranking cost items on the basis of their contribution:

  • Materials and energy
  • Direct and overhead labor
  • Equipment, tooling and building
  • Other costs

Once these costs are established, sensitivity analysis can be performed to understand the impact of changes to key parameters such as annual production volume, process yield and material pricing.

In short, TCM provides an understanding not only of current costs but also of how these costs might differ in the face of future technological or economic developments.
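
A skeleton of such a model in Python shows the idea; every number below is a placeholder, not IBIS data:

def unit_cost(volume, yield_rate=0.9, material=1.50, labor=0.40,
              equipment_annual=250_000.0, other=0.10):
    variable = material + labor + other   # per-unit cost elements (assumed)
    fixed = equipment_annual / volume     # equipment/tooling/building share
    return (variable + fixed) / yield_rate

for vol in (50_000, 200_000, 1_000_000):  # sensitivity to annual volume
    print(f"{vol:>9,} units/yr -> ${unit_cost(vol):.2f}/unit")

Even this toy model reproduces the qualitative point made later in this paper: at low volumes the fixed costs dominate, so a new technology looks expensive until production ramps up.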

High-Density Packaging Technologies

Much has been published on microvia technology [3, 4] and chip-scale packaging [5, 6]. Microvia technology, also called build-up technology, allows high-density circuitry on the outer layers of a circuit board, with lower, conventional-density circuitry on the inside layers.

These high-density circuit boards contain a conventional core, for rigidity and cost reasons, among others. Since the materials used in creating microvias tend not to have glass reinforcement, the core layers, which are glass reinforced, provide the rigidity needed for handling and end-use structural requirements.

Creating vias smaller than 6 to 8 mils (150 to 200 microns) in diameter allows higher-density circuit layers to be created than is generally possible with conventional technology. These vias are created through a myriad of technologies, including the following:

  • Advanced mechanical drilling
  • Lasers
  • Photoimageable dielectric layers
  • Plasma etching

Yields

Microvia technologies have been adopted by most large board fabricators and are being used by some OEMs, mainly in Japan. Reported yields achievable with microvia technology range from 50% to 95%, depending on the technology, how long the fabricator has been learning fabrication techniques and many other factors. Further details of each technology are presented elsewhere [1].

Most flash memory devices are being offered in CSPs for use in portable electronic products. Uses are on the horizon for many other ICs, but CSPs are just beginning to be employed outside of memory.

In summary, CSPs and microvias have "burst onto the scene" due to the demand for complex handheld products and other compact electronics. Since it can be construed that their implementation is driven mainly by the need for smaller form factors and not by cost, both microvias and CSPs have suffered from perceptions of high cost among potential users.

But is this necessarily true?

When a new technology is introduced, it tends to cost more, with the promise that, eventually, costs will be lower than they are today.

This situation occurs because volume production is necessary for costs to come down, and new technologies are usually introduced at low-volume levels. At the beginning, as customers "test the water", these low volumes often do not allow the new technology to cost less than the incumbent technology. This is happening today with CSPs and microvias.

Cost models can show if new technologies will, in fact, cost less at higher production volumes. This analysis shows some of the cost results from recent work at IBIS.

WiDEN

WiDEN (Wideband Integrated Digital Enhanced Network) is a software upgrade developed by Motorola for its iDEN enhanced specialized mobile radio (ESMR) wireless telephony protocol. In a WiDEN network the subscriber unit communicates over four 25 kHz channels combined, for a bandwidth of up to 100 kbit/s.
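
The bandwidth figure is simply four standard iDEN carriers bonded together, at roughly 25 kbit/s each:

channels = 4
kbps_per_channel = 25   # approximate per-carrier data rate
print(channels * kbps_per_channel, "kbit/s")   # ~100 kbit/s aggregate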

Update: Since the Sprint Nextel merger, the company determined that because Sprint's CDMA network was already 3G and going to EV-DO (broadband speeds) and then EV-DO Rev. A (T-1 speeds), it would be redundant to keep upgrading the iDEN data network.

iDEN, the platform which WiDEN upgrades, and the protocol on which it is based, was originally introduced by Motorola in 1993, and launched as a commercial network by Nextel in the United States in September 1996.

WiDEN was originally anticipated to be a major stepping stone for United States wireless telephone provider Nextel Communications and its affiliate, Nextel Partners. However, with the December 2004 announcement of the proposed Sprint Nextel merger, it has been speculated that the Nextel iDEN network will be quickly abandoned in favor of Sprint's CDMA network. Although a complete roadmap of the merger's impact on the combined company's wireless networks has not been released, Nextel and Motorola have agreed to continue to maintain and expand the iDEN network through, at least, 31 December 2010. WiDEN has not been active on the Nextel National Network since October 2005, when rebanding efforts in the 800 MHz band began in a Sprint effort to utilize those data channels to handle more cellular phone call traffic on the Nextel iDEN network. To date, WiDEN has not been restored.

The first WiDEN-compatible device to be released was the Motorola iM240 PC Card, which allows raw data speeds up to 60 kbit/s. The first WiDEN-compatible telephones are the Motorola i850 and i760, which were released in mid-summer 2005; the recent i850/i760 software upgrade enables WiDEN on both of these phones. The commercial launch of WiDEN came with the release of the Motorola i870 on 31 October 2005; however, most people never got to experience the WiDEN capability in their handsets. WiDEN is also offered in the i930/i920 smartphones, but Sprint shipped these units with WiDEN service disabled, and many in the cellular forum communities have found ways to activate it using Motorola's own RSS software. WiDEN was available in most places on Nextel's National Network; as stated above, it is no longer enabled on the Sprint-controlled towers. WiDEN is considered a 2.5G technology.

Femtotechnology


Femtotechnology is a term used by some futurists to refer to structuring of matter on a femtometre scale, by analogy with nanotechnology and picotechnology. This involves the manipulation of excited energy states within atomic nuclei to produce metastable (or otherwise stabilized) states with unusual properties. In the extreme case, excited states of nucleons are considered, ostensibly to tailor the behavioral properties of these particles (though this is in practice unlikely to work as intended).

Practical applications of femtotechnology are currently considered to be unlikely. The spacings between nuclear energy levels require equipment capable of efficiently generating and processing gamma rays, without equipment degradation. The nature of the strong interaction is such that excited nuclear states tend to be very unstable (unlike the excited electron states in Rydberg atoms), and there are a finite number of excited states below the nuclear binding energy, unlike the (in principle) infinite number of bound states available to an atom's electrons. Similarly, what is known about the excited states of individual nucleons seems to indicate that these do not produce behavior that in any way makes nucleons easier to use or manipulate, and indicates instead that these excited states are even less stable and fewer in number than the excited states of atomic nuclei.

The most advanced form of molecular nanotechnology is often imagined to involve self-replicating molecular machines, and there have been some very speculative suggestions that something similar might in principle be possible with "molecules" composed of nucleons rather than atoms. For example, the astrophysicist Frank Drake once speculated about the possibility of self-replicating organisms composed of such nuclear molecules living on the surface of a neutron star, a suggestion taken up in the science fiction novel Dragon's Egg by the physicist Robert Forward. It is thought by physicists that nuclear molecules may be possible, but they would be very short-lived, and whether they could actually be made to perform complex tasks such as self-replication, or what type of technology could be used to manipulate them, is unknown.

The hypothetical hafnium bomb can be considered a crude application of femtotechnology.

New Sensor Technology

The invention of new fluorescence-based chemical sensors has facilitated myriad potential applications, such as monitoring oxygen, inorganic gases, volatile organic compounds and biochemical compounds, as the technology is versatile, compact and inexpensive. Depending upon the vital criteria of accuracy, precision, cost and the ability to meet the environmental range of the intended application, a proper sensor can be chosen for a military control-based subsystem. Sensor web and video sensor technology are two widely applied sensor techniques: the Sensor Web is a type of sensor network or geographic information system (GIS) well suited for environmental monitoring and control, whereas video sensor technology is used for digital image analysis. In sensor web technology, we have a wirelessly connected, shapeless network of unevenly distributed sensor platforms or pods, which differs from a TCP/IP-like network in its synchronous and router-free nature.

Because of this unique architecture, every pod in the network knows what is happening with every other pod throughout the Sensor Web at each measurement cycle. The main requirements for video sensor technology are application software and a computer that acts as the carrier platform, usually equipped with a Linux or Microsoft operating system upon which the application software runs. By programming the digital algorithms, the interpretation of digital images and frame rates can be carried out. The video sensor is very helpful in evaluating the scenes and sequences within the image section of a CCD camera.
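
As a minimal illustration of the kind of digital-image algorithm such a video sensor might run, here is a frame-differencing motion check in Python; the thresholds are invented for the example:

import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.02):
    """prev_frame, frame: 2-D uint8 grayscale arrays of equal shape."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > pixel_thresh        # per-pixel change mask
    return moving.mean() > area_thresh  # fraction of the frame that changed

a = np.zeros((120, 160), dtype=np.uint8)
b = a.copy(); b[40:80, 60:100] = 200    # an "object" appears in frame b
print(motion_detected(a, b))            # True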

"We can use this technology to detect chemical and biological agents, and also to determine if a country is using its nuclear reactors to produce material for nuclear weapons, or to track the direction of a chemical or radioactive plume to evacuate an area," explained Paul Raptis, section manager. Raptis is developing these sensors with Argonne engineers Sami Gopalsami, Sasan Bakhtiari and Hual-Te Chien.

Argonne engineers have successfully performed the first-ever remote detection of chemicals and identification of unique explosives spectra using a spectroscopic technique that uses the properties of the millimeter/terahertz frequencies between microwave and infrared on the electromagnetic spectrum. The researchers used this technique to detect spectral "fingerprints" that uniquely identify explosives and chemicals.

The Argonne-developed technology was demonstrated in tests that accomplished three important goals:

* Detected and measured poison gas precursors 60 meters away in the Nevada Test Site to an accuracy of 10 parts per million using active sensing.
* Identified chemicals related to defense applications, including nuclear weapons, from 600 meters away using passive sensing at the Nevada Test Site.
* Built a system to identify the spectral fingerprints of trace levels of explosives, including DNT, TNT, PETN, RDX and the plastic explosives Semtex and C-4.

Current research involves collecting a database of explosive "fingerprints" and, working with partners Sarnoff Corp., Dartmouth College and Sandia National Laboratory, testing a mail- or cargo-screening system for trace explosives.

Argonne engineers have been exploring this emerging field for more than a decade to create remote technology to detect facilities that may be violating nonproliferation agreements by creating materials for nuclear weapons or making nerve agents.
How it works

The millimeter/terahertz technology detects the energy levels of a molecule as it rotates. The frequency distribution of this energy provides a unique and reproducible spectral pattern – its "fingerprint" – that identifies the material. The technology can also be used in its imaging modality, with applications ranging from concealed-weapon detection to medical uses such as tumor detection.

The technique is an improvement over laser or optical sensing, which can be perturbed by atmospheric conditions, and over X-rays, which can cause damage by ionization. Operating at frequencies between 0.1 and 10 terahertz, the sensitivity is four to five orders of magnitude higher, and the imaging resolution 100 to 300 times finer, than is possible at microwave frequencies.
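
The fingerprint-matching step itself can be pictured as correlating a measured spectrum against a reference library; the sketch below uses synthetic stand-in spectra, not real signatures:

import numpy as np

freqs = np.linspace(0.1, 10, 200)   # terahertz frequency grid

def line(center, width=0.2):        # a synthetic absorption line
    return np.exp(-((freqs - center) ** 2) / (2 * width ** 2))

library = {"sample_A": line(2.0) + line(5.5),
           "sample_B": line(3.3) + line(7.1)}
measured = line(2.0) + line(5.5) + 0.05 * np.random.randn(freqs.size)

def score(a, b):                    # normalized correlation of two spectra
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

print(max(library, key=lambda k: score(measured, library[k])))   # sample_A
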
Other homeland security sensors

To remotely detect radiation from nuclear accidents or reactor operations, Argonne researchers are testing millimeter-wave radars and developing models to detect and interpret radiation-induced effects in air that cause radar reflection and scattering. Preliminary results of tests, in collaboration with AOZT Finn-Trade of St. Petersburg, Russia, with instruments located 9 km from a nuclear power plant, showed clear differences between when the plant was operating and when it was idling. This technology can also be applied to mapping plumes from nuclear radiation releases.

Argonne engineers have also applied this radar technology for remote and rapid imaging of gas leaks from natural gas pipelines. The technique detects the fluctuations in the index of refraction caused by gas leaking into the surrounding air.

Early warnings of biological hazards can be made using another Argonne-developed sensing system that measures dielectric signatures. The systems sense repeatable dielectric response patterns from a number of biomolecules. The method holds potential for a fast first screening of chemical or biological agents in gases, powders or aerosols.

Other tests can detect these agents, but may take four hours or longer. "While this method may not be as precise as other methods, such as bioassays and biochips, it can be an early warning to start other tests sooner," said Raptis.

These Argonne sensor specialists will continue to probe the basics of sensor technology and continue to develop devices that protect the nation's security interests.

Other potential applications for these technologies, in addition to security, include nondestructive evaluation of parts, environmental monitoring and health, including testing human tissue and replacing dental X-rays.

In addition to DOE, the U.S. Department of Defense and the National Aeronautics and Space Administration have provided support for this research.

The nation's first national laboratory, Argonne National Laboratory conducts basic and applied scientific research across a wide spectrum of disciplines, ranging from high-energy physics to climatology and biotechnology. Since 1990, Argonne has worked with more than 600 companies and numerous federal agencies and other organizations to help advance America's scientific leadership and prepare the nation for the future. Argonne is managed by the University of Chicago for the U.S. Department of Energy's Office of Science.