PROTOCOL STACKS

ARCNET

ARCNET (also CamelCased as ARCnet, an acronym from Attached Resource Computer NETwork) is a local area network (LAN) protocol, similar in purpose to Ethernet or Token Ring. ARCNET was the first widely available networking system for microcomputers and became popular in the 1980s for office automation tasks. It has since gained a following in the embedded systems market, where certain features of the protocol are especially useful.
ARCNET remained proprietary until the early-to-mid 1980s. This did not cause concern at the time, as most network architectures were proprietary. The move to non-proprietary, open systems began as a response to the dominance of International Business Machines (IBM) and its Systems Network Architecture (SNA). In 1979, the Open Systems Interconnection Reference Model (OSI Model) was published. Then, in 1980, Digital, Intel and Xerox (the DIX consortium) published an open standard for Ethernet that was soon adopted as the basis of standardization by the IEEE and the ISO. IBM responded by proposing Token Ring as an alternative to Ethernet but kept such tight control over standardization that competitors were wary of using it. ARCNET was less expensive than either, often much less, more reliable, more flexible, and by the late 1980s it had a market share about equal to that of Ethernet.

After Ethernet abandoned its clumsy original thick-wire cabling and the somewhat less clumsy thin-coax version, and adopted ARCNET's innovative and more maintainable "interconnected stars" cabling topology based on active hubs, Ethernet became more attractive than before and its volumes increased. With more companies entering the market, the price of Ethernet started to fall and ARCNET volumes tapered off. The same was largely true of Token Ring, although IBM's immense market power kept it in the market for some time longer.

APPLETALK

AppleTalk is a suite of protocols developed by Apple Computer for computer networking. It was included in the original Macintosh (1984) and is now deprecated by Apple in favor of TCP/IP networking.
The design fairly rigorously followed the OSI model of protocol layering. Unlike most other early LAN systems, AppleTalk was not built on the archetypal Xerox XNS system, as the intended target network was not Ethernet and did not have 48-bit addresses to route. Nevertheless, many portions of the AppleTalk system have direct analogs in XNS.
One key differentiator for AppleTalk was that the system contained two protocols aimed at making it completely self-configuring. The AppleTalk Address Resolution Protocol (AARP) allowed AppleTalk hosts to automatically generate their own network addresses, and the Name Binding Protocol (NBP) was essentially a dynamic DNS system which mapped network addresses to user-readable names. Although systems similar to AARP existed elsewhere (in Banyan VINES, for instance), nothing like NBP existed until much later.
Both AARP and NBP had defined ways to allow "controller" devices to override the default mechanisms. The concept here was to allow routers to provide all of this information, or additionally "hardwire" the system to known addresses and names. On larger networks where AARP could cause problems as new nodes searched for free addresses, the addition of a router could dramatically reduce "chattiness".

ADDRESSING

An AppleTalk address was a 4-byte quantity. This consisted of a two-byte network number, a one-byte node number, and a one-byte socket number. Of these, only the network number required any configuration, being obtained from a router. Each node dynamically chose its own node number, according to a protocol which handled contention between different nodes accidentally choosing the same number. For socket numbers, a few well-known numbers were reserved for special purposes specific to the AppleTalk protocol itself. Apart from these, all application-level protocols were expected to use dynamically-assigned socket numbers at both the client and server end.
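The 4-byte layout described above can be sketched in a few lines of Python (a hypothetical illustration; the function names are invented here, and AppleTalk itself of course predates Python):

```python
import struct

def pack_appletalk_address(network: int, node: int, socket: int) -> bytes:
    """Pack an AppleTalk address: 2-byte network, 1-byte node, 1-byte socket."""
    if not (0 <= network <= 0xFFFF and 0 <= node <= 0xFF and 0 <= socket <= 0xFF):
        raise ValueError("address field out of range")
    return struct.pack(">HBB", network, node, socket)

def unpack_appletalk_address(addr: bytes):
    """Recover (network, node, socket) from the 4-byte form."""
    return struct.unpack(">HBB", addr)

addr = pack_appletalk_address(network=10, node=42, socket=129)
assert len(addr) == 4
assert unpack_appletalk_address(addr) == (10, 42, 129)
```

Note that only the network number in the first two bytes would come from configuration (via a router); the node byte was self-assigned and the socket byte dynamically chosen.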
Because of this dynamism, users could not be expected to access services by specifying their address. Instead, all services had names which, being chosen by humans, could be expected to be meaningful to users, and also could be long enough to minimize the chance of conflicts.

Note that, because a name translated to an address which included a socket number as well as a node number, a name in AppleTalk mapped directly to a service being provided by a machine, which was entirely separate from the name of the machine itself. Thus, services could be moved to a different machine and, so long as they kept the same service name, there was no need for users to do anything different to continue accessing the service. And the same machine could host any number of instances of services of the same type, without any network connection conflicts.
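The consequence described above can be illustrated with a toy NBP-style name table (a sketch; the names and functions here are invented for illustration): because a name binds to a full address, including the socket, a service can move between machines and simply re-register under the same name.

```python
# Toy NBP-style registry: service names map to complete AppleTalk
# addresses (network, node, socket), not merely to machines.
registry = {}

def register(name, network, node, socket):
    """Bind a human-readable service name to a full address."""
    registry[name] = (network, node, socket)

def lookup(name):
    """Resolve a service name; returns None if unregistered."""
    return registry.get(name)

register("LaserWriter@Accounting", 10, 42, 129)
# The service moves to a different machine and re-registers under
# the same name; clients notice nothing.
register("LaserWriter@Accounting", 10, 77, 200)
assert lookup("LaserWriter@Accounting") == (10, 77, 200)
```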

ASYNCHRONOUS TRANSFER MODE

Asynchronous Transfer Mode (ATM) is a cell relay network protocol which encodes data traffic into small fixed-size cells (53 bytes: 48 bytes of data and 5 bytes of header information) instead of variable-sized packets (sometimes known as frames) as in packet-switched networks (such as the Internet Protocol or Ethernet). It is a connection-oriented technology, in which a connection is established between the two endpoints before any data is exchanged.
ATM was intended to provide a single unified networking standard that could support both synchronous channel networking (PDH, SDH) and packet-based networking (IP, Frame Relay, etc.), whilst supporting multiple levels of quality of service for packet traffic.
ATM sought to resolve the conflict between circuit-switched networks and packet-switched networks by mapping both bitstreams and packet-streams onto a stream of small fixed-size 'cells' tagged with virtual circuit identifiers. The cells are typically sent on demand within a synchronous time-slot pattern in a synchronous bit-stream: what is asynchronous here is the sending of the cells, not the low-level bitstream that carries them.
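The cell mapping described above can be sketched as follows. The 5-byte header here is deliberately simplified to carry only a virtual circuit identifier; a real ATM header also carries GFC/VPI, payload-type, cell-loss-priority, and HEC fields.

```python
def to_atm_cells(payload: bytes, vci: int) -> list:
    """Split a byte stream into 53-byte cells: a (simplified) 5-byte
    header followed by a 48-byte payload, padding the final cell."""
    cells = []
    for i in range(0, len(payload), 48):
        chunk = payload[i:i + 48].ljust(48, b"\x00")  # pad last cell to 48 bytes
        header = vci.to_bytes(2, "big") + b"\x00\x00\x00"  # toy 5-byte header
        cells.append(header + chunk)
    return cells

cells = to_atm_cells(b"A" * 100, vci=5)
assert all(len(c) == 53 for c in cells)
assert len(cells) == 3  # 100 bytes -> ceil(100/48) = 3 cells
```

The fixed size is the point: a switch can forward every cell with identical, simple logic, which is what made hardware switching of mixed voice and data traffic tractable.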
In its original conception, ATM was to be the enabling technology of the 'Broadband Integrated Services Digital Network' (B-ISDN) that would replace the existing PSTN. The full suite of ATM standards provides definitions for layer 1 (physical connections), layer 2 (data link layer) and layer 3 (network) of the classical OSI seven-layer networking model. The ATM standards drew on concepts from the telecommunications community, rather than the computer networking community. For this reason, extensive provision was made for integration of most existing telco technologies and conventions into ATM.
As a result, ATM provides a highly complex technology, with features intended for applications ranging from global telco networks to private local area computer networks. ATM has been a partial success as a technology, with widespread deployment, but generally only used as a transport for IP traffic; its goal of providing a single integrated technology for LANs, public networks, and user services has largely failed.

BLUETOOTH

Bluetooth is an industrial specification for wireless personal area networks (PANs), also known as IEEE 802.15.1. Bluetooth provides a way to connect and exchange information between devices like personal digital assistants (PDAs), mobile phones, laptops, PCs, printers, digital cameras and video game consoles such as the Wii via a secure, globally unlicensed short range radio frequency.

Bluetooth is a radio standard and communications protocol primarily designed for low power consumption, with a short range (power class dependent: 1 meter, 10 meters, 100 meters) based around low-cost transceiver microchips in each device.
Bluetooth lets these devices communicate with each other when they are in range. The devices use a radio communications system, so they do not have to be in line of sight of each other, and can even be in other rooms, so long as the received power is high enough. As a result of different antenna designs, transmission path attenuations, and other variables, observed ranges vary; however, transmission power levels must fall into one of three classes:
• Class 1: 100 mW (20 dBm), range of approximately 100 meters
• Class 2: 2.5 mW (4 dBm), range of approximately 10 meters
• Class 3: 1 mW (0 dBm), range of approximately 1 meter

DECNET

DECnet is a proprietary suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers. It evolved into one of the first peer-to-peer network architectures, thus making DEC into a networking powerhouse in the 1980s.

Initially built with four layers, it later (1992) evolved into a seven-layer, OSI-compliant networking protocol, around the time when open systems (POSIX-compliant, i.e. Unix-like) were taking market share from proprietary operating systems such as VMS on the VAX and Alpha.
DECnet was built into DEC's flagship operating system, VAX/VMS, from its inception. Digital ported it to its own Ultrix variant of UNIX, as well as to Apple Macintosh computers and to PCs running both DOS and Windows, under the name DEC Pathworks, transforming these systems into DECnet end-nodes on a network of VAX machines. More recently, an open-source version has been developed for the Linux OS: see Linux-DECnet on SourceForge.

ETHERNET

Ethernet is a large and diverse family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the physical layer, two means of network access at the Media Access Control (MAC)/data link layer, and a common addressing format.

Ethernet has been standardized as IEEE 802.3. Its star-topology, twisted pair wiring form became the most widespread LAN technology in use from the 1990s to the present, largely replacing competing LAN standards such as coaxial cable Ethernet, token ring, FDDI, and ARCNET. In recent years, WiFi, the wireless LAN standardized by IEEE 802.11, has been used instead of Ethernet in many installations.

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference that the name "Ethernet" was derived.

FIBER DISTRIBUTED DATA INTERFACE

Fiber-Distributed Data Interface (FDDI) provides a standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). The FDDI protocol uses as its basis the token ring protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As a standard underlying medium it uses optical fiber (though it can use copper cable, in which case one can refer to CDDI). FDDI uses a dual-attached, counter-rotating token-ring topology.
FDDI, as a product of American National Standards Institute committee X3T9, conforms to the Open Systems Interconnection (OSI) model of functional layering of LANs using other protocols. FDDI-II, a version of FDDI, adds the capability to add circuit-switched service to the network so that it can also handle voice and video signals. Work has started to connect FDDI networks to the developing Synchronous Optical Network (SONET).
An FDDI network contains two token rings, one for possible backup in case the primary ring fails. The primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the secondary ring to act as a backup, it can also carry data, extending capacity to 200 Mbit/s. A single ring can extend to the maximum distance; a dual ring can extend 100 km (62 miles). FDDI has a larger maximum frame size than standard 100 Mbit/s Ethernet, allowing better throughput.

FRAME RELAY

In the context of computer networking, frame relay (also found written as "frame-relay") consists of an efficient data transmission technique used to send digital information quickly and cheaply in a relay of frames to one or many destinations from one or many end-points. Network providers commonly implement frame relay for voice and data as an encapsulation technique, used between local area networks (LANs) over a wide area network (WAN). Each end-user gets a private line (or leased line) to a frame-relay node. The frame-relay network handles the transmission over a frequently-changing path transparently to all end-users.
As of 2006 native IP-based networks have gradually begun to displace frame relay. With the advent of MPLS, VPN and dedicated broadband services such as cable modem and DSL, the end may loom for the frame relay protocol and encapsulation. There remain, however, many rural areas lacking DSL and cable modem services, and in such cases the least expensive type of "always-on" connection remains a 128-kilobit frame-relay line. Thus a retail chain, for instance, may use frame relay for connecting rural stores into their corporate WAN.

Frame relay has its technical base in the older X.25 packet-switching technology, designed for transmitting data over analog voice lines. Unlike X.25, whose designers expected error-prone analog links, frame relay is a fast packet technology, which means that the protocol does not attempt to correct errors. When a frame relay network detects an error in a frame, it simply "drops" that frame. The end points have the responsibility for detecting and retransmitting dropped frames. (However, digital networks offer an incidence of error extraordinarily small relative to that of analog networks.)
Frame relay often serves to connect local area networks (LANs) with major backbones as well as on public wide-area networks (WANs) and also in private network environments with leased lines over T-1 lines. It requires a dedicated connection during the transmission period. Frame relay does not provide an ideal path for voice or video transmission, both of which require a steady flow of transmissions. However, under certain circumstances, voice and video transmission do use frame relay.


IEEE 802.11

IEEE 802.11, the Wi-Fi standard, denotes a set of Wireless LAN/WLAN standards developed by working group 11 of the IEEE LAN/MAN Standards Committee (IEEE 802). The term 802.11x is also used to denote this set of standards and is not to be mistaken for any one of its elements. There is no single 802.11x standard. The term IEEE 802.11 is also used to refer to the original 802.11, which is now sometimes called "802.11legacy." For the application of these standards see Wi-Fi.

The 802.11 family currently includes six over-the-air modulation techniques that all use the same protocol. The most popular (and prolific) techniques are those defined by the b, a, and g amendments to the original standard; security was originally included and was later enhanced via the 802.11i amendment. 802.11n is another modulation technique under development. Other standards in the family (c–f, h, j) are service enhancements and extensions or corrections to previous specifications. 802.11b was the first widely accepted wireless networking standard, followed (somewhat counterintuitively) by 802.11a and 802.11g.

Which part of the radio frequency spectrum may be used varies between countries, with the strictest limitations in the USA. While it is true that in the USA 802.11a and g devices may be legally operated without a license, it is not true that 802.11a and g operate in an unlicensed portion of the radio frequency spectrum. Unlicensed (legal) operation of 802.11 a & g is covered under Part 15 of the FCC Rules and Regulations. Frequencies used by channels one (1) through six (6) (802.11b) fall within the range of the 2.4 gigahertz amateur radio band. Licensed amateur radio operators may operate 802.11b devices under Part 97 of the FCC Rules and Regulations that apply.

IEEE-488

The Hewlett-Packard Instrument Bus (HP-IB) is a short-range digital communications standard developed by Hewlett-Packard (HP) in the 1970s for connecting electronic test and measurement devices (e.g. digital multimeters and logic analyzers) to controllers such as computers. The bus is still in wide use for this purpose.

Other manufacturers copied HP-IB, calling their implementation the General Purpose Interface Bus (GPIB). In 1978 the bus was standardized by the Institute of Electrical and Electronics Engineers as the IEEE Standard Digital Interface for Programmable Instrumentation, IEEE-488-1978 (now 488.1).

Design

IEEE-488 allows up to 15 devices to share a single bus by daisy-chaining, with the slowest device participating in the control and data transfer handshakes to determine the speed of the transaction. The maximum data rate is about one megabyte per second. Paraphrasing the 1989 HP Test & Measurement Catalog: HP-IB has a party-line structure wherein all devices on the bus are connected in parallel. The 16 signal lines within the passive interconnecting HP-IB cable are grouped into three clusters according to their functions: Data Bus, Data Byte Transfer Control Bus, and General Interface Management Bus.

QSNET

QsNet is a high-speed interconnect designed by Quadrics and used in HPC clusters, particularly Linux Beowulf clusters. Although it can carry TCP/IP, like SCI, Myrinet and InfiniBand it is usually used with a communication API such as MPI or SHMEM called from a parallel program.

The interconnect consists of a PCI card in each compute node and one or more dedicated switch chassis, connected with copper cables. Within each switch chassis are a number of line cards that carry Elite switch ASICs, internally linked to form a fat-tree topology. As with other interconnects such as Myrinet, very large systems can be built by using multiple switch chassis arranged as spine (top-level) and leaf (node-level) switches. Such systems are usually called federated networks.

RS-232

In telecommunications, RS-232 is a standard for serial binary data interconnection between a DTE (Data terminal equipment) and a DCE (Data communication equipment). It is commonly used in computer serial ports. A similar ITU-T standard is V.24. RS is an abbreviation for "Recommended Standard".

Scope of the standard

The Electronic Industries Alliance (EIA) standard RS-232-C as of 1969 defines:
• Electrical signal characteristics such as voltage levels, signaling rate, timing and slew-rate of signals, voltage withstand level, short-circuit behavior, maximum stray capacitance and cable length
• Interface mechanical characteristics, pluggable connectors and pin identification
• Functions of each circuit in the interface connector
• Standard subsets of interface circuits for selected telecom applications
The standard does not define such elements as character encoding (for example, ASCII, Baudot or EBCDIC), or the framing of characters in the data stream (bits per character, start/stop bits, parity). The standard does not define bit rates for transmission, although the standard says it is intended for bit rates less than 20,000 bits per second. Many modern devices can exceed this speed (38,400 and 57,600 bit/s being common, and 115,200 and 230,400 bit/s making occasional appearances) while still using RS-232 compatible signal levels.
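Because the standard leaves character framing to the devices, framing is a matter of convention between the two ends. A common convention is one start bit, eight data bits sent least-significant-bit first, an optional parity bit, and one stop bit; the sketch below frames a byte as 8-E-1 (even parity):

```python
def frame_byte(value: int) -> list:
    """Frame one byte as 8-E-1: start bit (0), 8 data bits LSB first,
    even parity bit, one stop bit (1). Returns the bit sequence as sent
    on the line (logic levels, not RS-232 voltages)."""
    bits = [0]                                   # start bit
    data = [(value >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += data
    bits.append(sum(data) % 2)                   # even parity bit
    bits.append(1)                               # stop bit
    return bits

frame = frame_byte(ord("A"))  # 0x41 = 0b01000001
assert len(frame) == 11       # 1 start + 8 data + 1 parity + 1 stop
assert frame[0] == 0 and frame[-1] == 1
```

Both ends must agree on this framing and on the bit rate out of band, since RS-232 itself specifies neither.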

Limitations of the standard

Because the application of RS-232 has extended far beyond the original purpose of interconnecting a terminal with a modem, successor standards have been developed to address the limitations. Issues with the RS-232 standard include:
• The large voltage swings and the requirement for positive and negative supplies increase power consumption of the interface and complicate power supply design. The voltage swing requirement also limits the upper speed of a compatible interface.
• Single-ended signalling referred to a common signal ground limits the noise immunity and transmission distance.
• Multi-drop (meaning a connection between more than two devices) operation of an RS-232 compatible interface is not defined; while multi-drop "work-arounds" have been devised, they have limitations in speed and compatibility.
• Asymmetrical definitions of the two ends of the link make the assignment of the role of a newly developed device problematical; the designer must decide on either a DTE-like or DCE-like interface and which connector pin assignments to use.
• The handshaking and control lines of the interface are intended for the setup and takedown of a dial-up communication circuit; in particular, the use of handshake lines for flow control is not reliably implemented in many devices.

SYSTEMS NETWORK ARCHITECTURE

Systems Network Architecture (SNA) is IBM's proprietary networking architecture created in 1974. It is a complete protocol stack for interconnecting computers and their resources. SNA describes the protocol and is, in itself, not actually a program. The implementation of SNA takes the form of various communications packages, most notably VTAM, the mainframe package for SNA communications. SNA is still used extensively in banks and other financial transaction networks, as well as in many government agencies. While IBM still provides support for SNA, one of the primary pieces of hardware, the 3745/3746 communications controller, has been withdrawn from the market and will be dropped from support sometime after 2010. As a result, most sites are working to remove SNA from their networks and move to TCP/IP.

TOKEN RING

Token ring local area network (LAN) technology was developed and promoted by IBM in the early 1980s and standardised as IEEE 802.5 by the Institute of Electrical and Electronics Engineers. Initially very successful, it went into steep decline after the introduction of 10BASE-T for Ethernet and the EIA/TIA 568 cabling standard in the early 1990s. A fierce marketing effort led by IBM sought to claim better performance and reliability over Ethernet for critical applications due to its deterministic access method, but the effort was no more successful than IBM's similar campaign for its Micro Channel architecture in the same era. IBM no longer uses or promotes token ring. Madge Networks, a one-time competitor to IBM, is now considered the market leader in token ring.

Stations on a token ring LAN are logically organized in a ring topology with data being transmitted sequentially from one ring station to the next with a control token circulating around the ring controlling access. This token passing mechanism is shared by ARCNET, token bus, and FDDI, and has theoretical advantages over the stochastic CSMA/CD of Ethernet.
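The deterministic access method can be illustrated with a toy simulation (a sketch; the station names and functions are invented for illustration): only the station currently holding the token may transmit, so stations send in ring order instead of contending for the medium as in CSMA/CD.

```python
def token_ring_round(stations, queued):
    """Pass the token once around the ring. Each station transmits its
    queued frame (if any) while it holds the token; returns the
    transmission order, which is always the ring order."""
    order = []
    for station in stations:       # the token visits stations in ring order
        frame = queued.get(station)
        if frame:                  # transmit only while holding the token
            order.append((station, frame))
    return order

stations = ["A", "B", "C", "D"]
queued = {"B": "frame1", "D": "frame2"}
assert token_ring_round(stations, queued) == [("B", "frame1"), ("D", "frame2")]
```

The worst-case wait for any station is bounded by one trip of the token around the ring, which is the basis of the determinism claim made for token-passing networks.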

TRANSMISSION CONTROL PROTOCOL

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. Using TCP, applications on networked hosts can create connections to one another, over which they can exchange data in packets. The protocol guarantees reliable and in-order delivery of data from sender to receiver. TCP also distinguishes data for multiple connections by concurrent applications (e.g. Web server and e-mail server) running on the same host.
TCP supports many of the Internet's most popular application protocols and resulting applications, including the World Wide Web, e-mail and Secure Shell.

In the Internet protocol suite, TCP is the intermediate layer between the Internet Protocol (IP) below it, and an application above it. Applications often need reliable pipe-like connections to each other, whereas the Internet Protocol does not provide such streams, but rather only unreliable packets. TCP does the task of the transport layer in the simplified OSI model of computer networks.

Applications send streams of octets (8-bit bytes) to TCP for delivery through the network, and TCP divides the byte stream into appropriately sized segments (usually delineated by the maximum transmission unit (MTU) size of the data link layer of the network the computer is attached to). TCP then passes the resulting packets to the Internet Protocol, for delivery through a network to the TCP module of the entity at the other end. TCP checks to make sure that no packets are lost by giving each packet a sequence number, which is also used to make sure that the data are delivered to the entity at the other end in the correct order. The TCP module at the far end sends back an acknowledgement for packets which have been successfully received; a timer at the sending TCP will cause a timeout if an acknowledgement is not received within a reasonable round-trip time (RTT), and the (presumably lost) data will then be re-transmitted. TCP checks that no bytes are damaged by using a checksum; one is computed at the sender for each block of data before it is sent, and checked at the receiver.
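The checksum mentioned at the end is the standard Internet checksum: the one's-complement of the one's-complement sum of the data taken as 16-bit words (RFC 1071). A sketch (the real TCP checksum also covers a pseudo-header of IP addresses, protocol number, and segment length, omitted here):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words, complemented
    (RFC 1071). Odd-length input is padded with a zero byte."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

segment = b"hello world!"                 # even length, for the check below
csum = internet_checksum(segment)
# Receiver-side verification: data plus its own checksum sums to zero.
assert internet_checksum(segment + csum.to_bytes(2, "big")) == 0
```

The one's-complement arithmetic is what lets the receiver verify a segment by re-running the same sum over data plus checksum and testing for zero, with no division or table lookups.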