FUNI
Developed by the ATM Forum as a cost-effective method, FUNI interconnects ATM networks with existing frame-based equipment (e.g., routers). Employing the T1/E1 interface, it provides access to the ATM backbone without replacing the existing equipment with costly ATM hardware.
Labels: Computer Science, Computer's Notes, FUNI, Seminar Topics, Seminars
CDMA2000
CDMA2000 is a technology for evolving cdmaOne/IS-95 to third-generation services. Also known as IMT-CDMA Multi-Carrier or IS-2000, it is the main route for CDMA operators to second-and-a-half generation (2.5G) and third-generation (3G) cellular networks. 3GPP2, the standard-setting body behind CDMA2000, has created a set of new standards that define the new air interface and the radio access and core network changes that will enhance network capacity, improve speed and bandwidth to mobile terminals, and allow end-to-end IP services. CDMA2000 will provide improved services to cdmaOne subscribers, as well as forward and backward compatibility in terminals.
Deployed in phases, the first phase, called CDMA2000 1x, supports an average of 144 kbps packet data in a mobile environment. The second release, 1x-EV-DO, will support data rates up to 2 Mbit/s on a dedicated data carrier. Lastly, 1x-EV-DV will support higher peak rates and simultaneous voice and high-speed data, as well as improved Quality of Service mechanisms. The CDMA2000 Packet Core Network (PCN) is one of the first steps in the evolution of CDMA2000 systems toward an all-IP, multimedia architecture that allows for the delivery of packet data services with more speed and security.
CDMA2000 is a hybrid 2.5G/3G family of mobile telecommunications standards that use CDMA, a multiple access scheme for digital radio, to send voice, data, and signalling data (such as a dialed telephone number) between mobile phones and cell sites. CDMA2000 is considered a 2.5G technology in 1xRTT and a 3G technology in EVDO.
CDMA (code division multiple access) is a mobile digital radio technology where channels are defined with codes (PN sequences). CDMA permits many simultaneous transmitters on the same frequency channel, unlike TDMA (time division multiple access), used in GSM and D-AMPS, and FDMA, used in AMPS ("analog" cellular). Since more phones can be served by fewer cell sites, CDMA-based standards have a significant economic advantage over TDMA- or FDMA-based standards.
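To make the code-division idea concrete, here is a toy sketch of channelization with orthogonal Walsh codes. Real IS-95/CDMA2000 systems combine Walsh codes with long PN sequences and operate on modulated RF signals; this simplified baseband example only illustrates why orthogonal spreading lets two transmitters share the same frequency channel.

```python
def walsh(n):
    """Generate the 2^n x 2^n Walsh-Hadamard matrix with +/-1 entries."""
    h = [[1]]
    for _ in range(n):
        # Sylvester construction: [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def spread(bits, code):
    """Spread each data bit (+/-1) over the chips of its code."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Correlate the composite signal against one code to recover its bits."""
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

codes = walsh(3)                      # 8 orthogonal codes of length 8
user_a, user_b = codes[1], codes[5]   # two distinct channels
bits_a, bits_b = [1, -1, 1], [-1, -1, 1]

# Both users transmit at the same time on the same frequency:
composite = [x + y for x, y in zip(spread(bits_a, user_a),
                                   spread(bits_b, user_b))]

print(despread(composite, user_a))    # recovers bits_a: [1, -1, 1]
print(despread(composite, user_b))    # recovers bits_b: [-1, -1, 1]
```

Because the codes are orthogonal, correlating the composite signal with one user's code cancels the other user's contribution, which is what allows many simultaneous transmitters on one channel.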
CDMA2000 has a relatively long technical history, and remains compatible with the older CDMA telephony methods (such as cdmaOne) first developed by Qualcomm, a commercial company, and holder of several key international patents on the technology.
The CDMA2000 standards CDMA2000 1xRTT, CDMA2000 EV-DO, and CDMA2000 EV-DV are approved radio interfaces for the ITU's IMT-2000 standard and a direct successor to 2G CDMA, IS-95 (cdmaOne). CDMA2000 is standardized by 3GPP2.
CDMA2000 is a registered trademark of the Telecommunications Industry Association (TIA-USA) in the United States, not a generic term like CDMA. (This is similar to how TIA has branded their 2G CDMA standard, IS-95, as cdmaOne.)
CDMA2000 is an incompatible competitor of the other major 3G standard UMTS. It is defined to operate at 450 MHz, 700 MHz, 800 MHz, 900 MHz, 1700 MHz, 1800 MHz, 1900 MHz, and 2100 MHz.
CDMA2000 1xRTT, the core CDMA2000 wireless air interface standard, is also known as 1x, 1xRTT, and IS-2000. The designation "1x", meaning "1 times Radio Transmission Technology", indicates the same RF bandwidth as IS-95: a duplex pair of 1.25 MHz radio channels. This contrasts with 3xRTT, which uses channels 3 times as wide (3.75 MHz). 1xRTT almost doubles the capacity of IS-95 by adding 64 more traffic channels to the forward link, orthogonal to (in quadrature with) the original set of 64. Although capable of higher data rates, most deployments are limited to a peak of 144 kbit/s. IMT-2000 also made changes to the data link layer for the greater use of data services, including medium and link access control protocols and QoS. The IS-95 data link layer only provided "best effort delivery" for data and a circuit-switched channel for voice (i.e., a voice frame once every 20 ms).
1xRTT officially qualifies as 3G technology, but it is considered by some to be a 2.5G (or sometimes 2.75G) technology. This allows it to be deployed in 2G spectrum in some countries that limit 3G systems to certain bands.
CDMA2000 3x (also known as EV-DO Rev. B) is a multi-carrier evolution of the Rev. A specification. It maintains the capabilities of EVDO Rev. A and provides the following enhancements:
* Higher rates per carrier (up to 4.9 Mbit/s on the downlink per carrier). Typical deployments are expected to include 3 carriers for a peak rate of 14.7 Mbit/s
* Higher rates achieved by bundling multiple channels together, which enhances the user experience and enables new services such as high-definition video streaming.
* Uses statistical multiplexing across channels to further reduce latency, enhancing the experience for latency-sensitive services such as gaming, video telephony, remote console sessions and web browsing.
* Increased talk-time and standby time
* Hybrid frequency re-use which reduces the interference from the adjacent sectors and improves the rates that can be offered, especially to users at the edge of the cell.
* Efficient support for services that have asymmetric download and upload requirements (i.e. different data rates required in each direction) such as file transfers, web browsing, and broadband multimedia content delivery.
The cdma2000 specification was developed by the Third Generation Partnership Project 2 (3GPP2), a partnership consisting of five telecommunications standards bodies: ARIB and TTC in Japan, CWTS in China, TTA in Korea and TIA in North America. Cdma2000 has already been implemented in several networks as an evolutionary step from cdmaOne, as cdma2000 provides full backward compatibility with IS-95B. Cdma2000 is not constrained to the IMT-2000 band; operators can also overlay a cdma2000 1x system, which supports 144 kbps now and data rates up to 307 kbps in the future, on top of their existing cdmaOne network.
The evolution of cdma2000 1x is labeled cdma2000 1xEV. 1xEV will be implemented in steps: 1xEV-DO and 1xEV-DV. 1xEV-DO stands for "1x Evolution Data Only". 1xEV-DV stands for "1x Evolution Data and Voice". Both 1xEV cdma2000 evolution steps will use a standard 1.25 MHz carrier. 1xEV-DO will probably be available for cdma2000 operators during 2002, and 1xEV-DV solutions will be available approximately late 2003 or early 2004.
Key features of CDMA2000 are:
* Leading performance: CDMA2000's performance in terms of data speeds, voice capacity and latency continues to outperform comparable technologies in commercial deployments
* Efficient use of spectrum: CDMA2000 technologies offer the highest voice capacity and data throughput using the least amount of spectrum, lowering the cost of delivery for operators and delivering superior customer experience for the end users
* Support for advanced mobile services: CDMA2000 1xEV-DO enables the delivery of a broad range of advanced services, such as high-performance VoIP, push-to-talk, video telephony, multimedia messaging, multicasting and multi-playing online gaming with richly rendered 3D graphics
* All-IP – CDMA2000 technologies are compatible with IP and ready to support network convergence. Today, CDMA2000 operators that have deployed IP-based services enjoy more flexibility and higher bandwidth efficiencies, which translate into greater control and significant cost savings
* Devices selection: CDMA2000 offers the broadest selection of devices and has a significant cost advantage compared to other 3G technologies to meet the diverse market needs around the world
* Seamless evolution path : CDMA2000 has a solid and long-term evolution path which is built on the principle of backward and forward compatibility, in-band migration, and support of hybrid network configurations
* Flexibility: CDMA2000 systems have been designed for urban as well as remote rural areas for fixed wireless, wireless local loop (WLL), limited mobility and full mobility applications in multiple spectrum bands, including 450 MHz, 800 MHz, 1700 MHz, 1900 MHz and 2100 MHz
CDMA2000 Advantages
* Superior Voice Clarity
* High-Speed Broadband Data Connectivity
* Low End-to-End Latency
* Increased Voice and Data Throughput Capacity
* Time-to-Market Performance Advantage
* Long-Term, Robust and Evolutionary Migration Path with Forward and Backward Compatibility
* Differentiated Value-Added Services such as VoIP, PTT, Multicasting, Position Location, etc.
* Flexible Network Architecture with connectivity to ANSI-41, GSM-MAP and IP-based Networks and flexible Backhaul Connectivity
* Application, User and Flow-based Quality of Service (QoS)
* Flexible Spectrum Allocations with Excellent Propagation Characteristics
* Robust Link Budget for Extended Coverage and Increased Data Throughputs at the Cell Edge
* Multi-mode, Multi-band, Global Roaming
* Improved Security and Privacy
* Lower Total Cost of Ownership (TCO)
Labels: CDMA2000, Computer Science, Computer's Notes, Seminar Topics, Seminars
On-Board Diagnostics
On-Board Diagnostics, or OBD, in an automotive context, is a generic term referring to a vehicle's self-diagnostic and reporting capability. OBD systems give the vehicle owner or a repair technician access to state of health information for various vehicle sub-systems. The amount of diagnostic information available via OBD has varied widely since the introduction in the early 1980s of on-board vehicle computers, which made OBD possible. Early instances of OBD would simply illuminate a malfunction indicator light, or MIL, if a problem was detected—but would not provide any information as to the nature of the problem. Modern OBD implementations use a standardized fast digital communications port to provide realtime data in addition to a standardized series of diagnostic trouble codes, or DTCs, which allow one to rapidly identify and remedy malfunctions within the vehicle.
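The DTCs mentioned above have a compact on-the-wire form: a scan tool receives each code as two bytes, which map to the familiar five-character label (e.g. P0123) under the SAE J2012 / ISO 15031-6 encoding. The following sketch decodes those two bytes; the example codes are for illustration only.

```python
def decode_dtc(high, low):
    """Decode the two raw DTC bytes returned by an OBD-II scan tool."""
    systems = "PCBU"                 # Powertrain, Chassis, Body, Network (U)
    letter = systems[(high >> 6) & 0x3]   # top two bits select the system
    d1 = (high >> 4) & 0x3           # first digit: 0-3 (generic vs. OEM)
    d2 = high & 0xF                  # remaining three characters are hex nibbles
    d3 = (low >> 4) & 0xF
    d4 = low & 0xF
    return "%s%d%X%X%X" % (letter, d1, d2, d3, d4)

print(decode_dtc(0x01, 0x23))   # P0123
print(decode_dtc(0xD0, 0x16))   # U1016
```

The top two bits of the first byte select the subsystem letter, which is why the same numeric payload can appear under P, C, B, or U codes.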
* 1975: On-board computers begin appearing on consumer vehicles such as the Datsun 280Z, largely motivated by the need for real-time tuning of fuel injection systems. Simple OBD implementations appear, though there is no standardization in what is monitored or how it is reported.
* 1982: General Motors implements a proprietary interface and protocol. The initial ALDL protocol communicates at 160 baud with Pulse-width modulation (PWM) signaling and monitors very few vehicle systems.
* 1986: An upgraded version of the ALDL protocol appears which communicates at 8192 baud with half-duplex UART signaling. This protocol is defined in GM XDE-5024B.
* ~1987: The California Air Resources Board (CARB) requires that all new vehicles sold in California starting in manufacturer's year 1988 (MY1988) have some basic OBD capability. The requirements they specify are generally referred to as the "OBD-I" standard, though this name is not applied until the introduction of OBD-II. The data link connector and its position are not standardized, nor is the data protocol.
* 1988: The Society of Automotive Engineers (SAE) recommends a standardized diagnostic connector and set of diagnostic test signals.
* ~1994: Motivated by a desire for a state-wide emissions testing program, the CARB issues the OBD-II specification and mandates that it be adopted for all cars sold in California starting in model year 1996 (see CCR Title 13 Section 1968.1 and 40 CFR Part 86 Section 86.094). The DTCs and connector suggested by the SAE are incorporated into this specification.
* 1996: The OBD-II specification is made mandatory for all cars sold in the United States.
* 2001: The European Union makes EOBD mandatory for all petrol vehicles sold in the European Union, starting in MY2001 (see European emission standards, Directive 98/69/EC[2]).
* 2008: All cars sold in the United States are required to use the ISO 15765-4[3] signaling standard (a variant of the Controller Area Network (CAN) bus).
OBD scan tools can be categorized in several ways: whether they are OEM or aftermarket tools, whether they require a computer to operate (stand-alone tool vs. PC-based software), and the intended market (professional or hobby/consumer use).
The advantages of PC-based scan tools are:
* Low cost compared to stand-alone scan tools with similar functionality (if you don't count the cost of the laptop PC).
* Virtually unlimited storage capacity for data logging and other functions.
* Higher resolution screen than handheld tools.
* Availability of multiple software programs.
* Some are capable of reprogramming.
The advantages of stand-alone tools:
* Wide selection beginning with simple code read/erase tools starting at as low as $79 retail.
* Simplified operation that requires no computer skills and avoids PC compatibility issues.
* Rugged designs, intended for use in and around cars (i.e. no lugging a laptop in and around a car).
See List of Standalone OBD-II Scan Tools, List of OBD-II Cables & Scanning Software, and List of OBD-II Gauges & Performance Monitors.
The Real-time Transport (RTP) Protocol
In RTP, the data transport is augmented by a control protocol (RTCP) that allows monitoring of data delivery in a manner scalable to large multicast networks, and provides minimal control and identification functionality. In short, the Real-time Transport Protocol provides end-to-end network transport functions suited to applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. RTP and RTCP are designed to be independent of the underlying transport and network layers; they do not address resource reservation and do not guarantee quality of service for real-time services. The protocol also supports the use of RTP-level translators and mixers.
The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over the Internet. It was developed by the Audio-Video Transport Working Group of the IETF and first published in 1996 as RFC 1889, and superseded by RFC 3550 in 2003.
RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications and web-based push to talk features. For these it carries media streams controlled by H.323, MGCP, Megaco, SCCP, or Session Initiation Protocol (SIP) signaling protocols, making it one of the technical foundations of the Voice over IP industry.
RTP is usually used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video) or out-of-band signaling (DTMF), RTCP is used to monitor transmission statistics and quality of service (QoS) information. When both protocols are used in conjunction, RTP is usually originated and received on even port numbers, whereas RTCP uses the next higher odd port number.
RTP was developed by the Audio/Video Transport working group of the IETF standards organization, and it has since been adopted by several other standards organizations, including the ITU as part of its H.323 standard.[1] The RTP standard defines a pair of protocols, RTP and the Real-time Transport Control Protocol (RTCP). The former is used for the exchange of multimedia data, while the latter is used to periodically send control information and Quality of Service parameters.
The RTP protocol is designed for end-to-end, real-time transport of audio or video data flows. It allows the recipient to compensate for the jitter and breaks in sequence that may occur during transfer over an IP network. RTP supports data transfer to multiple destinations by using multicast. RTP provides no guarantee of delivery, but the sequencing of the data makes it possible to detect missing packets. RTP is regarded as the primary standard for audio/video transport in IP networks and is used with an associated profile and payload format.
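The loss-detection idea above can be sketched in a few lines: a receiver compares consecutive 16-bit sequence numbers, remembering that they wrap around at 65535. This is a minimal illustration; RFC 3550 (Appendix A) describes a more complete algorithm that also handles reordering and restarts.

```python
def lost_between(prev_seq, curr_seq):
    """Count packets missing between two consecutively received
    sequence numbers, treating them as 16-bit values that wrap."""
    gap = (curr_seq - prev_seq) & 0xFFFF   # modulo-2**16 forward distance
    return gap - 1 if gap > 0 else 0

print(lost_between(10, 11))       # 0 -> in order, nothing lost
print(lost_between(10, 14))       # 3 -> packets 11, 12, 13 missing
print(lost_between(65534, 2))     # 3 -> 65535, 0, 1 missing across the wrap
```

The masking with 0xFFFF is what makes the arithmetic correct across the wraparound boundary without any special-casing.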
Multimedia applications need timely delivery and can tolerate some packet loss. For example, the loss of a packet in an audio application may result in the loss of a fraction of a second of audio data, which, with suitable error concealment, can be made unnoticeable. Multimedia applications prioritize timeliness over reliability. The Transmission Control Protocol (TCP), although standardized for RTP use (RFC 4571), is not often used with RTP because of the inherent latency introduced by connection establishment and error correction; instead, the majority of RTP implementations are built on the User Datagram Protocol (UDP).[4] Other transport protocols specifically designed for multimedia sessions are SCTP and DCCP, although they are not yet in widespread use.
The design of RTP is based on an architectural principle known as Application Level Framing (ALF), a way to design protocols for emerging multimedia applications. ALF rests on the belief that applications understand their own needs best, so intelligence should be placed in the application while the network layer is kept simple. RTP profiles and payload formats (explained below) are used to describe application-specific details.
Protocol components
There are two parts to RTP: a data transfer protocol and an associated control protocol. The RTP data transfer protocol manages the delivery of real-time data (audio and video) between end systems. It defines the media payload, incorporating sequence numbers for loss detection, timestamps to enable timing recovery, payload type and source identifiers, and a marker for significant events. Rules for timestamp and sequence number usage depend on the profile and payload format in use.
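As a concrete illustration of the fields just listed, the fixed 12-byte RTP header can be packed as follows. This is a minimal sketch based on RFC 3550, ignoring CSRC lists and header extensions; the function name and example values are illustrative.

```python
import struct

def build_rtp_header(payload_type, seq, timestamp, ssrc, marker=False):
    """Pack the fixed 12-byte RTP header (RFC 3550), without CSRCs or extensions."""
    byte0 = 2 << 6                                    # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF,                  # sequence number (loss detection)
                       timestamp & 0xFFFFFFFF,        # media timestamp (timing recovery)
                       ssrc & 0xFFFFFFFF)             # synchronization source identifier

# Example: one packet of PCMU audio (payload type 0), 20 ms = 160 samples at 8 kHz.
hdr = build_rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x12345678)
```

A receiver reverses this packing and uses the sequence number to detect missing packets and the timestamp to reconstruct playback timing.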
The RTP Control Protocol (RTCP) provides reception quality feedback, participant identification and synchronization between media streams. RTCP runs alongside RTP, providing periodic reporting of this information.[7] While RTP data packets are sent every few milliseconds, the control protocol operates on a scale of seconds. The information in RTCP may be used for synchronization (e.g., lip sync).[7] RTCP traffic is small compared to RTP traffic, typically around 5%.
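That 5% figure translates directly into a reporting interval: each participant's share of the RTCP budget determines how often it may send a report. The sketch below simplifies the RFC 3550 calculation (it omits the randomization factor and the sender/receiver bandwidth split):

```python
def rtcp_report_interval(session_bw_bps, avg_rtcp_packet_bytes, members):
    """Seconds between RTCP reports: RTCP gets 5% of the session bandwidth,
    shared among all members; RFC 3550 imposes a 5-second minimum interval."""
    rtcp_bw_bytes_per_sec = 0.05 * session_bw_bps / 8.0
    interval = members * avg_rtcp_packet_bytes / rtcp_bw_bytes_per_sec
    return max(interval, 5.0)
```

For a 64 kbit/s audio session with 100-byte compound reports, two participants each report every 5 seconds (the minimum), while forty participants back off to every 10 seconds, which is how RTCP scales to large multicast groups.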
Sessions
To set up an RTP session, an application defines a pair of destination transport addresses (an IP address with a pair of ports, one for RTP and one for RTCP). In a multimedia session, each media stream is carried in a separate RTP session, with its own RTCP packets reporting the reception quality for that session. For example, audio and video would travel in separate RTP sessions, enabling a receiver to select whether or not to receive a particular stream. The RTP port should be even and the RTCP port should be the next higher (odd) port number if possible. Deviations from this rule can be signaled via session descriptions in other protocols (e.g., SDP). RTP and RTCP typically use unprivileged UDP ports (1024 to 65535), but may use other transport protocols (most notably SCTP and DCCP) as well, as the protocol design is transport independent.
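The even/odd pairing convention can be captured in a small helper; this is a sketch of the convention only, not any particular library's API:

```python
def rtp_rtcp_ports(base_port):
    """Return the conventional (even RTP, next-higher odd RTCP) port pair."""
    rtp = base_port if base_port % 2 == 0 else base_port + 1
    return rtp, rtp + 1
```

For example, a request for port 5004 yields the pair (5004, 5005), while an odd request such as 5005 is rounded up to the next even RTP port, giving (5006, 5007).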
Voice over Internet Protocol (VoIP) systems most often use the Session Description Protocol (SDP) to define RTP sessions and negotiate the involved parameters with other peers. The Real Time Streaming Protocol (RTSP) may also be used to set up and control media sessions on remote media servers.
In RTP, data transport is augmented by a control protocol (RTCP) to allow monitoring of data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality. In short, the Real-time Transport Protocol provides end-to-end network transport functions appropriate for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. RTP and RTCP are designed to be independent of the underlying transport and network layers; they do not address resource reservation and do not guarantee quality of service for real-time services. The protocol supports the use of RTP-level translators and mixers.
The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over the Internet. It was developed by the Audio-Video Transport Working Group of the IETF, first published in 1996 as RFC 1889, and superseded by RFC 3550 in 2003.
RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications and web-based push to talk features. For these it carries media streams controlled by H.323, MGCP, Megaco, SCCP, or Session Initiation Protocol (SIP) signaling protocols, making it one of the technical foundations of the Voice over IP industry.
RTP is usually used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video) or out-of-band signaling (DTMF), RTCP is used to monitor transmission statistics and quality of service (QoS) information. When both protocols are used in conjunction, RTP is usually originated and received on even port numbers, whereas RTCP uses the next higher odd port number.
The H.323
The International Telecommunication Union (ITU), which sets standards for multimedia communications over Local Area Networks (LANs) that do not provide a guaranteed Quality of Service (QoS), has made H.323 an umbrella recommendation that provides a foundation for audio, video, and data communications across IP-based networks, including the Internet. H.323 includes parts of H.225.0 (RAS and Q.931), H.245, RTP/RTCP, and audio/video codecs, such as the audio codecs (G.711, G.723.1, G.728, etc.) and video codecs (H.261, H.263) that compress and decompress media streams. Media streams are transported over RTP/RTCP: RTP carries the actual media, while RTCP carries status and control information. The signalling is transported reliably over TCP. The H.323 standards are important building blocks for a broad new range of collaborative, LAN-based multimedia applications, as these networks dominate today's corporate desktops and include packet-switched TCP/IP and IPX over Ethernet, Fast Ethernet and Token Ring network technologies.
H.323 is an umbrella Recommendation from the ITU Telecommunication Standardization Sector (ITU-T) that defines the protocols to provide audio-visual communication sessions on any packet network. The H.323 standard addresses call signaling and control, multimedia transport and control, and bandwidth control for point-to-point and multi-point conferences.
It is widely implemented by voice and videoconferencing equipment manufacturers, is used within various Internet real-time applications such as GnuGK, NetMeeting and X-Meeting, and is widely deployed worldwide by service providers and enterprises for both voice and video services over Internet Protocol (IP) networks.
It is a part of the ITU-T H.32x series of protocols, which also address multimedia communications over Integrated Services Digital Network (ISDN), Public Switched Telephone Network (PSTN) or Signaling System 7 (SS7), and 3G mobile networks.
H.323 Call Signaling is based on the ITU-T Recommendation Q.931 protocol and is suited for transmitting calls across networks using a mixture of IP, PSTN, ISDN, and QSIG over ISDN. A call model, similar to the ISDN call model, eases the introduction of IP telephony into existing networks of ISDN-based PBX systems, including transitions to IP-based Private Branch eXchanges (PBXs).
Within the context of H.323, an IP-based PBX might be an H.323 Gatekeeper or other call control element that provides service to telephones or videophones. Such a device may provide or facilitate both basic services and supplementary services, such as call transfer, park, pick-up, and hold.
While H.323 excels at providing basic telephony functionality and interoperability, its strength lies in multimedia communication functionality designed specifically for IP networks.
The first version of H.323 was published by the ITU in November 1996 with an emphasis on enabling videoconferencing capabilities over a Local Area Network (LAN), but it was quickly adopted by the industry as a means of transmitting voice communication over a variety of IP networks, including WANs and the Internet (see VoIP).
Over the years, H.323 has been revised and re-published with enhancements necessary to better-enable both voice and video functionality over Packet-switched networks, with each version being backward-compatible with the previous version. Recognizing that H.323 was being used for communication, not only on LANs, but over WANs and within large carrier networks, the title of H.323 was changed when published in 1998. The title, which has since remained unchanged, is "Packet-Based Multimedia Communications Systems." The current version of H.323, commonly referred to as "H.323v6", was published in 2006.
One strength of H.323 was the relatively early availability of a set of standards, not only defining the basic call model, but also the supplementary services needed to address business communication expectations.
H.323 was the first VoIP standard to adopt the Internet Engineering Task Force (IETF) standard Real-time Transport Protocol (RTP) to transport audio and video over IP networks.
H.323 is a system specification that describes the use of several ITU-T and IETF protocols. The H.323 standard consists of the following components and protocols:
* Call Signaling : H.225
* Media Control : H.245 control protocol for multimedia communication, which describes the messages and procedures used for capability exchange, opening and closing logical channels for audio, video and data, control and indications.
* Audio Codecs : G.711, G.722, G.723, G.728, G.729
* Video Codecs : H.261, H.263
* Data Sharing : T.120
* Media Transport : RTP which is used for sending or receiving multimedia and RTCP for quality feedback.
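The list above is essentially a lookup from functional role to protocol. As a purely illustrative sketch (the role names below are informal labels, not taken from the Recommendation):

```python
# Illustrative mapping of H.323 functional roles to the protocols that fill them.
H323_STACK = {
    "call_signaling":  ["H.225.0"],
    "media_control":   ["H.245"],
    "audio_codecs":    ["G.711", "G.722", "G.723", "G.728", "G.729"],
    "video_codecs":    ["H.261", "H.263"],
    "data_sharing":    ["T.120"],
    "media_transport": ["RTP", "RTCP"],
}

def protocols_for(role):
    """Return the protocols an H.323 system uses for a given functional role."""
    return H323_STACK.get(role, [])
```

A stack like this shows why H.323 is called an "umbrella" specification: each role is delegated to an existing ITU-T or IETF protocol rather than defined from scratch.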
Many H.323 systems also implement other protocols that are defined in various ITU-T Recommendations to provide supplementary services support or deliver other functionality to the user. Some of those Recommendations are:
* H.235 series describes security within H.323, including security for both signaling and media.
* H.239 describes dual stream use in videoconferencing, usually one for live video, the other for still images.
* H.450 series describes various supplementary services.
* H.460 series defines optional extensions that might be implemented by an endpoint or a Gatekeeper, including ITU-T Recommendations H.460.17, H.460.18, and H.460.19 for Network address translation (NAT) / Firewall (FW) traversal.
In addition to those ITU-T Recommendations, H.323 utilizes various IETF Request for Comments (RFCs) for media transport and media packetization, including the Real-time Transport Protocol (RTP).
Labels: Computer Science, Computer's Notes, Seminar Topics, Seminars, The H.323
Registration, Admission and Status (RAS)
The RAS channel is an unreliable channel used to carry the messages of the gatekeeper discovery and endpoint registration processes, which associate an endpoint's alias address with its call signalling channel transport address. Because RAS messages are transmitted on an unreliable channel, H.225.0 recommends time-outs and retry counts for the various messages.
If an endpoint or gatekeeper cannot respond to a request within the specified timeout, it may use the Request in Progress (RIP) message to indicate that it is still processing the request. An endpoint or gatekeeper receiving a RIP resets its timeout timer and retry counter.
Registration, Admission and Status (RAS), defined in the ITU-T H.225.0/RAS, is the protocol between endpoints (terminals and gateways) and gatekeepers. The RAS is used to perform registration, admission control, bandwidth changes, status, and disengage procedures between endpoints and gatekeepers. An RAS channel is used to exchange RAS messages. This signaling channel is opened between an endpoint and a gatekeeper prior to the establishment of any other channels.
Registration, admission, and status (RAS) is a component of a network protocol that involves the addition of (or refusal to add) new authorized users, the admission of (or refusal to admit) authorized users based on available bandwidth, and the tracking of the status of all users. Formally, RAS is part of the H.225 protocol for H.323 communications networks, designed to support multimedia bandwidths. RAS is an important signaling component in networks using voice over IP (VoIP).
RAS messages are exchanged on a dedicated channel called the RAS channel. The RAS channel is the first to be opened, and precedes any communications between endpoints and gatekeepers in the network. Signals in RAS can be categorized as (1) gatekeeper discovery requests and responses; (2) admission, registration, and unregistration messages and responses; (3) location requests and responses; (4) status requests and responses; and (5) bandwidth-control requests and responses.
Media Gateway Control Protocol (MGCP)
This protocol controls telephony gateways from external call control elements called media gateway controllers, or call agents. A telephony gateway converts between the audio signals carried on telephone circuits and the data packets carried over the Internet or other packet networks. MGCP's call control architecture assumes that call control intelligence lies outside the gateways and is handled by external call control elements; these Call Agents synchronize with each other to send coherent commands to the gateways under their control.
Thus MGCP is essentially a master/slave protocol, in which the gateways are expected to execute commands sent by the Call Agents. MGCP implements the media gateway control interface as a set of transactions, each composed of a command and a mandatory response.
MGCP is an implementation of the Media Gateway Control Protocol architecture for controlling Media Gateways on Internet Protocol (IP) networks and the public switched telephone network (PSTN). The general base architecture and programming interface is described in RFC 2805 and the current specific MGCP definition is RFC 3435 (obsoleted RFC 2705). It is a successor to the Simple Gateway Control Protocol (SGCP).
MGCP is a signaling and call control protocol used within Voice over IP (VoIP) systems that typically interoperate with the public switched telephone network (PSTN). As such it implements a PSTN-over-IP model with the power of the network residing in a call control center (softswitch, similar to the central office of the PSTN) and the endpoints being "low-intelligence" devices, mostly simply executing control commands. The protocol represents a decomposition of other VoIP models, such as H.323, in which the media gateways (e.g., H.323's gatekeeper) have higher levels of signalling intelligence.
MGCP uses the Session Description Protocol (SDP) for specifying and negotiating the media streams to be transmitted in a call session and the Real-time Transport Protocol (RTP) for framing of the media streams.
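For illustration, a minimal SDP body of the kind a Call Agent and gateway might exchange for a single G.711 audio stream could be assembled as below; the address and port are placeholders and the function is a sketch, not a real library API:

```python
def minimal_sdp(session_ip, rtp_port, payload_type=0, codec="PCMU/8000"):
    """Build a minimal SDP body offering one RTP audio stream (illustrative only)."""
    return "\r\n".join([
        "v=0",                                        # protocol version
        f"o=- 0 0 IN IP4 {session_ip}",               # origin
        "s=-",                                        # session name (unused)
        f"c=IN IP4 {session_ip}",                     # connection address
        "t=0 0",                                      # timing (unbounded)
        f"m=audio {rtp_port} RTP/AVP {payload_type}", # media line: RTP audio
        f"a=rtpmap:{payload_type} {codec}",           # payload-type-to-codec mapping
    ]) + "\r\n"

sdp = minimal_sdp("192.0.2.10", 5004)
```

The m= line names the RTP port and payload type, which is how SDP ties the negotiated session back to the RTP framing MGCP relies on.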
Another implementation of the Media Gateway Control Protocol architecture exists in the similarly named Megaco protocol, a collaboration of the Internet Engineering Task Force (RFC 3525) and International Telecommunication Union (Recommendation H.248.1). Both protocols follow the guidelines of the API Media Gateway Control Protocol Architecture and Requirements in RFC 2805. However, the protocols are incompatible due to differences in protocol syntax and underlying connection model.
The distributed system is composed of a Call Agent (or Media Gateway Controller), at least one Media Gateway (MG) that performs the conversion of media signals between circuits and packets, and at least one Signaling gateway (SG) when connected to the PSTN.
The Call Agent uses MGCP to tell the Media Gateway:
* what events should be reported to the Call Agent
* how endpoints should be connected together
* what signals should be played on endpoints.
MGCP also allows the Call Agent to audit the current state of endpoints on a Media Gateway.
The Media Gateway uses MGCP to report events (such as off-hook, or dialed digits) to the Call Agent.
(While any Signaling Gateway is usually on the same physical switch as a Media Gateway, this needn't be so. The Call Agent does not use MGCP to control the Signaling Gateway; rather, SIGTRAN protocols are used to backhaul signaling between the Signaling Gateway and Call Agent).
Every issued MGCP command has a transaction ID and receives a response.
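Since MGCP is a text protocol, this transaction structure is easy to sketch. The endpoint and notified-entity names below are hypothetical, and the helper functions are illustrative rather than any real library's API:

```python
def build_mgcp_command(verb, transaction_id, endpoint, params=None):
    """Compose a textual MGCP command: verb, transaction ID and endpoint on the
    first line (per the RFC 3435 message format), parameter lines after it."""
    lines = [f"{verb} {transaction_id} {endpoint} MGCP 1.0"]
    for name, value in (params or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n"

def parse_mgcp_response(text):
    """Extract (response code, transaction ID) from a response's first line."""
    code, txid = text.split()[:2]
    return int(code), int(txid)

# Hypothetical NotificationRequest asking a gateway endpoint to report events
# to a Call Agent (the N: parameter names the notified entity).
cmd = build_mgcp_command("RQNT", 1201, "aaln/1@gw.example.net",
                         {"N": "ca@ca.example.net:5678"})
```

A Call Agent matching the response "200 1201 OK" against pending transaction 1201 completes the command/response pair described above.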
Typically, a Media Gateway is configured with a list of Call Agents from which it may accept programming (where that list normally comprises only one or two Call Agents). In principle, event notifications may be sent to different Call Agents for each endpoint on the gateway (as programmed by the Call Agents, by setting the NotifiedEntity parameter). In practice, however, it is usually desirable that at any given moment all endpoints on a gateway should be controlled by the same Call Agent; other Call Agents are available only to provide redundancy in the event that the primary Call Agent fails or loses contact with the Media Gateway. In the event of such a failure it is the backup Call Agent's responsibility to reprogram the MG so that the gateway comes under the control of the backup Call Agent. Care is needed in such cases; two Call Agents may know that they have lost contact with one another, but this does not guarantee that they are not both attempting to control the same gateway. The ability to audit the gateway to determine which Call Agent is currently in control can be used to resolve such conflicts.
MGCP assumes that the multiple Call Agents will maintain knowledge of device state among themselves (presumably with an unspecified protocol) or rebuild it if necessary (in the face of catastrophic failure). Its failover features take into account both planned and unplanned outages.
Media Gateway Control Protocol (MGCP) and its successor, known as Megaco or H.248, are standard protocols for handling the signaling and session management needed during a multimedia conference. The protocols define a means of communication between a media gateway, which converts data from the format required for a circuit-switched network to that required for a packet-switched network, and the media gateway controller. MGCP can be used to set up, maintain, and terminate calls between multiple endpoints. Megaco and H.248 refer to an enhanced successor of MGCP.
The standard is endorsed by the Internet Engineering Task Force (IETF) as Megaco (RFC 3015) and by the Telecommunication Standardization Sector of the International Telecommunications Union (ITU-T) as Recommendation H.248. H.323, an earlier ITU-T protocol, was used for local area networks (LANs), but was not capable of scaling to larger public networks. The MGCP and Megaco/H.248 model removes the signaling control from the gateway and puts it in a media gateway controller, which can then control multiple gateways.
MGCP was itself created from two other protocols, Internet Protocol Device Control (IPDC) and Simple Gateway Control Protocol (SGCP). Defined in RFC 2705, MGCP is an application-layer protocol that uses a master/slave model, in which the media gateway controller is the master. MGCP makes it possible for the controller to determine the location of each communication endpoint and its media capabilities, so that a level of service can be chosen that is possible for all participants. The later Megaco/H.248 successor to MGCP supports more ports per gateway, as well as multiple gateways, and adds support for time-division multiplexing (TDM) and asynchronous transfer mode (ATM) communication.
This protocol controls telephony gateways from external call control elements called media gateway controllers or call agents. Converting among the audio signals carried on telephone circuits and data packets carried over the Internet or over other packet networks is done by telephony gateway. According to the call control architecture assumption of the MGCP the call control intelligence is outside the gateways and handled by external call control elements and these call control elements, or Call Agents, will synchronize with each other to send coherent commands to the gateways under their control.[para] Thus essentially MGCP is a master/slave protocol, where the gateways are expected to execute commands sent by the Call Agents. The MGCP implements the media gateway control interface as a set of transactions. The transactions are composed of a command and a mandatory response.
MGCP is an implementation of the Media Gateway Control Protocol architecture for controlling Media Gateways on Internet Protocol (IP) networks and the public switched telephone network (PSTN). The general base architecture and programming interface is described in RFC 2805 and the current specific MGCP definition is RFC 3435 (obsoleted RFC 2705). It is a successor to the Simple Gateway Control Protocol (SGCP).
MGCP is a signaling and call control protocol used within Voice over IP (VoIP) systems that typically interoperate with the public switched telephone network (PSTN). As such it implements a PSTN-over-IP model with the power of the network residing in a call control center (softswitch, similar to the central office of the PSTN) and the endpoints being "low-intelligence" devices, mostly simply executing control commands. The protocol represents a decomposition of other VoIP models, such as H.323, in which the media gateways (e.g., H.323's gatekeeper) have higher levels of signalling intelligence.
MGCP uses the Session Description Protocol (SDP) for specifying and negotiating the media streams to be transmitted in a call session and the Real-time Transport Protocol (RTP) for framing of the media streams.
Another implementation of the Media Gateway Control Protocol architecture exists in the similarly named Megaco protocol, a collaboration of the Internet Engineering Task Force (RFC 3525) and International Telecommunication Union (Recommendation H.248.1). Both protocols follow the guidelines of the Media Gateway Control Protocol Architecture and Requirements in RFC 2805. However, the protocols are incompatible due to differences in protocol syntax and underlying connection model.
The distributed system is composed of a Call Agent (or Media Gateway Controller), at least one Media Gateway (MG) that performs the conversion of media signals between circuits and packets, and at least one Signaling gateway (SG) when connected to the PSTN.
The Call Agent uses MGCP to tell the Media Gateway:
* what events should be reported to the Call Agent
* how endpoints should be connected together
* what signals should be played on endpoints.
MGCP also allows the Call Agent to audit the current state of endpoints on a Media Gateway.
The Media Gateway uses MGCP to report events (such as off-hook, or dialed digits) to the Call Agent.
(While any Signaling Gateway is usually on the same physical switch as a Media Gateway, this needn't be so. The Call Agent does not use MGCP to control the Signaling Gateway; rather, SIGTRAN protocols are used to backhaul signaling between the Signaling Gateway and Call Agent).
Every issued MGCP command has a transaction ID and receives a response.
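The transaction model above can be sketched in a few lines. This is a minimal illustration, not a full implementation: the command line follows the "verb, transaction ID, endpoint, MGCP 1.0" text format, while the endpoint name and the RQNT parameters ("N", "R") are illustrative values only.

```python
def build_command(verb, txn_id, endpoint, params=None):
    """Assemble an MGCP command as a text message (sketch)."""
    lines = [f"{verb} {txn_id} {endpoint} MGCP 1.0"]
    for name, value in (params or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n"

def parse_response(text):
    """Extract (return code, transaction ID, comment) from a response line."""
    code, txn_id, comment = text.splitlines()[0].split(" ", 2)
    return int(code), int(txn_id), comment

# A Call Agent asks a gateway endpoint to watch for an off-hook event:
cmd = build_command("RQNT", 1201, "aaln/1@rgw.example.net",
                    {"N": "ca@ca1.example.net:5678", "R": "hd"})
# The gateway's response echoes the transaction ID, matching it to the command:
code, txn, comment = parse_response("200 1201 OK\r\n")
```

The transaction ID in the response is what lets the Call Agent pair each response with the command that triggered it.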
Typically, a Media Gateway is configured with a list of Call Agents from which it may accept programming (where that list normally comprises only one or two Call Agents). In principle, event notifications may be sent to different Call Agents for each endpoint on the gateway (as programmed by the Call Agents, by setting the NotifiedEntity parameter). In practice, however, it is usually desirable that at any given moment all endpoints on a gateway should be controlled by the same Call Agent; other Call Agents are available only to provide redundancy in the event that the primary Call Agent fails, or loses contact with the Media Gateway. In the event of such a failure it is the backup Call Agent's responsibility to reprogram the MG so that the gateway comes under the control of the backup Call Agent. Care is needed in such cases; two Call Agents may know that they have lost contact with one another, but this does not guarantee that they are not both attempting to control the same gateway. The ability to audit the gateway to determine which Call Agent is currently in control can be used to resolve such conflicts.
MGCP assumes that the multiple Call Agents will maintain knowledge of device state among themselves (presumably with an unspecified protocol) or rebuild it if necessary (in the face of catastrophic failure). Its failover features take into account both planned and unplanned outages.
Media Gateway Control Protocol (MGCP), together with its successor Megaco/H.248, is a standard protocol for handling the signaling and session management needed during a multimedia conference. The protocol defines a means of communication between a media gateway, which converts data from the format required for a circuit-switched network to that required for a packet-switched network, and the media gateway controller. MGCP can be used to set up, maintain, and terminate calls between multiple endpoints. Megaco and H.248 refer to an enhanced version of MGCP.
The standard is endorsed by the Internet Engineering Task Force (IETF) as Megaco (RFC 3015) and by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) as Recommendation H.248. H.323, an earlier ITU-T protocol, was used for local area networks (LANs), but was not capable of scaling to larger public networks. The MGCP and Megaco/H.248 model removes the signaling control from the gateway and puts it in a media gateway controller, which can then control multiple gateways.
RVP Control Protocol (RVPCP)
The control protocol was initially developed for point-to-point data applications, such as the control messages that configure and maintain the data link between the client and the server. During an RVP/IP session, only one class of RVP/IP control message is exchanged: the RVPCP ADD VOICE packet (operation code 12), which is used to send the server the UDP port the client will use for subsequent voice data packets. This message always carries a single parameter of type RVPCP UDP PORT (type code 9), which always has a length of exactly two and whose value is the two-byte UDP port to which voice data packets should be addressed.
The server responds with a packet containing the code RVPCP ADD VOICE ACK (operation code 13), which contains exactly one parameter, the server's voice UDP port. If RVP/IP is operating in "dynamic voice" mode, this exchange must be repeated whenever the voice channel needs to be reestablished, i.e., whenever the phone goes off-hook. Most of the functionality of this protocol is unnecessary when using TCP/IP.
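The ADD VOICE exchange described above can be sketched as byte packing. Note that the exact on-the-wire header layout is not given here, so the single-byte opcode and type/length fields in this sketch are assumptions for illustration; only the operation codes (12, 13), the parameter type (9), the two-byte length and the big-endian port value come from the description.

```python
import struct

RVPCP_ADD_VOICE = 12      # operation code: client announces its voice port
RVPCP_ADD_VOICE_ACK = 13  # operation code: server replies with its own port
RVPCP_UDP_PORT = 9        # parameter type; value is a two-byte UDP port

def build_voice_packet(udp_port, opcode=RVPCP_ADD_VOICE):
    # Assumed layout: opcode, parameter type, parameter length (always 2),
    # then the UDP port in network byte order.
    return struct.pack("!BBBH", opcode, RVPCP_UDP_PORT, 2, udp_port)

def parse_voice_packet(packet):
    opcode, ptype, plen, port = struct.unpack("!BBBH", packet)
    assert ptype == RVPCP_UDP_PORT and plen == 2
    return opcode, port

pkt = build_voice_packet(16384)      # client tells the server: send voice here
op, port = parse_voice_packet(pkt)
```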
The Session Description Protocol (SDP)
On the Internet Multicast Backbone (Mbone), a session directory tool is used to advertise multimedia conferences and to communicate the conference addresses and conference tool-specific information required for participation; this is done with SDP. SDP describes multimedia sessions for the purpose of session announcement, session invitation, and other forms of multimedia session initiation.
SDP communicates the existence of a session and conveys sufficient information to enable participation in it. Such messages are sent by periodically multicasting an announcement packet to a well-known multicast address and port using SAP (Session Announcement Protocol). The messages are sent as UDP packets with a SAP header and a text payload, which is the SDP session description.
Session Description Protocol (SDP) is a format for describing streaming media initialization parameters in an ASCII string. The IETF published the original specification as a Proposed Standard in April 1998 and subsequently published a revised specification, RFC 4566, in July 2006.
SDP is intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and other forms of multimedia session initiation. SDP does not provide the content of the media form itself but simply provides a negotiation between two end points to allow them to agree on a media type and format. This allows SDP to support upcoming media types and formats, enabling systems based on this technology to be forward compatible.
SDP started off as a component of the Session Announcement Protocol (SAP), but found other uses in conjunction with RTP, RTSP, SIP and just as a standalone format for describing multicast sessions.
There are five terms related to SDP:
1. Conference: A conference is a set of two or more communicating users along with the software they are using.
2. Session: A session consists of the multimedia senders and receivers and the streams of data flowing between them.
3. Session Announcement: A mechanism by which a session description is conveyed to users in a proactive fashion, i.e., without the session description being explicitly requested by the user.
4. Session Advertisement: Same as session announcement.
5. Session Description: A well-defined format for conveying sufficient information to discover and participate in a multimedia session.
The Session Description Protocol (SDP) describes multimedia sessions for the purpose of session announcement, session invitation and other forms of multimedia session initiation.
Session directories assist the advertisement of conference sessions and communicate the relevant conference setup information to prospective participants. SDP is designed to convey such information to recipients. SDP is purely a format for session description - it does not incorporate a transport protocol, and is intended to use different transport protocols as appropriate, including the Session Announcement Protocol (SAP), Session Initiation Protocol (SIP), Real-Time Streaming Protocol (RTSP), electronic mail using the MIME extensions, and the Hypertext Transport Protocol (HTTP).
SDP is intended to be general purpose so that it can be used for a wider range of network environments and applications than just multicast session directories. However, it is not intended to support negotiation of session content or media encodings.
On the Internet Multicast Backbone (Mbone), a session directory tool is used to advertise multimedia conferences and to communicate the conference addresses and conference tool-specific information necessary for participation. SDP performs this function: it communicates the existence of a session and conveys sufficient information to enable participation in the session.
Many of the SDP messages are sent by periodically multicasting an announcement packet to a well-known multicast address and port using SAP (session announcement protocol). These messages are UDP packets with a SAP header and a text payload. The text payload is the SDP session description. Messages can also be sent using email or the WWW (World Wide Web).
The SDP text messages include:
* Session name and purpose
* Time the session is active
* Media comprising the session
* Information to receive the media
Protocol Structure - SDP (Session Description Protocol)
SDP messages are text messages using the ISO 10646 character set in UTF-8 encoding. The SDP session description (optional fields are marked with an *) is:
v= (protocol version)
o= (owner/creator and session identifier)
s= (session name)
i=* (session information)
u=* (URI of description)
e=* (email address)
p=* (phone number)
c=* (connection information - not required if included in all media)
b=* (bandwidth information)
One or more time descriptions (see below)
z=* (time zone adjustments)
k=* (encryption key)
a=* (zero or more session attribute lines)
Zero or more media descriptions (see below)
Time description
t= (time the session is active)
r=* (zero or more repeat times)
Media description
m= (media name and transport address)
i=* (media title)
c=* (connection information - optional if included at session-level)
b=* (bandwidth information)
k=* (encryption key)
a=* (zero or more media attribute lines)
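Putting the mandatory fields above together, a minimal well-formed session description can be assembled in a few lines. This is a sketch; the origin, addresses, timestamps, and media line below are illustrative values, and the field order (session-level fields, then the time description, then media descriptions) follows the structure listed above.

```python
def make_sdp(session_name, origin, conn_addr, start, stop, media_lines):
    """Build a minimal SDP session description as a text string."""
    lines = [
        "v=0",                    # protocol version
        f"o={origin}",            # owner/creator and session identifier
        f"s={session_name}",      # session name
        f"c=IN IP4 {conn_addr}",  # connection information
        f"t={start} {stop}",      # time the session is active
    ]
    lines += [f"m={m}" for m in media_lines]  # media name and transport address
    return "\r\n".join(lines) + "\r\n"

sdp = make_sdp("Seminar announcement",
               "alice 2890844526 2890842807 IN IP4 10.47.16.5",
               "224.2.17.12", 2873397496, 2873404696,
               ["audio 49170 RTP/AVP 0"])
```

A description like this would form the text payload of a SAP announcement packet, or the body of a SIP or RTSP message.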
Skinny Client Control Protocol (SCCP)
Now that telephony systems are moving to a common wiring plant, the end station of a LAN or IP-based PBX must be simple to use, familiar, and relatively cheap. While implementing the full H.323 recommendations is fairly expensive, an H.323 proxy can be used to communicate with a Skinny Client using SCCP.
Here the telephone is a skinny client over IP, in the context of H.323. A proxy is used for the H.225 and H.245 signaling. Skinny messages are carried over TCP on port 2000, while the skinny client (i.e., an Ethernet phone) uses TCP/IP to transmit and receive calls and RTP/UDP/IP to and from a Skinny Client or H.323 terminal for audio.
The Skinny Call Control Protocol (SCCP, or Skinny for short) is a proprietary network terminal control protocol originally developed by Selsius Corporation.
The SCCP technology is now owned and defined by Cisco Systems, Inc. as a messaging system between a Skinny client and the Cisco CallManager. Examples of skinny clients include the Cisco 7900 series of IP phones, Cisco IP Communicator softphone and the 802.11b wireless Cisco 7920, along with Cisco Unity voicemail server. Skinny is a lightweight protocol which allows for efficient communication with Cisco CallManager. CallManager acts as a signaling proxy for call events initiated over other common protocols such as H.323, SIP, ISDN and/or MGCP.
A Skinny client uses TCP/IP to communicate with one or more Call Manager applications in a cluster. It uses the Real-time Transport Protocol (RTP) over UDP-transport for the bearer traffic (real-time audio stream) with other Skinny clients or an H.323 terminal. SCCP is a stimulus-based protocol and is designed as a communications protocol for hardware endpoints and other embedded systems, with significant CPU and memory constraints.
Cisco acquired SCCP technology when it acquired Selsius Corporation in 1998. As a remnant of the Selsius origin of current Cisco IP phones, the default device name format for Cisco phones registered with CallManager is SEP (as in Selsius Ethernet Phone) followed by the MAC address. Cisco has also marketed a Skinny-based softphone called Cisco IP Communicator.
AppleTalk
AppleTalk, developed by Apple Computer, implements file transfer, printer sharing, and mail service among Apple systems using the LocalTalk interface built into Apple hardware. AppleTalk ports to other network media such as Ethernet with the aid of LocalTalk-to-Ethernet bridges or Ethernet add-in boards for Apple machines. This multi-layered protocol provides internetwork routing, transaction and data stream services, a naming service, and comprehensive file and print sharing, in addition to supporting many third-party applications. The introduction of AppleTalk Phase 2 in 1989 extended the addressing capability of AppleTalk networks and provided compliance with the IEEE 802 standard. Phase 2 also extended the range of available network layer addresses and adopted the IEEE 802.2 Logical Link Control (LLC) protocol at the Data Link Layer.
AppleTalk is a proprietary suite of protocols developed by Apple Inc for networking computers. It was included in the original Macintosh (1984) and is now deprecated by Apple in favor of TCP/IP networking. AppleTalk's Datagram Delivery Protocol corresponds closely to the Network layer of the Open Systems Interconnection (OSI) communication model.
The AppleTalk design rigorously followed the OSI model of protocol layering. Unlike most of the early LAN systems, AppleTalk was not built using the archetypal Xerox XNS system. The intended target was not Ethernet, and it did not have 48-bit addresses to route. Nevertheless, many portions of the AppleTalk system have direct analogs in XNS.
One key differentiation for AppleTalk was that it contained three protocols aimed at making the system completely self-configuring. The AppleTalk Address Resolution Protocol (AARP) allowed AppleTalk hosts to automatically generate their own network addresses, and the Name Binding Protocol (NBP) was a dynamic system, similar in spirit to the Domain Name System (DNS), mapping network addresses to user-readable names. Although systems similar to AARP existed elsewhere, Banyan VINES for instance, nothing like NBP existed until recently.
Both AARP and NBP had defined ways to allow "controller" devices to override the default mechanisms. The concept was to allow routers to provide the information or "hardwire" the system to known addresses and names. On larger networks, where AARP could cause problems as new nodes searched for free addresses, the addition of a router could reduce "chattiness." Together, AARP and NBP made AppleTalk an easy-to-use networking system. New machines were added to the network simply by plugging them in and optionally giving them a name. The NBP lists were examined and displayed by a program known as the Chooser, which would display a list of machines on the local network, divided into classes such as file servers and printers.
One problem for AppleTalk was that it was intended to be part of a project known as Macintosh Office, which would consist of a host machine providing routing, printer sharing, and file sharing. However, this project was canceled in 1986. Despite this, the LaserWriter included built-in AppleTalk, and Apple released a file and print server known as AppleShare.
Today AppleTalk support is provided for backward compatibility in many products, but the default networking on the Mac is TCP/IP. Starting with Mac OS X v10.2, Bonjour (originally named Rendezvous) provides similar discovery and configuration services for TCP/IP-based networks. Bonjour is Apple's implementation of ZeroConf, which was written specifically to bring NBP's ease-of-use to the TCP/IP world.
AppleTalk Address Resolution Protocol
AARP resolves AppleTalk addresses to physical layer, usually MAC, addresses. It is functionally equivalent to ARP.
AARP is a fairly simple system. When powered on, an AppleTalk machine broadcasts an AARP probe packet asking for a network address, intending to hear back from controllers such as routers. If no address is provided, one is picked at random from the "base subnet", 0. It then broadcasts another packet saying "I am selecting this address", and then waits to see if anyone else on the network complains. If another machine has that address, it will pick another address, and keep trying until it finds a free one. On a network with many machines it may take several tries before a free address is found, so for performance purposes the successful address is "written down" in NVRAM and used as the default address in the future. This means that in most real-world setups, where machines are added a few at a time, only one or two tries are needed before the address effectively becomes constant.
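The probe-and-retry procedure above can be simulated in a few lines. This is only an illustration of the selection logic, not the protocol itself: the `network` set stands in for "addresses that would complain to a probe", and the node-number range is an assumption for the sketch.

```python
import random

def acquire_address(network, max_node=253, tries=100):
    """Pick a random tentative address; retry on conflict (AARP-style)."""
    for _ in range(tries):
        candidate = random.randint(1, max_node)  # tentative node number
        if candidate not in network:             # nobody complained
            network.add(candidate)               # claim it
            return candidate                     # would be saved to NVRAM
    raise RuntimeError("no free node address found")

in_use = {1, 2, 3}            # addresses already claimed on this network
addr = acquire_address(in_use)
```

On a sparsely populated network the first random pick almost always succeeds, which is why caching the result in NVRAM makes the address effectively stable in practice.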
AppleTalk is Apple's design of a simple, inexpensive and flexible network for connecting computers, peripheral devices, and servers. AppleTalk's flexibility allows it to be used to connect peripherals such as the LaserWriter, or act as a stand-alone local-area network for up to 32 nodes, or form portions of a larger network by using bridges and gateway devices.
What is AppleTalk? At a purely physical level, AppleTalk is a network with a bus topology that uses a trunk cable between connection modules. Interfacing with the network is handled by the Serial Communications Control chip found in every Mac. Any device (computer, peripheral, etc.) attaches to a connection box via a short cable (called a drop cable), as shown in figure 1. This type of network is known as a multidrop line or a multipoint link. AppleTalk is capable of supporting up to 32 nodes (devices) per network and can transmit data at a rate of 230,400 bits per second. Nodes can be separated by a maximum cable length of 1000 feet.
AppleTalk, as specified by Apple, is wired using relatively inexpensive shielded, twisted-pair cable and Apple's connection boxes. One box is required per device; in the case of the Mac, the box plugs into the serial printer port in the back of the Mac using an attached drop cable. A trunk cable segment from one node on the network plugs into one port on the connection box, and another cable segment leading to the next node in the network plugs into the other port on the box.
One of the advantages of AppleTalk relates to the design of these connection boxes. The boxes are designed so that the continuity of the trunk cable and the network is maintained even if a device is disconnected from the network by unplugging it from the connection box. (Unplugging the trunk from the connection box does disrupt the integrity of the network, however.) The physical layout of an AppleTalk network can therefore be designed by locating the connection boxes where desired without worrying if a device will be initially connected to each one of the boxes. Additional devices can be added to the network at any time simply by plugging them into the boxes.
There are alternatives to using Apple's connection boxes. Farallon Computing markets their PhoneNET system, which fully supports the AppleTalk protocols. In the case of PhoneNET, the physical transmission medium is ordinary telephone wire, allowing users to employ in-house telephone wiring for their network. PhoneNET uses two of the unused wires found in a normal telephone installation, supporting both a telephone and a Mac connected to the same telephone wall box. In addition, PhoneNET links are capable of supporting 3000-foot distances between nodes. Farallon has a series of devices (repeaters, Star Controller) for extending the network.
With the recent announcement of DuPont's system for AppleTalk, users can also use fiber optic connections for an AppleTalk network. A concentrator is also available for constructing star networks. Two advantages of the fiber optics system are its immunity to EMI-RFI interference and improved data security; nodes may be a maximum of 4900 feet apart.
AppleTalk Protocols and the OSI Model
The Physical Layer has the responsibility of bit encoding/decoding, synchronization, signal transmission/ reception and carrier sensing. As mentioned previously, the Serial Communications Control chip in the Mac takes care of the AppleTalk port, which happens to be the printer port on current Macs. As long as connection modules conform to the signal descriptions of the Physical Layer, any transmission medium can be used for the actual network.
The AppleTalk Link Access Protocol (ALAP) must be common to all systems on the network bus and handles the node-to-node delivery of data between devices connected to a single AppleTalk network. ALAP determines when the bus is free, encapsulates the data in frames, sends its data, and recognizes when data should be received. ALAP is also responsible for assigning node numbers to each station on a network. The ALAP software assigns a random node number when the Mac is booted and keeps that number as long as it does not conflict with a previously assigned node number (if it does conflict, ALAP tries again).
The Link Access Protocol uses a method called CSMA/CA, or carrier-sense multiple access with collision avoidance, for access control. Carrier sense means that a sending node first listens to the network to hear if any other node is using the bus and defers to the ongoing transmission. Collision avoidance means that the protocol attempts to minimize collisions between transmitted data packets. In AppleTalk CSMA/CA, all transmitters wait until the bus is idle for a minimum time plus a random amount of added time before transmitting (or retransmitting after a collision).
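The waiting rule described above (a minimum idle interval plus a random extra delay before transmitting or retransmitting) can be sketched as follows. Times here are abstract units, and the specific interval lengths are illustrative assumptions rather than ALAP's actual timing constants.

```python
import random

def backoff_delay(min_idle=1.0, max_random=1.0):
    """Total time a sender waits for an idle bus before transmitting.

    The fixed part models the minimum idle time; the random part spreads
    out senders so two stations waiting on the same busy bus are unlikely
    to transmit at the same instant (collision avoidance).
    """
    return min_idle + random.uniform(0.0, max_random)

# Five stations ready to send will each wait a slightly different time:
delays = [backoff_delay() for _ in range(5)]
```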
While the ALAP protocol provides delivery of data over a single AppleTalk network, the Datagram Delivery Protocol (DDP) extends this mechanism to include a group of interconnected AppleTalk networks, known as an internet. An internet can be formed, for example, by using a bridge between two, or more, AppleTalk networks.
AppleTalk's address header (a part of each data packet) is used for identification of a process on the network and consists of a socket number, node number, and network number. A socket is a communication endpoint within a node on the network. Sockets belong to processes or functions that are implemented within software in the node. One Mac may have several AppleTalk connections open at one time, so the node number is not enough to identify a network address. In addition, node numbers are unique only within a single physical network, so DDP requires that each network be assigned a network number. The Datagram Delivery Protocol takes care of assigning socket numbers, as well as node numbers and network numbers, to provide a unique identification for every process occurring on the AppleTalk network.
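The three-part internet address described above can be modeled directly as a (network, node, socket) triple. The dotted notation produced by `format_address` is just an illustrative rendering for the sketch, not an Apple-defined syntax.

```python
from collections import namedtuple

# A full AppleTalk internet address: network number identifies the physical
# network, node number the machine on it, socket number the process within it.
DDPAddress = namedtuple("DDPAddress", "network node socket")

def format_address(addr):
    """Render an address as network.node:socket (illustrative notation)."""
    return f"{addr.network}.{addr.node}:{addr.socket}"

# e.g., a process listening on socket 2 of node 128 on network 5:
laser = DDPAddress(network=5, node=128, socket=2)
```

Because node numbers are unique only within one physical network, all three fields are needed before a packet can be routed across an internet of bridged AppleTalk networks.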
As we move on to the Transport Layer, several protocols exist to add different types of functionality to the underlying services. The Routing Table Maintenance Protocol (RTMP) allows bridges and internet routers to dynamically discover routes to the different AppleTalk networks in an internet. The routing tables pair network numbers with the local node number of the bridge through which the shortest path to that net exists.
The AppleTalk Transaction Protocol, or ATP, is part of the Transport Layer and is responsible for controlling the transactions (flow of data) between requestor and responder sockets. This transaction-oriented protocol can be contrasted to other types of transport layers which support a two-way link between clients that can act as though they had an error-free hardwired link between them.
The basic function of the Name Binding Protocol (NBP) is the translation of a character string name into the internet address of the corresponding client. A key feature of the network is that most objects are accessible by name rather than by address (better for the user). NBP also introduces the concept of a zone, which is an arbitrary subset of networks in an internet where each network is in one and only one zone. The concept of zones is provided to assist the establishment of departmental or other user-understandable grouping of the entities of the internet. AppleTalk names consist of three fields: the object name (e.g., Dave), the type name (e.g., printer), and the zone name (e.g., Bldg. 1).
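The three-field names described above are conventionally written in the textual form object:type@zone. A minimal parser for that convention, using the example fields from the text, might look like this (a sketch; real NBP carries these fields in packet form, not as one string):

```python
def parse_nbp_name(name):
    """Split an 'object:type@zone' string into its three NBP fields."""
    obj, rest = name.split(":", 1)       # object name, e.g. "Dave"
    type_name, zone = rest.split("@", 1)  # type, e.g. "printer"; zone last
    return obj, type_name, zone

obj, typ, zone = parse_nbp_name("Dave:printer@Bldg. 1")
```

NBP would then resolve such a name to the internet address (network, node, socket) of the matching entity, consulting ZIP to find which networks belong to the named zone.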
The Echo Protocol (EP) is a simple protocol that allows any node to send data to any other node on an AppleTalk internet and receive an echoed copy of that data in return. The Echo Protocol is mainly meant for network maintenance functions.
The specifications for the AppleTalk Data Stream Protocol (ADSP) have not yet been published (Inside AppleTalk, current version dated July 14, 1986). ADSP is designed to provide byte-stream data transmission in a full duplex mode between any two sockets on an AppleTalk internet. The Zone Information Protocol (ZIP) is used to maintain an internet-wide mapping of networks to zone names. Most of ZIP's services are transparent to the normal (non-bridge) node; the majority of ZIP is implemented in the bridges of an internet. ZIP is used by the Name Binding Protocol to determine which networks belong to a given zone.
In the Session Layer, the AppleTalk Session Protocol (ASP) is a general protocol designed to interact with ATP to provide for establishing, maintaining and closing sessions. Central to ASP is the concept of a session; two network entities, one in a workstations and the other in a server, can set up an ASP session between themselves (identified by a unique sessions identifier). ASP is an asymetric protocol in that the workstation initiates the session connection and issues sequences of commands, to which the server responds; the server may not send commands to the workstation.
The specifications for the AppleTalk Filing Protocol (AFP) have not been generally publicized. However, AFP has been finalized with the introduction of the AppleShare file server software from Apple, which uses AFP. AFP is a presentation layer protocol designed to control access to remote file systems.
AppleTalk was developed by Apple Computer to implement file transfer, printer sharing, and mail service among Apple systems using the LocalTalk interface built into Apple hardware. AppleTalk ports to other network media such as Ethernet with the aid of LocalTalk-to-Ethernet bridges or Ethernet add-in boards for Apple machines. In addition to supporting many third-party applications, this multi-layered protocol provides internetwork routing, transaction and data stream services, a naming service, and comprehensive file and print sharing. With the introduction of AppleTalk Phase 2 in 1989, the addressing capability of AppleTalk networks was extended, providing compliance with the IEEE 802 standard. Other additions in AppleTalk Phase 2 were an extended range of available network layer addresses and the use of the IEEE 802.2 Logical Link Control (LLC) protocol at the Data Link Layer.
AppleTalk is a proprietary suite of protocols developed by Apple Inc for networking computers. It was included in the original Macintosh (1984) and is now deprecated by Apple in favor of TCP/IP networking. AppleTalk's Datagram Delivery Protocol corresponds closely to the Network layer of the Open Systems Interconnection (OSI) communication model.
The AppleTalk design rigorously followed the OSI model of protocol layering. Unlike most of the early LAN systems, AppleTalk was not built using the archetypal Xerox XNS system. The intended target was not Ethernet, and it did not have 48-bit addresses to route. Nevertheless, many portions of the AppleTalk system have direct analogs in XNS.
One key differentiation for AppleTalk was that it contained protocols aimed at making the system completely self-configuring. The AppleTalk Address Resolution Protocol (AARP) allowed AppleTalk hosts to automatically generate their own network addresses, and the Name Binding Protocol (NBP) was a dynamic system, akin to today's Domain Name System (DNS), for mapping network addresses to user-readable names. Although systems similar to AARP existed elsewhere, Banyan VINES for instance, nothing like NBP existed until recently.
Both AARP and NBP had defined ways to allow "controller" devices to override the default mechanisms. The concept was to allow routers to provide the information or to "hardwire" the system to known addresses and names. On larger networks, where AARP could cause problems as new nodes searched for free addresses, the addition of a router could reduce "chattiness." Together, AARP and NBP made AppleTalk an easy-to-use networking system: new machines were added to the network simply by plugging them in and optionally giving them a name. The NBP lists were examined and displayed by a program known as the Chooser, which would display a list of machines on the local network, divided into classes such as file servers and printers.
One problem for AppleTalk was that it was intended to be part of a project known as Macintosh Office, which would consist of a host machine providing routing, printer sharing, and file sharing. However, this project was canceled in 1986. Despite this, the LaserWriter included built-in AppleTalk, and Apple released a file and print server known as AppleShare.
Today AppleTalk support is provided for backward compatibility in many products, but the default networking on the Mac is TCP/IP. Starting with Mac OS X v10.2, Bonjour (originally named Rendezvous) provides similar discovery and configuration services for TCP/IP-based networks. Bonjour is Apple's implementation of ZeroConf, which was written specifically to bring NBP's ease-of-use to the TCP/IP world.
AppleTalk Address Resolution Protocol
AARP resolves AppleTalk addresses to physical layer, usually MAC, addresses. It is functionally equivalent to ARP.
AARP is a fairly simple system. When powered on, an AppleTalk machine broadcasts an AARP probe packet asking for a network address, intending to hear back from controllers such as routers. If no address is provided, one is picked at random from the "base subnet", 0. It then broadcasts another packet saying "I am selecting this address", and waits to see if anyone else on the network complains. If another machine has that address, it picks another address and keeps trying until it finds a free one. On a network with many machines it may take several tries before a free address is found, so for performance purposes the successful address is "written down" in NVRAM and used as the default address in the future. This means that in most real-world setups, where machines are added a few at a time, only one or two tries are needed before the address becomes effectively constant.
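The acquisition loop described above can be sketched in a few lines. This is a simulation with hypothetical names, not the real driver code: the set of addresses already claimed on the wire stands in for the "does anyone complain?" broadcast, and the NVRAM hint is tried first.

```python
import random

def aarp_acquire_address(in_use, max_tries=10, hint=None):
    """Sketch of AARP dynamic address selection (hypothetical helper).
    `in_use` is the set of node addresses already claimed on the wire;
    `hint` is the address remembered in NVRAM from a previous boot."""
    candidates = [hint] if hint is not None else []
    candidates += [random.randrange(1, 254) for _ in range(max_tries)]
    for addr in candidates:
        # Broadcasting "I am selecting this address" is modeled as a
        # membership test: the claim succeeds if nobody complains.
        if addr not in in_use:
            in_use.add(addr)
            return addr  # would be written to NVRAM for the next boot
    raise RuntimeError("no free AppleTalk node address found")
```

On a lightly loaded network the hint succeeds immediately, which is why the address stays constant across reboots in practice.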
AppleTalk is Apple's design of a simple, inexpensive and flexible network for connecting computers, peripheral devices, and servers. AppleTalk's flexibility allows it to be used to connect peripherals such as the LaserWriter, or act as a stand-alone local-area network for up to 32 nodes, or form portions of a larger network by using bridges and gateway devices.
What is AppleTalk? At a purely physical level, AppleTalk is a network with a bus topology that uses a trunk cable between connection modules. Interfacing with the network is handled by the Serial Communications Controller (SCC) chip found in every Mac. Any device (computer, peripheral, etc.) attaches to a connection box via a short cable (called a drop cable), as shown in figure 1. This type of network is known as a multidrop line or a multipoint link. AppleTalk is capable of supporting up to 32 nodes (devices) per network and can transmit data at a rate of 230,400 bits per second. Nodes can be separated by a maximum cable length of 1000 feet.
AppleTalk, as specified by Apple, is wired using relatively inexpensive shielded, twisted-pair cable and Apple's connection boxes. One box is required per device; in the case of the Mac, the box plugs into the serial printer port in the back of the Mac using an attached drop cable. A trunk cable segment from one node on the network plugs into one port on the connection box, and another cable segment leading to the next node in the network plugs into the other port on the box.
One of the advantages of AppleTalk relates to the design of these connection boxes. The boxes are designed so that the continuity of the trunk cable and the network is maintained even if a device is disconnected from the network by unplugging it from the connection box. (Unplugging the trunk from the connection box does disrupt the integrity of the network, however.) The physical layout of an AppleTalk network can therefore be designed by locating the connection boxes where desired without worrying if a device will be initially connected to each one of the boxes. Additional devices can be added to the network at any time simply by plugging them into the boxes.
There are alternatives to using Apple's connection boxes. Farallon Computing markets its PhoneNET system, which fully supports the AppleTalk protocols. In the case of PhoneNET, the physical transmission medium is ordinary telephone wire, allowing the user to run the network over in-house telephone wiring. PhoneNET uses two of the unused wires found in a normal telephone installation, supporting both a telephone and a Mac connected to the same telephone wall box. In addition, PhoneNET links are capable of supporting 3000-foot distances between nodes. Farallon has a series of devices (repeaters, Star Controller) for extending the network.
With the recent announcement of DuPont's system for AppleTalk, users can also use fiber optic connections for an AppleTalk network. A concentrator is also available for constructing star networks. Two advantages of the fiber optics system are its immunity to EMI-RFI interference and improved data security; nodes may be a maximum of 4900 feet apart.
AppleTalk Protocols and the OSI Model
The Physical Layer has the responsibility of bit encoding/decoding, synchronization, signal transmission/reception, and carrier sensing. As mentioned previously, the Serial Communications Controller (SCC) chip in the Mac takes care of the AppleTalk port, which happens to be the printer port on current Macs. As long as connection modules conform to the signal descriptions of the Physical Layer, any transmission medium can be used for the actual network.
The AppleTalk Link Access Protocol (ALAP) must be common to all systems on the network bus and handles the node-to-node delivery of data between devices connected to a single AppleTalk network. ALAP determines when the bus is free, encapsulates the data in frames, sends its data, and recognizes when data should be received. ALAP is also responsible for assigning node numbers to each station on a network. The ALAP software assigns a random node number when the Mac is booted and keeps that number as long as it does not conflict with a previously assigned node number (if it does conflict, ALAP tries again).
The Link Access Protocol uses a method called CSMA/CA, or carrier-sense multiple access with collision avoidance, for access control. Carrier sense means that a sending node first listens to the network to hear if any other node is using the bus and defers to the ongoing transmission. Collision avoidance means that the protocol attempts to minimize collisions between transmitted data packets. In AppleTalk CSMA/CA, all transmitters wait until the bus is idle for a minimum time plus a random amount of added time before transmitting (or retransmitting after a collision).
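The deferral rule can be sketched as follows. The timing constants here are illustrative placeholders, not the real ALAP values, and `bus_idle` stands in for the carrier-sense hardware:

```python
import random

def csma_ca_wait(bus_idle, min_idle_us=100.0, slots=16, slot_us=12.5):
    """Sketch of AppleTalk's CSMA/CA deferral rule (illustrative timing).
    `bus_idle()` is carrier sense. The sender defers while the bus is
    busy, then waits a mandatory minimum idle time plus a random backoff
    so that stations which deferred to the same frame do not all
    transmit at once."""
    while not bus_idle():          # carrier sense: defer to ongoing traffic
        pass
    backoff = random.randrange(slots) * slot_us
    return min_idle_us + backoff   # microseconds to wait before sending
```

The random component is what provides collision avoidance: two stations that both saw the bus go idle are unlikely to pick the same backoff slot.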
While the ALAP protocol provides delivery of data over a single AppleTalk network, the Datagram Delivery Protocol (DDP) extends this mechanism to include a group of interconnected AppleTalk networks, known as an internet. An internet can be formed, for example, by using a bridge between two or more AppleTalk networks.
AppleTalk's address header (a part of each data packet) is used for identification of a process on the network and consists of a socket number, node number, and network number. A socket is a communication endpoint within a node on the network. Sockets belong to processes or functions that are implemented within software in the node. One Mac may have several AppleTalk connections open at one time, so the node number is not enough to identify a network address. In addition, node numbers are unique only within a single physical network, so DDP requires that each network be assigned a network number. The Datagram Delivery Protocol takes care of assigning socket numbers, as well as node numbers and network numbers, to provide a unique identification for every process occurring on the AppleTalk network.
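The three-part internet address described above can be modeled as a simple tuple (field names are illustrative, not from the DDP packet format):

```python
from collections import namedtuple

# A DDP internet socket address: the network number identifies an
# AppleTalk network, the node number a station on that network, and the
# socket number a software endpoint within that node.
DDPAddress = namedtuple("DDPAddress", ["network", "node", "socket"])

def endpoints_distinct(addresses):
    """Two endpoints collide only when all three fields match."""
    return len(addresses) == len(set(addresses))
```

Two connections open on the same Mac share network and node numbers, yet remain distinct endpoints because their socket numbers differ, which is exactly why the node number alone is not enough to identify a network address.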
As we move on to the Transport Layer, several protocols exist to add different types of functionality to the underlying services. The Routing Table Maintenance Protocol (RTMP) allows bridges and internet routers to dynamically discover routes to the different AppleTalk networks in an internet. The routing tables pair network numbers with the local node number of the bridge through which the shortest path to that net exists.
The AppleTalk Transaction Protocol, or ATP, is part of the Transport Layer and is responsible for controlling the transactions (flow of data) between requestor and responder sockets. This transaction-oriented protocol can be contrasted to other types of transport layers which support a two-way link between clients that can act as though they had an error-free hardwired link between them.
The basic function of the Name Binding Protocol (NBP) is the translation of a character string name into the internet address of the corresponding client. A key feature of the network is that most objects are accessible by name rather than by address (better for the user). NBP also introduces the concept of a zone, which is an arbitrary subset of networks in an internet where each network is in one and only one zone. The concept of zones is provided to assist the establishment of departmental or other user-understandable grouping of the entities of the internet. AppleTalk names consist of three fields: the object name (e.g., Dave), the type name (e.g., printer), and the zone name (e.g., Bldg. 1).
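The three-field name can be illustrated with a small parser for the conventional object:type@zone rendering used in AppleTalk documentation (the parser itself is a hypothetical sketch):

```python
def parse_nbp_name(name):
    """Parse the conventional textual form of an NBP entity name,
    object:type@zone (e.g. "Dave:Printer@Bldg. 1"), returning the three
    fields described above. "*" is the customary shorthand for "the
    local zone" when no zone is given."""
    rest, _, zone = name.partition("@")
    obj, _, type_ = rest.partition(":")
    return obj, type_, zone or "*"
```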
The Echo Protocol (EP) is a simple protocol that allows any node to send data to any other node on an AppleTalk internet and receive an echoed copy of that data in return. The Echo Protocol is mainly meant for network maintenance functions.
The specifications for the AppleTalk Data Stream Protocol (ADSP) have not yet been published (Inside AppleTalk, current version dated July 14, 1986). ADSP is designed to provide byte-stream data transmission in a full duplex mode between any two sockets on an AppleTalk internet. The Zone Information Protocol (ZIP) is used to maintain an internet-wide mapping of networks to zone names. Most of ZIP's services are transparent to the normal (non-bridge) node; the majority of ZIP is implemented in the bridges of an internet. ZIP is used by the Name Binding Protocol to determine which networks belong to a given zone.
In the Session Layer, the AppleTalk Session Protocol (ASP) is a general protocol designed to interact with ATP to provide for establishing, maintaining, and closing sessions. Central to ASP is the concept of a session: two network entities, one in a workstation and the other in a server, can set up an ASP session between themselves, identified by a unique session identifier. ASP is an asymmetric protocol in that the workstation initiates the session connection and issues sequences of commands, to which the server responds; the server may not send commands to the workstation.
The specifications for the AppleTalk Filing Protocol (AFP) have not been generally publicized. However, AFP has been finalized with the introduction of the AppleShare file server software from Apple, which uses AFP. AFP is a presentation layer protocol designed to control access to remote file systems.
Labels: AppleTalk, Computer Science, Computer's Notes, Seminar Topics, Seminars
Stealth virus
A stealth virus hides from the operating system: when the system checks the location where the virus resides, the virus forges the results that would be expected from an uninfected system. There are several kinds. A fast-infector virus infects not only programs that are executed but also those that are merely accessed; running antivirus scanning software on a computer infected by such a virus can therefore infect every program on the computer. A slow-infector virus infects files only while they are being modified, so that the modification appears legitimate to checksumming software. A sparse-infector virus infects only on certain occasions, for example every tenth program executed, which makes the virus harder to detect.
In computer security, a stealth virus is a computer virus that uses various mechanisms to avoid detection by antivirus software. Generally, stealth describes any approach to doing something while avoiding notice. Viruses that escape notice without being specifically designed to do so, whether because the virus is new or because the user hasn't updated their antivirus software, are sometimes also described as stealth viruses. Stealth viruses are nothing new: the first known virus for PCs, Brain (reportedly created by software developers as an anti-piracy measure), was a stealth virus that infected the boot sector.
Typically, when an antivirus program runs, a stealth virus hides itself in memory, and uses various tricks to also hide changes it has made to any files or boot records. The virus may maintain a copy of the original, uninfected data and monitor system activity. When the program attempts to access data that's been altered, the virus redirects it to a storage area maintaining the original, uninfected data. A good antivirus program should be able to find a stealth virus by looking for evidence in memory as well as in areas that viruses usually attack.
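The redirection trick just described can be shown with a toy model. This is an illustration of the mechanism only, not working malware: reads of altered sectors are served from the clean copies the virus kept, so a naive scanner sees nothing wrong.

```python
# Conceptual sketch of stealth-virus read redirection (names hypothetical).
class StealthRedirect:
    def __init__(self, disk, saved_originals):
        self.disk = disk               # sector number -> current (altered) bytes
        self.saved = saved_originals   # sector number -> clean copy kept by the virus
    def read(self, sector):
        # Reads of altered sectors are redirected to the stored originals;
        # everything else passes through to the real disk contents.
        return self.saved.get(sector, self.disk[sector])
```

This is why a good antivirus program must look for evidence in memory rather than trusting what the infected system reports from disk.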
The term stealth virus is also used in medicine, to describe a biological virus that hides from the host immune system.
A stealth virus may also actively hide from antivirus software either by masking the size of the file it infects, or by temporarily removing itself from the infected file and placing a copy of itself in another location on the drive, replacing the infected file with the uninfected copy it has stored.
Cuckoo Egg
The term and the concept of a Cuckoo Egg are quite strange. When you download copyrighted songs, you may come across a Cuckoo Egg yourself: after the first 30 seconds or so of the downloaded song, you hear something other than the original song, typically cuckoo clock sound effects or a series of random sounds and noises that are free of any copyright ownership. Consider it the penalty for not buying the CD in the first place!
A Cuckoo Egg is an edited MP3 file that appears to be a copyrighted song being distributed via the Internet without the permission of the copyright owner. The initial portion of the song (the first 30 seconds or so) is from the real song. The remainder, however, has been overwritten by something else, usually cuckoo clock sound effects or a series of random sounds and noises that are free of any copyright ownership. A Cuckoo Egg has the same file size and playing time as the original copyrighted MP3 file.
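That description suggests a simple heuristic check, sketched here with hypothetical names; a real detector would decode the audio rather than compare raw bytes:

```python
def looks_like_cuckoo_egg(downloaded, genuine, intro_len):
    """Heuristic sketch (not a real detector): a Cuckoo Egg keeps the
    original's size and opening portion (roughly the first 30 seconds of
    audio, here `intro_len` bytes) but replaces everything after it."""
    return (len(downloaded) == len(genuine)
            and downloaded[:intro_len] == genuine[:intro_len]
            and downloaded[intro_len:] != genuine[intro_len:])
```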
The Cuckoo Egg project was started to discourage people from trading copyrighted music files online via Napster.
Labels: Computer Science, Computer's Notes, Cuckoo Egg, Seminar Topics, Seminars
The Role of Software in Nuclear Engineering
The Radiation Safety Information Computational Center (RSICC) is focused on collecting, organizing, and disseminating computational codes and nuclear data associated with radiation transport and safety. Established in 1963 as the Radiation Shielding Information Center, RSICC currently has a library of approximately 1700 code and data packages used for radiation source characterization, dosimetry, neutral- and charged-particle shielding, criticality safety, radiation dispersion modeling, and reactor physics. Although a large number of these software packages represent an archiving of historical information, approximately 2000 software packages are distributed each year because they represent current state-of-the-art software that is valuable for general- and special-purpose nuclear analyses. These software packages are widely distributed worldwide, especially to nuclear engineering students and professors.
Ultrasonics and Acousto-Optics for the Nondestructive Testing of Complex Materials
Nondestructive testing has become an essential tool for failure prevention, condition-based maintenance, and structural health monitoring, as well as for quality control during production processes. Ultrasound and acousto-optics are applied to examine the mechanical properties of isotropic and anisotropic materials, and also to detect material defects. Fiber-reinforced composites are ubiquitous in many industries, yet the interaction of ultrasound with such materials is particularly challenging because of effects such as surface roughness, the fact that materials may in principle be triclinic rather than orthotropic, the presence of a pre-stress, piezoelectric effects, the existence of a coating, and the finite dimensions of material parts. Some important recent results will be presented. The presentation will focus on the interaction of ultrasound with multi-layered fiber-reinforced composites and crystals, the propagation of ultrasound in piezoelectric materials subject to a pre-stress, and diffraction phenomena on 1D and 2D corrugated surfaces.