Saturday, November 21, 2009

Controller-Pilot Communications




























From pre-flight to landing, all Instrument Flight Rule (IFR) flights are conducted with controller-pilot communications. An IFR flight over a long distance requires many communications with many different controllers.

After the flight plan is filed for a commercial jetliner and the aircraft preflight is completed, the pilot is ready to taxi. A call is made to Clearance Delivery in local control (the airport's control tower) for either verification of the "clearance filed" or to receive a "modified clearance." Pilots are encouraged to file for "preferred" routes, if there are any. Pilots always like to hear "cleared as filed" as this means their flight plan was received without requiring any changes. When pilots receive an amended clearance, they copy and read back to verify. The controllers will warn a flight crew if the new clearance is a long or complicated notation. A clearance delivery controller at Chicago's O'Hare (ORD) airport would warn a pilot of complicated changes with the statement, "Hope you have a sharp pencil handy." The crew receiving the clearance would recognize that they would have to listen carefully and write quickly. After the pilot has clearance, he/she is instructed to contact ground control in local control (the airport's control tower) on the frequency given by the clearance delivery controller. Next, the ground controller clears the pilot to taxi to the takeoff runway. At large airports this can take a considerable amount of time, involving many turns on many taxiways with many stops (for further clearance if the taxi path crosses runways along the way). All clearances have a "cleared to" phrase that gives further directions on how to proceed once the aircraft arrives at that point.Once the pilot is at the takeoff runway in the run-up area, he/she contacts the airport tower. When the tower controller clears the aircraft for takeoff, the controller also instructs the pilot as to the heading and altitude to climb to after takeoff. Clearance for many flights specifies a standard Departure Procedure (DP).

After takeoff and the initial climb out from the departure airport, the local controller hands off the flight to the departure controller located in the Terminal Radar Approach Control (TRACON). The "hand-off" consists of the local controller telling the pilot to contact departure control and giving the radio frequency to which the pilot must switch. This hand-off also takes place electronically as the aircraft's transponder code is received by the controller in the TRACON. The signal appears on the controller's radar screen as a "target" with its data block. The pilot then contacts the departure controller located in the TRACON who then provides necessary altitude or heading changes to position the aircraft for its next flight phase: en route. The departure controller then hands off the flight to a controller in an Air Route Traffic Control Center (ARTCC).

The ARTCC controller then monitors the aircraft along the en route portion of the flight. A coast-to-coast flight will fly through many different ARTCC sections before the flight is handed off to an approach controller. The original flight clearance that was given probably contained a Standard Terminal Arrival Route (STAR) for the arrival phase of the flight. If there are no delays or weather problems, the STAR will be routinely followed.

The Approach Controller gives the pilot descent altitudes and vectors (headings) to a final approach fix. When the aircraft arrives at the final approach fix, it will be cleared to fly a published approach. The flight will next be handed off to the destination airport's tower controller for landing instructions. The tower controller clears the flight to land. Upon landing, the tower controller directs the pilot to an exit taxiway. The pilot also receives the next radio frequency to which he/she must switch the radio in order to contact the ground controller.After exiting the runway, the pilot contacts the ground controller for taxi clearance and gate instructions. The pilot parks the aircraft at the gate, terminating the flight.

NASA Research
Modern aircraft cockpits and air traffic control centers are very complex, high-technology environments in which to work. Understanding and optimizing the ways in which humans and high-technology systems work together are critical to aviation safety and the development of new aviation systems.
The Crew-Vehicle Systems Research Facility at NASA's Ames Research Center was designed for the study of human factors in aviation safety. The facility is used to analyze performance characteristics of flight crews; help develop new designs for future aviation environments; evaluate new and contemporary air traffic control procedures; and develop new training and simulation techniques required by the continued technical evolution of flight systems.

The facility is home to a Boeing 747-400 flight simulator, the Advanced Concepts Flight Simulator, and an air traffic control system simulator. Together, these systems provide full mission flight simulation research capability. Visual systems provide out-the-window cues in both cockpits. The Air Traffic Control System simulator provides a realistic air traffic control environment, including communication with the cockpits allowing the study of air-to-ground communications systems as they impact crew performance. Dedicated experimenter labs for each simulator provide full monitoring and control capability for each simulation system.


















Radar communication system







Radar
Radar is actually an acronymthat stands for RAdio Detection And Ranging. It was developed in the early 1940s. Radar uses the echo principle.Radar equipment emits a high energy radio signal from an antenna. The signal travels out from the source untilit is reflected back by contact with an object. The radar antenna relays this signal to a scope where the imageis displayed. Using the time it takes for the emitted signal to reach the object and reflect back to its source,the distance to the object can be computed. The radar signal is moving at the speed of light and can make sucha trip in microseconds.

In aviation, a ground radarantenna sends radio signal pulses into the sky. These signals are reflected back by aircraft flying in the airspace.The radar scope displays the direction and distance from which the signals are reflected back. This coupled witheach aircraft's transponder signal identifies the aircraft on the radarscope. Also, all airliners are equippedwith radar equipment in the aircraft's nose. Short bursts of radio signals are emitted from the nose cone of theaircraft. These signals reflect off clouds ahead of the aircraft. The on-board computer calculates the distanceand displays the object (the cloud) on the on-board radar screen.

The Flight PlanCommercial airline companies employ flight planners who perform all the necessary data gathering and analyses necessaryto complete a flight plan. These flight plans are then given to the pilots during a flight briefing before thepilot begins the aircraft preflight check. These flight plans contain information similar to what is required fora small aircraft pilot's flight plan. Small aircraft pilots and charter pilots perform their own flight planning and submit their flight plans to the Flight Service Station (FSS) that services their departure airport. The FSS enters the flight plan information into their system. Among the many services offered by the FSS, it is responsible for processing flight plans. After a pilot files a flight plan with an FSS facility, a record of the flight plan is made that includes the aircraft description and tail numbers, departure and destination airports, route of flight, estimated time of departure (ETD), estimated time of arrival (ETA) and number of people on board. About an hour before takeoff or once airborne, the pilot "opens" the VFR flight plan. This ensures that the FSS will keep track of the airplane's ETA. Along the route the pilot radios the FSS with occasional position reports. This helps the FSS to track the route. If the pilot gets disoriented along the way, an FSS specialist could locate the aircraft with a VHF direction finder or use radar. Within thirty minutes of completing a flight, the pilot needs to close the VFR flight plan. If the pilot changes the final destination or will be at least 15 minutes later than estimated, the pilot needs to inform the FSS facility accordingly. If the pilot does not close the flight plan or indicate changes to the FSS, the FSS will initiate search and rescue procedures believing the aircraft has been "lost".

Flight Tracking Strip and Data BlockUpon acceptance of a flight plan for a commercial jetliner flight, a "flight tracking strip" is generated in the departure control tower. This strip contains essentially the same information from the flight plan, but in an abbreviated format. This strip communicates to air traffic controllers along the route information about the flight that assists controllers in directing the pilot. This strip is physically handed off from controller to controller within the same air traffic management facility (such as the local control tower). It is also electronically handed off from one air traffic management facility to another as the flight moves from one airspace sector to another.Each air traffic management facility has a slightly different look for their flight tracking slips.

Every commercial flight is equipped with a transponder. This electronic device is connected to the on-board computer. It transmits coded radio signals to the controller's radar receiver. These signals contain information about the flight: aircraft's identification letters or flight number and its altitude. Upon departure, pilots receive a 4-digit transponder code and set their transponder to that code. The terminology is "Squawk" 1200. The standard transponder code for VFR flights is 1200. When the code is set, the radar "blip" for that flight shows as an enhanced signal on the controller's radar screen. The aircraft is shown in motion on the screen and is followed by a box with the flight's information in it: the data block. This way controllers can visually track each flight as it flies through their designated airspace.



Digital Systems




As demand for mobile telephone service has increased, service providers found that basic engineering assumptions borrowed from wireline (landline) networks did not hold true in mobile systems. While the average landline phone call lasts at least 10 minutes, mobile calls usually run 90 seconds. Engineers who expected to assign 50 or more mobile phones to the same radio channel found that by doing so they increased the probability that a user would not get dial tone—this is known as call-blocking probability. As a consequence, the early systems quickly became saturated, and the quality of service decreased rapidly. The critical problem was capacity. The general characteristics of time division multiple access (TDMA), Global System for Mobile Communications (GSM), personal communications service (PCS) 1900, and code division multiple access (CDMA) promise to significantly increase the efficiency of cellular telephone systems to allow a greater number of simultaneous conversations

The advantages of digital cellular technologies over analog cellular networks include increased capacity and security. Technology options such as TDMA and CDMA offer more channels in the same analog cellular bandwidth and encrypted voice and data. Because of the enormous amount of money that service providers have invested in AMPS hardware and software, providers look for a migration from AMPS to digital analog mobile phone service (DAMPS) by overlaying their existing networks with TDMA architectures.

ison

Time Division Multiple Access (TDMA)

North American digital cellular (NADC) is called DAMPS and TDMA. Because AMPS preceded digital cellular systems, DAMPS uses the same setup protocols as analog AMPS. TDMA has the following characteristics:
IS–54 standard specifies traffic on digital voice channels
initial implementation triples the calling capacity of AMPS systems
capacity improvements of 6 to 15 times that of AMPS are possible
many blocks of spectrum in 800 MHz and 1900 MHz are used
all transmissions are digital

TDMA/FDMA application 7. 3 callers per radio carrier (6 callers on half rate later), providing 3 times the AMPS capacity

TDMA is one of several technologies used in wireless communications. TDMA provides each call with time slots so that several calls can occupy one bandwidth. Each caller is assigned a specific time slot. In some cellular systems, digital packets of information are sent during each time slot and reassembled by the receiving equipment into the original voice components. TDMA uses the same frequency band and channel allocations as AMPS. Like NAMPS, TDMA provides three to six time channels in the same bandwidth as a single AMPS channel. Unlike NAMPS, digital systems have the means to compress the spectrum used to transmit voice information by compressing idle time and redundancy of normal speech. TDMA is the digital standard and has 30-kHz bandwidth. Using digital voice encoders, TDMA is able to use up to six channels in the same bandwidth where AMPS uses one channel.

Extended Time Division Multiple Access (E–TDMA)

The E–TDMA standard claims a capacity of fifteen times that of analog cellular systems. This capacity is achieved by compressing quiet time during conversations. E–TDMA divides the finite number of cellular frequencies into more time slots than TDMA. This allows the system to support more simultaneous cellular calls.

Fixed Wireless Access (FWA)

FWA is a radio-based local exchange service in which telephone service is provided by common carriers. It is primarily a rural application—that is, it reduces the cost of conventional wireline. FWA extends telephone service to rural areas by replacing a wireline local loop with radio communications. Other labels for wireless access include fixed loop, fixed radio access, wireless telephony, radio loop, fixed wireless, radio access, and Ionica. FWA systems employ TDMA or CDMA access technologies.
Personal Communications Service (PCS)

The future of telecommunications includes PCS. PCS at 1900 MHz (PCS 1900) is the North American implementation of digital cellular system (DCS) 1800 (GSM). Trial networks were operational in the United States by 1993, and in 1994 the Federal Communications Commission (FCC) began spectrum auctions. As of 1995, the FCC auctioned commercial licenses. In the PCS frequency spectrum, the operator's authorized frequency block contains a definite number of channels. The frequency plan assigns specific channels to specific cells, following a reuse pattern that restarts with each nth cell. The uplink and downlink bands are paired mirror images. As with AMPS, a channel number implies one uplink and one downlink frequency (e.g., Channel 512 = 1850.2-MHz uplink paired with 1930.2-MHz downlink).

Code Division Multiple Access (CDMA)

CDMA is a digital air interface standard, claiming 8 to 15 times the capacity of analog. It employs a commercial adaptation of military, spread-spectrum, single-sideband technology. Based on spread spectrum theory, it is essentially the same as wireline service—the primary difference is that access to the local exchange carrier (LEC) is provided via wireless phone. Because users are isolated by code, they can share the same carrier frequency, eliminating the frequency reuse problem encountered in AMPS and DAMPS. Every CDMA cell site can use the same 1.25-MHz band, so with respect to clusters, n = 1. This greatly simplifies frequency planning in a fully CDMA environment.

CDMA is an interference-limited system. Unlike AMPS/TDMA, CDMA has a soft capacity limit; however, each user is a noise source on the shared channel and the noise contributed by users accumulates. This creates a practical limit to how many users a system will sustain. Mobiles that transmit excessive power increase interference to other mobiles. For CDMA, precise power control of mobiles is critical in maximizing the system's capacity and increasing battery life of the mobiles. The goal is to keep each mobile at the absolute minimum power level that is necessary to ensure acceptable service quality. Ideally, the power received at the base station from each mobile should be the same (minimum signal to interference).

4. North American Analog Cellular Systems

Originally devised in the late 1970s to early 1980s, analog systems have been revised somewhat since that time and operate in the 800-MHz range. A group of government, telco, and equipment manufacturers worked together as a committee to develop a set of rules (protocols) that govern how cellular subscriber units (mobiles) communicate with the cellular system. System development takes into consideration many different, and often opposing, requirements for the system, and often a compromise between conflicting requirements results. Cellular development involves the following basic topics:

frequency and channel assignments
type of radio modulation
maximum power levels
modulation parameters
messaging protocols
call-processing sequences

The Advanced Mobile Phone Service (AMPS)

AMPS was released in 1983 using the 800-MHz to 900-MHz frequency band and the 30-kHz bandwidth for each channel as a fully automated mobile telephone service. It was the first standardized cellular service in the world and is currently the most widely used standard for cellular communications. Designed for use in cities, AMPS later expanded to rural areas. It maximized the cellular concept of frequency reuse by reducing radio power output. The AMPS telephones (or handsets) have the familiar telephone-style user interface and are compatible with any AMPS base station. This makes mobility between service providers (roaming) simpler for subscribers. Limitations associated with AMPS include the following:
low calling capacity
limited spectrum
no room for spectrum growth
poor data communications
minimal privacy
inadequate fraud protection

AMPS is used throughout the world and is particularly popular in the United States, South America, China, and Australia. AMPS uses frequency modulation (FM) for radio transmission. In the United States, transmissions from mobile to cell site use separate frequencies from the base station to the mobile subscriber.

Narrowband Analog Mobile Phone Service (NAMPS)

Since analog cellular was developed, systems have been implemented extensively throughout the world as first-generation cellular technology. In the second generation of analog cellular systems, NAMPS was designed to solve the problem of low calling capacity. NAMPS is now operational in 35 U.S. and overseas markets, and NAMPS was introduced as an interim solution to capacity problems. NAMPS is a U.S. cellular radio system that combines existing voice processing with digital signaling, tripling the capacity of today's AMPS systems. The NAMPS concept uses frequency division to get 3 channels in the AMPS 30-kHz single channel bandwidth. NAMPS provides 3 users in an AMPS channel by dividing the 30-kHz AMPS bandwidth into 3 10-kHz channels. This increases the possibility of interference because channel bandwidth is reduced.

5. Cellular System Components

The cellular system offers mobile and portable telephone stations the same service provided fixed stations over conventional wired loops. It has the capacity to serve tens of thousands of subscribers in a major metropolitan area. The cellular communications system consists of the following four major components that work together to provide mobile service to subscribers.
public switched telephone network (PSTN)
mobile telephone switching office (MTSO)
cell site with antenna system
mobile subscriber unit (MSU)

PSTN

The PSTN is made up of local networks, the exchange area networks, and the long-haul network that interconnect telephones and other communication devices on a worldwide basis.
Mobile Telephone Switching Office (MTSO)
The MTSO is the central office for mobile switching. It houses the mobile switching center (MSC), field monitoring, and relay stations for switching calls from cell sites to wireline central offices (PSTN). In analog cellular networks, the MSC controls the system operation. The MSC controls calls, tracks billing information, and locates cellular subscribers.

The Cell Site

The term cell site is used to refer to the physical location of radio equipment that provides coverage within a cell. A list of hardware located at a cell site includes power sources, interface equipment, radio frequency transmitters and receivers, and antenna systems.
Mobile Subscriber Units (MSUs)
The mobile subscriber unit consists of a control unit and a transceiver that transmits and receives radio transmissions to and from a cell site. The following three types of MSUs are available:
the mobile telephone (typical transmit power is 4.0 watts)
the portable (typical transmit power is 0.6 watts)
the transportable (typical transmit power is 1.6 watts)
The mobile telephone is installed in the trunk of a car, and the handset is installed in a convenient location to the driver. Portable and transportable telephones are hand-held and can be used anywhere. The use of portable and transportable telephones is limited to the charge life of the internal battery.

3. Cellular System Architecture











Increases in demand and the poor quality of existing service led mobile service providers to research ways to improve the quality of service and to support more users in their systems. Because the amount of frequency spectrum available for mobile cellular use was limited, efficient use of the required frequencies was needed for mobile cellular coverage. In modern cellular telephony, rural and urban regions are divided into areas according to specific provisioning guidelines. Deployment parameters, such as amount of cell-splitting and cell sizes, are determined by engineers experienced in cellular system architecture.
Provisioning for each region is planned according to an engineering plan that includes cells, clusters, frequency reuse, and handovers.

Cells

A cell is the basic geographic unit of a cellular system. The term cellular comes from the honeycomb shape of the areas into which a coverage region is divided. Cells are base stations transmitting over small geographic areas that are represented as hexagons. Each cell size varies depending on the landscape. Because of constraints imposed by natural terrain and man-made structures, the true shape of cells is not a perfect hexagon.

Clusters

A cluster is a group of cells. No channels are reused within a cluster. Figure 4 illustrates a seven-cell cluster.

Frequency Reuse

Because only a small number of radio channel frequencies were available for mobile systems, engineers had to find a way to reuse radio channels to carry more than one conversation at a time. The solution the industry adopted was called frequency planning or frequency reuse. Frequency reuse was implemented by restructuring the mobile telephone system architecture into the cellular concept.

The concept of frequency reuse is based on assigning to each cell a group of radio channels used within a small geographic area. Cells are assigned a group of channels that is completely different from neighboring cells. The coverage area of cells is called the footprint. This footprint is limited by a boundary so that the same group of channels can be used in different cells that are far enough away from each other so that their frequencies do not interfere

Cells with the same number have the same set of frequencies. Here, because the number of available frequencies is 7, the frequency reuse factor is 1/7. That is, each cell is using 1/7 of available cellular channels.

Cell Splitting

Unfortunately, economic considerations made the concept of creating full systems with many small areas impractical. To overcome this difficulty, system operators developed the idea of cell splitting. As a service area becomes full of users, this approach is used to split a single area into smaller ones. In this way, urban centers can be split into as many areas as necessary to provide acceptable service levels in heavy-traffic regions, while larger, less expensive cells can be used to cover remote rural regions

Handoff

The final obstacle in the development of the cellular network involved the problem created when a mobile subscriber traveled from one cell to another during a call. As adjacent areas do not use the same radio channels, a call must either be dropped or transferred from one radio channel to another when a user crosses the line between adjacent cells. Because dropping the call is unacceptable, the process of handoff was created. Handoff occurs when the mobile telephone network automatically transfers a call from radio channel to radio channel as a mobile crosses adjacent cells
During a call, two parties are on one voice channel. When the mobile unit moves out of the coverage area of a given cell site, the reception becomes weak. At this point, the cell site in use requests a handoff. The system switches the call to a stronger-frequency channel in a new site without interrupting the call or alerting the user. The call continues as long as the user is talking, and the user does not notice the handoff at all.

Cellular Communications








Definition

A cellular mobile communications system uses a large number of low-power wireless transmitters to create cells—the basic geographic service area of a wireless communications system. Variable power levels allow cells to be sized according to the subscriber density and demand within a particular region. As mobile users travel from cell to cell, their conversations are handed off between cells to maintain seamless service. Channels (frequencies) used in one cell can be reused in another cell some distance away. Cells can be added to accommodate growth, creating new cells in unserved areas or overlaying cells in existing areas.

Overview
This tutorial discusses the basics of radio telephony systems, including both analog and digital systems. Upon completion of this tutorial, you should be able to describe the basic components of a cellular system and identify digital wireless technologies

1. Mobile Communications Principles
Each mobile uses a separate, temporary radio channel to talk to the cell site. The cell site talks to many mobiles at once, using one channel per mobile. Channels use a pair of frequencies for communication—one frequency (the forward link) for transmitting from the cell site and one frequency (the reverse link) for the cell site to receive calls from the users. Radio energy dissipates over distance, so mobiles must stay near the base station to maintain communications. The basic structure of mobile networks includes telephone systems and radio services. Where mobile radio service operates in a closed network and has no access to the telephone system, mobile telephone service allows interconnection to the telephone network
Early Mobile Telephone System Architecture
Traditional mobile service was structured in a fashion similar to television broadcasting: One very powerful transmitter located at the highest spot in an area would broadcast in a radius of up to 50 kilometers. The cellular concept structured the mobile telephone network in a different way. Instead of using one powerful transmitter, many low-power transmitters were placed throughout a coverage area. For example, by dividing a metropolitan region into one hundred different areas (cells) with low-power transmitters using 12 conversations (channels) each, the system capacity theoretically could be increased from 12 conversations—or voice channels using one powerful transmitter—to 1,200 conversations (channels) using one hundred low-power transmitters.
2. Mobile Telephone System Using the Cellular Concept

Interference problems caused by mobile units using the same channel in adjacent areas proved that all channels could not be reused in every cell. Areas had to be skipped before the same channel could be reused. Even though this affected the efficiency of the original concept, frequency reuse was still a viable solution to the problems of mobile telephony systems.
Engineers discovered that the interference effects were not due to the distance between areas, but to the ratio of the distance between areas to the transmitter power (radius) of the areas. By reducing the radius of an area by 50 percent, service providers could increase the number of potential customers in an area fourfold. Systems based on areas with a one-kilometer radius would have one hundred times more channels than systems with areas 10 kilometers in radius. Speculation led to the conclusion that by reducing the radius of areas to a few hundred meters, millions of calls could be served.

The cellular concept employs variable low-power levels, which allow cells to be sized according to the subscriber density and demand of a given area. As the population grows, cells can be added to accommodate that growth. Frequencies used in one cell cluster can be reused in other cells. Conversations can be handed off from cell to cell to maintain constant phone service as the user moves between cells.
The cellular radio equipment (base station) can communicate with mobiles as long as they are within range. Radio energy dissipates over distance, so the mobiles must be within the operating range of the base station. Like the early mobile radio system, the base station communicates with mobiles via a channel. The channel is made of two frequencies, one for transmitting to the base station and one to receive information from the base station.




COMPONENTS FOR COMMUNICATIONS SATELLITES


Basic Communications Satellite Components

Every communications satellite in its simplest form (whether low earth or geosynchronous) involves the transmission of information from an originating ground station to the satellite (the uplink), followed by a retransmission of the information from the satellite back to the ground (the downlink). The downlink may either be to a select number of ground stations or it may be broadcast to everyone in a large area. Hence the satellite must have a receiver and a receive antenna, a transmitter and a transmit antenna, some method for connecting the uplink to the downlink for retransmission, and prime electrical power to run all of the electronics. The exact nature of these components will differ, depending on the orbit and the system architecture, but every communications satellite must have these basic components.

Transmitters

The amount of power which a satellite transmitter needs to send out depends a great deal on whether it is in low earth orbit or in geosynchronous orbit. This is a result of the fact that the geosynchronous satellite is at an altitude of 22,300 miles, while the low earth satellite is only a few hundred miles. The geosynchronous satellite is nearly 100 times as far away as the low earth satellite. We can show fairly easily that this means the higher satellite would need almost 10,000 times as much power as the low-orbiting one, if everything else were the same. (Fortunately, of course, we change some other things so that we don't need 10,000 times as much power.)
For either geosynchronous or low earth satellites, the power put out by the satellite transmitter is really puny compared to that of a terrestrial radio station. Your favorite rock station probably boasts of having many kilowatts of power. By contrast, a 200 watt transmitter would be very strong for a satellite.
Antennas

One of the biggest differences between a low earth satellite and a geosynchronous satellite is in their antennas. As mentioned earlier, the geosynchronous satellite would require nearly 10,000 times more transmitter power, if all other components were the same. One of the most straightforward ways to make up the difference, however, is through antenna design. Virtually all antennas in use today radiate energy preferentially in some direction. An antenna used by a commercial terrestrial radio station, for example, is trying to reach people to the north, south, east, and west. However, the commercial station will use an antenna that radiates very little power straight up or straight down. Since they have very few listeners in those directions (except maybe for coal miners and passing airplanes) power sent out in those directions would be totally wasted.The communications satellite carries this principle even further. All of its listeners are located in an even smaller area, and a properly designed antenna will concentrate most of the transmitter power within that area, wasting none in directions where there are no listeners. The easiest way to do this is simply to make the antenna larger. Doubling the diameter of a reflector antenna (a big "dish") will reduce the area of the beam spot to one fourth of what it would be with a smaller reflector. We describe this in terms of the gain of the antenna. Gain simply tells us how much more power will fall on 1 square centimeter (or square meter or square mile) with this antenna than would fall on that same square centimeter (or square meter or square mile) if the transmitter power were spread uniformly (isotropically) over all directions. The larger antenna described above would have four times the gain of the smaller one. This is one of the primary ways that the geosynchronous satellite makes up for the apparently larger transmitter power which it requires.
One other big difference between the geosynchronous antenna and the low earth antenna is the difficulty of meeting the requirement that the satellite antennas always be "pointed" at the earth. For the geosynchronous satellite, of course, it is relatively easy. As seen from the earth station, the satellite never appears to move any significant distance. As seen from the satellite, the earth station never appears to move. We only need to maintain the orientation of the satellite. The low earth orbiting satellite, on the other hand, as seen from the ground is continuously moving. It zooms across our field of view in 5 or 10 minutes.Likewise, the earth station, as seen from the satellite is a moving target. As a result, both the earth station and the satellite need some sort of tracking capability which will allow its antennas to follow the target during the time that it is visible. The only alternative is to make that antenna beam so wide that the intended receiver (or transmitter) is always within it. Of course, making the beam spot larger decreases the antenna gain as the available power is spread over a larger area , which in turn increases the amount of power which the transmitter must provide.
Power Generation
You might wonder why we don't actually use transmitters with thousands of watts of power, like your favorite radio station does. You might also have figured out the answer already. There simply isn't that much power available on the spacecraft. There is no line from the power company to the satellite. The satellite must generate all of its own power. For a communications satellite, that power usually is generated by large solar panels covered with solars cells - just like the ones in your solar-powered calculator. These convert sunlight into electricity. Since there is a practical limit to the how big a solar panel can be, there is also a practical limit to the amount of power which can generated. In addition, unfortunately, transmitters are not very good at converting input power to radiated power so that 1000 watts of power into the transmitter will probably result in only 100 or 150 watts of power being radiated. We say that transmitters are only 10 or 15% efficient. In practice the solar cells on the most "powerful" satellites generate only a few thousand watts of electrical power.Satellites must also be prepared for those periods when the sun is not visible, usually because the earth is passing between the satellite and the sun. This requires that the satellite have batteries on board which can supply the required power for the necessary time and then recharge by the time of the next period of eclipse.

COMMUNICATIONS SATELLITES







Why Satellites for Communications

By the end of World War II, the world had had a taste of "global communications." Edward R. Murrow's radio broadcasts from London had electrified American listeners. We had, of course, been able to do transatlantic telephone calls and telegraph via underwater cables for almost 50 years. At exactly this time, however, a new phenomenon was born. The first television programs were being broadcast, but the greater amount of information required to transmit television pictures required that they operate at much higher frequencies than radio stations. For example, the very first commercial radio station (KDKA in Pittsburgh) operated ( and still does) at 1020 on the dial. This number stood for 1020 KiloHertz - the frequency at which the station transmitted. Frequency is simply the number of times that an electrical signal "wiggles" in 1 second. Frequency is measured in Hertz. One Hertz means that the signal wiggles 1 time/second. A frequency of 1020 kiloHertz means that the electrical signal from that station wiggles 1,020,000 times in one second.

Television signals, however required much higher frequencies because they were transmitting much more information - namely the picture. A typical television station (channel 7 for example) would operate at a frequency of 175 MHz. As a result, television signals would not propagate the way radio signals did.Both radio and television frequency signals can propagate directly from transmitter to receiver. This is a very dependable signal, but it is more or less limited to line of sight communication. The mode of propagation employed for long distance (1000s of miles) radio communication was a signal which traveled by bouncing off the charged layers of the atmosphere (ionosphere) and returning to earth. The higher frequency television signals did not bounce off the ionosphere and as a result disappeared into space in a relatively short distance.

Consequently, television reception was a "line-of-sight" phenomenon, and television broadcasts were limited to a range of 20 or 30 miles or perhaps across the continent by coaxial cable. Transatlantic broadcasts were totally out the question. If you saw European news events on television, they were probably delayed at least 12 hours, and involved the use of the fastest airplane available to carry conventional motion pictures back to the U.S. In addition, of course, the appetite for transatlantic radio and telephone was increasing rapidly. Adding this increase to the demands of the new television medium, existing communications capabilities were simply not able to handle all of the requirements. By the late 1950s the newly developed artificial satellites seemed to offer the potential for satisfying many of these needs.

Low Earth-Orbiting Communications Satellites

In 1960, the simplest communications satellite ever conceived was launched. It was called Echo, because it consisted only of a large (100 feet in diameter) aluminized plastic balloon. Radio and TV signals transmitted to the satellite would be reflected back to earth and could be received by any station within view of the satellite.

Unfortunately, in its low earth orbit, the Echo satellite circled the earth every ninety minutes. This meant that although virtually everybody on earth would eventually see it, no one person, ever saw it for more than 10 minutes or so out of every 90 minute orbit. In 1958, the Score satellite had been put into orbit. It carried a tape recorder which would record messages as it passed over an originating station and then rebroadcast them as it passed over the destination. Once more, however, it appeared only briefly every 90 minutes - a serious impediment to real communications. In 1962, NASA launched the Telstar satellite for AT&T.


Telstar's orbit was such that it could "see" Europe" and the US simultaneously during one part of its orbit. During another part of its orbit it could see both Japan and the U.S. As a result, it provided real- time communications between the United States and those two areas - for a few minutes out of every hour.

Geosynchronous Communications Satellites

The solution to the problem of availability, of course, lay in the use of the geosynchronous orbit. In 1963, the necessary rocket booster power was available for the first time and the first geosynchronous satellite , Syncom 2, was launched by NASA. For those who could "see" it, the satellite was available 100% of the time, 24 hours a day. The satellite could view approximately 42% of the earth. For those outside of that viewing area, of course, the satellite was NEVER available.
However, a system of three such satellites, with the ability to relay messages from one to the other could interconnect virtually all of the earth except the polar regions. The one disadvantage (for some purposes) of the geosynchronous orbit is that the time to transmit a signal from earth to the satellite and back is approximately ? of a second - the time required to travel 22,000 miles up and 22,000 miles back down at the speed of light. For telephone conversations, this delay can sometimes be annoying. For data transmission and most other uses it is not significant. In any event, once Syncom had demonstrated the technology necessary to launch a geosynchronous satellite, a virtual explosion of such satellites followed.Today, there are approximately 150 communications satellites in orbit, with over 100 in geosynchronous orbit. One of the biggest sponsors of satellite development was Intelsat, an internationally-owned corporation which has launched 8 different series of satellites (4 or 5 of each series) over a period of more than 30 years. Spreading their satellites around the globe and making provision to relay from one satellite to another, they made it possible to transmit 1000s of phone calls between almost any two points on the earth. It was also possible for the first time, due to the large capacity of the satellites, to transmit live television pictures between virtually any two points on earth. By 1964 (if you could stay up late enough), you could for the first time watch the Olympic games live from Tokyo. A few years later of course you could watch the Vietnam war live on the evening news.

Microwave Line-of-Sight Systems

What Are Microwaves

Microwave frequencies range from 300 MHz to 30 GHz, corresponding to wavelengths of 1 meter to 1 cm. These frequencies are useful for terrestrial and satellite communication systems, both fixed and mobile. In the case of point-to-point radio links, antennas are placed on a tower or other tall structure at sufficient height to provide a direct, unobstructed line-of-sight (LOS) path between the transmitter and receiver sites. In the case of mobile radio systems, a single tower provides point-to-multipoint coverage, which may include both LOS and non-LOS paths. LOS microwave is used for both short- and long-haul telecommunications to complement wired media such as optical transmission systems. Applications include local loop, cellular back haul, remote and rugged areas, utility companies, and private carriers. Early applications of LOS microwave were based on analog modulation techniques, but today’s microwave systems used digital modulation for increased capacity and performance.

Standards

In the United States, radio channel assignments are controlled by the Federal Communications Commission (FCC) for commercial carriers and by the National Telecommunications and Information Administration (NTIA) for government systems. The FCC's regulations for use of spectrum establish eligibility rules, permissible use rules, and technical specifications. FCC regulatory specifications are intended to protect against interference and to promote spectral efficiency. Equipment type acceptance regulations include transmitter power limits, frequency stability, out-of-channel emission limits, and antenna directivity.The International Telecommunications Union Radio Committee (ITU-R) issues recommendations on radio channel assignments for use by national frequency allocation agencies. Although the ITU-R itself has no regulatory power, it is important to realize that ITU-R recommendations are usually adopted on a worldwide basis.

Historical Milestones

1950s Analog Microwave Radio
Used FDM/FM in 4, 6, and 11 GHz bands for long-haul
Introduced into telephone networks by Bell System
1970s Digital Microwave Radio
Replaced analog microwaves
Became bandwidth efficient with introduction of advanced modulation techniques (QAM and TCM)
Adaptive equalization and diversity became necessary for high data rates
1990s and 2000s
Digital microwave used for cellular back-haul
Change in MMDS and ITFS spectrum to allow wireless cable and point-to-multipoint broadcasting
IEEE 802.16 standard or WiMax introduces new application for microwave radio
Wireless local and metro area networks capitalize on benefits of microwave radio
Principles and Operation
Microwave Link Structure. The basic components required for operating a radio link are the transmitter, towers, antennas, and receiver. Transmitter functions typically include multiplexing, encoding, modulation, up-conversion from baseband or intermediate frequency (IF) to radio frequency (RF), power amplification, and filtering for spectrum control. Receiver functions include RF filtering, down-conversion from RF to IF, amplification at IF, equalization, demodulation, decoding, and demultiplexing. To achieve point-to-point radio links, antennas are placed on a tower or other tall structure at sufficient height to provide a direct, unobstructed line-of-sight (LOS) path between the transmitter and receiver sites.
Microwave System Design. The design of microwave radio systems involves engineer¬ing of the path to evaluate the effects of prop¬agation on performance, development of a frequency allocation plan, and proper selection of radio and link components. This design process must ensure that outage requirements are met on a per link and system basis. The frequency allocation plan is based on four elements: the local fre¬quency regulatory authority requirements, selected radio transmitter and receiver characteristics, antenna characteristics, and potential intrasystem and intersystem RF interference. Microwave Propagation Characteristics. Various phenomena associated with propagation, such as multipath fading and interference, affect microwave radio performance. The modes of propagation between two radio antennas may include a direct, line-of-sight (LOS) path but also a ground or surface wave that parallels the earth's surface, a sky wave from signal components reflected off the troposphere or ionosphere, a ground reflected path, and a path diffracted from an obstacle in the terrain. The presence and utility of these modes depend on the link geometry, both distance and terrain between the two antennas, and the operating frequency. For frequencies in the microwave (~2 – 30 GHz) band, the LOS propagation mode is the predominant mode available for use; the other modes may cause interference with the stronger LOS path. Line-of-sight links are limited in distance by the curvature of the earth, obstacles along the path, and free-space loss. Average distances for conservatively designed LOS links are 25 to 30 mi, although distances up to 100 mi have been used. For frequencies below 2 GHz, the typical mode of propagation includes non-line-of-sight (NLOS) paths, where refraction, diffraction, and reflection may extend communications coverage beyond LOS distances. The performance of both LOS and NLOS paths is affected by several phenomena, including free-space loss, terrain, atmosphere, and precipitation.

Strengths and Weaknesses

Strengths

Adapts to difficult terrain
Loss versus distance (D) = Log D (not linear)
Flexible channelization
Relatively short installation time
Can be transportable
Cost usually less than cable
No “back-hoe” fading

Weaknesses

Paths could be blocked by buildings
Spectral congestion
Interception possible
Possible regulatory delays
Sites could be difficult to maintain
Towers need periodic maintenance
Atmospheric fading

Business Implications and Applications

The tremendous growth in wireless services is made possible today through the use of microwaves for backhaul in wireless and mobile networks and for point-to-multipoint networks. Towers can be used for both mobile, e.g. cellular, and point-to-point applications, enhancing the potential for microwave as wireless systems grow. Increases in spectrum allocations and advances in spectrum efficiency through technology have created business opportunities in the field of microwave radio. Telecommunications carriers, utility companies, and private carriers all use microwave to complement wired and optical networks.

Free-space optical communication




In telecommunications, Free Space Optics (FSO) is an optical communication technology that uses light propagating in free space to transmit data between two points. The technology is useful where the physical connections by the means of fibre optic cables are impractical due to high costs or other considerations.


History

Optical communications, in various forms, have been used for thousands of years. The Ancient Greeks polished their shields to send signals during battle. In the modern era, semaphores and wireless solar telegraphs called heliographs were developed, using coded signals to communicate with their recipients.
In 1880 Alexander Graham Bell and his assistant Sarah Orr created the photophone, which Bell considered his most important invention. The device allowed for the transmission of sound on a beam of light. On June 3, 1880, Bell conducted the world's first wireless telephone transmission between two building rooftops.
The invention of lasers in the 1960s revolutionized free space optics. Military organizations were particularly interested and boosted their development. However the technology lost market momentum when the installation of optical fiber networks for civilian uses was at its peak.

Usage and technologies


Free Space Optics are additionally used for communications between spacecraft. The optical links can be implemented using infrared laser light, although low-data-rate communication over short distances is possible using LEDs. Maximum range for terrestrial links is in the order of 2-3 km,but the stability and quality of the link is highly dependent on atmospheric factors such as rain, fog, dust and heat. Amateur radio operators have achieved significantly farther distances (173 miles in at least one occasion) using incoherent sources of light from high-intensity LEDs. However, the low-grade equipment used limited bandwidths to about 4kHz. In outer space, the communication range of free-space optical communication is currently in the order of several thousand kilometers, but has the potential to bridge interplanetary distances of millions of kilometers, using optical telescopes as beam expanders. IrDA is also a very simple form of free-space optical communications.


Applications


Typically scenarios for use are:
LAN-to-LAN connections on campuses at Fast Ethernet or Gigabit Ethernet speeds.
LAN-to-LAN connections in a city. example, Metropolitan area network.
To cross a public road or other barriers which the sender and receiver do not own.
Speedy service delivery of high-bandwidth access to optical fiber networks.
Converged Voice-Data-Connection.
Temporary network installation (for events or other purposes).
Reestablish high-speed connection quickly (disaster recovery).
As an alternative or upgrade add-on to existing wireless technologies.
As a safety add-on for important fiber connections (redundancy).
For communications between spacecraft, including elements of a satellite constellation.
For inter- and intra-chip communication.
The light beam can be very narrow, which makes FSO hard to intercept, improving security. In any case, it is comparatively easy to encrypt any data traveling across the FSO connection for additional security. FSO provides vastly improved EMI behavior using light instead of microwaves.


Typically scenarios for use are:
LAN-to-LAN connections on campuses at Fast Ethernet or Gigabit Ethernet speeds.
LAN-to-LAN connections in a city. example, Metropolitan area network.
To cross a public road or other barriers which the sender and receiver do not own.
Speedy service delivery of high-bandwidth access to optical fiber networks.
Converged Voice-Data-Connection.
Temporary network installation (for events or other purposes).
Reestablish high-speed connection quickly (disaster recovery).
As an alternative or upgrade add-on to existing wireless technologies.
As a safety add-on for important fiber connections (redundancy).
For communications between spacecraft, including elements of a satellite constellation.
For inter- and intra-chip communication.
The light beam can be very narrow, which makes FSO hard to intercept, improving security. In any case, it is comparatively easy to encrypt any data traveling across the FSO connection for additional security. FSO provides vastly improved EMI behavior using light instead of microwaves.


Advantages
Ease of deployment
License-free operation
High bit rates
Low bit error rates
Immunity to electromagnetic interference
Full duplex operation
Protocol transparency
Very secure due to the high directionality and narrowness of the beam(s)
No Fresnel zone necessary
Disadvantages

For terrestrial applications, the principal limiting factors are:
Beam dispersion
Atmospheric absorption
Rain
Fog (10..~100 dB/km attenuation)
Snow
Scintillation
Background light
Shadowing
Pointing stability in wind
Pollution / smog
If the sun goes exactly behind the transmitter, it can swamp the signal.
These factors cause an attenuated receiver signal and lead to higher bit error ratio (BER). To overcome these issues, vendors found some solutions, like multi-beam or multi-path architectures, which use more than one sender and more than one receiver. Some state-of-the-art devices also have larger fade margin (extra power, reserved for rain, smog, fog). To keep an eye-safe environment, good FSO systems have a limited laser power density and support laser classes 1 or 1M. Atmospheric and fog attenuation, which are exponential in nature, limit practical range of FSO devices to several kilometres.




Optical communication


Optical communication is any form of telecommunication that uses light as the transmission medium.
An optical communication system consists of a transmitter, which encodes a message into an optical signal, a channel, which carries the signal to its destination, and a receiver, which reproduces the message from the received optical signal.


Forms of optical communication


There are many forms of non-technological optical communication, including body language and sign language.
Techniques such as semaphore lines, ship flags, smoke signals, and beacon fires were the earliest form of technological optical communication.
The heliograph uses a mirror to reflect sunlight to a distant observer. By moving the mirror the distant observer sees flashes of light that can be used to send a prearranged signaling code. Navy ships often use a signal lamp to signal in Morse code in a similar way.
Distress flares are used by mariners in emergencies, while lighthouses and navigation lights are used to communicate navigation hazards.
Aircraft use the landing lights at airports to land safely, especially at night. Aircraft landing on an aircraft carrier use a similar system to land correctly on the carrier deck. The light systems communicate the correct position of the aircraft relative to the best landing glideslope.
Optical fiber is the most common medium for modern digital optical communication.
Free-space optical communication is also used today in a variety of applications.


FCD-155E


RAD's FCD-155E Ethernet over SDH/SONET add/drop multiplexer transports next-generation Ethernet and TDM traffic over STM-1/OC-3 fiber or copper links.
The FCD-155E add/drop multiplexer also supports E1, T1, E3, and T3 services. The traffic is mapped into the SDH/SONET frame and can be terminated at any point on the network.Used as an add/drop multiplexer on the SDH/SONET ring (or as a terminal multiplexer at the remote site), the FCD-155E supports generic framing procedure (GFP) and virtual concatenation, enabling IP channel bandwidth configuration in increments of 2 Mbps (VC-12), 1.5 Mbps (VT 1.5) or 50 Mbps (VC-3 or STS-1) at up to 100 Mbps wire speed.
Carriers and service providers will deploy the product to leverage optical bandwidth for revenue-generating Ethernet services, while enterprises, utilities and campuses can use the FCD-155E to provide LAN services over existing fiber optic infrastructure.


SDH SONET ADMs


RAD’s SDH or SONET add/drop multiplexers transport TDM and next-generation Ethernet traffic over the SDH or SONET ring, leveraging the SDH or SONET infrastructure to provide Internet access and LAN connectivity in addition to traditional E1, T1 tributary services.
Carriers and service providers deploy SDH or SONET ADMs to leverage optical bandwidth for revenue-generating Ethernet services, while enterprises, utilities and campuses deploy them to provide LAN services over fiber optic infrastructure.
Service providers and end users benefit from the SDH/SONET add/drop multiplexer's managed Ethernet bandwidth utilization, which cuts capital expenditures and operating costs while enabling a larger range of services.

FCD-155
STM-1/OC-3 Terminal Multiplexer
RAD's FCD-155 STM-1/OC-3 terminal multiplexer transports Ethernet traffic over SDH or SONET networks, enabling carriers and service providers to launch next-generation services while continuing to support E1, T1, E3 or T3 traffic.
Installed at the customer site, the FCD-155 Ethernet over SDH or SONET terminal multiplexer supports generic framing procedure (GFP) and virtual concatenation, enabling IP channel bandwidth configuration in increments of 2 Mbps (VC-12), 1.5 Mbps (VT 1.5) or 48.384 Mbps (VC-3 or STS-1) at up to 100 Mbps wire speed. The FCD-155 is widely deployed by carriers and service providers to leverage their optical bandwidth for revenue-generating Ethernet services, while enterprises, utilities and campuses use the FCD-155 to provide LAN services over existing fiber optic infrastructures. By using the FCD-155, both service providers and customers benefit from better bandwidth utilization, service granularity and a larger selection of services.
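To make the granularity figures above concrete, here is a minimal Python sketch (not from the source) that estimates how many virtually concatenated containers are needed for a given Ethernet service rate, using the nominal per-container rates quoted in the product description; real provisioning uses the exact container payload rates and is subject to the multiplexer's limits.

import math

# Nominal per-container increments quoted above (Mbps); exact payload rates differ slightly.
CONTAINER_MBPS = {"VC-12": 2.0, "VT1.5": 1.5, "VC-3/STS-1": 48.384}

def containers_needed(service_mbps, container):
    return math.ceil(service_mbps / CONTAINER_MBPS[container])

print(containers_needed(100, "VC-12"))      # 50 containers of ~2 Mbps for a 100 Mbps Ethernet service
print(containers_needed(10, "VT1.5"))       # 7 containers of ~1.5 Mbps for a 10 Mbps service
print(containers_needed(100, "VC-3/STS-1")) # 3 containers of ~48 Mbps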

SONET / SDH Technical Summary

What is SONET & SDH?
SONET and SDH are a set of related standards for synchronous data transmission over fiber optic networks. SONET is short for Synchronous Optical NETwork and SDH is an acronym for Synchronous Digital Hierarchy. SONET is the United States version of the standard published by the American National Standards Institute (ANSI). SDH is the international version of the standard published by the International Telecommunications Union (ITU).
The SONET/SDH Digital Hierarchy
The following table lists the hierarchy of the most common SONET/SDH data rates:

Optical Level   Electrical Level   Line Rate (Mbps)   Payload Rate (Mbps)   Overhead Rate (Mbps)   SDH Equivalent
OC-1            STS-1              51.840             50.112                1.728                  -
OC-3            STS-3              155.520            150.336               5.184                  STM-1
OC-12           STS-12             622.080            601.344               20.736                 STM-4
OC-48           STS-48             2488.320           2405.376              82.944                 STM-16
OC-192          STS-192            9953.280           9621.504              331.776                STM-64
OC-768          STS-768            39813.120          38486.016             1327.104               STM-256

Other rates (OC-9, OC-18, OC-24, OC-36, OC-96) are referenced in some of the standards documents but were never widely implemented. It is possible that other higher rates (e.g. OC-3072) may be defined in the future.
The "line rate" refers to the raw bit rate carried over the optical fiber. A portion of the bits transferred over the line are designated as "overhead". The overhead carries information that provides OAM&P (Operations, Administration, Maintenance, and Provisioning) capabilities such as framing, multiplexing, status, trace, and performance monitoring. The "line rate" minus the "overhead rate" yields the "payload rate" which is the bandwidth available for transferring user data such as packets or ATM cells.
The SONET/SDH level designations sometimes include a "c" suffix (such as "OC-48c"). The "c" suffix indicates a "concatenated" or "clear" channel. This implies that the entire payload rate is available as a single channel of communications (i.e. the entire payload rate may be used by a single flow of cells or packets). The opposite of concatenated or clear channel is "channelized". In a channelized link the payload rate is subdivided into multiple fixed rate channels. For example, the payload of an OC-48 link may be subdivided into four OC-12 channels. In this case the data rate of a single cell or packet flow is limited by the bandwidth of an individual channel.
ANSI SONET Standards
The American National Standards Institute (ANSI) coordinates and approves SONET standards. The standards are actually developed by Committee T1, which is sponsored by the Alliance for Telecommunications Industry Solutions (ATIS) and accredited by ANSI to create network interconnection and interoperability standards for the United States. T1X1 and T1M1 are the primary T1 Technical Subcommittees responsible for SONET. T1X1 deals with "digital hierarchy and synchronization". T1M1 deals with "internetworking operations, administration, maintenance, and provisioning (OAM&P)". Listed below are some of the most commonly cited SONET standards available from ANSI.
ANSI T1.105: SONET - Basic Description including Multiplex Structure, Rates and Formats
ANSI T1.105.01: SONET - Automatic Protection Switching
ANSI T1.105.02: SONET - Payload Mappings
ANSI T1.105.03: SONET - Jitter at Network Interfaces
ANSI T1.105.03a: SONET - Jitter at Network Interfaces - DS1 Supplement
ANSI T1.105.03b: SONET - Jitter at Network Interfaces - DS3 Wander Supplement
ANSI T1.105.04: SONET - Data Communication Channel Protocol and Architectures
ANSI T1.105.05: SONET - Tandem Connection Maintenance
ANSI T1.105.06: SONET - Physical Layer Specifications
ANSI T1.105.07: SONET - Sub-STS-1 Interface Rates and Formats Specification
ANSI T1.105.09: SONET - Network Element Timing and Synchronization
ANSI T1.119: SONET - Operations, Administration, Maintenance, and Provisioning (OAM&P) - Communications
ANSI T1.119.01: SONET: OAM&P Communications Protection Switching Fragment
ITU-T SDH Standards
The International Telecommunications Union (ITU) coordinates the development of SDH standards. ITU was formerly known as the CCITT. It is sponsored by the United Nations and coordinates the development of telecommunications standards for the entire world. Listed below are some of the most commonly cited SDH standards available from ITU.
ITU-T G.707: Network Node Interface for the Synchronous Digital Hierarchy (SDH)
ITU-T G.781: Structure of Recommendations on Equipment for the Synchronous Digital Hierarchy (SDH)
ITU-T G.782: Types and Characteristics of Synchronous Digital Hierarchy (SDH) Equipment
ITU-T G.783: Characteristics of Synchronous Digital Hierarchy (SDH) Equipment Functional Blocks
ITU-T G.803: Architecture of Transport Networks Based on the Synchronous Digital Hierarchy (SDH)
Telcordia Documents
Telcordia Technologies (formerly Bell Communications Research, or "Bellcore") has issued over 50 documents that relate to SONET. The document most commonly cited is listed below. Telcordia documents are expensive; the document below was priced at US $2,250 at last check.
GR-253-CORE: SONET Transport Systems: Common Generic Criteria
SONET Interoperability Forum (SIF)
The SONET Interoperability Forum (SIF) was formed in 1994 to identify SONET interoperability issues. As solutions are defined, reviewed, and approved, they are published as SIF Approved Documents.
ATM over SONET
The following standard from ITU defines the mapping of an ATM cell stream into an SDH frame structure.
ITU-T I.432: B-ISDN User-Network Interface - Physical Layer Specification
Similar specifications are available from the ATM Forum with the following being one example:
622.08 Mb/s Physical Layer Specification
Packet Over SONET (POS)
The Internet Engineering Task Force (IETF) has released RFCs that describe the use of Point-to-Point Protocol for transferring IP traffic natively over SONET and SDH circuits. These RFCs were developed by the PPP Extensions Working Group of IETF.
IETF RFC2615: PPP over SONET/SDH
IETF RFC1661: The Point-to-Point Protocol (PPP)
IETF RFC1662: PPP in HDLC-like Framing

Packet over SONET/SDH

Packet over SONET/SDH, abbreviated POS, is a communications protocol for transmitting packets in the form of the Point-to-Point Protocol (PPP) over SDH or SONET, which are both standard protocols for communicating digital information using lasers or light-emitting diodes (LEDs) over optical fibre at high line rates. POS is defined by RFC 2615 as PPP over SONET/SDH. PPP is the Point-to-Point Protocol, which was designed as a standard method of communicating over point-to-point links. Since SONET/SDH utilises point-to-point circuits, PPP is well suited for use over these links. Scrambling is performed during insertion of the PPP packets into the SONET/SDH frame.
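RFC 2615 specifies a self-synchronous x^43 + 1 scrambler for the payload, in which each transmitted bit is the data bit XORed with the scrambled bit sent 43 bit-times earlier. The following minimal Python sketch (not from the source, and operating on a plain list of bits rather than real SONET framing) illustrates that recurrence; the descrambler is the same operation driven by the received bit stream.

from collections import deque

def scramble(bits):
    # x^43 + 1 self-synchronous scrambler: out(t) = in(t) XOR out(t - 43).
    history = deque([0] * 43, maxlen=43)   # last 43 scrambled bits (all-zero start state assumed)
    out = []
    for b in bits:
        s = b ^ history[0]                 # history[0] is the bit sent 43 positions earlier
        history.append(s)                  # maxlen drops the oldest bit automatically
        out.append(s)
    return out

def descramble(bits):
    # Inverse operation: data(t) = received(t) XOR received(t - 43).
    history = deque([0] * 43, maxlen=43)
    out = []
    for s in bits:
        out.append(s ^ history[0])
        history.append(s)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 10
assert descramble(scramble(data)) == data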

Applications of POS

The most important application of POS is to support sending of IP packets across Wide Area Networks. Large amounts of traffic on the Internet are carried over POS links.
POS is also one of the link layers used by the IEEE 802.17 Resilient Packet Ring standard.

History of POS

Cisco was involved in making POS an important Wide Area Network protocol. PMC-Sierra produced an important series of early semiconductor devices which implemented POS.

Details about the name "POS"

POS is a double nested abbreviation as the S in POS stands for "SONET/SDH", which itself stands for "Synchronous Optical Network/Synchronous Digital Hierarchy".
Complementary Interfaces
The System Packet Interface series of standards from the Optical Internetworking Forum including SPI-4.2 and SPI-3 and their predecessors PL-4 and PL-3 are commonly used as the interface between packet processing devices and framer devices that implement POS. The acronym PL-4 means POS-PHY Layer 4.

Framing structure

The frame consists of two parts, the transport overhead and the path virtual envelope.
Transport overhead
The transport overhead is used for signaling and measuring transmission error rates, and is composed as follows:
Section overhead - called RSOH (regenerator section overhead) in SDH terminology: 27 octets (in an STM-1/STS-3 frame) containing information about the frame structure required by the terminal equipment.
Line overhead - called MSOH (multiplex section overhead) in SDH: 45 octets (in an STM-1/STS-3 frame) containing information about alarms, maintenance and error correction as may be required within the network.
Pointer – points to the location of the J1 byte (the start of the path overhead) within the payload.
Path virtual envelope
Data transmitted from end to end is referred to as path data. It is composed of two components:
Path overhead (POH): 9 octets used for end-to-end signaling and error measurement.
Payload: user data (774 bytes for STM-0/STS-1, or 2340 octets for STM-1/STS-3c)
For STS-1, the payload is referred to as the synchronous payload envelope (SPE), which in turn has 18 stuffing bytes, leading to the STS-1 payload capacity of 756 bytes.
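The byte counts above follow directly from the 90-column by 9-row STS-1 frame sent 8,000 times per second; the short Python sketch below (an illustrative calculation, not part of any standard text) walks through that arithmetic.

ROWS, COLS, FRAMES_PER_SEC = 9, 90, 8000

frame_bytes = ROWS * COLS                      # 810 bytes per STS-1 frame
transport_overhead = ROWS * 3                  # 3 columns of section + line overhead = 27 bytes
spe_bytes = frame_bytes - transport_overhead   # synchronous payload envelope = 783 bytes
path_overhead = ROWS * 1                       # 1 column of path overhead = 9 bytes
fixed_stuff = ROWS * 2                         # 2 columns of fixed stuff = 18 bytes
payload_bytes = spe_bytes - path_overhead - fixed_stuff   # 756 bytes of payload capacity

print(payload_bytes)                                      # 756
print(payload_bytes * 8 * FRAMES_PER_SEC / 1e6)           # 48.384 Mbps, enough for a 44.736 Mbps DS3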
The STS-1 payload is designed to carry a full PDH DS3 frame. When the DS3 enters a SONET network, path overhead is added, and that SONET network element (NE) is said to be a path generator and terminator. The SONET NE is said to be line terminating if it processes the line overhead. Note that wherever the line or path is terminated, the section is terminated also. SONET regenerators terminate the section but not the paths or line.
An STS-1 payload can also be subdivided into 7 VTGs (virtual tributary groups). Each VTG can then be subdivided into 4 VT1.5 signals, each of which can carry a PDH DS1 signal. A VTG may instead be subdivided into 3 VT2 signals, each of which can carry a PDH E1 signal. The SDH equivalent of a VTG is a TUG-2; VT1.5 is equivalent to VC-11, and VT2 is equivalent to VC-12.
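Worked out, this gives the familiar tributary counts: 7 VTGs × 4 VT1.5 = 28 DS1s per STS-1, or 7 VTGs × 3 VT2 = 21 E1s per STS-1 (and the VTG types can be mixed within one STS-1).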
Three STS-1 signals may be multiplexed by time-division multiplexing to form the next level of the SONET hierarchy, the OC-3 (STS-3), running at 155.52 Mbps. The multiplexing is performed by interleaving the bytes of the three STS-1 frames to form the STS-3 frame, containing 2,430 bytes and transmitted in 125 microseconds.
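As a rough illustration of the byte-interleaving step (ignoring pointer processing and the fact that the overhead columns of a real STS-3 are not simply copies of three STS-1 overheads), the following Python sketch (not from the source) shows how three 810-byte STS-1 frames round-robin into one 2,430-byte STS-3 frame.

def byte_interleave(frames):
    # Round-robin one byte at a time from each constituent frame: A1 B1 C1 A2 B2 C2 ...
    assert all(len(f) == 810 for f in frames)            # each STS-1 frame is 810 bytes
    return bytes(b for trio in zip(*frames) for b in trio)

sts1_a = bytes(810)          # placeholder frames; real frames carry overhead plus SPE
sts1_b = bytes([1]) * 810
sts1_c = bytes([2]) * 810
sts3 = byte_interleave([sts1_a, sts1_b, sts1_c])
print(len(sts3), sts3[:6])   # 2430 bytes, starting 00 01 02 00 01 02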
Higher speed circuits are formed by successively aggregating multiples of slower circuits, their speed always being immediately apparent from their designation. For example, four STS-3 or AU4 signals can be aggregated to form a 622.08 Mbps signal designated as OC-12 or STM-4.
The highest rate that is commonly deployed is the OC-192 or STM-64 circuit, which operates at a rate of just under 10 Gbps. Speeds beyond 10 Gbps are technically viable and are under evaluation. (A few vendors are now offering STM-256 rates, with speeds of nearly 40 Gbps.) Where fiber exhaustion is a concern, multiple SONET signals can be transported over multiple wavelengths on a single fiber pair by means of wavelength-division multiplexing, including dense wavelength-division multiplexing (DWDM) and coarse wavelength-division multiplexing (CWDM). DWDM circuits are the basis for all modern transatlantic cable systems and other long-haul circuits.
SONET/SDH and relationship to 10 Gigabit Ethernet
Another circuit type amongst data networking equipment is 10 Gigabit Ethernet (10GbE). This is similar to the line rate of OC-192/STM-64 (9.953 Gbps). The Gigabit Ethernet Alliance created two 10 Gigabit Ethernet variants: a local area variant (LAN PHY), with a line rate of exactly 10,000,000 kbps and a wide area variant (WAN PHY), with the same line rate as OC-192/STM-64 (9,953,280 kbps). The Ethernet wide area variant encapsulates its data using a light-weight SDH/SONET frame so as to be compatible at low level with equipment designed to carry those signals.
However, 10 Gigabit Ethernet does not explicitly provide any interoperability at the bitstream level with other SDH/SONET systems. This differs from WDM system transponders, including both coarse and dense WDM systems (CWDM, DWDM), that currently support OC-192 SONET signals and can normally also carry thinly SONET-framed (WAN PHY) 10 Gigabit Ethernet.
SONET/SDH data rates
SONET/SDH Designations and bandwidths
SONET Optical Carrier Level   SONET Frame Format   SDH Level and Frame Format   Payload Bandwidth (kbps)   Line Rate (kbps)
OC-1                          STS-1                STM-0                        50,112                     51,840
OC-3                          STS-3                STM-1                        150,336                    155,520
OC-12                         STS-12               STM-4                        601,344                    622,080
OC-24                         STS-24               -                            1,202,688                  1,244,160
OC-48                         STS-48               STM-16                       2,405,376                  2,488,320
OC-192                        STS-192              STM-64                       9,621,504                  9,953,280
OC-768                        STS-768              STM-256                      38,486,016                 39,813,120
OC-3072                       STS-3072             STM-1024                     153,944,064                159,252,480
In the above table, payload bandwidth is the line rate less the bandwidth of the line and section overheads. User throughput must also deduct path overhead from this, but path overhead bandwidth is variable based on the types of cross-connects built across the optical system.
Note that the data rate progression starts at 155 Mb/s (OC-3/STM-1) and increases by multiples of 4. The only exception is OC-24, which is standardised in ANSI T1.105 but is not an SDH standard rate in ITU-T G.707. Other rates such as OC-9, OC-18, OC-36, OC-96, and OC-1536 are sometimes described, but it is not clear whether they were ever deployed; they are certainly not common and are not standards compliant.
The next logical rate of 160 Gb/s OC-3072/STM-1024 has not yet been standardised, due to the cost of high-rate transceivers and the ability to more cheaply multiplex wavelengths at 10 and 40 Gb/s.
Physical layer
The physical layer actually comprises a large number of layers within it, only one of which is the optical/transmission layer (which includes bitrates, jitter specifications, optical signal specifications and so on). The SONET and SDH standards come with a host of features for isolating and identifying signal defects and their origins.
SONET/SDH network management protocols
SONET equipment is often managed with the TL1 protocol. TL1 is a traditional telecom language for managing and reconfiguring SONET network elements. TL1 (or whatever command language a SONET Network Element utilizes) must be carried by other management protocols, including SNMP, CORBA and XML.
There are some features that are fairly universal in SONET Network Management. First of all, most SONET NEs have a limited number of management interfaces defined. These are:
Electrical interface. The electrical interface (often 50 Ω) sends SONET TL1 commands from a local management network physically housed in the Central Office where the SONET NE is located. This is for "local management" of that NE and, possibly, remote management of other SONET NEs.
Craft interface. Local "craftspersons" can access a SONET NE on a "craft port" and issue commands through a dumb terminal or terminal emulation program running on a laptop. This interface can also be hooked-up to a console server, allowing for remote out-of-band management and logging.
SONET and SDH have dedicated data communication channels (DCC)s within the section and line overhead for management traffic. Generally, section overhead (regenerator section in SDH) is used. According to ITU-T G.7712, there are three modes used for management:
IP-only stack, using PPP as data-link
OSI-only stack, using LAP-D as data-link
Dual (IP+OSI) stack using PPP or LAP-D with tunneling functions to communicate between stacks.
An interesting fact about modern NEs is that, to handle all of the possible management channels and signals, most NEs actually contain a router for routing the network commands and underlying (data) protocols.
The main functions of network management include:
Network and NE provisioning. In order to allocate bandwidth throughout a network, each NE must be configured. Although this can be done locally, through a craft interface, it is normally done through a network management system (sitting at a higher layer) that in turn operates through the SONET/SDH network management network.
Software upgrade. In modern NEs, software upgrades are performed mostly through the SONET/SDH management network.
Performance management. NEs have a very large set of standards for Performance Management. The PM criteria allow for monitoring not only the health of individual NEs, but for the isolation and identification of most network defects or outages. Higher-layer Network monitoring and management software allows for the proper filtering and troubleshooting of network-wide PM so that defects and outages can be quickly identified and responded to.
Equipment
With recent advances in SONET and SDH chipsets, the traditional categories of NEs are breaking down. Nevertheless, as network architectures have remained relatively constant, even newer equipment (including "Multiservice Provisioning Platforms") can be examined in light of the architectures they will support. Thus, there is value in viewing new (as well as traditional) equipment in terms of the older categories.
Regenerator
Traditional regenerators terminate the section overhead, but not the line or path. Regenerators extend long-haul routes by converting an optical signal that has already traveled a long distance into electrical form and then retransmitting a regenerated high-power optical signal.
Since the late 1990s, regenerators have been largely replaced by optical amplifiers. Also, some of the functionality of regenerators has been absorbed by the transponders of wavelength-division multiplexing systems.
Add-drop multiplexer
Add-drop multiplexers (ADMs) are the most common type of NEs. Traditional ADMs were designed to support one of the network architectures, though new generation systems can often support several architectures, sometimes simultaneously. ADMs traditionally have a "high speed side" (where the full line rate signal is supported), and a "low speed side", which can consist of electrical as well as optical interfaces. The low speed side takes in low speed signals which are multiplexed by the NE and sent out from the high speed side, or vice versa.
Digital cross connect system
Recent digital cross connect systems (DCSs or DXCs) support numerous high-speed signals, and allow for cross connection of DS1s, DS3s and even STS-3s/12c and so on, from any input to any output. Advanced DCSs can support numerous subtending rings simultaneously.
Network architectures
Currently, SONET (and SDH) have a limited number of architectures defined. These architectures allow for efficient bandwidth usage as well as protection (i.e. the ability to transmit traffic even when part of the network has failed), and are key in understanding the almost worldwide usage of SONET and SDH for moving digital traffic. The three main architectures are:
Linear APS (automatic protection switching), also known as 1+1: This involves four fibers: two working fibers (one in each direction) and two protection fibers. Switching is based on the line state, and may be unidirectional, with each direction switching independently, or bidirectional, where the NEs at each end negotiate so that both directions are generally carried on the same pair of fibers.
UPSR (unidirectional path-switched ring): In a UPSR, two redundant (path-level) copies of protected traffic are sent in either direction around a ring. A selector at the egress node chooses the higher-quality copy, so the traffic survives deterioration in one copy caused by a broken fiber or other failure. UPSRs tend to sit nearer to the edge of a network and, as such, are sometimes called "collector rings". Because the same data is sent around the ring in both directions, the total capacity of a UPSR is equal to the line rate N of the OC-N ring. For example, if an OC-3 ring carried 3 STS-1s transporting 3 DS-3s from ingress node A to egress node D, then 100% of the ring bandwidth (N=3) would be consumed by nodes A and D; any other nodes on the ring, say B and C, could only act as pass-through nodes. The SDH analog of UPSR is subnetwork connection protection (SNCP); however, SNCP does not impose a ring topology, but may also be used in mesh topologies.
BLSR (bidirectional line-switched ring): BLSR comes in two varieties, 2-fiber BLSR and 4-fiber BLSR. BLSRs switch at the line layer. Unlike UPSR, BLSR does not send redundant copies from ingress to egress; rather, the ring nodes adjacent to the failure reroute the traffic "the long way" around the ring. BLSRs trade cost and complexity for bandwidth efficiency, as well as the ability to support "extra traffic" that can be pre-empted when a protection switching event occurs. BLSRs can operate within a metropolitan region or, often, will move traffic between municipalities. Because a BLSR does not send redundant copies from ingress to egress, the total bandwidth that a BLSR can support is not limited to the line rate N of the OC-N ring and can actually be larger than N, depending upon the traffic pattern on the ring. The best case is when all traffic is between adjacent nodes; the worst case is when all traffic on the ring egresses from a single node, i.e. the BLSR is serving as a collector ring. In the worst case, the bandwidth that the ring can support is equal to the line rate N of the OC-N ring. This is why BLSRs are seldom, if ever, deployed in collector rings but are often deployed in inter-office rings. The SDH equivalent of BLSR is called Multiplex Section-Shared Protection Ring (MS-SPRING).
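To make the capacity comparison concrete, here is a minimal, illustrative Python sketch (not from the source) that checks whether a set of protected demands, expressed in STS-1 units, fits on an OC-N UPSR versus a 2-fiber BLSR under the simplified rules above; the node names, demand sizes and ring rates are hypothetical, and real designs must also handle protection timeslot assignment and routing constraints.

def upsr_fits(demands, n):
    # On a UPSR every protected demand consumes bandwidth around the entire ring,
    # so the sum of all demands must not exceed the OC-N rate (in STS-1 units).
    return sum(bw for _, _, bw in demands) <= n

def blsr_fits(demands, ring, n):
    # On a 2-fiber BLSR each demand loads only the spans on its working path
    # (routed the short way here); half of each line is reserved for protection.
    spans = [0] * len(ring)                 # span k joins ring[k] and ring[(k + 1) % len(ring)]
    for src, dst, bw in demands:
        i, j = ring.index(src), ring.index(dst)
        cw = [k % len(ring) for k in range(i, i + (j - i) % len(ring))]
        ccw = [k for k in range(len(ring)) if k not in cw]
        for k in (cw if len(cw) <= len(ccw) else ccw):
            spans[k] += bw
    return max(spans) <= n / 2

ring = ["A", "B", "C", "D"]
adjacent = [("A", "B", 4), ("B", "C", 4), ("C", "D", 4), ("D", "A", 4)]   # 16 STS-1s of neighbor traffic
print(upsr_fits(adjacent, 12))          # False: 16 STS-1s exceed an OC-12 UPSR
print(blsr_fits(adjacent, ring, 12))    # True: each span carries only 4 of the 6 working STS-1s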
Synchronization
Clock sources used for synchronization in telecommunications networks are rated by quality, commonly called a 'stratum' level. Typically, a network element (NE) uses the highest quality stratum available to it, which can be determined by monitoring the synchronization status messages (SSM) of selected clock sources.
As for synchronization sources available to an NE, these are:
Local external timing. This is generated by an atomic Caesium clock or a satellite-derived clock by a device in the same central office as the NE. The interface is often a DS1, with sync status messages supplied by the clock and placed into the DS1 overhead.
Line-derived timing. An NE can choose (or be configured) to derive its timing from the line-level, by monitoring the S1 sync status bytes to ensure quality.
Holdover. As a last resort, in the absence of higher quality timing, an NE can go into "holdover" until higher quality external timing becomes available again. In this mode, an NE uses its own timing circuits as a reference.
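Putting the above together, reference selection is essentially "pick the usable source with the best advertised quality, otherwise go into holdover." The Python sketch below is an illustrative simplification, not from the source: the quality ranking shown is hypothetical, and the actual S1/SSM code points and selection rules are defined in Telcordia GR-253 and ITU-T G.781.

# Hypothetical quality ranking: lower number = better quality. Real SSM code points differ.
QUALITY_RANK = {"PRS": 1, "ST2": 2, "ST3": 3, "SMC": 4, "ST4": 5, "DUS": 99}   # DUS = "do not use"

def select_reference(sources):
    # sources: list of (name, ssm_quality, is_available) tuples for the external and line inputs.
    usable = [(QUALITY_RANK[q], name) for name, q, ok in sources if ok and q != "DUS"]
    if not usable:
        return "HOLDOVER"          # fall back to the NE's internal timing circuits
    return min(usable)[1]          # best (lowest-ranked) quality wins

inputs = [("external-DS1", "PRS", False),   # external reference currently failed
          ("line-east", "ST3", True),
          ("line-west", "ST2", True)]
print(select_reference(inputs))             # "line-west": best available quality (ST2)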
Timing loops
A timing loop occurs when NEs in a network each derive their timing from other NEs, without any of them being a "master" timing source. This network loop will eventually see its own timing "float away" from any external networks, causing mysterious bit errors and, ultimately, in the worst cases, massive loss of traffic. The source of these kinds of errors can be hard to diagnose. In general, a properly configured network should never find itself in a timing loop, but some classes of silent failures could nevertheless cause this issue.
Next-generation SONET/SDH
SONET/SDH development was originally driven by the need to transport multiple PDH signals like DS1, E1, DS3 and E3 along with other groups of multiplexed 64 kbps pulse-code modulated voice traffic. The ability to transport ATM traffic was another early application. In order to support large ATM bandwidths, the technique of concatenation was developed, whereby smaller multiplexing containers (e.g., STS-1) are inversely multiplexed to build up a larger container (e.g., STS-3c) to support large data-oriented pipes.
One problem with traditional concatenation, however, is inflexibility. Depending on the data and voice traffic mix that must be carried, there can be a large amount of unused bandwidth left over, due to the fixed sizes of concatenated containers. For example, fitting a 100 Mbps Fast Ethernet connection inside a 155 Mbps STS-3c container leads to considerable waste. More important is the need for all intermediate NEs to support the newly introduced concatenation sizes. This problem was later overcome with the introduction of Virtual Concatenation.
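As a worked example, an STS-3c carries roughly 149.76 Mbps of payload, so filling it with a 100 Mbps Fast Ethernet service strands on the order of 50 Mbps, roughly a third of the container.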
Virtual concatenation (VCAT) allows for a more arbitrary assembly of lower-order multiplexing containers, building larger containers of fairly arbitrary size (e.g. 100 Mbps) without the need for intermediate NEs to support this particular form of concatenation. Virtual concatenation increasingly leverages the X.86 or Generic Framing Procedure (GFP) protocols in order to map payloads of arbitrary bandwidth into the virtually concatenated container.
Link Capacity Adjustment Scheme (LCAS) allows for dynamically changing the bandwidth via dynamic virtual concatenation, multiplexing containers based on the short-term bandwidth needs in the network.
The set of next generation SONET/SDH protocols to enable Ethernet transport is referred to as Ethernet over SONET/SDH (EoS).