`
COMPUTER NETWORKS
`
`ANDREW S. TANENBAUM
`
`3:25,:
`.J-i—“I—L.
`Broadcasting
`
`m ulilifl Hal ...;.._.
`
`
`
`
`
`WELCOME
`TO THE
`INFORMATION
`SUPER
`HIGHWAY
`
`
`
`
`
` Comcast, Ex. 1214
`
`1
`
`Comcast, Ex. 1214
`
`
`
`Computer Networks
`
`Third Edition
`
`Andrew S. Tanenbaum
`
Vrije Universiteit
`Amsterdam, The Netherlands
`
`For book and bookstore information
`
Prentice Hall PTR
`Upper Saddle River, New Jersey 07458
`
`2
`
`
`
Library of Congress Cataloging in Publication Data

Tanenbaum, Andrew S., 1944-
    Computer networks / Andrew S. Tanenbaum. -- 3rd ed.
       p.   cm.
    Includes bibliographical references and index.
    ISBN 0-13-349945-6
    1. Computer networks.   I. Title.
    TK5105.5.T36   1996
    004.6--dc20                                                   96-4121
                                                                      CIP
`
Editorial/production manager: Camille Trentacoste
Interior design and composition: Andrew S. Tanenbaum
Cover design director: Jerry Votta
Cover designer: Don Martinetti, DM Graphics, Inc.
Cover concept: Andrew S. Tanenbaum, from an idea by Marilyn Tremaine
Interior graphics: Hadel Studio
Manufacturing manager: Alexis R. Heydt
Acquisitions editor: Mary Franz
Editorial Assistant: Noreen Regina
`
`
`
`© 1996 by Prentice Hall PTR
Prentice-Hall, Inc.
`A Simon & Schuster Company
`Upper Saddle River, New Jersey 07458
`
`The publisher offers discounts on this book when ordered in bulk quantities. For more information,
`contact:
`
`Corporate Sales Department, Prentice Hall PTR, One Lake Street, Upper Saddle River, NJ 07458.
Phone: (800) 382-3419; Fax: (201) 236-7141. E-mail: corpsales@prenhall.com
`
All rights reserved. No part of this book may be reproduced, in any form or by any means, without
`permission in writing from the publisher.
`
`All product names mentioned herein are the trademarks of their respective owners.
`
`Printed in the United States of America
`10
`9
`8
`7
`6
`5
`4
`
`ISBN 0-13-349945-6
`
`Prentice-Hall International (UK) Limited, London
`Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
`Prentice-Hall of India Private Limited, New Delhi
`Prentice-Hall of Japan, Inc., Tokyo
`Simon & Schuster Asia Pte. Ltd., Singapore
`Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro
`
`3
`
`
`
`The Network Layer
`
The network layer is concerned with controlling the operation of the subnet. A key design issue is determining how packets are routed from source to destination. Routes can be based on static tables that are "wired into" the network and rarely changed. They can also be determined at the start of each conversation, for example a terminal session. Finally, they can be highly dynamic, being determined anew for each packet, to reflect the current network load.

If too many packets are present in the subnet at the same time, they will get in each other's way, forming bottlenecks. The control of such congestion also belongs to the network layer.

Since the operators of the subnet may well expect remuneration for their efforts, there is often some accounting function built into the network layer. At the very least, the software must count how many packets or characters or bits are sent by each customer, to produce billing information. When a packet crosses a national border, with different rates on each side, the accounting can become complicated.

When a packet has to travel from one network to another to get to its destination, many problems can arise. The addressing used by the second network may be different from the first one. The second one may not accept the packet at all because it is too large. The protocols may differ, and so on. It is up to the network layer to overcome all these problems to allow heterogeneous networks to be interconnected.

In broadcast networks, the routing problem is simple, so the network layer is often thin or even nonexistent.
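To make the contrast between static tables and per-packet decisions concrete, here is a minimal Python sketch (with hypothetical network and line names; it is not taken from any real router) of the table lookup a simple network layer might perform:

    # Minimal sketch of static-table routing: each destination network is mapped
    # to the outgoing line that was "wired in" when the table was built.
    ROUTES = {
        "net-A": "line-1",   # hypothetical destination networks and outgoing lines
        "net-B": "line-2",
        "net-C": "line-1",
    }

    def next_hop(destination_network):
        # Unknown destinations fall back to a default route.
        return ROUTES.get(destination_network, "default-line")

    print(next_hop("net-B"))   # -> line-2

A dynamic scheme would differ only in that ROUTES is recomputed as the network load changes, rather than being fixed when the subnet is configured.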
`
`The Transport Layer
`
The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. Furthermore, all this must be done efficiently, and in a way that isolates the upper layers from the inevitable changes in the hardware technology.

Under normal conditions, the transport layer creates a distinct network connection for each transport connection required by the session layer. If the transport connection requires a high throughput, however, the transport layer might create multiple network connections, dividing the data among the network connections to improve throughput. On the other hand, if creating or maintaining a network connection is expensive, the transport layer might multiplex several transport connections onto the same network connection to reduce the cost. In all cases, the transport layer is required to make the multiplexing transparent to the session layer.
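As an illustration of splitting and multiplexing, the following minimal Python sketch (with made-up field names; it does not describe any particular transport protocol) cuts each connection's data into units tagged with a connection identifier before handing them to a shared network connection:

    # Split session-layer data into small units, each tagged with its transport
    # connection id and a sequence number so the receiver can demultiplex and
    # reassemble them; several connections then share one network connection.
    MAX_UNIT = 4   # deliberately tiny so the example output is easy to read

    def segment(conn_id, data, max_unit=MAX_UNIT):
        for seq, start in enumerate(range(0, len(data), max_unit)):
            yield (conn_id, seq, data[start:start + max_unit])

    # Two transport connections multiplexed onto the same "network connection".
    network_connection = list(segment(1, b"HELLO WORLD")) + list(segment(2, b"BYE"))
    for unit in network_connection:
        print(unit)
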
The transport layer also determines what type of service to provide the session layer, and ultimately, the users of the network. The most popular type of transport connection is an error-free point-to-point channel that delivers messages or bytes in the order in which they were sent. However, other possible kinds of transport service are transport of isolated messages with no guarantee about the order of delivery, and broadcasting of messages to multiple destinations. The type of service is determined when the connection is established.
The transport layer is a true end-to-end layer, from source to destination. In other words, a program on the source machine carries on a conversation with a similar program on the destination machine, using the message headers and control messages. In the lower layers, the protocols are between each machine and its immediate neighbors, and not between the ultimate source and destination machines, which may be separated by many routers. The difference between layers 1 through 3, which are chained, and layers 4 through 7, which are end-to-end, is illustrated in Fig. 1-16.
Many hosts are multiprogrammed, which implies that multiple connections will be entering and leaving each host. There needs to be some way to tell which message belongs to which connection. The transport header (H4 in Fig. 1-11) is one place this information can be put.
In addition to multiplexing several message streams onto one channel, the transport layer must take care of establishing and deleting connections across the network. This requires some kind of naming mechanism, so that a process on one machine has a way of describing with whom it wishes to converse. There must also be a mechanism to regulate the flow of information, so that a fast host cannot overrun a slow one. Such a mechanism is called flow control and plays a key role in the transport layer (also in other layers). Flow control between hosts is distinct from flow control between routers, although we will later see that similar principles apply to both.
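One common way to realize flow control is a credit scheme: the receiver grants the sender permission for a limited number of outstanding units. The sketch below is purely illustrative (the class and method names are invented) and is not the mechanism of any specific transport protocol:

    # Credit-based flow control sketch: the sender may transmit only while it
    # holds credit, so a fast sender cannot overrun a slow receiver.
    class Receiver:
        def __init__(self):
            self.queue = []

        def accept(self, item):
            self.queue.append(item)       # buffered until processed

        def process_one(self, sender):
            print("processed", self.queue.pop(0))
            sender.credits += 1           # the acknowledgement returns one credit

    class Sender:
        def __init__(self, credits):
            self.credits = credits

        def try_send(self, receiver, item):
            if self.credits == 0:
                print("blocked:", item)   # must wait for the receiver to catch up
                return False
            self.credits -= 1
            receiver.accept(item)
            return True

    r, s = Receiver(), Sender(credits=2)
    for msg in ["m1", "m2", "m3"]:
        s.try_send(r, msg)                # m3 is blocked
    r.process_one(s)                      # processing m1 returns a credit
    s.try_send(r, "m3")                   # now succeeds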
`
`The Session Layer
`
`The session layer allows users on different machines to establish sessions
`between them. A session allows ordinary data transport, as does the transport
`layer, but it also provides enhanced services useful in some applications. A ses-
`sion might be used to allow a user to log into a remote timesharing system or to
`transfer a file between two machines.
`
`One of the services of the session layer is to manage dialogue control. Ses-
`sions can allow traffic to go in both directions at the same time, or in only one
`direction at a time. If traffic can only go one way at a time (analogous to a single
`railroad track), the session layer can help keep track of whose turn it is.
`A related session service is token management. For some protocols, it is
`essential that both sides do not attempt the same operation at the same time. To
`manage these activities, the session layer provides tokens that can be exchanged.
`Only the side holding the token may perform the critical operation.
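The token rule itself can be stated in a few lines of code. The following Python sketch is only an illustration of the idea (the names are invented), not an OSI session-layer implementation:

    # Token management sketch: only the side currently holding the token may
    # perform the critical operation; passing the token hands that right over.
    class Token:
        def __init__(self, holder):
            self.holder = holder

        def may_operate(self, side):
            return self.holder == side

        def pass_to(self, side):
            self.holder = side

    token = Token(holder="A")
    print(token.may_operate("A"), token.may_operate("B"))   # True False
    token.pass_to("B")
    print(token.may_operate("B"))                           # True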
`
`5
`
`
`
`118
`
`THE PHYSICAL LAYER
`
`CHAP. 2
`
`FTTH should be the goal from the beginning is a matter of some debate within the
`telephone industry.
`‘
`An alternative design using the existing cable TV infrastructure is shown in
`Fig. 2-23(b). Here a multidrop cable is used instead of the point-to-point system
`characteristic of the telephone system.
`It is likely that both Fig. 2-23(a) and
`Fig. 2—23(b) will coexist in the future, as telephone companies and cable TV
`operators become direct competitors for voice, data, and possibly even television
`service. For more information about this topic, see (Cook and Stern, 1994; Miki,
`1994b; and Mochida, 1994).
`
`2.4.4. Trunks and Multiplexing
`
Economies of scale play an important role in the telephone system. It costs essentially the same amount of money to install and maintain a high-bandwidth trunk as a low-bandwidth trunk between two switching offices (i.e., the costs come from having to dig the trench and not from the copper wire or optical fiber). Consequently, telephone companies have developed elaborate schemes for multiplexing many conversations over a single physical trunk. These multiplexing schemes can be divided into two basic categories: FDM (Frequency Division Multiplexing) and TDM (Time Division Multiplexing). In FDM the frequency spectrum is divided among the logical channels, with each user having exclusive possession of some frequency band. In TDM the users take turns (in a round robin), each one periodically getting the entire bandwidth for a little burst of time.
AM radio broadcasting provides illustrations of both kinds of multiplexing. The allocated spectrum is about 1 MHz, roughly 500 to 1500 kHz. Different frequencies are allocated to different logical channels (stations), each operating in a portion of the spectrum, with the interchannel separation great enough to prevent interference. This system is an example of frequency division multiplexing. In addition (in some countries), the individual stations have two logical subchannels: music and advertising. These two alternate in time on the same frequency, first a burst of music, then a burst of advertising, then more music, and so on. This situation is time division multiplexing.
`Below we will examine frequency division multiplexing. After that we will
`see how FDM can be applied to fiber optics (wavelength division multiplexing).
`Then we will turn to TDM, and end with an advanced TDM system used for fiber
`optics (SONET).
`
`Frequency Division Multiplexing
`
Figure 2-24 shows how three voice-grade telephone channels are multiplexed using FDM. Filters limit the usable bandwidth to about 3000 Hz per voice-grade channel. When many channels are multiplexed together, 4000 Hz is allocated to each channel to keep them well separated. First the voice channels are raised in frequency, each by a different amount. Then they can be combined, because no two channels now occupy the same portion of the spectrum. Notice that even though there are gaps (guard bands) between the channels, there is some overlap between adjacent channels, because the filters do not have sharp edges. This overlap means that a strong spike at the edge of one channel will be felt in the adjacent one as nonthermal noise.
`
[Figure: three panels plotting channels 1-3 against frequency; (a) each original channel occupies roughly 300-3100 Hz, (b) and (c) show the channels after being raised into the 60-72 kHz range, 4 kHz apart.]
Fig. 2-24. Frequency division multiplexing. (a) The original bandwidths. (b) The bandwidths raised in frequency. (c) The multiplexed channel.
`
The FDM schemes used around the world are to some degree standardized. A widespread standard is 12 4000-Hz voice channels (3000 Hz for the user, plus two guard bands of 500 Hz each) multiplexed into the 60 to 108 kHz band. This unit is called a group. The 12- to 60-kHz band is sometimes used for another group. Many carriers offer a 48- to 56-kbps leased line service to customers, based on the group. Five groups (60 voice channels) can be multiplexed to form a supergroup. The next unit is the mastergroup, which is five supergroups (CCITT standard) or ten supergroups (Bell system). Other standards up to 230,000 voice channels also exist.
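The arithmetic behind these units follows directly from the figures just given; the short calculation below (a sketch using only numbers stated in the text) shows why a group occupies exactly the 48 kHz between 60 and 108 kHz:

    # FDM hierarchy arithmetic, using the figures from the text.
    channel_bw_hz = 4000                     # 3000 Hz usable + 2 x 500 Hz guard bands
    group_channels = 12
    print(group_channels * channel_bw_hz)    # 48000 Hz, i.e. the 60-108 kHz band

    supergroup_channels = 5 * group_channels            # five groups
    mastergroup_ccitt = 5 * supergroup_channels         # CCITT: five supergroups
    mastergroup_bell = 10 * supergroup_channels         # Bell system: ten supergroups
    print(supergroup_channels, mastergroup_ccitt, mastergroup_bell)   # 60 300 600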
`
`Wavelength Division Multiplexing
`
For fiber optic channels, a variation of frequency division multiplexing is used. It is called WDM (Wavelength Division Multiplexing). A simple way of achieving FDM on fibers is depicted in Fig. 2-25. Here two fibers come together at a prism (or more likely, a diffraction grating), each with its energy in a different band. The two beams are passed through the prism or grating, and combined onto a single shared fiber for transmission to a distant destination, where they are split again.
`
[Figure: the spectra of fiber 1 and fiber 2, each in a different band, are combined by a prism or diffraction grating onto a shared fiber, whose spectrum contains both bands; at the far end they are split again.]
Fig. 2-25. Wavelength division multiplexing.
`
There is really nothing new here. As long as each channel has its own frequency range and all the ranges are disjoint, they can be multiplexed together on the long-haul fiber. The only difference with electrical FDM is that an optical system using a diffraction grating is completely passive, and thus highly reliable.

It should be noted that the reason WDM is popular is that the energy on a single fiber is typically only a few gigahertz wide because it is currently impossible to convert between electrical and optical media any faster. Since the bandwidth of a single fiber band is about 25,000 GHz (see Fig. 2-6), there is great potential for multiplexing many channels together over long-haul routes. A necessary condition, however, is that the incoming channels use different frequencies.
`A potential application of WDM is in the FTTC systems described earlier.
`Initially, a telephone company could run a single fiber from an end office to a
`neighborhood junction box where it met up with twisted pairs from the houses.
`Years later, when the cost of fiber is lower and the demand for it is higher, the
`twisted pairs can be replaced by fiber and all the local loops joined onto the fiber
`running to the end office using WDM.
In the example of Fig. 2-25, we have a fixed wavelength system. Bits from fiber 1 go to fiber 3, and bits from fiber 2 go to fiber 4. It is not possible to have bits go from fiber 1 to fiber 4. However, it is also possible to build WDM systems that are switched. In such a device, there are many input fibers and many output fibers, and the data from any input fiber can go to any output fiber. Typically, the coupler is a passive star, with the light from every input fiber illuminating the star. Although spreading the energy over n outputs dilutes it by a factor of n, such systems are practical for hundreds of channels.
Of course, if the light from one of the incoming fibers is at 1.50206 microns and potentially might have to go to any output fiber, all the output fibers need tunable filters so the selected one can set itself to 1.50206 microns. Such optical tunable filters can be built from Fabry-Perot or Mach-Zehnder interferometers. Alternatively, the input fibers could be tunable and the output ones fixed. Having both be tunable is an unnecessary expense and is rarely worth it.
`
`Time Division Multiplexing
`
Although FDM is still used over copper wires or microwave channels, it requires analog circuitry and is not amenable to being done by a computer. In contrast, TDM can be handled entirely by digital electronics, so it has become far more widespread in recent years. Unfortunately, it can only be used for digital data. Since the local loops produce analog signals, a conversion is needed from analog to digital in the end office, where all the individual local loops come together to be combined onto outgoing trunks. We will now look at how multiple analog voice signals are digitized and combined onto a single outgoing digital trunk. (Remember that computer data sent over a modem are also analog when they get to the end office.)
The analog signals are digitized in the end office by a device called a codec (coder-decoder), producing a 7- or 8-bit number (see Fig. 2-17). The codec makes 8000 samples per second (125 µsec/sample) because the Nyquist theorem says that this is sufficient to capture all the information from the 4-kHz telephone channel bandwidth. At a lower sampling rate, information would be lost; at a higher one, no extra information would be gained. This technique is called PCM (Pulse Code Modulation). PCM forms the heart of the modern telephone system. As a consequence, virtually all time intervals within the telephone system are multiples of 125 µsec.
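The numbers quoted above fit together as follows (a small worked calculation, nothing more):

    # PCM arithmetic: the Nyquist rate for a 4-kHz telephone channel is 8000
    # samples/sec, i.e. one sample every 125 microseconds.
    channel_bw_hz = 4000
    samples_per_sec = 2 * channel_bw_hz               # Nyquist theorem
    print(samples_per_sec)                            # 8000
    print(1_000_000 / samples_per_sec)                # 125.0 microseconds per sample

    # With 8 bits per sample this yields the standard 64-kbps digital voice channel.
    print(samples_per_sec * 8)                        # 64000 bps
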
`When digital transmission began emerging as a feasible technology, CCITT
`was unable to reach agreement on an international standard for PCM. Conse-
`quently, there are now a variety of incompatible schemes in use in different coun-
`tries around the world.
`International hookups between incompatible countries
`require (often expensive) “black boxes” to convert the originating country’s sys-
`tem to that of the receiving country.
One method that is in widespread use in North America and Japan is the T1 carrier, depicted in Fig. 2-26. (Technically speaking, the format is called DS1 and the carrier is called T1, but we will not make that subtle distinction here.) The T1 carrier consists of 24 voice channels multiplexed together. Usually, the analog signals are sampled on a round-robin basis with the resulting analog stream being fed to the codec rather than having 24 separate codecs and then merging the digital output. Each of the 24 channels, in turn, gets to insert 8 bits into the output stream. Seven bits are data, and one is for control, yielding 7 × 8000 = 56,000 bps of data, and 1 × 8000 = 8000 bps of signaling information per channel.
`
`
`
`+———————————— 193 Bit frame (125 uses) _————~————————»
`
`\_____v____2
`7 Data \ Bit 8 is for
`bits per
`signaling
`channel
`per sample
`
`is
`\ Bit ‘1
`a framing
`code
`
`Fig. 2-26. The T1 carrier (1.544 Mbps).
`
A frame consists of 24 × 8 = 192 bits, plus one extra bit for framing, yielding 193 bits every 125 µsec. This gives a gross data rate of 1.544 Mbps. The 193rd bit is used for frame synchronization. It takes on the pattern 0101010101 ... Normally, the receiver keeps checking this bit to make sure that it has not lost synchronization. If it does get out of sync, the receiver can scan for this pattern to get resynchronized. Analog customers cannot generate the bit pattern at all, because it corresponds to a sine wave at 4000 Hz, which would be filtered out. Digital customers can, of course, generate this pattern, but the odds are against its being present when the frame slips. When a T1 system is being used entirely for data, only 23 of the channels are used for data. The 24th one is used for a special synchronization pattern, to allow faster recovery in the event that the frame slips.
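The T1 rates quoted above can be checked with a few lines of arithmetic (a sketch using only the numbers in the text):

    # T1 framing arithmetic.
    channels = 24
    frame_bits = channels * 8 + 1              # 8 bits per channel plus one framing bit
    print(frame_bits)                          # 193

    frames_per_sec = 8000                      # one frame per 125-microsecond sample
    print(frame_bits * frames_per_sec)         # 1,544,000 bps = 1.544 Mbps

    # Per channel: 7 data bits and 1 signaling bit in each sample.
    print(7 * frames_per_sec, 1 * frames_per_sec)   # 56000 bps data, 8000 bps signaling
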
When CCITT finally did reach agreement, they felt that 8000 bps of signaling information was far too much, so its 1.544-Mbps standard is based upon an 8- rather than a 7-bit data item; that is, the analog signal is quantized into 256 rather than 128 discrete levels. Two (incompatible) variations are provided. In common-channel signaling, the extra bit (which is attached onto the rear rather than the front of the 193-bit frame) takes on the values 10101010 ... in the odd frames and contains signaling information for all the channels in the even frames. In the other variation, channel associated signaling, each channel has its own private signaling subchannel. A private subchannel is arranged by allocating one of the eight user bits in every sixth frame for signaling purposes, so five out of six samples are 8 bits wide, and the other one is only 7 bits wide. CCITT also has a recommendation for a PCM carrier at 2.048 Mbps called E1. This carrier has 32 8-bit data samples packed into the basic 125-µsec frame. Thirty of the channels are used for information and two are used for signaling. Each group of four frames provides 64 signaling bits, half of which are used for channel associated signaling and half of which are used for frame synchronization or are reserved for each country to use as it wishes. Outside North America and Japan, the 2.048-Mbps carrier is in widespread use.
Once the voice signal has been digitized, it is tempting to try to use statistical techniques to reduce the number of bits needed per channel. These techniques are appropriate not only to encoding speech, but to the digitization of any analog signal. All of the compaction methods are based upon the principle that the signal changes relatively slowly compared to the sampling frequency, so that much of the information in the 7- or 8-bit digital level is redundant.
One method, called differential pulse code modulation, consists of outputting not the digitized amplitude, but the difference between the current value and the previous one. Since jumps of ±16 or more on a scale of 128 are unlikely, 5 bits should suffice instead of 7. If the signal does occasionally jump wildly, the encoding logic may require several sampling periods to "catch up." For speech, the error introduced can be ignored.
`
[Figure: digitized signal amplitude versus time, sampled at each sampling interval; consecutive samples always differ by ±1, so the encoding cannot follow a signal that changes too rapidly; the transmitted bit stream is shown below the waveform.]
`Fig. 2-27. Delta modulation.
`
A variation of this compaction method requires each sampled value to differ from its predecessor by either +1 or -1. A single bit is transmitted, telling whether the new sample is above or below the previous one. This technique, called delta modulation, is illustrated in Fig. 2-27. Like all compaction techniques that assume small level changes between consecutive samples, delta encoding can get into trouble if the signal changes too fast, as shown in the figure. When this happens, information is lost.
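A delta modulation encoder and decoder can be sketched in a few lines. The code below is purely illustrative (the sample values and the starting level are arbitrary choices, not taken from the text); it shows both the one-bit-per-sample output and the slope-overload effect of Fig. 2-27:

    # Delta modulation sketch: transmit one bit per sample, saying only whether
    # the reconstructed level should step up or down by one unit.
    def delta_encode(samples, start=0):
        level, bits = start, []
        for s in samples:
            bit = 1 if s > level else 0
            level += 1 if bit else -1          # the receiver tracks the same level
            bits.append(bit)
        return bits

    def delta_decode(bits, start=0):
        level, out = start, []
        for bit in bits:
            level += 1 if bit else -1
            out.append(level)
        return out

    signal = [0, 1, 2, 2, 1, 9, 9, 9]          # the jump to 9 is too fast to follow
    bits = delta_encode(signal)
    print(bits)                                # [0, 1, 1, 1, 0, 1, 1, 1]
    print(delta_decode(bits))                  # [-1, 0, 1, 2, 1, 2, 3, 4]: lags the jump
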
An improvement to differential PCM is to extrapolate the previous few values to predict the next value and then to encode the difference between the actual signal and the predicted one. The transmitter and receiver must use the same prediction algorithm, of course. Such schemes are called predictive encoding. They are useful because they reduce the size of the numbers to be encoded, hence the number of bits to be sent.
`Although PCM is widely used on interoffice trunks, the computer user gets
`relatively little benefit from it if all data must be sent to the end office in the form
`of a modulated analog sine wave at 28.8 kbps.
`It would be nice if the carrier
`would attach the local loop directly to the PCM trunk system, so that the computer
`could output digital data directly onto the local loop at 1.544 or 2.048 Mbps.
`Unfortunately, the local loops cannot run at these speeds for very far.
Time division multiplexing allows multiple T1 carriers to be multiplexed into higher-order carriers. Figure 2-28 shows how this can be done. At the left we see four T1 channels being multiplexed onto one T2 channel. The multiplexing at T2 and above is done bit for bit, rather than byte for byte with the 24 voice channels that make up a T1 frame. Four T1 streams at 1.544 Mbps should generate 6.176 Mbps, but T2 is actually 6.312 Mbps. The extra bits are used for framing and recovery, in case the carrier slips.
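The framing overhead of this first multiplexing step can be computed directly from the rates just given (a one-line check, not additional data):

    # T1 -> T2 multiplexing overhead, from the rates in the text.
    t1_mbps, t2_mbps = 1.544, 6.312
    payload = 4 * t1_mbps                                  # 6.176 Mbps carried
    print(round(t2_mbps - payload, 3))                     # 0.136 Mbps of framing/recovery bits
    print(round(100 * (t2_mbps - payload) / t2_mbps, 1))   # about 2.2 percent overhead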
`
[Figure: 4 T1 streams (1.544 Mbps each) are multiplexed 4:1 into one T2 stream (6.312 Mbps); 6 T2 streams are multiplexed 6:1 into one T3 stream (44.736 Mbps); 7 T3 streams are multiplexed 7:1 into one T4 stream (274.176 Mbps).]
`Fig. 2-28. Multiplexing T1 streams onto higher carriers.
`
`At the next level, six T2 streams are combined bitwise to form a T3 stream.
`Then seven T3 streams are joined to form a T4 stream. At each step a small
`amount of overhead is added for framing and recovery.
`Just as there is little agreement on the basic carrier between the United States
`and the rest of the world, there is equally little agreement on how it is to be multi-
`plexed into higher bandwidth carriers. The U.S. scheme of stepping up by 4, 6,
`and 7 did not strike everyone else as the way to go, so the CCITT standard calls
`for multiplexing four streams onto one stream at each level. Also, the framing
`and recovery data are different. The CCITT hierarchy for 32, 128, 512, 2048, and
`8192 channels runs at speeds of 2.048, 8.848, 34.304, 139.264, and 565.148 Mbps.
`
`12
`
`12
`
`
`
`330
`
`THE MEDIUM ACCESS SUBLAYER
`
`CHAP. 4
`
4.6.3. FDM
`
Frequency division multiplexing is the oldest and probably still the most widely used channel allocation scheme. A typical 36-Mbps transponder might be divided statically into 500 or so 64,000-bps PCM channels, each one operating at its own unique frequency to avoid interfering with the others.
`Although simple, FDM also has some drawbacks. First, guard bands are
`needed between the channels to keep the stations separated. This requirement
`exists because it is not possible to build transmitters that output all their energy in
`the main band and nothing in the side bands. The amount of bandwidth wasted in
`the guard bands can be a substantial fraction of the total.
`Second, the stations must be carefully power controlled. If a station puts out
`too much power in the main band, it will also automatically put out too much
`power in the side bands, spilling over into adjacent channels and causing interfer—
`ence. Finally, FDM is entirely an analog technique and does not lend itself well
`to implementation in software.
`If the number of stations is small and fixed, the frequency channels can be
`allocated statically in advance. However, if the number of stations, or the load on
`each one can fluctuate rapidly, some form of dynamic allocation of the frequency
`bands is needed. One such mechanism is the SPADE system used on some early
`Intelsat satellites. Each SPADE transponder was divided into 794 simplex (64-
`kbps) PCM voice channels, along with a 128-kbps common signaling channel.
`The PCM channels were used in pairs to provide full duplex service. The total
`transponder bandwidth used was 50 Mbps for the uplink portion and another 50
`Mbps for the downlink.
The common signaling channel was divided into units of 50 msec. A unit contained 50 slots of 1 msec (128 bits). Each slot was "owned" by one of (not more than) 50 ground stations. When a ground station had data to send, it picked a currently unused channel at random and wrote the number of that channel in its next 128-bit slot. If the selected channel was still unused when the request was seen on the downlink, the channel was considered allocated and all other stations refrained from trying to acquire it. If two or more stations tried to allocate the same channel in the same frame, a collision occurred and they had to try again later. When a station was finished using its channel, it sent a deallocation message in its slot on the common channel.
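The allocation logic can be sketched as follows. This is only an illustration of the idea (the data structures and station names are invented; the real system signaled channel numbers in 128-bit slots):

    import random

    # SPADE-style demand assignment sketch: each requesting station announces a
    # randomly chosen free channel in its own signaling slot; if two stations
    # announce the same channel, neither gets it and both must retry later.
    TOTAL_CHANNELS = 794                       # simplex PCM channels per transponder

    def one_signaling_frame(free_channels, requesting_stations):
        picks = {st: random.choice(sorted(free_channels)) for st in requesting_stations}
        granted = {}
        for station, channel in picks.items():
            if list(picks.values()).count(channel) == 1:   # no collision on this channel
                granted[station] = channel
                free_channels.discard(channel)
        return granted

    free = set(range(TOTAL_CHANNELS))
    print(one_signaling_frame(free, ["station-3", "station-17"]))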
`
`4.6.4. TDM
`
Like FDM, TDM is well understood and widely used in practice. It requires time synchronization for the slots, but this can be provided by a reference station, as described for slotted ALOHA above. Similarly to FDM, for a small and unvarying number of stations, the slot assignment can be set up in advance and never changed, but for a varying number of stations, or a fixed number of stations with time-varying loads, time slots must be assigned dynamically.
Slot assignment can be done in a centralized or a decentralized way. As an example of centralized slot assignment, let us consider the experimental ACTS (Advanced Communication Technology Satellite), which was designed for a few dozen stations (Palmer and White, 1990). ACTS was launched in 1992 and has four independent 110-Mbps TDM channels, two uplink and two downlink. Each channel is organized as a sequence of 1-msec frames, each frame containing 1728 time slots. Each time slot has a 64-bit payload, allowing each one to hold a 64-kbps voice channel.
The beams can be switched from one geographical area to another, but since moving the beam takes several slot times, channels originating or terminating in the same geographic area are normally assigned to contiguous time slots to increase dwell time and minimize time lost to beam motion. Thus time slot management requires a thorough knowledge of station geography to minimize the number of wasted time slots. For this and other reasons, time slot management is done by one of the ground stations, the MCS (Master Control Station).
The basic operation of ACTS is a continuous three-step process, each step taking 1 msec. In step 1, the satellite receives a frame and stores it in a 1728-entry onboard RAM. In step 2, an onboard computer copies each input entry to the corresponding output entry (possibly for the other antenna). In step 3, the output frame is transmitted on the downlink.
`Initially, each station is assigned at least one time slot. To acquire additional
`channels (for new voice calls), a station sends a short request message to the
`MCS. Similarly, it can release an existing channel with a message to the MCS.
`These messages make use of a small number of overhead bits and provide a spe-
`cial control channel to the MCS with a capacity of about 13 messages/sec per sta-
`tion. The channels are dedicated; there is no contention for them.
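The channel rate quoted for ACTS follows directly from the frame structure (a short check using only the figures above):

    # ACTS channel arithmetic: 1728 time slots of 64 bits in every 1-msec frame.
    slots_per_frame, bits_per_slot, frames_per_sec = 1728, 64, 1000
    print(slots_per_frame * bits_per_slot * frames_per_sec)   # 110,592,000 bps, about 110 Mbps

    # Each slot recurs 1000 times per second, so one slot carries one voice channel.
    print(bits_per_slot * frames_per_sec)                     # 64000 bps = 64 kbps
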
`Dynamic TDM slot allocation is also possible. Below we will discuss three
`schemes.
`In each of these, TDM frames are divided into time slots, with each slot
`having a (temporary) owner. Only the owner may use a time slot.
The first scheme assumes that there are more slots than stations, so each station can be assigned a home slot (Binder, 1975). If there are more slots than stations, the extra slots are not assigned to anyone. If the owner of a slot does not want it during the current group, it goes idle. An empty slot is a signal to everyone else that the owner has no traffic. During the next frame, the slot becomes available to anyone who wants it, on a contention (ALOHA) basis.
If the owner wants to retrieve "his" home slot, he transmits a frame, thus forcing a collision (if there was other traffic). After a collision, everyone except the owner must desist from using the slot in the next frame. Thus the owner can always begin transmitting within two frame times in the worst case. At low channel utilization the system does not perform as well as normal slotted ALOHA, since after each collision, the collidees must abstain for one frame to see if the owner wants the slot back. Fig. 4-51(a) shows a frame with eight slots, seven of which are owned by G, A, F, E, B, C, and D, respectively. The eighth slot is not owned by anyone and can be fought over.
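Binder's ownership rule can be stated compactly in code. The sketch below is only an illustration of the rule as described above (the function and argument names are invented):

    # Who may transmit in an owned slot during the current frame, per Binder's rules.
    def allowed_senders(owner, idle_last_frame, collision_last_frame):
        if collision_last_frame:
            return {owner}                       # after a collision, everyone else desists
        if idle_last_frame:
            return {owner, "any contender"}      # an empty slot is open to contention
        return {owner}                           # otherwise the owner keeps exclusive use

    print(allowed_senders("G", idle_last_frame=True,  collision_last_frame=False))
    print(allowed_senders("G", idle_last_frame=False, collision_last_frame=True))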
`
`
Fig. 4-51. Reservation schemes. (a) Binder. (b) Crowther. (c) Roberts. The shaded boxes indicate collisions. For each of the three schemes, four consecutive groups of slots are shown.
`
A second scheme is applicable even when the number of stations is unknown and varying (Crowther et al., 1973). In this method, slots do not have permanent owners, as in Binder's. Instead, stations compete for slots using slotted ALOHA. Whenever a transmission is successful, the station making the successful transmission is entitled to that slot in the next frame as well. Thus, as long as a station has data to send, it can continue doing so indefinitely (subject to some "please-do-not-be-a-pig" rules). In essence the proposal allows a dynamic mix of slotted ALOHA and TDM, with the number of slots devoted to each varying with demand. Fig. 4-51(b) also shows a frame with eight slots. Initially, E is using the last slot, but after two frames, it no longer needs it. It lies idle for one frame, and then D picks it up and keeps it until it is done.
A third scheme, due to Roberts (1973), requires stations to make advance requests before transmitting. Each frame contains, say, one special slot [the last one in Fig. 4-51(c)] which is divided into V smaller subslots used to make reservations. When a station wants to send data, it broadcasts a short request frame in a randomly-chosen reservation subslot. If the reservation is successful (i.e., no collision), then the next regular slot (or slots) is reserved. At all times everyone must keep track of the queue length (number of slots reserved), so that when any station makes a successful reservation it will know how many data slots to skip before transmit