`
`
`PPP-Point-to-Point Protocol
`
To improve the situation, the IETF set up a group to devise a data link protocol for point-to-point lines that solved all these problems and that could become an official Internet Standard. This work culminated in PPP (Point-to-Point Protocol), which is defined in RFC 1661 and further elaborated on in several other RFCs (e.g., RFCs 1662 and 1663). PPP handles error detection, supports multiple protocols, allows IP addresses to be negotiated at connection time, permits authentication, and has many other improvements over SLIP. While many Internet service providers still support both SLIP and PPP, the future clearly lies with PPP, not only for dial-up lines, but also for leased router-router lines.
`PPP provides three things:
`
1. A framing method that unambiguously delineates the end of one frame and the start of the next one. The frame format also handles error detection.

2. A link control protocol for bringing lines up, testing them, negotiating options, and bringing them down again gracefully when they are no longer needed. This protocol is called LCP (Link Control Protocol).

3. A way to negotiate network-layer options in a way that is independent of the network layer protocol to be used. The method chosen is to have a different NCP (Network Control Protocol) for each network layer supported.
To see how these pieces fit together, let us consider the typical scenario of a home user calling up an Internet service provider to make a home PC a temporary Internet host. The PC first calls the provider's router via a modem. After the router's modem has answered the phone and established a physical connection, the PC sends the router a series of LCP packets in the payload field of one or more PPP frames. These packets, and their responses, select the PPP parameters to be used.

Once these have been agreed upon, a series of NCP packets are sent to configure the network layer. Typically, the PC wants to run a TCP/IP protocol stack, so it needs an IP address. There are not enough IP addresses to go around, so normally each Internet provider gets a block of them and then dynamically assigns one to each newly attached PC for the duration of its login session. If a provider owns n IP addresses, it can have up to n machines logged in simultaneously, but its total customer base may be many times that. The NCP for IP is used to do the IP address assignment.
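The sequence just described can be summarized as a short sketch. The code below is illustrative only; the helper functions are hypothetical stand-ins for the LCP and NCP (IPCP) exchanges, not a real PPP implementation.

```c
/* A minimal sketch of the dial-up scenario described above.  The
 * helper functions are hypothetical stand-ins for the LCP and NCP
 * (IPCP) exchanges, not a real PPP implementation. */
#include <stdio.h>

static int  lcp_negotiate(void)       { puts("LCP: PPP parameters agreed"); return 1; }
static int  ncp_configure_ip(void)    { puts("NCP: IP address assigned");   return 1; }
static void exchange_ip_packets(void) { puts("IP traffic flows"); }
static void ncp_release_ip(void)      { puts("NCP: IP address released"); }
static void lcp_terminate(void)       { puts("LCP: link taken down"); }

int main(void)
{
    if (!lcp_negotiate())        /* 1. LCP packets select the PPP options       */
        return 1;
    if (!ncp_configure_ip())     /* 2. the NCP for IP assigns a dynamic address */
        return 1;
    exchange_ip_packets();       /* 3. the PC is now an Internet host           */
    ncp_release_ip();            /* 4. NCP tears down the network layer         */
    lcp_terminate();             /* 5. LCP shuts the data link down             */
    return 0;
}
```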
At this point, the PC is now an Internet host and can send and receive IP packets, just as hardwired hosts can. When the user is finished, NCP is used to tear down the network layer connection and free up the IP address. Then LCP is used to shut down the data link layer connection.
`
LCP also lets the two parties test the line quality, to see if they consider it good enough to set up a connection. Finally, the LCP protocol also allows lines to be taken down when they are no longer needed.
`Eleven types of LCP packets are defined in RFC 1661. These are listed in
`Fig. 3-29. The four Configure- types allow the initiator (I) to propose option
`values and the responder (R) to accept or reject them. In the latter case, the
`responder can make an alternative proposal or announce that it is not willing to
`negotiate certain options at all. The options being negotiated and their proposed
`values are part of the LCP packets.
`
Name               Direction   Description
Configure-request  I → R       List of proposed options and values
Configure-ack      I ← R       All options are accepted
Configure-nak      I ← R       Some options are not accepted
Configure-reject   I ← R       Some options are not negotiable
Terminate-request  I → R       Request to shut the line down
Terminate-ack      I ← R       OK, line shut down
Code-reject        I ← R       Unknown request received
Protocol-reject    I ← R       Unknown protocol requested
Echo-request       I → R       Please send this frame back
Echo-reply         I ← R       Here is the frame back
Discard-request    I → R       Just discard this frame (for testing)

Fig. 3-29. The LCP packet types.
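For concreteness, the LCP packet layout defined in RFC 1661 (a code byte identifying one of the types of Fig. 3-29, an identifier byte for matching replies to requests, a 16-bit length, and type-specific data) can be sketched as a C structure. This is a simplified illustration; a real implementation must also deal with network byte order and the option encodings.

```c
/* Sketch of the RFC 1661 LCP packet layout: Code, Identifier,
 * Length (covering the whole packet), then type-specific data. */
#include <stdint.h>

enum lcp_code {
    CONFIGURE_REQUEST = 1, CONFIGURE_ACK     = 2, CONFIGURE_NAK   = 3,
    CONFIGURE_REJECT  = 4, TERMINATE_REQUEST = 5, TERMINATE_ACK   = 6,
    CODE_REJECT       = 7, PROTOCOL_REJECT   = 8,
    ECHO_REQUEST      = 9, ECHO_REPLY        = 10, DISCARD_REQUEST = 11
};

struct lcp_packet {
    uint8_t  code;        /* one of the eleven types in Fig. 3-29     */
    uint8_t  identifier;  /* matches a reply to its request           */
    uint16_t length;      /* total packet length, in network order    */
    uint8_t  data[];      /* options or other type-specific data      */
};
```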
`
`The Terminate- codes are used to shut a line down when it is no longer
`needed. The Code-reject and Protocol-reject codes are used by the responder to
`indicate that it got something that it does not understand. This situation could
`mean that an undetected transmission error has occurred, but more likely it means
`that the initiator and responder are running different versions of the LCP protocol.
`The Echo- types are used to test the line quality. Finally, Discard-request is used
for debugging. If either end is having trouble getting bits onto the wire, the programmer can use this type for testing. If it manages to get through, the receiver
`just throws it away, rather than taking some other action, which might confuse the
`person doing the testing.
`The options that can be negotiated include setting the maximum payload size
`for data frames, enabling authentication and choosing a protocol to use, enabling
`line quality monitoring during normal operation, and selecting various header
`compression options.
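The options themselves are carried inside the Configure- packets as simple (type, length, data) triples. The sketch below uses the option type numbers from RFC 1661; treat the details as illustrative rather than a complete list.

```c
/* Sketch of the LCP option encoding used inside Configure- packets:
 * each option is a (type, length, data) triple, where length counts
 * the whole option, including the type and length bytes. */
#include <stdint.h>

enum lcp_option_type {
    OPT_MRU           = 1,   /* maximum receive unit (payload size)   */
    OPT_AUTH_PROTOCOL = 3,   /* which authentication protocol to use  */
    OPT_QUALITY_PROTO = 4,   /* link quality monitoring protocol      */
    OPT_MAGIC_NUMBER  = 5,   /* loopback detection                    */
    OPT_PFC           = 7,   /* protocol field compression            */
    OPT_ACFC          = 8    /* address/control field compression     */
};

struct lcp_option {
    uint8_t type;
    uint8_t length;          /* includes these two bytes              */
    uint8_t data[];          /* e.g., a 16-bit MRU value for OPT_MRU  */
};
```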
`There is little to say about the NCP protocols in a general way. Each one is
`specific to some network layer protocol and allows configuration requests to be
`
`made that are specific to that protocol. For IP, for example, dynamic address
`assignment is the most important possibility.
`
3.6.3. The Data Link Layer in ATM
`
It is now time to begin our journey up through the ATM protocol layers of Fig. 1-30. The ATM physical layer covers roughly the OSI physical and data link layers, with the physical medium dependent sublayer being functionally like the OSI physical layer and the transmission convergence (TC) sublayer having data link functionality. There are no physical layer characteristics specific to ATM. Instead, ATM cells are carried by SONET, FDDI, and other transmission systems. Therefore we will concentrate here on the data link functionality of the TC sublayer, but we will discuss some aspects of the interface with the lower sublayer later on.
When an application program produces a message to be sent, that message works its way down the ATM protocol stack, having headers and trailers added and undergoing segmentation into cells. Eventually, the cells reach the TC sublayer for transmission. Let us see what happens to them on the way out the door.
`
`Cell Transmission
`
The first step is header checksumming. Each cell contains a 5-byte header consisting of 4 bytes of virtual circuit and control information followed by a 1-byte checksum. Although the contents of the header are not relevant to the TC sublayer, curious readers wishing a sneak preview should turn to Fig. 5-62. The checksum only covers the first four header bytes, not the payload field. It consists of the remainder after the 32 header bits have been divided by the polynomial x^8 + x^2 + x + 1. To this the constant 01010101 is added, to provide robustness in the face of headers containing mostly 0 bits.
`The decision to checksum only the header was made to reduce the probability
`of cells being delivered incorrectly due to a header error, but to avoid paying the
`price of checksumming the much larger payload field. It is up to higher layers to
`perform this function, if they so desire. For many real-time applications, such as
`voice and video, losing a few bits once in a while is acceptable (although for some
`compression schemes, all frames are equal but some frames are more equal).
`Because it covers only the header, the 8-bit checksum field is called the HEC
`(Header Error Control).
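A small sketch of the HEC calculation follows, assuming the convention just described (remainder of the 32 header bits modulo x^8 + x^2 + x + 1, then XOR with 01010101). It is meant to illustrate the arithmetic, not to be a production implementation.

```c
/* Sketch of HEC generation over the first four header bytes:
 * a bitwise CRC using the generator x^8 + x^2 + x + 1 (0x07),
 * followed by XOR with the constant 01010101 (0x55). */
#include <stdint.h>
#include <stdio.h>

static uint8_t atm_hec(const uint8_t header[4])
{
    uint8_t crc = 0;
    for (int i = 0; i < 4; i++) {
        crc ^= header[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)  /* x^8 + x^2 + x + 1 */
                               : (uint8_t)(crc << 1);
    }
    return crc ^ 0x55;                   /* add the 01010101 constant */
}

int main(void)
{
    uint8_t hdr[4] = {0x00, 0x00, 0x00, 0x00};
    printf("HEC = 0x%02x\n", atm_hec(hdr));   /* all-zero header -> 0x55 */
    return 0;
}
```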
`A factor that played a major role in this checksumming scheme is the fact that
`ATM was designed for use over fiber, and fiber is highly reliable. Furthermore, a
major study of the U.S. telephone network has shown that during normal operation 99.64 percent of all errors on fiber optic lines are single-bit errors (AT&T and
`Bellcore, 1989). The HEC scheme corrects all single-bit errors and detects many
`
`-----,
`
`Ex.1006.253
`
`DELL
`
`
`
`236
`
`THE DATA LINK LAYER
`
`CHAP. 3
`
multibit errors as well. If we assume that the probability of a single-bit error is 10^{-8}, then the probability of a cell containing a detectable multibit header error is about 10^{-13}. The probability of a cell slipping through with an undetected header error is about 10^{-20}, which means that at OC-3 speed, one bad cell header will get through every 90,000 years. Although this may sound like a long time, once the earth has, say, 1 billion ATM telephones, each used 10 percent of the time, over 1000 bad cell headers per year will go undetected.
For applications that need reliable transmission in the data link layer, Shacham and McKenney (1990) have developed a scheme in which a sequence of consecutive cells are EXCLUSIVE ORed together. The result, an entire cell, is appended to the sequence. If one cell is lost or badly garbled, it can be reconstructed from the available information.
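A sketch of the idea: treat the extra cell as the byte-wise XOR of the k data cells, so a single missing cell can be rebuilt from the k-1 survivors plus the parity cell. The function names and the choice of k are illustrative, not part of the published scheme.

```c
/* Sketch of XOR-based cell recovery: the parity cell is the byte-wise
 * XOR of k consecutive cells, so any one missing cell can be rebuilt
 * by XORing the survivors with the parity cell. */
#include <stdint.h>
#include <string.h>

#define CELL_SIZE 53

void make_parity(const uint8_t cells[][CELL_SIZE], int k, uint8_t parity[CELL_SIZE])
{
    memset(parity, 0, CELL_SIZE);
    for (int i = 0; i < k; i++)
        for (int j = 0; j < CELL_SIZE; j++)
            parity[j] ^= cells[i][j];
}

/* Rebuild the cell at index 'lost' from the k-1 surviving cells
 * plus the parity cell. */
void recover_cell(const uint8_t cells[][CELL_SIZE], int k, int lost,
                  const uint8_t parity[CELL_SIZE], uint8_t out[CELL_SIZE])
{
    memcpy(out, parity, CELL_SIZE);
    for (int i = 0; i < k; i++)
        if (i != lost)
            for (int j = 0; j < CELL_SIZE; j++)
                out[j] ^= cells[i][j];
}
```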
Once the HEC has been generated and inserted into the cell header, the cell is ready for transmission. Transmission media come in two categories: asynchronous and synchronous. When an asynchronous medium is used, a cell can be sent whenever it is ready to go. No timing restrictions exist.

With a synchronous medium, cells must be transmitted according to a predefined timing pattern. If no data cell is available when needed, the TC sublayer must invent one. These are called idle cells.
`Another kind of nondata cell is the OAM (Operation And Maintenance)
`cell. OAM cells are also used by the ATM switches for exchanging control and
`other information necessary for keeping the system running. OAM cells also have
`some other special functions. For example, the 155.52-Mbps OC-3 speed matches
`the gross data rate of SONET, but an STM-1 frame has a total of 10 columns of
`overhead out of 270, so the SONET payload is only 260/270 x 155.52 Mbps or
`149.76 Mbps. To keep from swamping SONET, an ATM source using SONET
`would normally put out an OAM cell as every 27th cell, to slow the data rate
`down to 26/27 of 155.52 Mbps and thus match SONET exactly. The job of
matching the ATM output rate to the rate of the underlying transmission system is
`an important task of the TC sublayer.
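As a toy illustration of the rate matching just described, a transmitter could dedicate one slot in every group of 27 to a filler (OAM or idle) cell, so only 26/27 of the line rate carries data cells. The helper functions below are hypothetical, not part of any real interface.

```c
/* Minimal sketch of the rate-matching idea: one slot in every group
 * of 27 carries a filler cell.  next_data_cell(), make_filler_cell(),
 * and transmit_cell() are hypothetical helpers. */
typedef struct { unsigned char bytes[53]; } cell_t;

extern cell_t next_data_cell(void);
extern cell_t make_filler_cell(void);
extern void   transmit_cell(cell_t c);

void tc_output_loop(void)
{
    int slot = 0;
    for (;;) {
        transmit_cell(slot == 26 ? make_filler_cell()   /* every 27th slot */
                                 : next_data_cell());
        slot = (slot + 1) % 27;
    }
}
```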
`On the receiver's side, idle cells are processed in the TC sublayer, but OAM
`cells are given to the ATM layer. OAM cells are distinguished from data cells by
`having the first three header bytes be all zeros, something not allowed for data
`cells. The fourth byte describes the nature of the OAM cell.
Another important task of the TC sublayer is generating the framing information for the underlying transmission system, if any. For example, an ATM video camera might just produce a sequence of cells on the wire, but it might also produce SONET frames with the ATM cells embedded inside the SONET payload. In the latter case, the TC sublayer would generate the SONET framing and pack the ATM cells inside, not entirely a trivial business since a SONET payload does not hold an integral number of 53-byte cells.

Although the telephone companies clearly intend to use SONET as the underlying transmission system for ATM, mappings from ATM onto the payload fields
`
`of other systems have also been defined, and new ones are being worked on. In
particular, mappings onto T1, T3, and FDDI also exist.
`
`Cell Reception
`
`On output, the job of the TC sublayer is to take a sequence of cells, add a
`HEC to each one, convert the result to a bit stream, and match the bit stream to
`the speed of the underlying physical transmission system by inserting OAM cells
`as filler. On input, the TC sublayer does exactly the reverse. It takes an incoming
`bit stream, locates the cell boundaries, verifies the headers (discarding cells with
`invalid headers), processes the OAM cells, and passes the data cells up to the
`ATM layer.
`The hardest part is locating the cell boundaries in the incoming bit stream. At
`the bit level, a cell is just a sequence of 53 x 8 = 424 bits. No 01111110 flag
`bytes are present to mark the start and end of a cell, as they are in HDLC. In fact,
`there are no markers at all. How can cell boundaries be recognized under these
`circumstances?
In some cases, the underlying physical layer provides help. With SONET, for
`example, cells can be aligned with the synchronous payload envelope, so the SPE
`pointer in the SONET header points to the start of the first full cell. However,
`sometimes the physical layer provides no assistance in framing. What then?
The trick is to use the HEC. As the bits come in, the TC sublayer maintains a 40-bit shift register, with bits entering on the left and exiting on the right. The TC sublayer then inspects the 40 bits to see if they could be a valid cell header. If so, the rightmost 8 bits will be a valid HEC over the leftmost 32 bits. If this condition does not hold, the buffer does not hold a valid cell, in which case all the bits in the buffer are shifted right one bit, causing one bit to fall off the end, and a new input bit is inserted at the left end. This process is repeated until a valid HEC is located. At that point, the cell boundary is known because the shift register contains a valid header.
The trouble with this heuristic is that the HEC is only 8 bits wide. For any given shift register, even one containing random bits, the probability of finding a valid HEC is 1/256, a moderately large value. Used by itself, this procedure would incorrectly detect cell headers far too often.
To improve the accuracy of the recognition algorithm, the finite state machine of Fig. 3-30 is used. Three states are used: HUNT, PRESYNCH, and SYNCH. In the HUNT state, the TC sublayer is shifting bits into the shift register one at a time looking for a valid HEC. As soon as one is found, the finite state machine switches to the PRESYNCH state, meaning that it has tentatively located a cell boundary. It now shifts in the next 424 bits (53 bytes) without examining them. If its guess about the cell boundary was correct, the shift register should now contain another valid cell header, so it once again runs the HEC algorithm. If the HEC is
`
`incorrect, the TC goes back to the HUNT state and continues to search bit-by-bit
`for a header whose HEC is correct.
`
[Figure: a three-state diagram with transitions labeled "bit-by-bit check," "cell-by-cell check," "α incorrect HECs," and "δ correct HECs."]

Fig. 3-30. The cell delineation heuristic.
`
On the other hand, if the second HEC is also correct, the TC may be onto something, so it shifts in another 424 bits and tries again. It continues inspecting headers in this fashion until it has found δ correct headers in a row, at which time it assumes that it is synchronized and moves into the SYNCH state to start normal operation. Note that the probability of getting into the SYNCH state by accident with a purely random bit stream is 2^{-8δ}, which can be made arbitrarily small by choosing a large enough δ. The price paid for a large δ, however, is a longer time to synchronize.
In addition to resynchronizing after losing synchronization (or at startup), the TC sublayer needs a heuristic to determine when it has lost synchronization, for example after a bit has been inserted or deleted from the bit stream. It would be unwise to give up if just one HEC was incorrect, since most errors are bit inversions, not insertions or deletions. The wisest course here is just to discard the cell with the bad header and hope the next one is good. However, if α HECs in a row are bad, the TC sublayer has to conclude that it has lost synchronization and must return to the HUNT state.
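The whole delineation procedure can be sketched as a small state machine. The sketch below assumes a hypothetical bit-at-a-time input routine, reuses the atm_hec() calculation shown earlier, and uses illustrative values for the thresholds the text calls δ and α.

```c
/* Sketch of the HUNT / PRESYNCH / SYNCH cell delineation heuristic.
 * next_bit() is a hypothetical input routine; atm_hec() is the HEC
 * sketch shown earlier in this section. */
#include <stdint.h>

#define DELTA 6   /* illustrative: correct HECs in a row to enter SYNCH   */
#define ALPHA 7   /* illustrative: bad HECs in a row to fall back to HUNT */

extern int     next_bit(void);
extern uint8_t atm_hec(const uint8_t header[4]);

enum state { HUNT, PRESYNCH, SYNCH };

static uint8_t window[5];                 /* 40-bit shift register */

static void shift_in_bits(int n)
{
    while (n-- > 0) {
        int bit = next_bit();
        for (int i = 4; i > 0; i--)       /* shift everything right one bit */
            window[i] = (uint8_t)((window[i] >> 1) | (window[i - 1] << 7));
        window[0] = (uint8_t)((window[0] >> 1) | (bit << 7));  /* new bit enters on the left */
    }
}

static int header_valid(void)
{
    return atm_hec(window) == window[4];  /* rightmost 8 bits = HEC of leftmost 32 */
}

void delineate(void)
{
    enum state s = HUNT;
    int good = 0, bad = 0;

    for (;;) {
        if (s == HUNT) {
            shift_in_bits(1);             /* bit-by-bit search */
            if (header_valid()) { s = PRESYNCH; good = 1; }
        } else {
            shift_in_bits(424);           /* skip one whole cell */
            if (s == PRESYNCH) {
                if (!header_valid())      { s = HUNT; }
                else if (++good >= DELTA) { s = SYNCH; bad = 0; }
            } else {                      /* SYNCH: normal operation */
                if (header_valid())       bad = 0;
                else if (++bad >= ALPHA)  { s = HUNT; good = 0; }
            }
        }
    }
}
```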
Although unlikely, it is conceivable that a malicious user could try to spoof the TC sublayer by inserting a data pattern into the payload field of many consecutive cells that imitates the HEC algorithm. Then, if synchronization were ever lost, it might be regained in the wrong place. To make this trick much harder, the payload bits are scrambled on transmission and descrambled on reception.
Before leaving the TC sublayer, one comment is in order. The mechanism chosen for cell delineation requires the TC sublayer to understand and use the header of the ATM layer above it. Having one layer make use of the header of a higher layer is in complete violation of the basic rules of protocol engineering. The idea of having layered protocols is to make each layer be independent of the
`
`ones above it. It should be possible, for example, to change the header format of
the ATM layer without affecting the TC sublayer. However, due to the way cell
`delineation is accomplished, making such a change is not possible.
`
`3.7. SUMMARY
`
The task of the data link layer is to convert the raw bit stream offered by the physical layer into a stream of frames for use by the network layer. Various framing methods are used, including character count, character stuffing, and bit stuffing. Data link protocols can provide error control to retransmit damaged or lost frames. To prevent a fast sender from overrunning a slow receiver, the data link protocol can also provide flow control. The sliding window mechanism is widely used to integrate error control and flow control in a convenient way.

Sliding window protocols can be categorized by the size of the sender's window and the size of the receiver's window. When both are equal to 1, the protocol is stop-and-wait. When the sender's window is greater than 1, for example to prevent the sender from blocking on a circuit with a long propagation delay, the receiver can be programmed either to discard all frames other than the next one in sequence (protocol 5) or to buffer out-of-order frames until they are needed (protocol 6).
`
`Protocols can be modeled using various techniques to help demonstrate their
`correctness (or lack thereof). Finite state machine models and Petri net models
`are commonly used for this purpose.
`Many networks use one of the bit-oriented protocols-SDLC, HDLC,
`ADCCP, or LAPB-at the data link level. All of these protocols use flag bytes to
`delimit frames, and bit stuffing to prevent flag bytes from occurring in the data.
`All of them also use a sliding window for flow control. The Internet uses SLIP
`and PPP as data link protocols. ATM systems have their own simple protocol,
`which does a bare minimum of error checking and no flow control.
`
`PROBLEMS
`
`1. An upper layer message is split into 10 frames, each of which has an 80 percent
`chance of arriving undamaged. If no error control is done by the data link protocol,
`how many times must the message be sent on the average to get the entire thing
`through?
`
`2. The following data fragment occurs in the middle of a data stream for which the
`character-stuffing algorithm described in the text is used: DLE, STX, A, DLE, B,
`DLE, ETX. What is the output after stuffing?
`
`3. If the bit string 0111101111101111110 is bit stuffed, what is the output string?
`
`
4. When bit stuffing is used, is it possible for the loss, insertion, or modification of a single bit to cause an error not detected by the checksum? If not, why not? If so, how?
`Does the checksum length play a role here?
`
5. Can you think of any circumstances under which an open-loop protocol (e.g., a Hamming code) might be preferable to the feedback-type protocols discussed throughout this chapter?
`
`6. To provide more reliability than a single parity bit can give, an error-detecting coding
`scheme uses one parity bit for checking all the odd numbered bits and a second parity
`bit for all the even numbered bits. What is the Hamming distance of this code?
`
7. One way of detecting errors is to transmit data as a block of n rows of k bits per row and add parity bits to each row and each column. Will this scheme detect all single errors? Double errors? Triple errors?
`
`8. A block of bits with n rows and k columns uses horizontal and vertical parity bits for
`error detection. Suppose that exactly 4 bits are inverted due to transmission errors.
`Derive an expression for the probability that the error will be undetected.
9. What is the remainder obtained by dividing x^7 + x^5 + 1 by the generator polynomial x^3 + 1?
`
`10. Data link protocols almost always put the CRC in a trailer, rather than in a header.
`Why?
`
`11. A channel has a bit rate of 4 kbps and a propagation delay of 20 msec. For what range
`of frame sizes does stop-and-wait give an efficiency of at least 50 percent?
`
12. A 3000-km long T1 trunk is used to transmit 64-byte frames using protocol 5. If the
`propagation speed is 6 µsec/km, how many bits should the sequence numbers be?
`
`13. Imagine a sliding window protocol using so many bits for sequence numbers that
`wraparound never occurs. What relations must hold among the four window edges
`and the window size?
`
14. If the procedure between in protocol 5 checked for the condition a ≤ b ≤ c instead of the condition a ≤ b < c, would that have any effect on the protocol's correctness or
`efficiency? Explain your answer.
`
`15. In protocol 6, when a data frame arrives, a check is made to see if the sequence
`number differs from the one expected and NoNak is true. If both conditions hold, a
`NAK is sent. Otherwise, the auxiliary timer is started. Suppose that the else clause
`were omitted. Would this change affect the protocol's correctness?
`
`16. Suppose that the three-statement while loop near the end of protocol 6 were removed
from the code. Would this affect the correctness of the protocol or just the performance? Explain your answer.
`
`17. Suppose that the case for checksum errors were removed from the switch statement of
`protocol 6. How would this change affect the operation of the protocol?
`
`18. In protocol 6 the code for FrameArrival has a section used for NAKs. This section is
`invoked if the incoming frame is a NAK and another condition is met. Give a scenario
`where the presence of this other condition is essential.
`
`
`19. Imagine that you are writing the data link layer software for a line used to send data to
`you, but not from you. The other end uses HDLC, with a 3-bit sequence number and a
`window size of seven frames. You would like to buffer as many out of sequence
`frames as possible to enhance efficiency, but you are not allowed to modify the
`software on the sending side. Is it possible to have a receiver window greater than
one, and still guarantee that the protocol will never fail? If so, what is the largest window that can be safely used?
`
`20. Consider the operation of protocol 6 over a 1-Mbps error-free line. The maximum
`frame size is 1000 bits. New packets are generated about 1 second apart. The timeout
interval is 10 msec. If the special acknowledgement timer were eliminated, unnecessary timeouts would occur. How many times would the average message be transmitted?
`
21. In protocol 6, MaxSeq = 2^n - 1. While this condition is obviously desirable to make efficient use of header bits, we have not demonstrated that it is essential. Does the protocol work correctly for MaxSeq = 4, for example?
`
`22. Frames of 1000 bits are sent over a 1-Mbps satellite channel. Acknowledgements are
`always piggybacked onto data frames. The headers are very short. Three-bit
`sequence numbers are used. What is the maximum achievable channel utilization for
`(a) Stop-and-wait.
`(b) Protocol 5.
`(c) Protocol 6.
`
`23. Compute the fraction of the bandwidth that is wasted on overhead (headers and
`retransmissions) for protocol 6 on a heavily loaded 50-kbps satellite channel with data
`frames consisting of 40 header and 3960 data bits. ACK frames never occur. NAK
`frames are 40 bits. The error rate for data frames is 1 percent, and the error rate for
`NAK frames is negligible. The sequence numbers are 8 bits.
`
`24. Consider an error-free 64-kbps satellite channel used to send 512-byte data frames in
`one direction, with very short acknowledgements coming back the other way. What is
`the maximum throughput for window sizes of 1, 7, 15, and 127?
`
25. A 100 km long cable runs at the T1 data rate. The propagation speed in the cable is
`2/3 the speed of light. How many bits fit in the cable?
`
`26. Redraw Fig. 3-21 for a full-duplex channel that never loses frames. Is the protocol
`failure still possible?
`
27. Give the firing sequence for the Petri net of Fig. 3-23 corresponding to the state sequence (000), (01A), (01-), (010), (01A) in Fig. 3-20. Explain in words what the sequence represents.
`
28. Given the transition rules AC → B, B → AC, CD → E, and E → CD, draw the Petri net described. From the Petri net, draw the finite state graph reachable from the initial state ACD. What well-known computer science concept do these transition rules model?
`
`29. PPP is based closely on HDLC, which uses bit stuffing to prevent accidental flag bytes
`within the payload from causing confusion. Give at least one reason why PPP uses
`character stuffing instead.
`
`
`30. What is the minimum overhead in sending an IP packet using PPP? Count only the
`overhead introduced by PPP itself, not the IP header overhead.
31. Consider the ATM cell delineation heuristic with α = 5, δ = 6, and a per-bit error rate of 10^{-5}. Once the system is synchronized, how long will it remain so, despite occasional header bit errors? Assume the line is running at OC-3.
`
32. Write a program to stochastically simulate the behavior of a Petri net. The program should read in the transition rules as well as a list of states corresponding to the network layer issuing a new packet or the network layer accepting a new packet. From the initial state, also read in, the program should pick enabled transitions at random and fire them, checking to see if a host ever accepts two messages without the other host emitting a new one in between.
`
`
`
`4
`
`THE MEDIUM ACCESS SUBLAYER
`
`As we pointed out in Chap. 1, networks can be divided into two categories:
`those using point-to-point connections and those using broadcast channels. This
`chapter deals with broadcast networks and their protocols.
`In any broadcast network, the key issue is how to determine who gets to use
`the channel when there is competition for it. To make this point clearer, consider
a conference call in which six people, on six different telephones, are all connected together so that each one can hear and talk to all the others. It is very
`likely that when one of them stops speaking, two or more will start talking at
`once, leading to chaos. In a face-to-face meeting, chaos is avoided by external
`means, for example, at a meeting, people raise their hands to request permission
`to speak. When only a single channel is available, determining who should go
`next is much harder. Many protocols for solving the problem are known and form
`the contents of this chapter. In the literature, broadcast channels are sometimes
`referred to as multiaccess channels or random access channels.
`The protocols used to determine who goes next on a multiaccess channel
belong to a sublayer of the data link layer called the MAC (Medium Access Control) sublayer. The MAC sublayer is especially important in LANs, nearly all of which use a multiaccess channel as the basis of their communication. WANs, in contrast, use point-to-point links, except for satellite networks. Because multiaccess channels and LANs are so closely related, in this chapter we will discuss
`LANs in general, as well as satellite and some other broadcast networks.
`
`243
`
`Ex.1006.261
`
`DELL
`
`
`
`244
`
`THE MEDIUM ACCESS SUBLAYER
`
`CHAP. 4
`
`Technically, the MAC sublayer is the bottom part of the data link layer, so
logically we should have studied it before examining all the point-to-point protocols in Chap. 3. Nevertheless, for most people, understanding protocols involving
`multiple parties is easier after two-party protocols are well understood. For that
`reason we have deviated slightly from a strict bottom-up order of presentation.
`
`4.1. THE CHANNEL ALLOCATION PROBLEM
`
`The central theme of this chapter is how to allocate a single broadcast channel
among competing users. We will first look at static and dynamic schemes in general. Then we will examine a number of specific algorithms.
`
`4.1.1. Static Channel Allocation in LANs and MANs
`
`The traditional way of allocating a single channel, such as a telephone trunk,
`among multiple competing users is Frequency Division Multiplexing (FDM). If
`there are N users, the bandwidth is divided into N equal sized portions (see Fig.
2-24), each user being assigned one portion. Since each user has a private frequency band, there is no interference between users. When there is only a small
`and fixed number of users, each of which has a heavy (buffered) load of traffic
`(e.g., carriers' switching offices), FDM is a simple and efficient allocation
`mechanism.
`However, when the number of senders is large and continuously varying, or
`the traffic is bursty, FDM presents some problems. If the spectrum is cut up into
`N regions, and fewer than N users are currently interested in communicating, a
`large piece of valuable spectrum will be wasted. If more than N users want to
`communicate, some of them will be denied permission, for lack of bandwidth,
`even if some of the users who have been assigned a frequency band hardly ever
`transmit or receive anything.
`However, even assuming that the number of users could somehow be held
`constant at N, dividing the single available channel into static subchannels is
`inherently inefficient. The basic problem is that when some users are quiescent,
`their bandwidth is simply lost. They are not using it, and no one else is allowed to
`use it either. Furthermore, in most computer systems, data traffic is extremely
`bursty (peak traffic to mean traffic ratios of 1000:1 are common). Consequently,
`most of the channels will be idle most of the time.
The poor performance of static FDM can easily be seen from a simple queueing theory calculation. Let us start with the mean time delay, T, for a channel of capacity C bps, with an arrival rate of λ frames/sec, each frame having a length drawn from an exponential probability density function with mean 1/μ bits/frame:

T = \frac{1}{\mu C - \lambda}
`
`Now let us divide the single channel up into N independent subchannels, each
`
with capacity C/N bps. The mean input rate on each of the subchannels will now be λ/N. Recomputing T we get

T_{FDM} = \frac{1}{\mu (C/N) - (\lambda/N)} = \frac{N}{\mu C - \lambda} = NT \qquad (4-1)

The mean delay using FDM is N times worse than if all the frames were somehow magically arranged orderly in a big central queue.
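As a quick numeric check of Eq. (4-1), the sketch below plugs in made-up values (C = 100 Mbps, mean frame length 1/μ = 10,000 bits, λ = 5000 frames/sec, N = 10); none of these numbers come from the text.

```c
/* Worked example of Eq. (4-1): mean delay for one shared channel
 * versus N static FDM subchannels, with illustrative numbers. */
#include <stdio.h>

int main(void)
{
    double C      = 100e6;     /* channel capacity, bps             */
    double mean   = 10000.0;   /* mean frame length 1/mu, bits      */
    double mu     = 1.0 / mean;
    double lambda = 5000.0;    /* arrival rate, frames/sec          */
    int    N      = 10;        /* number of FDM subchannels         */

    double T     = 1.0 / (mu * C - lambda);      /* single big queue */
    double T_fdm = N   / (mu * C - lambda);      /* static FDM       */

    printf("T     = %.2f msec\n", 1e3 * T);      /* prints 0.20 msec */
    printf("T_FDM = %.2f msec\n", 1e3 * T_fdm);  /* prints 2.00 msec */
    return 0;
}
```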
`Precisely the same arguments that apply to FDM also apply to time division
`multiplexing (TDM). Each user is statically allocated every Nth time slot. If a
`user does not use the allocated slot, it just lies fallow. Since none of the tradi(cid:173)
`tional static channel allocation methods work well with bursty traffic, we will now
`explore dynamic methods.
`
`(4-1)
`
`4.1.2. Dynamic Channel Allocation in LANs and MANs
`
Before we get into the first of the many channel allocation methods to be discussed in this chapter, it is worthwhile carefully formulating the allocation problem. Underlying all the work done in this area are five key assumptions, described below.

1. Station model. The model consists of N independent stations (computers, telephones, personal communicators, etc.), each with a program or user that generates frames for transmission. The probability
of a frame being generated in an interval of length Δt is λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.