Chandran et al.

(10) Patent No.: US 7,392,279 B1
(45) Date of Patent: Jun. 24, 2008

US007392279B1

(54) NETWORK TRAFFIC SHAPING USING
TIME-BASED QUEUES

(75) Inventors: Kartik S. Chandran, Sunnyvale, CA
(US); Guenter Roeck, San Jose, CA
(US); Sunil Khaunte, Santa Clara, CA
(US)

(73) Assignee: Cisco Technology, Inc., San Jose, CA
(US)
6,404,737 B1 * 6/2002 Novick et al. .......... 370/235.1
6,438,134 B1 * 8/2002 Chow et al. ............ 370/412
6,526,062 B1 * 2/2003 Milliken et al. ........ 370/395.42
6,621,792 B1 * 9/2003 Petty .................. 370/230.1
6,724,767 B1 * 4/2004 Chong et al. ........... 370/412
`OTHER PUBLICATIONS
R. Ladner, "CSE 326: Data Structures", Spring 1998, Lecture Overheads: Lecture 19, Calendar Queues.
Randy Brown, "Calendar Queues: A Fast O(1) Priority Queue Implementation For the Simulation Event Set Problem", Oct. 1988, vol. 31, Communications of the ACM.
LSI Logic Corporation, "LSI Logic's L64364 ATMizer II+ ATM SAR Chip Technical Manual", Chapter 7, Nov. 16, 1998.
`* cited by examiner
Primary Examiner: Dustin Nguyen
(74) Attorney, Agent, or Firm: Weaver Austin Villeneuve & Sampson LLP

(57) ABSTRACT
A time-based buffering system buffers data based upon how long the data should be held in order to comply with a traffic shaping policy. The data's source or destination need not be considered in determining where to buffer the data. The time-based buffering system includes a collection of time-based queues, each of which has a different time to dequeue. The system controlling traffic shaping determines how long a particular piece of data should be buffered (a "traffic shaping delay") until it can be put on the network. Then, based upon that length of time, the system chooses one of the time-based queues in which to buffer the data. That chosen queue has a dequeuing time that matches the traffic shaping delay. After the chosen queue dequeues its contents (at the specified time), it assumes a new dequeuing time and is free to buffer new data that must be delayed by a time matching the new dequeuing time.
`
`29 Claims, 11 Drawing Sheets
`
(*) Notice: Subject to any disclaimer, the term of this patent
is extended or adjusted under 35 U.S.C. 154(b) by ... days.

(21) Appl. No.: 09/276,917

(22) Filed: Mar. 26, 1999

(51) Int. Cl.
G06F 15/16 (2006.01)
(52) U.S. Cl. ..................... 709/200; 370/230.1; 370/235;
370/395.4; 370/412; 719/314; 718/102
(58) Field of Classification Search ................. 717/153;
370/412, 60, 229, 230, 395, 235, 429, 236.1,
370/395.42; 709/240
`See application file for complete search history.
`References Cited
`
`(56)
`
`U.S. PATENT DOCUMENTS
`
4,435,753 A * 3/1984 Rizzi ..................... 364/200
5,231,633 A * 7/1993 Hluchyj et al. ............ 370/429
5,317,562 A   5/1994 Nardin et al.
5,463,620 A * 10/1995 Sriram ................... 370/60
5,754,530 A * 5/1998 Awdeh et al. ............. 370/236.1
5,838,915 A   11/1998 Klausmeier et al. ........ 395/200.45
6,052,375 A * 4/2000 Bass et al. .............. 370/412
6,104,700 A   8/2000 Haddock et al. ........... 370/235
6,195,333 B1  2/2001 Wise ..................... 370/235
6,247,061 B1  6/2001 Douceur et al.
6,259,699 B1 * 7/2001 Opalka et al.
6,389,019 B1  5/2002 Fan et al. ............... 370/395.42
`
`GUEST TEK EXHIBIT 1005
`Guest Tek v. Nomadix, IPR2019-00253
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 1 of 11    US 7,392,279 B1

[Figure 1: network interface (10) receives a packet (12); a rate policer decides whether to transmit, drop, or buffer; a buffer (16) holds delayed packets; a wake-up process (20) triggers dequeuing]
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 2 of 11    US 7,392,279 B1

[Figure 2 flowchart, steps 201-223: RECEIVE VALUES OF TIME INCREMENT AND MAX SHAPING DELAY; PROVIDE TIME-BASED QUEUING STRUCTURE; AWAIT NEXT PACKET; RECEIVE PACKET FROM NETWORK; WILL TRAFFIC RATE LIMIT BE EXCEEDED?; ENQUEUE PACKET FOR TRANSMISSION; DETERMINE SHAPING DELAY; SHAPING DELAY <= MAX. SHAPING DELAY?; DROP PACKET; DETERMINE BUCKET IN TIME-BASED QUEUING STRUCTURE; BUFFER PACKET AND DEQUEUE TIMED-OUT BUCKETS]

Figure 2
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 3 of 11    US 7,392,279 B1

[Figure 3 flowchart, steps 302-308 (from step 223 of Figure 2): ARE THERE ANY BUCKETS THAT HAVE TIMED-OUT? (302); DEQUEUE PACKETS FROM ALL BUCKETS THAT HAVE TIMED-OUT (304); ENQUEUE PACKETS FROM BUCKETS FOR TRANSMISSION (306); BUFFER PACKET IN TIME-BASED QUEUING STRUCTURE (308)]

Figure 3
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 4 of 11    US 7,392,279 B1

[Figure 4 flowchart, steps 401-409: WAIT FOR WAKE-UP PERIOD TO EXPIRE; RECEIVE WAKE-UP NOTIFICATION; DEQUEUE PACKETS FROM ALL BUCKETS THAT HAVE TIMED-OUT; ENQUEUE PACKETS FROM BUCKETS FOR TRANSMISSION]

Figure 4
`
`
`
`
`
`
`
`
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 5 of 11    US 7,392,279 B1

[Figure 5A: schematic of a cable modem plant]

U.S. Patent    Jun. 24, 2008    Sheet 6 of 11    US 7,392,279 B1

[Figure 5B: structure of a MAP message, elements 540, 542, 544, 546, 548]

Figure 5B
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 7 of 11    US 7,392,279 B1

[Figure 6 flowchart, steps 601-623: RECEIVE VALUES OF TIME INCREMENT AND MAX SHAPING DELAY; PROVIDE TIME-BASED QUEUING STRUCTURE; AWAIT NEXT BANDWIDTH REQUEST; RECEIVE A BANDWIDTH REQUEST FROM A MODEM; SCHEDULE A GRANT FOR THE BANDWIDTH REQUEST; DETERMINE SHAPING DELAY FOR GRANT; SHAPING DELAY <= MAX. SHAPING DELAY?; DROP BANDWIDTH REQUEST; DETERMINE BUCKET IN TIME-BASED QUEUING STRUCTURE; BUFFER GRANT AND DEQUEUE TIMED-OUT BUCKETS]

Figure 6
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 8 of 11    US 7,392,279 B1

[Figure 7 flowchart, steps 702-712 (from step 623 of Figure 6): ARE THERE ANY BUCKETS THAT HAVE TIMED-OUT? (702); DEQUEUE GRANTS FROM ALL BUCKETS THAT HAVE TIMED-OUT (704); SCHEDULE TIMESLOTS FOR DEQUEUED GRANTS (706); BUFFER GRANT IN SELECTED BUCKET OF TIME-BASED QUEUING STRUCTURE (708); WAS BUCKET EMPTY PRIOR TO ENQUEUING? (710); CHAIN SELECTED BUFFER IN LIST OF PENDING BUCKETS (712)]

Figure 7
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 9 of 11    US 7,392,279 B1

[Figure 8 flowchart, steps 801-817: WAIT FOR MAP INTERRUPT; RECEIVE MAP INTERRUPT; IS PENDING BUCKET LIST EMPTY?; INSERT GRANTS IN MAP; DEQUEUE GRANTS FROM BUCKETS THAT HAVE TIMED-OUT; SCHEDULE TIMESLOTS FOR DEQUEUED GRANTS; PUT PENDING GRANTS FOR PENDING BUCKETS IN MAP]

Figure 8
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 10 of 11    US 7,392,279 B1

[Figure 9: block diagram of a cable modem termination system: DOCSIS PHY with downstream modulator and transmitter and upstream demodulator and receiver; media access control; network data interface; memory]

Figure 9
`
`
`
`
`
`
`
`
`
U.S. Patent    Jun. 24, 2008    Sheet 11 of 11    US 7,392,279 B1

[Figure 10: block diagram of a router (1010) with interfaces (1068)]

Figure 10
`
`
`
NETWORK TRAFFIC SHAPING USING
TIME-BASED QUEUES

BACKGROUND OF THE INVENTION
`
This invention relates to systems and methods for controlling traffic to particular destinations ("data flows") on a network. The invention accomplishes this using a time-based queuing system comprising a series of queues, each of which buffers packets for a defined length of time.
Traffic shaping involves limiting the rate at which data is transmitted to particular locations on a network (e.g., to network interfaces or network destinations). This may be employed to prevent congestion on a particular network segment or to limit a network entity to the amount of bandwidth that it has paid for. A network employs a "policing algorithm" to determine when traffic to or from a particular network user should be limited.
If a policing algorithm determines that the network interface is at a maximum usage rate, it may limit the flow of network traffic accordingly. Typically, traffic flow is limited by simply discarding or dropping packets to or from a network interface. This means that the discarded packets must be retransmitted at a later time when the network flow is not at the maximum level allowed by the policing algorithm. Administration of the retransmissions is handled by TCP or another relatively high-level network protocol. Putting this burden on TCP often places too much responsibility on the protocol (which plays many other roles in administering network traffic) and thereby greatly reduces network performance. This may be manifested by short periods of very bursty traffic followed by extended quiet periods.
One potential solution to this difficulty employs separate queues for each individual interface or destination that requires flow control. Unfortunately, this solution does not scale well for systems having many destinations. Each new network destination requires a new and separate queue. And each new separate queue requires its own additional memory and its own additional CPU resources (overhead to identify the appropriate queue, etc.). If a system needs to service 5000 destinations, then it also needs to provide and service 5000 distinct queues.
What is needed, therefore, is an improved technique for traffic shaping on a per-destination basis. The technique should not unnecessarily discard packets when a maximum flow rate is exceeded and should not rely on a single queue for each destination under consideration.
`SUMMARY OF THE INVENTION
`
The present invention provides a time-based buffering system that buffers network data (e.g., packets) based upon how long the data should be held in order to comply with a traffic shaping policy. The data's source or destination need not be considered (although it can be) in determining how and where to buffer the data. Typically the time-based buffering system includes a collection of time-based queues (sometimes referred to as "buckets"), each of which is scheduled for a different time to dequeue. The system controlling traffic shaping may determine how long a particular piece of data should be buffered (a "traffic shaping delay") until it can be put on the network. Then, based upon that length of time, the system chooses one of the time-based queues in which to buffer the data. That chosen queue has a dequeuing time that matches the traffic shaping delay. Queue identifications may be recycled, so that after a chosen queue dequeues its contents (at the specified time), it is rescheduled for a new dequeuing time. It is then available to buffer new data that must be delayed by a time matching the new dequeuing time. Preferably, a given time-based queue can simultaneously buffer data associated with different nodes. Preferably, it can also buffer network data packets of varying sizes.
One aspect of the invention relates to a method of controlling data flow through a network. The method may be characterized by the following sequence: (1) determining that transmitting additional data to or from a particular network node will or will likely exceed a maximum allowed data flow for the network node; (2) selecting one of a group of time-based queues; and (3) buffering the additional data in the selected time-based queue. Together, the time-based queues define a period of time. Each time-based queue defines a separate increment of time within the time period, and each time-based queue is set to dequeue its contents at a separate time. For example, a group of three time-based queues may each have 40 millisecond increments of time. Together they may define a period of time of 120 milliseconds. Initially, a first of these queues may be set to dequeue in 40 milliseconds, a second to dequeue in 80 milliseconds, and a third to dequeue in 120 milliseconds. At 40 milliseconds, the first queue dequeues its contents and it may be set with a new dequeue time of 120 milliseconds. At that time, the second queue's time to dequeue is 40 milliseconds and the third queue's time to dequeue is 80 milliseconds in the future.
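The three-queue rotation in this example can be sketched as follows. This is our own minimal illustration, not code from the patent; the function name and list representation are assumptions.

```python
# Illustrative sketch of three rotating time-based queues with a
# 40 ms increment and a 120 ms total period (names are ours).
INCREMENT_MS = 40
NUM_QUEUES = 3
PERIOD_MS = INCREMENT_MS * NUM_QUEUES  # 120 ms

def advance(dequeue_at_ms, now_ms):
    """Dequeue every queue whose scheduled time has arrived and
    reschedule it one full period later."""
    for i, t in enumerate(dequeue_at_ms):
        if t <= now_ms:
            dequeue_at_ms[i] = t + PERIOD_MS
    return dequeue_at_ms

# Initially the queues dequeue at 40, 80, and 120 ms.
times = advance([40, 80, 120], 40)
# The first queue fired and was rescheduled to 160 ms; relative to the
# 40 ms mark, the second queue dequeues in 40 ms and the third in 80 ms.
```

Each queue thus cycles through the whole period rather than being tied to any particular destination.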
In a specific embodiment, the increments of time defined by the time-based queues are each between about 1 and 50 milliseconds, and the total period of time defined by the group of time-based queues is between about 100 and 500 milliseconds.
Typically, the system will have to calculate a time to transmit the additional data (the traffic shaping delay). This calculation may be based upon traffic shaping criteria such as those provided in a conventional policing algorithm. It can involve, for example, calculating an amount of network capacity that would be used by the network node if the additional data were to be transmitted. After calculating the traffic shaping delay for the additional data, the system selects the time-based queue having a time to dequeue that matches the calculated traffic shaping delay. Then, when that queue dequeues the additional data, the network can transmit that data without exceeding the maximum allowed data flow for the network.
Note that if the system determines that transmitting the additional data under consideration will not exceed the maximum allowed data flow for the associated network node, it may transmit that data without buffering. In this case, the system need not use the time-based queues. At the other extreme, the system may determine that the traffic shaping delay is so long that the new data should be discarded without buffering in the time-based queues. In that case, as well, the system will not need to use the time-based queues. Typically, this will occur when the traffic shaping delay is greater than a maximum shaping delay (e.g., the total period of time defined by the group of individual time-based queues, such as 120 milliseconds in the above example).
In the discussion so far, the system has considered the effect of transmitting actual network data (e.g., packets) that it has received. There are other situations that may employ the time-based buffering of this invention. One notable example involves allocating "grants" to transmit additional data in networks such as cable modem plants. The system may buffer these grants to transmit in time-based queues in the same manner that it buffers simple data. When the time-based queue dequeues its grant(s) to transmit, the node receiving the grant may transmit actual data at the time specified by the grant. That time should conform to the system's traffic shaping criteria.
Another aspect of the invention provides a computer program product having program instructions stored on a machine-readable medium (e.g., memory in a router or other network device, or a separate and portable CD-ROM). The program instructions may specify one or more of the method sequences described above.
Another aspect of the invention provides an apparatus for controlling data flow through a network. The apparatus may be characterized by the following features: (1) one or more processors; (2) memory coupled to at least one of the one or more processors; and (3) a plurality of time-based queues, as described above, logically configured on the memory. The processors are configured or designed to direct (i) data or (ii) grants to transmit data to particular time-based queues based upon the requirements of a network traffic shaping policy prescribed for the data or grants to transmit the data. Preferably the apparatus is a router or a switch. It may be a cable modem termination system.
The one or more processors may have additional roles. For example, they may determine the network traffic shaping delay required for the data or grant to transmit data. And, based upon the delay, the one or more processors may discard data or a request to grant transmission of data if the delay is greater than the maximum traffic shaping delay (e.g., the period of time defined by the plurality of time-based queues). Further, the one or more processors can be configured or designed to transmit, without buffering in a time-based queue, the data or grants to transmit data if there is no network traffic shaping delay.
As suggested above, a processor (or the apparatus generally) may direct network packets of varying sizes to the time-based queues. Still further, a single time-based queue may simultaneously buffer data or grants to transmit data from or to different network nodes. In other words, the time-based queue is not dedicated to a particular network node. This overcomes the above-described difficulties associated with destination-based queuing for traffic shaping.
These and other features and advantages of the present invention will be described below with reference to the associated drawings.
`
BRIEF DESCRIPTION OF THE DRAWINGS
`
FIG. 1 is a block diagram illustrating the logical elements of a time-based queuing system implemented on a network device or devices, in accordance with an embodiment of this invention.
FIG. 2 is a process flow diagram illustrating a method of implementing the time-based queuing system of this invention for downstream transmission of data items.
FIG. 3 is a process flow diagram illustrating how the data items handled by the method of FIG. 2 may be enqueued.
FIG. 4 is a process flow diagram depicting an interrupt-based method for dequeuing data items from a collection of time-based queues.
FIG. 5A is a schematic illustration of a cable modem plant that may be used with the present invention.
FIG. 5B is a block diagram depicting the structure of a MAP message suitable for use with the present invention.
FIG. 6 is a process flow diagram depicting a time-based queuing method of this invention as applied to granting bandwidth requests to cable modems or other network entities.
`
FIG. 7 is a process flow diagram depicting a method for enqueuing bandwidth requests handled in accordance with the method depicted in FIG. 6.
FIG. 8 is a process flow diagram depicting a MAP-interrupt method for dequeuing bandwidth grants in accordance with one embodiment of this invention.
FIG. 9 is a block diagram of a cable modem termination system that may be employed to implement the present invention.
FIG. 10 is a block diagram of a router that may be used in conjunction with the methods of the present invention.
`
`DETAILED DESCRIPTION OF THE PREFERRED
`EMBODIMENTS
`
As mentioned, this invention pertains to buffering data in time-based queues. A time-based queue is a queue that dequeues its contents (i.e., items that it is buffering) at scheduled times. In most embodiments described herein, a time-based queue will dequeue all of its contents at the scheduled time. However, there may be alternative embodiments in which it is desirable to dequeue only a portion of a time-based queue's buffered data. A time-based queue will typically be implemented with a plurality of like queues concatenated in time. The first of these queues may dequeue after a particular increment of time has elapsed. Thereafter, after that same increment of time has elapsed a second time, the second time-based queue dequeues its contents. Then, after yet another such increment of time elapses, a third queue dequeues its contents, and so on. In this manner, every so many milliseconds, one of the time-based queues dequeues its contents.
This approach to queuing network data is very easy to implement and scales very well with additional destinations. The network interface or other system managing the time-based queues need only keep track of the increment of dequeuing time associated with the concatenated time-based queues. It need not keep track of the data destinations.
`Time-Based Queuing Systems
One approach to time-based queuing of network data is depicted in FIG. 1. As shown there, a network interface 10 or other network entity initially receives a packet 12. Possibly, interface 10 connects a local network (e.g., a LAN or cable plant) to an outside network. Packet 12 may have arrived from either a local node interior to the interface or a source exterior to the interface (e.g., an Internet source).
Interface 10 must control traffic to or from particular network nodes for which it is responsible. Those nodes may have limitations in (1) the rate at which they can send traffic over a network, (2) the rate at which they can receive traffic from the network, or (3) the total rate at which they can send and receive traffic over the network. Thus, the interface may have to shape traffic addressed to a particular network node and/or shape traffic sent by that node.
Within interface 10, packet 12 is considered by a policing block (or policer) 14, which implements a policing algorithm. That may be implemented as hardware, firmware, or software executed on one or more processors. Policing block 14 determines whether the packet should be transmitted, dropped, or buffered in a time-based queue. Policing block 14 implements a policing policy specifying the criteria for transmitting, dropping, and buffering. Many such policing algorithms may be employed. Examples include "token bucket" and "leaky bucket" algorithms.
In one example, block 14 first determines whether the packet destination (or source) has exceeded an allowed amount of traffic on the network. If not, block 14 transmits
`
`
packet 12 to its destination. If, on the other hand, the amount of traffic allowed to the destination or source has been exceeded, block 14 must determine whether to drop or buffer packet 12. The details of this procedure will be explained below. If the packet is dropped, a high-level service such as TCP must recognize this and retransmit packet 12 at a later time.
Assuming that block 14 determines that packet 12 should be buffered, it passes that packet to a traffic-shaping block 16. That block then determines where to buffer packet 12. It will buffer the packet in one of the time-based queues comprising a time-based queuing structure 18. Note that the individual time-based queues in structure 18 are sometimes referred to herein as "buckets." Each queue is a chain of buffers; each buffer is a queue element. The data (packet 12) is actually stored in a buffer.
At any given time, queuing structure 18 may have its time-based queues populated with a variety of different packets having different sources and different destinations. These packets may also vary widely in size. Every few milliseconds, one of the time-based queues dequeues its contents. In the example depicted, time-based queue B is currently dequeuing its contents 22. These contents are then forwarded to their respective destinations on the network. In the embodiment shown, a wake-up process 20 instructs structure 18 to dequeue the contents of the next scheduled bucket. It does so every time that the specified increment of time elapses. This is the increment of time associated with each time-based queue (bucket).
Note that whenever interface 10 allows packets to be transmitted (whether by having rate policer 14 determine that no traffic shaping delay is appropriate or by having structure 18 dequeue a bucket), appropriate packets are provided to the destination network. Once on the network, these packets find their way to the intended destination by whatever mechanism the destination network uses (e.g., broadcast or multicast).
As mentioned, rate policer 14 must determine whether to transmit, drop, or buffer packet 12. It may accomplish this in two steps. First, it determines whether the policy for the destination would be breached by immediately transmitting the packet. For example, a destination may be allotted a bandwidth of 1 megabit per second but only be currently using 512 kilobits per second. Because the destination is using less than its allotted bandwidth (a maximum allowed data flow), rate policer 14 will typically determine that the packet should be transmitted without delay.
If, however, immediate transmission of the packet would violate a policy implemented on policer 14, it cannot transmit the packet. It must then determine whether to drop or buffer the packet. Typically, this is accomplished by first calculating a traffic shaping delay required before the packet can be transmitted without violating the policy. If that delay is greater than some "maximum shaping delay" such as 500 milliseconds, block 14 will drop the packet. Often this maximum shaping delay corresponds to the aggregate length of time for all time-based queues in structure 18. This size is sometimes referred to as the "period of time" for all the time-based queues in structure 18.
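The policer's two-step decision described above can be sketched as follows. This is a hedged illustration of the transmit/drop/buffer logic only; the function name, the threshold value, and the assumption that the shaping delay has already been computed are ours, not the patent's.

```python
# Sketch of the policer's decision: transmit when the policy is not
# breached, drop when the required delay exceeds the maximum shaping
# delay, otherwise hand the packet to the traffic shaper for buffering.
MAX_SHAPING_DELAY_MS = 500  # example maximum shaping delay from the text

def police(shaping_delay_ms):
    """Classify a packet given its calculated traffic shaping delay."""
    if shaping_delay_ms <= 0:
        return "transmit"   # immediate transmission does not violate policy
    if shaping_delay_ms > MAX_SHAPING_DELAY_MS:
        return "drop"       # delay too long; retransmission is left to TCP
    return "buffer"         # forward to the traffic shaper (block 16)
```

A delay of zero corresponds to the under-quota case (e.g., 512 kilobits per second used of a 1 megabit per second allotment), where the packet goes out at once.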
Assuming that the traffic shaping delay is less than the maximum shaping delay, block 14 forwards the packet to traffic shaper 16. It is now the job of traffic shaper 16 to determine which of the buckets of structure 18 should be used to buffer the packet under consideration. It may accomplish this by finding a bucket that is scheduled to dequeue its contents at or near the time when the calculated traffic shaping delay is up. Typically, it will choose the first bucket scheduled to dequeue its contents after the traffic shaping delay has passed.
`The specific functional blocks presented herein such as
`policer block 14 and traffic shaper block 16 may be blocks of
`program code that, when executed, perform the functions
`ascribed to them. Thus, these functional blocks typically
`require the concerted action of one or more processors,
`memory, and program instructions that selectively configure
`processor(s) and memory. Of course, any of these functional
`blocks may be provided, in whole or in part, on hardware
`specially designed to perform at least some part of the
`described functions.
In a preferred embodiment, time-based queuing structure 18 is designed or configured with two parameters: a bucket width or "time increment," b, and a maximum shaping delay or "period of time," M*b, where M is the number of buckets in structure 18. The choice of a particular bucket within structure 18 in which to buffer a given packet may be identified by the following expression:

B = (B_next + d(i)/b) mod M

In this expression, B is the bucket identified for buffering the current packet. It may be any one of the buckets within time-based queuing structure 18 (e.g., B_0, B_1, ..., B_(M-1)). B_next is the next bucket in structure 18 currently scheduled for dequeuing. The parameter d(i) is the traffic shaping delay calculated for the current packet in traffic flow i. Thus the value d(i)/b restates the traffic shaping delay in terms of the number of buckets that must be dequeued before the packet can be transmitted. That is, d(i)/b identifies how many buckets in advance of the current bucket the packet must be enqueued. Of course, simply summing B_next with the number of buckets to advance may give a value greater than the highest bucket index. Therefore, it is necessary to take the modulo of B_next + d(i)/b with respect to the total number of buckets in structure 18, M.
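The bucket-selection expression can be sketched in a few lines. Rounding d(i)/b up to the next whole bucket is our assumption, chosen to match the earlier rule of picking the first bucket that dequeues after the shaping delay has passed; the function and parameter names are ours.

```python
import math

# Sketch of the bucket-selection expression B = (B_next + d(i)/b) mod M.
def select_bucket(b_next, delay_ms, b_ms, m):
    """Return the index of the time-based queue ("bucket") in which to
    buffer a packet whose traffic shaping delay is delay_ms."""
    buckets_ahead = math.ceil(delay_ms / b_ms)  # d(i)/b in whole buckets
    return (b_next + buckets_ahead) % m         # wrap around the M buckets

# With 40 ms buckets, a 90 ms delay advances 3 buckets past the next
# scheduled one, wrapping modulo the M = 8 buckets in the structure.
select_bucket(b_next=1, delay_ms=90, b_ms=40, m=8)  # → 4
```

The modulo handles the wrap-around case where the advance would otherwise run past the highest bucket index.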
In a preferred embodiment, the maximum shaping delay, M*b, is between about 100 and 500 milliseconds. Of course, the invention is not limited to this range. Generally, the value of this parameter will be chosen based upon the available memory and the maximum per-packet delay. The larger the value of M*b, the more packets that would need to be buffered by the shaper. A large value of M*b can potentially mean a larger per-packet delay, as more time would be spent by the packet in the shaper. A very large delay would make shaping useless, as the source of packets might time out and assume the packets have been dropped, leading to retransmissions, which is exactly what the shaper seeks to avoid in the first place. The value of b (termed the bucket's "width," "time increment," or "granularity") is chosen based upon the competing concerns of finding a bucket that precisely matches a packet's calculated delay and requiring only minimal system overhead to trigger the periodic dequeuing of buffered data. In one preferred embodiment, the value of b is between about 1 and 50 milliseconds.
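The relationship between the two parameters can be restated numerically. The particular values below are illustrative picks from the stated ranges, not values fixed by the patent.

```python
# Illustrative relationship between the shaper parameters: M buckets of
# width b give a maximum shaping delay ("period of time") of M*b.
B_MS = 10                     # bucket width b, within the ~1-50 ms range
MAX_SHAPING_DELAY_MS = 500    # period M*b, within the ~100-500 ms range
M = MAX_SHAPING_DELAY_MS // B_MS  # number of buckets in the structure
# Here M = 50 buckets cover the full 500 ms period.
```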
In the embodiment shown in FIG. 1, wake-up process 20 regularly triggers dequeuing of successive time-based queues. This typically requires that the system issue interrupts, a fairly expensive procedure. Thus, from the perspective of system overhead, the value of b is preferably large. Unfortunately, this causes many packets to be queued for significantly longer periods of time than are required by the calculated traffic shaping delay values.
In an alternative embodiment, the system overhead associated with the queuing is reduced, thereby allowing smaller values of "b." In this embodiment, the system requires two
`
separate dequeuing parameters: "b" and a "wake-up increment." The wake-up increment specifies the time that will elapse between successive interrupts from the system to trigger dequeuing. This may be set to a relatively large number such as 40 milliseconds. The value of "b" may be a much smaller value such as, for example, 1 millisecond. In this embodiment, dequeuing is triggered by either of two separate mechanisms. First, whenever a new piece of data is enqueued, the system may dequeue the contents of all buckets that have "timed-out" (i.e., passed their scheduled dequeuing time based on the value of "b"). Second, whenever a wake-up interrupt occurs, all timed-out buckets are dequeued.
Note that if two separate enqueuing events occur within a single time increment, b, only a single time-based queue will be dequeued during these two events. However, if the next enqueuing event occurs after the time increment "b" has elapsed, the next scheduled bucket in time-based queuing structure 18 will be dequeued. In the case where two enqueuing events are separated in time by more than one time increment, b, when the second enqueuing event takes place, all time-based queues up to the current point will be dequeued. And, whenever the wake-up interrupt occurs, all time-based queues scheduled for dequeuing since the last enqueuing event will be dequeued.
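The dual-trigger scheme can be sketched as follows. This is our own illustration of the mechanism: the names, the dictionary-of-buckets representation, and the clock handling are all assumptions, not the patent's implementation.

```python
# Sketch of dual-trigger dequeuing: timed-out buckets are drained both
# on every enqueue and on each periodic wake-up interrupt.
B_MS = 1  # small bucket width "b" (the wake-up increment may be ~40 ms)

def drain_timed_out(buckets, last_drained_ms, now_ms):
    """Dequeue every bucket whose scheduled time (a multiple of b) has
    passed since the last drain; return its contents in time order."""
    drained = []
    while last_drained_ms + B_MS <= now_ms:
        last_drained_ms += B_MS
        drained.extend(buckets.pop(last_drained_ms, []))
    return drained, last_drained_ms

def enqueue(buckets, last_drained_ms, now_ms, due_ms, item):
    """Enqueue-triggered drain: flush timed-out buckets, then buffer the
    new item in the bucket for its scheduled dequeue time."""
    drained, last_drained_ms = drain_timed_out(buckets, last_drained_ms, now_ms)
    buckets.setdefault(due_ms, []).append(item)
    return drained, last_drained_ms

# The periodic wake-up interrupt simply calls drain_timed_out with the
# current time, catching anything missed between enqueuing events.
```

Because both paths call the same drain routine, two enqueues inside one increment drain at most one bucket, while a long gap between enqueues drains every bucket scheduled in the interim, as described above.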
Note that the time-based queuing mechanism may take on a number of formats. It may, as described above, have a number of time-based queues that serve to both buffer various data items and maintain a scheduled dequeuing time. Other embodiments also group data items based upon timing rather than source or destination. These approaches are within the scope of this invention. The general structures and methods used for such embodiments will be apparent to those of skill in the art. Two general approaches are described in Randy Brown, "Calendar Queues: A Fast O(1) Priority Queue Implementation For the Simulation Event Set Problem," October 1988, Volume 31, Communications of the ACM, and "LSI Logic's L64364 ATMizer II+ ATM-SAR Chip Technical Manual," Chapter 7, Nov. 16, 1998, for example. Both of these references are incorporated herein by reference for all purposes.
Methods of Implementing Time-Based Queuing
FIG. 2 presents a process flow diagram for one method of traffic shaping in accordance with this invention. The depicted method bases the shaping process upon the destination of network traffic. The figures are presented from the perspect