(12) United States Patent
     Beser

(10) Patent No.: US 7,748,002 B1
(45) Date of Patent: Jun. 29, 2010
`
(54) SYSTEMS AND METHODS FOR SCHEDULING APPLICATIONS
`
`(75) Inventor: Nurettin Burcak Beser, Sunnyvale, CA
`(US)
`
`(73) Assignee: Juniper Networks, Inc., Sunnyvale, CA
`(US)
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 2117 days.

(21) Appl. No.: 10/283,216

(22) Filed: Oct. 30, 2002
`Related U.S. Application Data
`(60) Provisional application No. 60/334,727, filed on Oct.
`31, 2001.
`
(51) Int. Cl.
     G06F 9/46    (2006.01)
     G08C 15/00   (2006.01)
(52) U.S. Cl. ........................ 718/102; 718/104; 370/230
(58) Field of Classification Search ................. 718/100,
     718/102-104; 709/200-253; 370/229, 235, 370/230
     See application file for complete search history.
(56)                  References Cited

              U.S. PATENT DOCUMENTS

    6,031,841        2/2000  Woundy .................. 370/410
    6,223,222        4/2001  Fijolek et al. .......... 709/227
    6,275,843        8/2001  Chorn ................... 718/101
    6,412,000        6/2002  Riddle et al. ........... 709/224
    6,457,051        9/2002  Riddle et al. ........... 709/224
    6,519,636        2/2003  Engel et al. ............ 709/223
    6,590,885        7/2003  Jorgensen ............... 370/338
    6,591,299 B2 *   7/2003  Riddle et al. ........... 709/224
    6,636,485 B1 *  10/2003  Fijolek et al. .......... 370/252
    6,640,248 B1 *  10/2003  Jorgensen ............... 709/226
    6,647,419 B1 *  11/2003  Mogul ................... 709/226
    6,687,222 B1 *   2/2004  Albert et al. ........... 370/230
    6,917,614 B1 *   7/2005  Laubach et al. .......... 370/392
    7,216,348 B1 *   5/2007  deCarmo ................. 718/105
    7,281,043 B1 *  10/2007  Davie ................... 709/226
    7,444,407 B2 *  10/2008  Thomas .................. 709/227
 2002/0075875 A1 *   6/2002  Dravida et al. .......... 370/395.21
 2002/0143939 A1 *  10/2002  Riddle et al. ........... 709/224
 2003/0005144 A1 *   1/2003  Engel et al. ............ 709/235
 2003/0103527 A1 *   6/2003  Beser ................... 370/468
 2003/0142690 A1 *   7/2003  Beser ................... 370/432
                  OTHER PUBLICATIONS

Cisco 7920 Wireless IP Phone Design and Deployment Guide, Chapter 6, "Quality of Service" (Oct. 2005).*
Venkatesh Sunkad, Ph.D.; Quality-of-Service: A DOCSIS/PacketCable™ Perspective-Part I; http://www.cablelabs.com/about cl/SPECS/MayJune2000/news.pgs/sotry5.html; Jul. 17, 2002 print date; pp. 1-7.
PacketCable™ Dynamic Quality-of-Service Specification; PKT-SP-DQOS-I03-020116; Copyright 1999-2000 Cable Television Laboratories, Inc.; Jan. 16, 2002; 233 pages.
* cited by examiner

Primary Examiner - Meng-Ai An
Assistant Examiner - Camduy Truong
(74) Attorney, Agent, or Firm - Harrity & Harrity, LLP
`
(57)                    ABSTRACT

A system allocates resources in a network. The system receives an allocation request for a first flow and a second flow from an application and identifies the application based on the allocation request. The system schedules resources for the first flow based on the identification of the application and the second flow.

14 Claims, 16 Drawing Sheets
`
`
`
[Representative drawing: flowchart with steps "RECEIVE SERVICE FLOW REQUEST FROM APPLICATION" (1510), "CLASSIFY APPLICATION" (1520), and "ALLOCATE RESOURCES BASED ON CLASSIFICATION" (1530).]
`
`
`
[Drawing Sheets 1 through 16 (FIGS. 1-16) omitted; no readable text was recovered from the drawing pages.]
SYSTEMS AND METHODS FOR SCHEDULING APPLICATIONS
`
`RELATED APPLICATION
`
This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application No. 60/334,727, filed Oct. 31, 2001, the disclosure of which is incorporated herein by reference.
`
`FIELD OF THE INVENTION
`
`The present invention relates generally to communications
`systems and, more particularly, to systems and methods for
`scheduling resources in a cable modem network.
`
`
`BACKGROUND OF THE INVENTION
`
`
Recently, there has been an explosive demand for services, such as data, voice, and video, to be delivered over broadband communications systems. Cable modem technology is one method of providing such broadband services to subscribers. Cable modem technology competes with technologies such as Asymmetric Digital Subscriber Lines (ADSL) and Integrated Services Digital Network (ISDN). Many in the industry forecast that cable modem systems will become the prevailing technology for providing broadband services since cable television is already widely in use.
FIG. 1 illustrates a simplified diagram of a conventional cable modem system. The DOCSIS (Data Over Cable Service Interface Specifications) Radio Frequency Interface Standard specifies the transfer of Internet Protocol (IP) traffic, between the cable headend system and customer locations, over an all-coaxial or a hybrid-fiber/coax (HFC) cable network 52. The transmission path includes a Cable Modem Termination System (CMTS) 50 at the headend, and a Cable Modem (CM) 56 at each customer location. The DOCSIS standard defines a single transmitter (i.e., CMTS 50) for each downstream channel and many receivers (i.e., CMs 56). All CMs 56 listen to all frames transmitted on the downstream channel upon which they are registered and accept those where the destinations match CM 56 itself or CPEs (Customer Premises Equipment) 58 connected to CM 56. CMs 56 communicate with other CMs 56 through the CMTS 50.
The upstream channel is characterized by many transmitters (i.e., CMs 56) and one receiver (i.e., CMTS 50). Time in the upstream channel is slotted, providing for time division multiple access (TDMA) at regulated time intervals. CMTS 50 provides the time reference and controls the allowed usage for each interval. Intervals may be granted for transmission by particular CMs 56, or for contention by all CMs 56. CMs 56 may contend to request transmission time. To a limited extent, CMs 56 may also contend to transmit actual data. In both cases, collisions can occur and retry techniques may be used.
The upstream Physical Media Dependent (PMD) sublayer uses a Frequency Division Multiple Access (FDMA)/TDMA burst modulation format that provides five symbol rates and two modulation formats (Quadrature Phase Shift Keying (QPSK) and 16-QAM (Quadrature Amplitude Modulation)). The PMD sublayer format includes a variable-length modulated burst with precise timing beginning at boundaries spaced at integer multiples of 6.25 microseconds apart (which is 16 symbols at the highest data rate). Each burst supports a flexible modulation, symbol rate, preamble, randomization of the payload, and programmable FEC (Forward Error Correction) encoding. All of the upstream transmission parameters associated with burst transmission outputs from CM 56 are configurable by CMTS 50 via Media Access Controller (MAC) messaging.
The concept of service flows is central to the operation of DOCSIS upstream transmissions. Service flows provide a mechanism for upstream Quality of Service (QoS) management. In particular, service flows play a part in bandwidth allocation. A service flow identifier (ID) defines a particular unidirectional mapping between a CM 56 and CMTS 50. Active upstream service flow IDs also have associated Service IDs or SIDs. CMTS 50 allocates upstream bandwidth to SIDs, and hence to CMs 56. SIDs provide the mechanism by which upstream QoS is implemented.
In a basic cable modem implementation, two service flows (one upstream, one downstream) could be used, for example, to offer best-effort IP service. However, the service flow concept allows for more complex cable modems to be developed that support multiple service classes while supporting interoperability with more basic modems. With these more complex cable modems, it is possible that certain service flows may be configured in such a way that they cannot carry all types of traffic. That is, service flows may have a maximum packet size limitation or be restricted to small, fixed size, unsolicited grants. Furthermore, it might not be appropriate to send other kinds of data on service flows that are being used for Constant Bit Rate (CBR)-type applications.
Even in these complex modems, it may be desirable to be able to send certain upstream packets needed for MAC management, SNMP management, key management, etc. For the network to function properly, all cable modems should support at least one upstream and one downstream service flow. All service flow IDs are unique within the upstream. The mapping of a unicast SID to an active/admitted service flow is unique within a single upstream. The length of the service flow ID is 32 bits. The length of the SID is 14 bits.
As shown in FIG. 2, the upstream transmission time-line is divided into intervals by the upstream bandwidth allocation mechanism. Each interval is an integral number of mini-slots. A "mini-slot" is the unit of granularity for upstream transmission opportunities. There is no implication that any packet data unit (PDU) can actually be transmitted in a single mini-slot. Each interval may be labeled with a usage code that defines both the type of traffic that can be transmitted during that interval and the physical-layer modulation encoding. A mini-slot is a power-of-two multiple of 6.25 microsecond increments (i.e., 2, 4, 8, 16, 32, 64, or 128 times 6.25 microseconds). Since the upstream channel is modeled as a stream of mini-slots, CMTS 50 generates the time reference for identifying these slots. CMTS 50 also controls access to these slots by CMs 56. For example, CMTS 50 may grant some number of contiguous slots to a CM 56 for it to transmit a data PDU. CM 56 times its transmission such that CMTS 50 receives it in the time slot specified. A bandwidth allocation MAP is used for assigning bandwidth.
The bandwidth allocation MAP is a MAC Management message transmitted by CMTS 50 on the downstream channel that describes, for some interval of time, the uses to which the upstream frequency will be put by a given CM 56. A given MAP may describe some time slots as grants for particular CMs 56 to transmit data, other time slots as available for contention transmission, and other slots as an opportunity for new CMs 56 to join the link. FIG. 3 illustrates a MAC Header and MAC Management Message Header Fields.
The upstream bandwidth allocation MAP may include a fixed-length header followed by a variable number of information elements (IEs), as shown in FIG. 4. The upstream bandwidth allocation MAP message header may contain the following information (a structural sketch in code follows the field list):
Upstream Channel ID: The identifier of the upstream channel to which this message refers.
UCD Count: Matches the value of the Configuration Change Count of the Upstream Channel Descriptor (UCD) that describes the burst parameters that apply to this map.
Number of Elements: Number of IEs in the map.
Alloc Start Time: Effective start time from CMTS initialization (in mini-slots) for assignments within this map.
Ack Time: Latest time, from CMTS initialization (in mini-slots), processed in upstream. This time is used by the CMs for collision detection purposes.
Ranging Backoff Start: Initial back-off window for initial ranging contention, expressed as a power of two. Values range 0-15 (the highest order bits may be unused and set to 0).
Ranging Backoff End: Final back-off window for initial ranging contention, expressed as a power of two. Values range 0-15 (the highest order bits may be unused and set to 0).
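The header layout above can be summarized in code. The following is a hypothetical Python sketch, not text from the patent or the DOCSIS specification: the field names mirror the list, and the only check shown is the stated 0-15 range for the back-off fields.

```python
# Hypothetical sketch of the upstream bandwidth allocation MAP header fields
# listed above. Field names are illustrative; the byte-level wire format is
# defined by DOCSIS, not by this sketch.
from dataclasses import dataclass

@dataclass
class MapHeader:
    upstream_channel_id: int    # identifier of the upstream channel
    ucd_count: int              # matches the UCD Configuration Change Count
    number_of_elements: int     # number of IEs in the map
    alloc_start_time: int       # effective start time, in mini-slots
    ack_time: int               # latest mini-slot time processed upstream
    ranging_backoff_start: int  # initial ranging back-off window (power of two, 0-15)
    ranging_backoff_end: int    # final ranging back-off window (power of two, 0-15)

    def validate(self) -> None:
        # Back-off windows are expressed as powers of two in the range 0-15.
        for name in ("ranging_backoff_start", "ranging_backoff_end"):
            value = getattr(self, name)
            if not 0 <= value <= 15:
                raise ValueError(f"{name} must be in 0-15, got {value}")
```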
The number of transmit opportunities associated with a particular IE in a MAP is dependent on the total size of the region as well as the allowable size of an individual transmission. As an example, assume a request (REQ) IE defines a region of 12 mini-slots. If the UCD defines a REQ Burst Size that fits into a single mini-slot, then there are 12 transmit opportunities associated with this REQ IE (i.e., one for each mini-slot). If the UCD defines a REQ that fits in two mini-slots, then there are six transmit opportunities and a REQ can start on every other mini-slot. Table 1 illustrates interval usage codes and their corresponding IE names.
`
TABLE 1

The Interval Usage Codes

Interval Usage Code    Information Element Name

        1              Request
        2              REQ/Data
        3              Initial Maintenance
        4              Station Maintenance
        5              Short Data Grant
        6              Long Data Grant
        7              Null IE
        8              Data Acknowledge
        9-14           Reserved
        15             Expanded IUC
Each IE consists of a 14-bit SID, a 4-bit type code, and a 14-bit starting offset. FIG. 5 illustrates the structure of a MAP IE. Since all CMs 56 will scan all IEs, it is critical that IEs be short and relatively fixed format. IEs within the MAP are strictly ordered by starting offset. For most purposes, the duration described by the IE is inferred by the difference between the IE's starting offset and that of the following IE. For this reason, a Null IE terminates the list.
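As an illustration of the 32-bit IE layout just described (14-bit SID, 4-bit type code, 14-bit starting offset), here is a hypothetical Python packing/unpacking sketch. The bit ordering chosen is an assumption for illustration only, not the normative DOCSIS wire format.

```python
# Hypothetical sketch: pack/unpack a 14-bit SID, 4-bit IUC type code, and
# 14-bit starting offset into one 32-bit word. Bit positions are illustrative.

def pack_ie(sid: int, iuc: int, offset: int) -> int:
    assert 0 <= sid < (1 << 14) and 0 <= iuc < (1 << 4) and 0 <= offset < (1 << 14)
    return (sid << 18) | (iuc << 14) | offset

def unpack_ie(word: int) -> tuple[int, int, int]:
    return (word >> 18) & 0x3FFF, (word >> 14) & 0xF, word & 0x3FFF

# Example: a Null IE (SID 0x0000) terminating the list at offset 120.
null_ie = pack_ie(0x0000, 7, 120)   # IUC 7 = Null IE per Table 1
assert unpack_ie(null_ie) == (0x0000, 7, 120)
```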
Types of Information Elements (IEs)
DOCSIS defines four types of SIDs:
1. 0x3FFF: broadcast, intended for all CMs.
2. 0x2000-0x3FFE: multicast, purpose is defined administratively.
3. 0x0001-0x1FFF: unicast, intended for a particular CM or a particular service within that CM.
4. 0x0000: null address, addressed to no CM.
All of the IEs defined are supported by CMs 56. CMTS 50 uses any of these IEs when creating Bandwidth Allocation MAPs.
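A hypothetical helper illustrating the four SID address ranges listed above (the function name and structure are illustrative, not from the patent):

```python
# Hypothetical sketch: classify a 14-bit SID into the four DOCSIS address types.
def sid_type(sid: int) -> str:
    if sid == 0x3FFF:
        return "broadcast"
    if 0x2000 <= sid <= 0x3FFE:
        return "multicast"
    if 0x0001 <= sid <= 0x1FFF:
        return "unicast"
    if sid == 0x0000:
        return "null"
    raise ValueError(f"SID out of 14-bit range: {sid:#x}")
```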
`The Request (REQ) IE
`The Request IE provides an upstream interval in which
`requests can be made for bandwidth for upstream data trans
`mission. The character of this IE changes depending on the
`class of SID. If broadcast, this is an invitation for all CMs 56
`to contend for requests. If unicast, this is an invitation for a
`particular CM 56 to request bandwidth.
A small number of Priority Request SIDs is defined in DOCSIS. These allow contention for Request IEs to be limited to service flows of a given traffic priority. The Request/Data IE provides an upstream interval in which requests for bandwidth or short data packets may be transmitted. This IE is distinguished from the Request IE in that it provides a means by which allocation algorithms may provide for "immediate" data contention under light loads, and a means by which this opportunity can be withdrawn as network loading increases.
Multicast SIDs are used to specify maximum data length, as well as allowed random starting points within the interval. For example, a particular multicast SID may specify a maximum of 64-byte data packets, with transmit opportunities every fourth slot.
Short and Long Data Grant IEs
The Short and Long Data Grant IEs provide an opportunity for CM 56 to transmit one or more upstream PDUs. These IEs are issued either in response to a request from CM 56, or because of an administrative policy providing some amount of bandwidth to a particular CM 56 (see class-of-service discussion below). These IEs can also be used with an inferred length of zero mini-slots (a zero-length grant), to indicate that a request has been received and is pending (a Data Grant Pending).
Short Data Grants are used with intervals less than or equal to the maximum burst size for this usage specified in the Upstream Channel Descriptor (UCD). If Short Data burst profiles are defined in the UCD, then all Long Data Grants are for a larger number of mini-slots than the maximum for Short Data. The distinction between Long and Short Data Grants may be exploited in physical-layer forward-error-correction coding; otherwise, it is not meaningful to the bandwidth allocation process.
If this IE is a Data Grant Pending (a zero-length grant), it will follow the NULL IE. This allows CMs 56 to process all actual allocations first, before scanning the MAP for data grants pending and data acknowledgments.
`
`
As another example, assume a REQ/Data IE defines a 24 mini-slot region. If the REQ/Data IE is sent with a SID of 0x3FF4, then CM 56 can potentially start a transmit session on every fourth mini-slot. This IE contains a total of six transmit opportunities (TX OPs). Similarly, a SID of 0x3FF6 implies four TX OPs; 0x3FF8 implies three TX OPs; and 0x3FFC implies two TX OPs.
For an Initial Maintenance IE, CM 56 starts its transmission in the first mini-slot of the region; therefore, CM 56 has a single transmit opportunity. The remainder of the region may be used to compensate for the round trip delays since CM 56 has not yet been ranged. Station Maintenance IEs, Short/Long Data Grant IEs, and unicast Request IEs are unicast and thus are not typically associated with contention transmit opportunities. They represent a single dedicated, or reservation-based, transmit opportunity.
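The two transmit-opportunity examples above reduce to simple arithmetic. The sketch below is hypothetical; in particular, deriving the mini-slot spacing from the low-order bits of the multicast SID is an assumption made only to match the figures given in the text.

```python
# Hypothetical sketch: counting contention transmit opportunities (TX OPs).
# For a broadcast REQ region the spacing is the REQ burst size from the UCD;
# for the REQ/Data examples the low bits of the multicast SID are assumed to
# give the mini-slot spacing (0x3FF4 -> every 4th mini-slot, and so on).

def req_tx_ops(region_mini_slots: int, req_burst_mini_slots: int) -> int:
    return region_mini_slots // req_burst_mini_slots

def req_data_tx_ops(region_mini_slots: int, multicast_sid: int) -> int:
    spacing = multicast_sid & 0x000F          # e.g. 0x3FF4 & 0xF == 4 (assumption)
    return region_mini_slots // spacing

assert req_tx_ops(12, 1) == 12                # 12-mini-slot REQ region, 1-slot bursts
assert req_tx_ops(12, 2) == 6                 # REQ fits in two mini-slots
assert req_data_tx_ops(24, 0x3FF4) == 6       # matches the example in the text
assert req_data_tx_ops(24, 0x3FF6) == 4
assert req_data_tx_ops(24, 0x3FF8) == 3
assert req_data_tx_ops(24, 0x3FFC) == 2
```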
`
`
`Data Acknowledge IE
`The Data Acknowledge IE acknowledges that a data PDU
`was received. CM 56 may request this acknowledgment
`within the data PDU (normally this would be done for PDUs
`transmitted within a contention interval in order to detect
`collisions). This IE will follow the NULL IE. This allows
`CMs 56 to process all actual interval allocations first, before
`scanning the MAP for data grants pending and data acknowl
`edgments.
Requests
Requests refer to the mechanism that CMs 56 use to indicate to CMTS 50 that they need upstream bandwidth allocation. A transmission request may come as a stand-alone Request Frame transmission or it may come as a piggyback request in the EHDR of another Frame transmission.
`The Request Frame may be transmitted during any of the
`following intervals:
`Request IE,
`Request/Data IE,
`Short Data Grant IE, and
`Long Data Grant IE.
`A piggyback request may be contained in the following
`Extended Headers (EHs):
`Request EH element,
`Upstream Privacy EH element, and
`Upstream Privacy EH element with Fragmentation.
`The request includes:
`The SID making the request, and
`The number of mini-slots requested.
The number of mini-slots requested may be the total number that is desired by CM 56 at the time of the request (including any physical layer overhead), subject to UCD and administrative limits. CM 56 may request a number of mini-slots corresponding to one complete frame, except in the case of fragmentation in Piggyback Mode.
CM 56 may have one request outstanding at a time per SID. If CMTS 50 does not immediately respond with a Data Grant, CM 56 may unambiguously determine that its request is still pending because CMTS 50 will continue to issue a Data Grant Pending in every MAP for as long as a request is unsatisfied. In MAPs, CMTS 50 cannot make a data grant greater than 255 mini-slots to any assigned SID. This puts an upper bound on the grant size CM 56 has to support.
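A hypothetical sketch of the request bookkeeping described above (the class and field names are illustrative): a request carries a SID and a mini-slot count, at most one request may be outstanding per SID, and no single data grant may exceed 255 mini-slots.

```python
# Hypothetical sketch of the per-SID request rules described in the text.
from dataclasses import dataclass

MAX_GRANT_MINI_SLOTS = 255

@dataclass
class BandwidthRequest:
    sid: int          # the SID making the request (14 bits)
    mini_slots: int   # total mini-slots desired, including PHY overhead

class RequestTracker:
    """Tracks at most one outstanding request per SID."""
    def __init__(self) -> None:
        self.outstanding: dict[int, BandwidthRequest] = {}

    def submit(self, req: BandwidthRequest) -> bool:
        if req.sid in self.outstanding:
            return False          # a request is already pending for this SID
        self.outstanding[req.sid] = req
        return True

    def grant(self, sid: int, mini_slots: int) -> None:
        if mini_slots > MAX_GRANT_MINI_SLOTS:
            raise ValueError("a single data grant cannot exceed 255 mini-slots")
        if mini_slots > 0:
            self.outstanding.pop(sid, None)   # a zero-length grant keeps it pending
```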
The allocation MAP may be transmitted in time to propagate across the physical cable and be received and handled by the receiving CMs 56. As such, it may be transmitted considerably earlier than its effective time. The components of the delay are:
Worst-case round-trip propagation delay (may be network-specific, but on the order of hundreds of microseconds);
Queuing delays within the CMTS (implementation-specific);
Processing delays within the CMs (a minimum processing time allowed by each CM); and
PMD-layer FEC interleaving.
Within these constraints, vendors may wish to minimize this delay so as to minimize latency of access to the upstream channel. The number of mini-slots described varies from MAP to MAP. At a minimum, a MAP describes a single mini-slot. This would be wasteful in both downstream bandwidth and in processing time within CMs 56. At maximum, a MAP may stretch to tens of milliseconds. Such a MAP would provide poor upstream latency, however.
Allocation algorithms vary the size of the MAPs over time to provide a balance of network utilization and latency under varying traffic loads. A MAP would contain at least two IEs:
`one to describe an interval and a null IE to terminate the list.
`At the most, a MAP is bounded by a limit of 240 IEs. MAPs
`are also bounded in that they will not describe more than 4096
`mini-slots into the future. The latter limit is intended to bound
the number of future mini-slots that each CM 56 is required to
`track. CM 56 is able to support multiple outstanding MAPs.
`Even though multiple MAPs may be outstanding, the sum of
`the number of mini-slots they describe may not exceed 4096.
`The set of all MAPs, taken together, describes every mini-slot
`in the upstream channel. If CM 56 fails to receive a MAP
`describing a particular interval, it will not transmit during that
`interval.
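The two MAP bounds just described (at most 240 IEs per MAP, and no more than 4096 mini-slots described across all outstanding MAPs) can be checked as follows; this is an illustrative sketch, not an implementation from the patent.

```python
# Hypothetical sketch of the MAP bounds stated in the text.
MAX_IES_PER_MAP = 240
MAX_FUTURE_MINI_SLOTS = 4096

def maps_within_bounds(outstanding_maps: list[dict]) -> bool:
    """Each map is summarized as {'num_ies': int, 'mini_slots': int} (assumed shape)."""
    if any(m["num_ies"] > MAX_IES_PER_MAP for m in outstanding_maps):
        return False
    return sum(m["mini_slots"] for m in outstanding_maps) <= MAX_FUTURE_MINI_SLOTS
```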
FIG. 6 illustrates a protocol exchange between CM 56 and CMTS 50. This example illustrates the interchange between CM 56 and CMTS 50 when CM 56 has data to transmit. If CM 56 has a data PDU available for transmission, then the following steps occur:
1. At time t1, CMTS 50 transmits a MAP having an effective starting time of t3. Within this MAP is a Request IE that starts at t5. The difference between t1 and t3 is needed to allow for:
Downstream propagation delay (including FEC interleaving) to allow all CMs 56 to receive the MAP;
Processing time at the CM 56 (allows CMs 56 to parse the MAP and translate it into transmission opportunities); and
Upstream propagation delay (to allow the CM's transmission of the first upstream data to begin in time to arrive at CMTS 50 at time t3).
2. At t2, CM 56 receives this MAP and scans it for request opportunities. In order to minimize request collisions, CM 56 calculates t6 as a random offset based on the Data Backoff Start value in the most recent MAP.
3. At t4, CM 56 transmits a request for as many mini-slots as needed to accommodate the PDU. Time t4 is chosen based on the ranging offset so that the request will arrive at CMTS 50 at t6.
4. At t6, CMTS 50 receives the request and schedules it for service in the next MAP. (The choice of which requests to grant will vary with the class of service requested, any competing requests, and the algorithm used by CMTS 50.)
5. At t7, CMTS 50 transmits a MAP having an effective starting time of t9. Within this MAP, a data grant for CM 56 will start at t11.
6. At t8, CM 56 receives the MAP and scans for its data grant.
7. At t10, CM 56 transmits its data PDU so that it will arrive at CMTS 50 at t11. Time t10 is calculated from the ranging offset as in step 3 above.
Steps 1 and 2 need not contribute to access latency if CMs 56 routinely maintain a list of request opportunities. At step 3, the request may collide with requests from other CMs 56 and be lost. CMTS 50 may not directly detect the collision. CM 56 determines that a collision (or other reception failure) occurred when the next MAP fails to include acknowledgment of the request. CM 56 may then perform a back-off algorithm and retry.
At step 4, the CMTS 50 scheduler may fail to accommodate the request within the next MAP. If so, the CMTS 50 scheduler may reply with a zero-length grant in that MAP or discard the request by giving no grant at all. The CMTS 50 scheduler may continue to report this zero-length grant in all succeeding MAPs until the request can be granted or is discarded. This will signal to CM 56 that the request is still pending. So long as CM 56 is receiving a zero-length grant, CM 56 will not issue new requests for that service queue.
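The collision-detection and zero-length-grant behavior described in this passage might be summarized, purely as an illustrative sketch, as follows (the dictionary-based MAP summary is an assumption for brevity):

```python
# Hypothetical sketch: how a CM might interpret the next MAP for an outstanding
# request. A zero-length grant means "still pending"; a missing acknowledgment
# with a newer ack time is treated as a collision/loss and triggers back-off.
def interpret_map_for_request(map_msg: dict, sid: int, request_time: int) -> str:
    grant = map_msg.get("grants", {}).get(sid)     # mini-slots granted, if any
    if grant is None:
        if map_msg["ack_time"] > request_time:
            return "lost: back off and retry"
        return "not yet acknowledged: keep waiting"
    if grant == 0:
        return "pending: zero-length grant, do not issue a new request"
    return "granted"
```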
`
Since many different scheduling algorithms can be implemented in CMTS 50, the DOCSIS specification does not mandate a particular scheduling algorithm. Instead, DOCSIS describes the protocol elements by which bandwidth is requested and granted. CMs 56 may issue requests to CMTS 50 for upstream bandwidth. CMTS 50 transmits allocation MAP PDUs on the downstream channel that define the allowed usage of each mini-slot.
`Contention Resolution
`The DOCSIS specification mandates the method of con
`tention resolution as the truncated binary exponential back
`off, with the initial back-off window and the maximum back
`off window controlled by CMTS 50. The values are specified
`as part of the Bandwidth Allocation MAP MAC message and
`represent a power-of-two value. For example, a value of 4
`indicates a window between 0 and 15 while a value of 10
`indicates a window between 0 and 1023.
`When CM 56 has information to send and wants to enter
`the contention resolution process, CM 56 sets its internal
`back-off window equal to the Data Backoff Start defined in
`the MAP currently in effect. CM 56 may randomly select a
`number within its back-off window. This random value indi
`cates the number of contention transmit opportunities which
CM 56 will defer before transmitting. CM 56 may consider
`contention transmit opportunities for which this transmission
`would have been eligible. These are defined by either Request
`IEs or Request/Data IEs in the MAP. It will be appreciated
`that each IE can represent multiple transmission opportuni
`ties.
`As an example, consider a CM 56 whose initial back-off
`window is 0 to 15. Assume that CM 56 randomly selects the
`number 11. CM 56 defers a total of 11 contention transmis
`sion opportunities. If the first available Request IE is for 6
`requests, CM 56 does not use this and has 5 more opportuni
`ties to defer. If the next Request IE is for 2 requests, CM 56
`has 3 more to defer. If the third Request IE is for 8 requests,
`CM 56 transmits on the fourth request, after deferring for 3
`more opportunities.
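The deferral walk in this example can be expressed as a short sketch. The code below is hypothetical; it assumes the back-off value is converted to a window of 2^value - 1, consistent with the values 4 and 10 quoted earlier.

```python
# Hypothetical sketch of the deferral walk in the worked example above.
import random

def initial_window(data_backoff_start: int) -> int:
    return (1 << data_backoff_start) - 1          # e.g. 4 -> 15, 10 -> 1023

def choose_defer_count(data_backoff_start: int) -> int:
    return random.randint(0, initial_window(data_backoff_start))

def transmit_position(defer_count: int, opportunities_per_ie: list[int]) -> tuple[int, int]:
    """Return (IE index, opportunity index within that IE) where the CM transmits."""
    remaining = defer_count
    for ie_index, tx_ops in enumerate(opportunities_per_ie):
        if remaining < tx_ops:
            return ie_index, remaining            # transmit on the next eligible opportunity
        remaining -= tx_ops
    raise RuntimeError("not enough opportunities in this MAP; keep deferring")

# The text's example: defer 11 across Request IEs offering 6, 2, and 8
# opportunities -> transmit on the fourth opportunity of the third IE.
assert transmit_position(11, [6, 2, 8]) == (2, 3)
```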
After a contention transmission, CM 56 waits for a Data Grant (Data Grant Pending) or Data Acknowledge in a subsequent MAP. Once either is received, the contention resolution is complete. CM 56 determines that the contention transmission was lost when it finds a MAP without a Data Grant (Data Grant Pending) or Data Acknowledge for it and with an acknowledge time more recent than the time of transmission. CM 56 may increase its back-off window by a factor of two, as long as the new value is less than the maximum back-off window. CM 56 may randomly select a number within its new back-off window and repeat the deferring process described above.
This re-try process continues until the maximum number of retries (e.g., 16) has been reached, at which time the PDU will be discarded. The maximum number of retries is independent of the initial and maximum back-off windows that are defined by CMTS 50. If CM 56 receives a unicast Request or Data Grant at any time while deferring for this SID, CM 56 may stop the contention resolution process and use the explicit transmit opportunity.
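A minimal sketch of the retry policy just described, assuming the window exponent is simply incremented (i.e., the window doubles) and capped at the Data Backoff End value; the constant and function names are illustrative:

```python
# Hypothetical sketch: window growth and retry limit for contention resolution.
MAX_RETRIES = 16

def next_backoff_exponent(current: int, data_backoff_end: int) -> int:
    # Doubling the window corresponds to adding one to the power-of-two exponent,
    # capped at the maximum back-off window from the MAP.
    return min(current + 1, data_backoff_end)

def should_discard(retries_so_far: int) -> bool:
    return retries_so_far >= MAX_RETRIES
```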
CMTS 50 has much flexibility in controlling the contention resolution. At one extreme, CMTS 50 may choose to set up the Data Backoff Start and End to emulate an Ethernet-style back-off with its associated simplicity and distributed nature, but also its fairness and efficiency issues. This would be done by setting Data Backoff Start=0 and End=10 in the MAP. At the other end, CMTS 50 may make the Data Backoff Start and End identical and frequently update these values in the MAP so all CMs 56 are using the same, and hopefully optimal, back-off window.
`CM Bandwidth Utilization
`The following rules govern the response CM 56 makes
`when processing MAPs. These standard behaviors can be
`overridden by the CM's Request/Transmission Policy.
1. CM 56 may first use any Grants assigned to it. Next, CM 56 may use any unicast REQ for it. Finally, CM 56 may use the next available broadcast/multicast REQ or REQ/Data IEs for which it is eligible (this order of preference is sketched in code after the list).
2. CM 56 may not have more than one Request outstanding at a time for a particular SID.
3. If CM 56 has a Request pending, it will not use intervening contention intervals for that SID.
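Rule 1's order of preference might be sketched as follows (purely illustrative; the list arguments are assumptions standing in for the opportunities a CM extracts from a MAP):

```python
# Hypothetical sketch of rule 1: Grants first, then unicast REQs, then any
# eligible broadcast/multicast REQ or REQ/Data opportunity.
def pick_transmit_opportunity(grants: list, unicast_reqs: list, contention_reqs: list):
    for bucket in (grants, unicast_reqs, contention_reqs):
        if bucket:
            return bucket[0]
    return None       # nothing usable in this MAP
```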
`DOCSIS CMTS Scheduling
`DOCSIS scheduling services are designed to improve the
`efficiency of the poll/grant process. By specifying a schedul
`ing service and its associated QoS parameters, the DOCSIS
`CMTS 50 can anticipate the throughput and latency needs of
`the upstream traffic and provide polls and/or grants at the
`appropriate times.
`Each service is tailored to a specific type of data flow as
`described below. The basic services include:
`Unsolicited Grant Service (UGS),
`Real-Time Polling Service (rtPS),
Unsolicited Grant Service with Activity Detection (UGS-AD),
`Non-Real-Time Polling Service (nrtPS), and
`Best Effort (BE) service.
`FIG. 7 illustrates the relationship between the scheduling
`services and the related QoS parameters. Each is described in
`more detail below.
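For reference, the five scheduling services listed above could be represented as a simple enumeration (an illustrative sketch, not part of the patent):

```python
# Hypothetical sketch: the five DOCSIS upstream scheduling services a
# scheduler might key on.
from enum import Enum

class SchedulingService(Enum):
    UGS = "Unsolicited Grant Service"
    RTPS = "Real-Time Polling Service"
    UGS_AD = "Unsolicited Grant Service with Activity Detection"
    NRTPS = "Non-Real-Time Polling Service"
    BE = "Best Effort"
```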
Unsolicited Grant Service
The Unsolicited Grant Service (UGS) is designed to support real-time service flows that generate fixed size data packets on a periodic basis, such as Voice over IP (VoIP). The UGS service offers fixed size grants on a real-time periodic basis, which eliminates the overhead and latency of CM 56 requests and assures that grants will be available to meet the flow's real-time needs.
CMTS 50 provides fixed size data grants at periodic intervals to the service flow. In order for this service to work correctly, the Request/Transmission Policy setting should be such that CM 56 is prohibited from using any contention request or request/data opportunities and CMTS 50 does not provide any unicast request opportunities. The Request/Transmission Policy also prohibits piggyback requests. This will result in CM 56 only using unsolicited data grants for upstream transmissions. The key service parameters are the Unsolicited Grant Size, the Nominal Grant Interval, the Tolerated Grant Jitter and the Reque