UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office

October 17, 2018

TO ALL TO WHOM THESE PRESENTS SHALL COME:

THIS IS TO CERTIFY THAT ANNEXED IS A TRUE COPY FROM THE
RECORDS OF THIS OFFICE OF THE FILE WRAPPER AND CONTENTS
OF:

APPLICATION NUMBER: 09/608,126
FILING DATE: June 30, 2000
PATENT NUMBER: 6,839,751
ISSUE DATE: January 04, 2005

By Authority of the
Under Secretary of Commerce for Intellectual Property
and Director of the United States Patent and Trademark Office

MONTGOMERY
Certifying Officer
(12) United States Patent
     Galand et al.

US006424624B1

(10) Patent No.: US 6,424,624 B1
(45) Date of Patent: *Jul. 23, 2002

(54) METHOD AND SYSTEM FOR IMPLEMENTING CONGESTION DETECTION AND FLOW CONTROL IN HIGH SPEED DIGITAL NETWORK

(75) Inventors: Claude Galand, La Colle sur Loup; Pierre-Andre Foriel, Cagnes sur Mer; Aline Fichou, La Colle sur Loup, all of (FR); Marcus Enger, Hirschhorn (DE)

(73) Assignee: Cisco Technology, Inc., San Jose, CA (US)

(*) Notice: This patent issued on a continued prosecution application filed under 37 CFR 1.53(d), and is subject to the twenty year patent term provisions of 35 U.S.C. 154(a)(2).

    Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/167,786

(22) Filed: Oct. 7, 1998

(30) Foreign Application Priority Data

    Oct. 16, 1997 (EP) ............ 97480070

(51) Int. Cl.7 ............ G06F 11/10; H04L 1/16
(52) U.S. Cl. ............ 370/231; 370/253; 370/400; 709/235
(58) Field of Search ............ 370/229-235, 400, 419, 420, 463, 252, 253, 522; 709/235, 239

(56) References Cited

    U.S. PATENT DOCUMENTS

    5,115,429 A * 5/1992 Hluchyj et al. ............ 370/231
    5,313,454 A   5/1994 Bustini et al.
    5,426,640 A * 6/1995 Hluchyj et al. ............ 370/235
    5,436,891 A * 7/1995 Grossman et al. ............ 370/231
    5,497,375 A * 3/1996 Hluchyj et al. ............ 370/235
    5,629,927 A   5/1997 Waclawsky et al.
    5,787,071 A   7/1998 Basso et al.
    5,790,522 A   8/1998 Fichou et al.
    5,815,492 A   9/1998 Berthaud et al.
    5,898,691 A   4/1999 Liu
    5,912,894 A   6/1999 Duault et al.
    6,011,776 A   1/2000 Berthaud et al.
    6,091,708 A * 7/2000 Matsunuma ............ 370/233
    6,108,304 A * 8/2000 Abe et al. ............ 370/232
    6,118,791 A   9/2000 Fichou et al.

    OTHER PUBLICATIONS

    The ATM Forum Technical Committee, Traffic Management Specification Version 4.0, Apr. 1996.

    * cited by examiner

Primary Examiner—Wellington Chin
Assistant Examiner—Saba Tsegaye
(74) Attorney, Agent, or Firm—Cesari and McKenna, LLP

(57) ABSTRACT

This system is made to perform congestion detection and flow control in a high speed digital packet switching network carrying discardable and non-discardable traffic. Forward traffic received at a destination system over a first connection from a source system is monitored. If a congestion-indicating bit is detected in a received packet, a backward congestion indicator is set in packets flowing from the destination system to the source system over a second connection. The source system integrates the number of backward congestion indicators received over successive periods of time using a count-up, count-down counter. Specific congestion control actions are taken at the source system as a function of the counter state at the end of each of the successive periods of time. The congestion control actions may include increasing or decreasing the bandwidth allocated to discardable traffic intended to be delivered over the first connection.

15 Claims, 8 Drawing Sheets

U.S. Patent, Jul. 23, 2002, Sheet 1 of 8, US 6,424,624 B1

[FIG. 1: schematic of a high speed packet transmission network; visible labels include ACCESS NODE]
U.S. Patent, Jul. 23, 2002, Sheet 2 of 8, US 6,424,624 B1

[FIG. 2: network implementing the invention, showing a source user, a destination user, and a packet (28) with payload and FR header fields]
U.S. Patent, Jul. 23, 2002, Sheet 3 of 8, US 6,424,624 B1

[FIG. 3: network node showing PORT (302), RECEIVE and transmit sections, ADAPTER 1 through ADAPTER J, and element 307]
U.S. Patent, Jul. 23, 2002, Sheet 4 of 8, US 6,424,624 B1

[FIG. 4 (FIGS. 4A, 4B, 4C): leaky bucket arrangements; visible labels include Network Source, 41b, 40b]
U.S. Patent, Jul. 23, 2002, Sheet 5 of 8, US 6,424,624 B1

[FIG. 5: the invention as implemented with a leaky bucket; visible labels include Network, (Discardable), Excess Traffic Token Pool]
U.S. Patent, Jul. 23, 2002, Sheet 6 of 8, US 6,424,624 B1

[FIG. 6, flow chart of the basic algorithm: packet of backward connection arrived (60); nb_RCI_on = nb_RCI_on - 1 (62) or nb_RCI_on = nb_RCI_on + 1 (63) according to the received RCI bit; when Timer > time-out (64), the integrated count is tested and the rate decreased on congestion, R_r = R_r * RDF; then R_p = M_r / R_r (68) and Reset Timer]
U.S. Patent, Jul. 23, 2002, Sheet 7 of 8, US 6,424,624 B1

[FIG. 7, flow chart of the improved algorithm: packet of backward connection arrived (72); RCI bit test (73); nb_RCI_on = nb_RCI_on - 1 (74) or nb_RCI_on = nb_RCI_on + 1 (75); when Timer > time-out, the count selects a large decrease R_r = R_r * RDF2 (78), a slow decrease R_r = R_r * RDF1, a test th2 < nb_RCI_on <= 0 (80), or an increase R_r = R_r + R * RIF (83); then R_p = M_r / R_r (86) and Reset Timer (87)]
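Read as code, the FIG. 7 flow chart amounts to the sketch below (Python, for illustration only). The thresholds th1 and th2 and the exact branch conditions are only partly legible in this copy, so their ordering, including the hold band for th2 < nb_RCI_on <= 0, is an assumption; RDF1, RDF2, RIF, R, M_r and R_p follow the labels in the figure.

```python
def fig7_timer_expiry(nb_rci_on, r_r, r_peak, m_r, th1, th2, rdf1, rdf2, rif):
    """One time-out of the improved FIG. 7 algorithm (illustrative sketch).

    nb_rci_on -- RCI count integrated over the elapsed period
    r_r       -- current rate granted to discardable traffic
    r_peak    -- peak rate R scaling the additive increase
    m_r       -- constant from which the packet spacing R_p is derived
    """
    if nb_rci_on > th1:
        r_r = r_r * rdf2            # persistent congestion: large decrease
    elif nb_rci_on > 0:
        r_r = r_r * rdf1            # mild congestion: slow decrease
    elif nb_rci_on > th2:
        pass                        # assumed hold band: th2 < nb_RCI_on <= 0
    else:
        r_r = min(r_r + r_peak * rif, r_peak)  # no congestion: increase
    r_p = m_r / r_r                 # new inter-packet spacing
    return r_r, r_p                 # caller zeroes nb_rci_on and resets the timer
```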
U.S. Patent, Jul. 23, 2002, Sheet 8 of 8, US 6,424,624 B1

[FIG. 8: further improvement enabling source cooperation in the flow control mechanism; rotated labels include Source control and user]
METHOD AND SYSTEM FOR IMPLEMENTING CONGESTION DETECTION AND FLOW CONTROL IN HIGH SPEED DIGITAL NETWORK
FIELD OF THE INVENTION

This invention relates to congestion detection and flow control in high speed packet switching networks and more particularly to methods and apparatus for implementing congestion detection and flow control for low priority traffic with optimized cost efficiency.
BACKGROUND ART

Modern digital networks often operate in a multimedia environment and interconnect, upon demand, very large numbers of users and applications through fairly complex digital communication network topologies.

Due to the variety of users' demands and the growth of distributed applications, network traffic is consuming more bandwidth, becoming more non-deterministic and requiring more connectivity. These changes have been the driver for the emergence of fast packet switching network architectures in which data, voice and video information are digitally encoded, chopped into fixed or variable length packets (also named "cells" in ATM or Asynchronous Transfer Mode networks) and transmitted through a common set of nodes and links interconnected to constitute the network communication facilities.

The need for efficient transport of mixed traffic streams on very high speed lines (sometimes referred to as links or trunks) imposes a set of performance and resource consumption requirements including very high throughput, very short packet processing time, the flexibility to support a wide range of connectivity options and efficient flow and congestion control. Congestion is generally defined as a condition during which network performance is degraded due to saturation of network resources such as communication links, processor cycles, memory buffers, etc.
One of the key requirements for high speed packet switching networks is reduction of end to end delay in order to satisfy real time delivery constraints and to achieve the necessary high nodal throughput for the transport of voice and video. Increases in link speeds have not been matched by proportionate increases in the processing speeds of communication nodes. The fundamental challenge for high speed networks is to minimize the processing time and to take full advantage of the high speed/low error rate technologies. Most of the transport and control functions provided by the new high bandwidth network architectures are performed on an end to end basis.

One basic advantage of packet switching techniques (as opposed to so-called circuit switching techniques) is that it allows statistical multiplexing of the different types of data over a line, which optimizes the utilization of transmission bandwidth. One drawback, however, is that packet switching introduces delays and jitters which might be detrimental for transmission of isochronous data, like video or voice. Methods have been proposed to control the network in such a way that delays and jitters are bounded for every new connection that is set up across the packet switched network.

Such methods are described, for instance, in a published European Application number 0000706297 and include establishing a path through the network high speed lines and nodes, via an entry node port of said network, making optimal use of the available transmission bandwidth of the network along the path to the indicated destination.
Because different types of traffic need to be treated differently to maintain their usefulness at a destination, choices have to be made among the different types by assigning different specific priorities. In other words, when a source terminal requests a connection to a destination terminal via the network (i.e., a call is set up), a quality of service (QoS) is assigned to the call in terms of maximum permissible delay (T_max) and packet loss probability (P_loss).

The QoS and traffic characteristics (e.g., peak data rate, mean data rate and average packet length) are used to compute the amount of bandwidth (i.e. equivalent capacity or Ceq) to be reserved on every line on the route or path assigned to the traffic between the source terminal and the destination terminal, in order to guarantee a packet loss probability which is smaller than the loss probability (P_loss) that has been specified for the connection.
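As an illustration of the kind of computation involved, here is a sketch using the fluid-flow equivalent capacity approximation widely published for such networks (Guérin et al., 1991). The patent gives no formula of its own, so the function below, its parameter names, and the buffer-size input X are assumptions made for this example.

```python
import math

def equivalent_capacity(R, m, b, X, eps):
    """Equivalent capacity Ceq for one bursty source, per the classic
    fluid-flow approximation from the literature (illustrative only).

    R   -- peak data rate (bits/s)
    m   -- mean data rate (bits/s)
    b   -- mean burst duration (s)
    X   -- buffer size at the link (bits); an assumed input
    eps -- target packet loss probability (P_loss)
    """
    rho = m / R                      # source utilization
    alpha = math.log(1.0 / eps)      # severity of the loss target
    y = alpha * b * (1.0 - rho) * R  # burstiness term, in bits
    # Root of the quadratic giving the service rate that keeps the
    # buffer overflow probability below eps.
    c = R * (y - X + math.sqrt((y - X) ** 2 + 4.0 * X * y * rho)) / (2.0 * y)
    return min(c, R)                 # never reserve more than the peak rate

# Example: 10 Mb/s peak, 2 Mb/s mean, 10 ms bursts, 64 kbit buffer, 1e-6 loss
print(equivalent_capacity(10e6, 2e6, 0.01, 64e3, 1e-6))
```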
However, in operation, the network traffic must be controlled dynamically, which means that some packets may have to be dropped or discarded within the network to avoid traffic congestion.

In practice, it is common to reserve bandwidth for high priority packets (e.g. so-called Real Time or RT traffic), allowing such packets to be transmitted in preference to lower priority packets derived from discardable traffic (e.g. Non Real Time or NRT traffic or more particularly Non Reserved or NR traffic). Lower priority packets may be sent at rates greater than their declared rate to dynamically take advantage of any bandwidth remaining after all the higher priority traffic has been served. This remaining bandwidth can vary widely depending on the actual activity of the high priority traffic sources. It is therefore of considerable importance to manage the low priority traffic so as to optimize the use of the widely varying left-over bandwidth in the network while avoiding any congestion which would reduce network throughput. This obviously requires providing the network (and eventually also the sources) with congestion detection and flow control facilities.

Various mechanisms for controlling the flow of NR traffic have been proposed. In particular, an Available Bit Rate (ABR) flow control mechanism has been proposed for Asynchronous Transfer Mode (ATM) networks. ABR flow control is based on use of a particular flow control cell, the so-called Resource Management or RM cell. RM cells are used to collect congestion information from network node switches along connection paths and to send such information back to the traffic sources. While ABR flow control seems to be very efficient, it is complex to implement. End systems must generate RM cells periodically, provide scheduling for RM cells to be sent among data cells, and shape their traffic in response to congestion indications conveyed by received RM cells. Intermediate systems (switches along the paths) must be able to differentiate RM cells from regular data cells, extract RM cells and update these with congestion information. These complexities limit the cost effectiveness of this solution.

Moreover, the above solution requires that all so-called non-reserved (i.e. low priority) sources connected to a network be treated using the ABR mechanism. In fact, if a mix of ABR sources and non-ABR sources are connected to the network, the ABR sources will be disadvantaged as compared to the non-ABR sources which need not be capable of sending RM cells. The customer must therefore update the hardware of all the end systems before using ABR support, which is an additional drawback from an engineering standpoint.

Finally, there is no description, in the ABR standard, of a policing function which could be used to protect the network from misbehaving sources or from non ABR sources.
Other mechanisms have been proposed, with flow control which can be used on ATM or PTM (Packet Transfer Mode, including variable length packets) traffic, and offer good performance. These flow control mechanisms add complexity to network equipment. Access nodes or ports need to store tables of values and must have the capability of adding a time-stamp to the data packets or to specific control packets. The overhead on the lines is also increased as at least a time stamp must be added to some transmitted packets.

A further improvement was disclosed in U.S. Pat. No. 5,313,454 which made the system transparent to the user (source) by providing an internal congestion avoidance method. To that end, congestion is identified throughout the network and transferred by setting an indicator in the packet header. Then congestion indications are used in the destination node to generate a rate control message which is fed back to the entry node. This prior art still adds overhead to the feedback flow if smooth and flexible congestion control is sought. Otherwise, the flow regulation would be quite rigid and basic.

These functions are generally configured at connection setup and remain static. A more flexible solution is necessary to be able to use the available bandwidth left by the reserved traffic while avoiding high packet loss inside the network.
SUMMARY OF THE INVENTION

The present invention is a method for performing congestion detection and flow control operations for data traffic, including both discardable and non-discardable traffic, in a high speed digital packet switching network including access and transit nodes interconnected by links or trunks. Any source end-user attached to said network via an entry access node can request its traffic to be transported toward a destination end-user also attached to said network via an exit access node. So-called in-going (or forward) and return (or backward) paths are set from the entry node to the exit node and, in the opposite direction, from the exit node to the entry node. The paths might include network transit nodes.

The method includes the step of monitoring the data flow in each transit node in the forward path from the entry node to the exit node for detecting traffic congestion in the transit node. When flow congestion is detected therein, a Congestion Indication (CI) bit is set in a first predefined header field of data packets transported on the forward path to the exit node. Data packets entering the exit node are monitored. Where a set CI bit is detected, a congestion indication is fed back to the entry node by setting a Return Congestion Indication (RCI) bit in a second predefined header field in the data packets of the traffic of the backward path. RCI bits in packets received in the entry node are integrated over a predefined period of time by adding or subtracting one unit depending on the binary value of each received RCI bit. At the end of each of the predefined time periods, the value of the integrated RCI indication is checked. The communication bandwidth assigned to discardable traffic on the forward path is adjusted as a function of the value of the integrated RCI indications.
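A minimal sketch of the entry-node integration just described, assuming hypothetical callback names for the control actions; the concrete rate adjustments are given later with FIGS. 6 and 7.

```python
import time

class RciIntegrator:
    """Entry-node integration of Returned Congestion Indication (RCI)
    bits, per the summary above: add one unit for a packet whose RCI
    bit is set, subtract one for a packet whose RCI bit is clear, and
    act on the accumulated value at the end of each predefined period.
    Class and callback names are illustrative, not from the patent."""

    def __init__(self, period_s, on_congested, on_clear):
        self.period_s = period_s
        self.counter = 0                  # count-up, count-down counter
        self.deadline = time.monotonic() + period_s
        self.on_congested = on_congested  # e.g. shrink discardable bandwidth
        self.on_clear = on_clear          # e.g. grow it back

    def packet_received(self, rci_bit):
        self.counter += 1 if rci_bit else -1
        if time.monotonic() >= self.deadline:
            # End of the period: the sign of the integrated value says
            # whether congestion indications dominated the backward flow.
            if self.counter > 0:
                self.on_congested(self.counter)
            else:
                self.on_clear(self.counter)
            self.counter = 0
            self.deadline = time.monotonic() + self.period_s
```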
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic representation of a high speed digital network wherein the invention shall be incorporated.

FIG. 2 is a basic representation of a network implementing the invention.

FIG. 3 is a schematic representation of a network node in which a preferred embodiment of the invention can be implemented.

FIG. 4 (consisting of FIGS. 4A, 4B and 4C) shows leaky bucket arrangements to be used in the invention.

FIG. 5 is a schematic representation of the invention as implemented with a leaky bucket.

FIG. 6 is a flow-chart of the algorithm used in the invention.

FIG. 7 is a flow-chart of an improved algorithm for providing enhanced operation of the invention.

FIG. 8 is a schematic representation of a further improvement enabling source cooperation in the flow control mechanism.

FIG. 9 is a detailed representation of the header used in the implementation according to FIG. 8.

DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
FIG. 1 is a general block diagram of a packet transmission system 10 showing access nodes/ports 11 and 12 and intermediate or transit nodes (13), the nodes being interconnected by links or trunks. The links or trunks may provide permanent or selectively enabled (dial-up) connections between pairs of nodes. A network transit node may be attached to one or several access (entry or exit) nodes.

Each network node includes input/output adapters (including buffers) interconnected by a switch fabric, together with data processing facilities providing data communication and network control services in the network node. Data packets (including control data) received on an input adapter may be selectively routed on one or more of the outgoing communication links or trunks through the output adapters. Routing decisions are made in response to information in header sections of the packets. The network node also provides additional services such as calculation of new paths between entry and exit nodes, the provision of access control to packets entering the network, and the provision of directory services and topology database maintenance.
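For orientation, a rough model of the node structure described above; the class and field names are invented for this sketch and do not come from the patent.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Adapter:
    """Input/output adapter with its buffering, per the description above."""
    link: str
    queue: deque = field(default_factory=deque)  # packets awaiting the switch

class Node:
    """Access/transit node: adapters interconnected by a switch fabric.
    Routing is decided from the packet header (illustrative only)."""

    def __init__(self, adapters):
        self.adapters = {a.link: a for a in adapters}

    def switch(self, packet):
        # The header names the outgoing link chosen at path-selection time.
        out = self.adapters[packet["header"]["next_link"]]
        out.queue.append(packet)
```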
A network access node may operate either as a source (entry) of digital data to be transmitted to another end node, or as a data sink (exit node), or both. User A, for instance, acting as a data source utilizes an access node 11 to access the packet switching network 10. The access node translates the user's data into packets formatted for transmission on the network and also generates headers used to route said packets through the network. At call set-up, the entry node processing facilities calculate a path or route through the network from the source node (entry node) to the destination node (exit node). To avoid overload on any of the links on the path, the path is selected using an algorithm that ensures that adequate bandwidth is available for the new connection, while optimizing the overall throughput within the network.

The act of selecting a path for a particular connection implies the allocation of network resources to users in order to guarantee their Quality of Service (QoS) requirements. Various quality levels of service may be specified, some of them in order to satisfy real-time delivery constraints, others related to non real time data traffic transfer. To that end, the origin node computes a path to the destination node that is capable of carrying the new connection and providing the level of service required by the new connection. The Path Selection process uses data describing the current traffic load in the entire network (nodes and links). Such data are stored in a topology database located in each node of the network. If no suitable path can be found to meet all requirements, the connection is rejected. Once the origin node has found a suitable path, a set-up message is generated which traverses the selected route, updating the resource allocations (including bandwidth occupancy) for each link visited by the set-up message.
To maintain high network throughput, a path is selected and resources are reserved only at the time of the connection establishment. The Path Selection process takes into account various constraints which come both from the user (quality of service requirements, user's traffic characteristics) and from the current network topology and bandwidth allocation. In addition, the algorithm maximizes the network throughput by choosing a path with the least number of hops and which tends to achieve an even distribution of the traffic among the links. Once an appropriate path has been selected, the network connection establishment process takes place, and only then are the resources along the path reserved.
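One plausible reading of that selection rule, sketched below: links that cannot carry the connection's equivalent capacity are pruned, then a minimum-hop search breaks ties toward the least loaded links. This is an illustration under stated assumptions, not the patent's actual Path Selection algorithm.

```python
import heapq

def select_path(links, src, dst, ceq):
    """Min-hop path with load balancing as a tie-breaker (illustrative).

    links -- dict mapping node -> list of (neighbor, capacity, reserved)
    ceq   -- equivalent capacity required by the new connection
    Returns a list of nodes, or None if the connection must be rejected.
    """
    # Priority: fewest hops first, then lowest accumulated utilization.
    heap = [(0, 0.0, src, [src])]
    seen = {}
    while heap:
        hops, load, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if seen.get(node, (float("inf"),)) <= (hops, load):
            continue
        seen[node] = (hops, load)
        for nbr, cap, res in links.get(node, []):
            if res + ceq > cap:   # link cannot host the new connection
                continue
            if nbr in path:       # avoid loops
                continue
            heapq.heappush(heap, (hops + 1, load + (res + ceq) / cap,
                                  nbr, path + [nbr]))
    return None                   # no suitable path: reject the call
```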
Accordingly, the connections established for different sources may require different levels of bandwidth, different delay guarantees, and/or different loss probabilities. Real time signals such as voice or video, for example, require being assigned higher priority levels than non-real time signals.

As already mentioned, such connection characteristics are negotiated along with the bandwidth requirements and enforced by assigning higher priorities to the transmission of real time signals and, in case of congestion occurring, discarding non-reserved packets before discarding reserved ones. In other words network management must distinguish between guaranteed delivery traffic and so-called discardable traffic. Nevertheless, optimizing the transmission of lower priority traffic is important.

Therefore, and due to the bursty nature of data traffic, traffic should be continuously monitored and a mechanism provided for adjusting low priority traffic or any traffic in excess, and controlling the assigned bandwidth dynamically. For instance 85% of the total link bandwidth may be reserved for committed traffic, which leaves 15% of said bandwidth for dynamic assignment.

The present invention is a method and apparatus for optimizing the transmission of lower priority traffic while avoiding congestion in the network.
The design of adequate congestion schemes is extremely difficult for the following reasons:

Overhead: A congestion scheme should not increase the traffic too much, in particular when the network is congested. Even proposals of positive feedback schemes, i.e. schemes that give feedback when the network is congested, do not resolve the problem totally as they also consume resources.

Fairness: When the demands of all users cannot be satisfied, the share of satisfied demands should be fairly distributed. But neither the definition of "fairly" is trivial, nor is the meaning of what is considered to be fair invariable.

Responsiveness: The quality of the available resource is often changing dynamically. A congestion scheme has to react quickly to changes by asking users to increase or to decrease their sending rates. On the other hand, in order to maintain good throughput, only persistent congestion should be taken into account. Short term loading of the queue due to traffic bursts should be differentiated from a persistent state of congestion.

Bad environments: Under congestion, packets can be dropped, changed or delivered out of order. Despite such problems, the scheme has to continue to work.

Consideration of the totality: The final aim of a congestion scheme is to optimize the overall network performance. Schemes that optimize the throughput of one user or one trunk do not always maximize the overall performance.

Complexity: In high speed environments there are often only a few clock cycles to process a packet in a node. Therefore, a flow control scheme has to be simple. This criterion can also be called implementability.

Misbehaving sources: Some congestion control schemes suppose that all users are able and willing to cooperate. These schemes often do not work with misbehaving users. Greedy users can degrade the QoS for other users or can even drive the network into long term congestion.
The aim of the system of this invention is to check the traffic by detecting congestion in any node of the network and then monitor the traffic flow accordingly on a port-to-port basis, while optimizing the network by complying with the requirements to avoid the drawbacks indicated in the prior listed criteria, e.g. by minimizing traffic overhead, and yet enabling a smooth and flexible congestion control; being insensitive to misbehaving source users, and yet enabling a control of behaving sources at almost no additional cost; and being perfectly implementable and rather not complex.
FIG. 2 is a schematic representation of a network implementing the invention. It shows a connection set between a source end-user 20 and a destination end-user 21 through a digital communication network 22. The network includes an entry access node/port 23 through which the source user 20 is attached to the network 22, while the destination user 21 is attached to the network 22 via exit access node/port 24. Assume that, at call set-up, the source end-user's request for a connection had been executed by setting a forward path via transit nodes 25 and 26. Since the traffic works both ways, assume that the return path between user 21 and user 20 was established as indicated by arrow 27, also through a series of transit nodes (not shown in the figure). Consider first traffic originating at user 20. As already mentioned, in a packet switching system, source traffic is split into packets including a data payload and a header containing control information. Packets may either be of fixed length as in Asynchronous Transfer Mode (ATM) of operation or be of variable length as in Frame Relay (FR) mode of operation. According to this invention, the packet flows are monitored throughout the forward path, in each intermediate transfer node on the path. As soon as traffic congestion is detected in one of the nodes, a predefined header field is marked or set to indicate congestion has been detected. When a congestion indication is detected in exit node 24, each packet in the return path 27 is marked to report the congestion.
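A sketch of the two marking steps just described. The queue-occupancy test standing in for the node's congestion detection, and the dictionary field names, are assumptions of this example; the patent locates the forward mark in an EFCI header bit and the returned mark in the FR header's BECN bit, as detailed below.

```python
CONGESTION_THRESHOLD = 0.8   # assumed: fraction of buffer occupancy

def transit_node_forward(packet, queue_depth, queue_size):
    """Intermediate node: set the forward congestion bit (EFCI) when the
    local queue is saturated (illustrative test, not the patent's)."""
    if queue_depth / queue_size > CONGESTION_THRESHOLD:
        packet["efci"] = 1
    return packet

def exit_node(forward_packet, backward_packets):
    """Exit node: reflect a detected EFCI back to the source by setting
    the BECN bit in packets of the return path."""
    if forward_packet.get("efci") == 1:
        for p in backward_packets:
            p["fr_header"]["becn"] = 1
    return backward_packets
```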
Again, we must say that this invention applies whatever the type of traffic, be it organized in Asynchronous Transfer Mode (ATM) or in Frame Relay (FR) mode. But the mode of operation selected herein to describe the invention is the FR type (see FIG. 2). Accordingly, each packet of data provided by the source (20) traffic includes a payload section and a Frame Relay header section (FR header) (see packet 28). Now, for the purpose of this invention, each packet entering access node 23 is provided therein with a header. The fields selected for both marking congestion indication (CI), through a so-called Explicit Forward Congestion Indication (EFCI) in any network node along the forward path going through nodes 25, 26, . . . , and for Returning said Congestion Information (RCI), have herein been selected to be in the header and in said Frame Relay (FR) header, respectively. Also, and for optimizing traffic efficiency, these
fields have been limited to be one-bit long each in the best mode of implementation of this invention. Naturally, those field locations and lengths selection shall by no means be considered as limiting the scope of this invention. A person skilled in the art understands that longer fields shall enable carrying more detailed flow congestion information while increasing network complexity and increasing operating cost, as shall be explained further in this description.

As already known in the art of digital communication, and disclosed in several European Applications (e.g., Publication Number 0000719065 and Application Number 95480182.5), each network node basically includes input and output adapters interconnected via a so-called node switch. Each adapter includes a series of buffers or shift registers where the node transiting packets are stored. Traffic monitoring is generally operated via preassigned buffer threshold(s) helping monitoring shift register queues, as shall be described with reference to FIG. 3.

Represented in FIG. 3 is a simplified block diagram showing a network node with the corresponding facilities used to implement the invention. Basically, it includes a series of adapters labeled adapter i with i=1, 2, . . . , j (j being the number of adapters), and a high speed packet switching fabric (301). Each adapter is connected to a network link/trunk. Each adapter includes a receive port section (302), a transmit port section (303) and processing facilities herein designated as General Purpose Processor (GPP) (304). The packets arriving on the node via the receive port are oriented toward the appropriate node adapter (transmit section) to finally reach the destination user through the preassigned network connection path. Switching between an adapter receive port and an adapter transmit port is performed via the switching facilities (301). To that end, the adapters processing means determine the routing operations to be performed, by using indications provided within the data packet headers. As represented in FIG. 3, the adapters include queuing facilities for queuing the packets prior to or subsequent to their launch on the switch (301).

As mentioned, at connection set-up time, two paths, one for each direction, are computed between the original access node and the destination access node attached to the calling source user and the destination user, respectively. The connection set-up and bandwidth reservation process operate to adjust, if the call has been accepted, an access control device (e.g. leaky buck [. . .]

[. . .] the header from the received packet, the congestion indication EFCI bit is monitored. When an EFCI bit at binary level "1" is detected, the destination port (24) marks each packet in the backward flow, i.e. from port 24 acting now as a source for user 21 back to original source user's port 23, with the Returned Congestion Information (RCI), by setting to "1" the BECN bit in the FR packet header (see 28). When the source port receives a packet with the BECN bit at binary level "1", a predefined flow control action is taken. For ATM/ABR traffic the RCI bit may be selected in the RM cell.

Several flow control operating algorithms might be defined but efficient algorithms should preferably provide a mechanism for dropping the transmission rate on the congested path in a single large step and then enable slowly going back to full rate step by step.

Thus, in the preferred mode of implementation of this invention, the action to be taken upon reception of a packet indicating congestion, i.e. with the RCI (BECN) bit set (i.e. at binary level "1"), is based on an additive increase and a multiplicative decrease of the considered sending rate, and this control action is performed in the entry node 23 directly under network control rather than source control, therefore neutralizing any burden due to a misbehaving source user. The parameter for the increase is called Rate Increase Factor (RIF) and is expressed as a quantile of predefined peak rate. The factor for the decrease is called Rate Decrease Factor (RDF) and is a quantile of the actual rate.

Now as far as implementation is concerned, various flow regulating devices are already known in the art. They are based on so-called usage parameter control (UPC) made to prevent connections from sending with different characteristics than those negotiated in the traffic contract, whether this is done on purpose or not.

One of these flow regulating systems utilizes a so-called leaky bucket (see FIG. 4 representing three different leaky bucket implementations (4a, 4b and 4c)) including an admission shift register (e.g. 41a) wherein the data packets are buffered and a so-called leaky bucket mechanism to control their passing further to the network.

The Leaky Bucket label describes a family of algorithms with the same basic principle, based on the consumption of credits designated as tokens (see FIG. 4). The tokens are generated at a rate γ and can be accumulated in a token pool or bucket (42a) with the size M. When the token pool is full,
`(e.g. leaky buck

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket