U.S. Patent    Dec. 16, 2003    Sheet 1 of 10    US 6,665,273 B1

[FIG. 1 (PRIOR ART): block diagram of an MPLS based router 100, showing a routing module 110 with routing table 112 and labels 122, a TE module 120, a TE path calculation module 130, a TE topology database 140, a TE link management unit 150, RSVP 160 and IGP 170.]
`
`Cloudflare - Exhibit 1008, page 2
`
`
`
`
`
[Sheet 3 of 10: FIG. 4]
`
[Sheet 9 of 10: FIG. 10, a graph of traffic flow (100-300 Kbps) versus time]
`
`
`
`
`
`DYNAMICALLY ADJUSTING
`MULTIPROTOCOL LABEL SWITCHING
`(MPLS) TRAFFIC ENGINEERING TUNNEL
`BANDWIDTH
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
The present invention is related to networking systems and, in particular, to maximizing bandwidth resources in a Multiprotocol Label Switching (MPLS) system.
`2. Background
The Internet is the widely accepted medium for the interchange of information. Government, business entities and other organizations routinely use the Internet to provide and receive information, and the number of users is increasing. To support such increased usage, networks and supporting devices have become larger and more complex. However, even with the proliferation of such devices and network complexity, an Internet service provider (ISP) must still provide reliable service that can withstand link or node failures while maintaining high transmission capacity. Additionally, the devices used in the network links and the intermediate devices such as switches and routers are expensive items and thus should be used to their maximal capacity. Traffic engineering is one solution that enables ISPs to maximize device capacities while providing network resiliency.
Traffic engineering (TE) is a term that refers to the ability to control traffic flow in a network (notwithstanding a path chosen by a routing protocol) with the goal of reducing congestion, thereby getting the most use out of the available devices. For example, a network topology typically has multiple paths between two points. However, a routing protocol will usually select a single path between the two points regardless of the load on the links that form the path. This may cause certain paths to be congested while other paths are under-utilized. Traffic engineering facilitates load balancing or load sharing among the paths, thereby enabling the network to carry more traffic between two points.
Multiprotocol Label Switching (MPLS) is a technology that, among others, allows traffic engineering to be performed. The principle behind MPLS, as its name implies, is using labels to switch packets along a path. This is in contrast to a routing lookup table, in which the longest match lookup on an Internet Protocol (IP) header is performed. A label distribution mechanism based on the Label Distribution Protocol (LDP) or existing protocols such as the Resource Reservation Protocol (RSVP), which has been enhanced, allows for the assignment and distribution of the labels. RSVP is a protocol for reserving network resources to provide Quality of Service guarantees to application flows. Various control modules are responsible for facilitating and maintaining traffic engineering as well as for maintaining other relevant control information.
Referring to FIG. 1, an exemplary MPLS based router 100, among others, may include:
a routing module 110 that constructs a routing table 112 using conventional routing protocols, preferably a link-state Interior Gateway Protocol (IGP) such as Intermediate System to Intermediate System (IS-IS), Open Shortest Path First (OSPF) and so forth;
a traffic engineering (TE) module 120 that enables explicitly specified label switched paths (LSPs) to be configured throughout a network using a TE topology database and a TE path calculation module, as will be described, which preferably in conjunction with RSVP and a label allocation module (not shown), assigns labels 122 to routes and supplements the above routing table with those labels, to be further described with respect to FIG. 2;
a TE path calculation module 130 that calculates the "best" path for the LSPs based on resources required by the path and the available resources in the network;
a TE topology database 140 that receives and stores information on link states of the network and available resources via the IGP, assuming that it is a link-state IGP;
a TE link management module 150 that keeps track of and maintains all LSPs that have been set up; and
supporting protocols such as RSVP 160 and an IGP 170 with extensions for global flooding of resource information.
Explicitly routed LSPs are referred to as TE tunnels. FIG. 2 illustrates the operation of an MPLS system supporting unicast TE tunnel routing. Using an IGP and RSVP, each router on a path of a TE tunnel builds a routing table supplemented with labels. For this configuration, each router has its resource information flooded to the appropriate IGP link state database and accepts RSVP signaling requests. Generally, when a TE tunnel is being established, assuming it to be a dynamic path, the TE module sets the resource requirement of the path based on operator input or default. The TE path calculation module calculates a constrained shortest path from the "head-end" router to the "tail-end" router based on resources required to set up the path, which in this instance is path 200. RSVP then signals the routers along the path 200 that a setup is being performed. During setup, the routers on the path need to agree on the labels to be used for the packets traveling along the path. This label allocation procedure may also be performed by RSVP in that its signaling and resource reservation features may be used to construct the TE tunnels. RSVP also requests labels from the label allocation module; these labels are required for constructing the TE tunnel.
The TE module, using RSVP, causes router R1 at the head-end of the tunnel to send a setup request to the router Rn at the tail-end of the tunnel. The setup request travels along the calculated path 200 until it reaches the tail-end router Rn. Upon receipt of the setup request, the router Rn returns a setup reply to router R1 over the path 200 traveled by the setup request. When router Rn-1 receives the setup reply, it requests a label from the label allocation module, which allocates a label LRn-1 for the segment of the path. Router Rn-1 then establishes a mapping in its routing table between the label LRn-1 and the path to router Rn. Router Rn-1 then attaches the label LRn-1 to the setup reply and sends it "upstream" to the previous "hop" router Rn-2 on the path. In response to receiving the setup reply, router Rn-2 requests and receives a label LRn-2, which it allocates for that segment of the path, and establishes a mapping in its routing table between the label LRn-2 and the path to router Rn-1. Router Rn-2 then replaces the label LRn-1 on the setup reply with its label LRn-2 and sends it to the next router Rn-3 on the path. Likewise, router Rn-3, upon receiving the setup reply, requests and allocates a label LRn-3 for the segment of the path and establishes a mapping between the label LRn-3 and the path to router Rn-2. Router Rn-3 then replaces the label on the setup reply with its own label and sends the setup reply further upstream. This process is repeated for each router along the path 200 until the head-end router R1 receives the setup reply. In this instance, the setup reply will have a label LR2 attached thereto by the router R2 on the path. Router R1 then adds an entry to its routing table using the label LR2, which functions as an initial label for entering the TE tunnel created for the path.
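The upstream walk of the setup reply can be sketched as follows. This is an illustrative model only; the function and data-structure names (`allocate_label`, the per-router dict tables) are hypothetical and not defined by the patent, and labels are simply handed out sequentially as a stand-in for the label allocation module.

```python
import itertools

_label_counter = itertools.count(16)  # labels 0-15 are reserved in MPLS

def allocate_label():
    """Stand-in for the label allocation module: hand out a fresh label."""
    return next(_label_counter)

def setup_reply(path):
    """Walk a setup reply upstream from the tail-end to the head-end.

    `path` is an ordered list of routers (head-end first); each router is
    modeled as a dict serving as its label/routing table.
    """
    carried_label = None  # the tail-end router attaches no label
    # Routers Rn-1 down to R2 each allocate a label for their segment.
    for i in range(len(path) - 2, 0, -1):
        router, downstream = path[i], path[i + 1]
        label = allocate_label()
        # Map the allocated label to the next hop and its expected label.
        router[label] = (downstream["name"], carried_label)
        carried_label = label  # the reply now carries this label upstream
    # The head-end router records the label attached by R2 as the
    # initial label for entering the TE tunnel.
    path[0]["initial_label"] = carried_label
    return carried_label

routers = [{"name": f"R{i}"} for i in range(1, 5)]  # R1..R4
initial = setup_reply(routers)
```

With four routers R1-R4, R3 and R2 each allocate one label; R1 simply stores R2's label as the tunnel entry label, matching the hop-by-hop description above.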
Once the TE tunnel has been established, the TE module notifies the IGP as to the IP address of the TE tunnel. Once notified, the IGP can route packets through the TE tunnel using the IP address. In instances where the IGP is load balancing between a TE tunnel and a regular path, the IGP may use a "flow" method or a "round-robin" method to load balance between the two paths. The flow method is really a load sharing method and may be performed in the following manner: a portion of a source address and a destination address of a packet may be combined and hashed to generate one of a pseudo-random range of numbers. The value of the number determines which path the packet is to follow. Assuming the packet is routed to the TE tunnel, the packet transmission mechanism through the tunnel is based on label switching.
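The flow method described above can be sketched as below. The particular hash (CRC-32 over the concatenated addresses) is an assumption for illustration; the patent does not specify the hashing scheme.

```python
import zlib

def pick_path(src_ip, dst_ip, paths):
    """Pick one of `paths` for a flow via a hash of its addresses."""
    key = (src_ip + dst_ip).encode()      # combine the two addresses
    value = zlib.crc32(key) % len(paths)  # hash into the path range
    return paths[value]

paths = ["te-tunnel-1", "regular-path"]
# Every packet of a given flow hashes to the same value, so the whole
# flow stays on one path and packet ordering is preserved.
choice = pick_path("10.0.0.1", "172.16.0.9", paths)
same = pick_path("10.0.0.1", "172.16.0.9", paths)
```

Because the selection is deterministic per flow, this shares load across paths without splitting any single flow, which is why the text calls it load sharing rather than strict load balancing.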
For example, when router R1 receives a packet, it performs a conventional longest match lookup on the IP header. If the lookup indicates a label, router R1 recognizes that the packet is to be transmitted through the TE tunnel and affixes the initial label to the packet. After the packet is labeled, subsequent routers in the tunnel transmit the packet using only labels; that is, each router switches the label of an incoming packet with its own label prior to sending the packet. The incoming label received by a router is used as an index into the routing table for the "next hop" information. Note that the label switching mechanism operates similarly to a layer two switching mechanism. In this manner, other than at the head-end router, no longest match lookup is performed by subsequent routers.
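The per-hop label swap can be sketched as follows; the table contents and names are hypothetical, chosen only to mirror the description of using the incoming label as an index.

```python
def switch(packet, label_table):
    """Swap the incoming label and return the next hop, as an LSR would."""
    # The incoming label indexes the table directly; no IP lookup occurs.
    out_label, next_hop = label_table[packet["label"]]
    packet["label"] = out_label  # replace the incoming label with our own
    return next_hop

# Hypothetical table at one router: incoming label -> (outgoing label, next hop)
table = {17: (23, "Rn-1"), 18: (31, "Rn-2")}
pkt = {"label": 17, "payload": b"..."}
hop = switch(pkt, table)
```

A single dictionary lookup replaces the longest match search, which is the sense in which the mechanism behaves like layer two switching.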
FIG. 3 gives an example of how the MPLS TE system may be used to solve traffic congestion while maximizing resources. As shown, there are two paths from router C to router E. Typically, router C selects the shortest path to router E, resulting in all traffic being channeled through router D. Thus, the resulting traffic volume creates congestion in that path while the path through router F and router G remains underloaded. To maximize performance of the overall network, it is desirable to shift some of the traffic from the congested path to the under-loaded path. One previous attempt to address the problem sets the cost of the path of the routers C-D-E equal to the cost of the path of the routers C-F-G-E. While such a method may be feasible in a simple network, it is cumbersome if not impossible in a network of complex topology. By using explicitly routed paths, MPLS traffic engineering is a more straightforward and flexible way of addressing this problem. For example, the traffic engineering module can establish a label-switched path from routers A-C-D-E and another label-switched path from routers B-C-F-G-E. By defining policies that determine which packets are to follow these paths, traffic flow across a network, even one of complex topology, can be managed.
"Constraint-based routing" is a technique that allows minimal operator intervention in setting up the TE tunnels. Constraint-based routing may be supported by the TE module and the TE path calculation module as described above. Prior to constraint-based routing, the operator configured the TE tunnels by specifying a sequence of hops that the path should follow. However, a problem concerning this form of traffic engineering is that considerable network configuration and re-configuration is required. Under constraint-based routing, an operator merely specifies the amount of traffic that is expected to flow in the TE tunnel.
`
The MPLS TE system then calculates the paths based on constraints suitable for carrying the load and establishes explicit paths. These paths are established by considering resource requirements and resource availability, instead of simply using the shortest path. Examples of constraint factors considered by the MPLS TE system are bandwidth requirements, media requirements, a priority versus other flows and so forth.
Below is a table that lists a set of commands that may be used to configure an MPLS TE tunnel based on constraint-based routing.

Step  Command                                  Purpose

1.    Router(config)# interface tunnel         Configure an interface type and
                                               enter interface configuration mode.
2.    Router(config-if)# tunnel                Specify the destination for a
      destination A.B.C.D                      tunnel.
3.    Router(config-if)# tunnel mode           Set encapsulation mode of the
      mpls traffic-eng                         tunnel to MPLS traffic
                                               engineering.
4.    Router(config-if)# tunnel mpls           Configure bandwidth for the MPLS
      traffic-eng bandwidth bandwidth          traffic engineering tunnel.
5.    Router(config-if)# tunnel mpls           Configure a named IP explicit
      traffic-eng path-option 1                path.
      explicit name boston
6.    Router(config-if)# tunnel mpls           Configure a backup path to be
      traffic-eng path-option 2 dynamic        dynamically calculated from the
                                               traffic engineering topology
                                               database.
`
One particular aspect of configuring an MPLS TE tunnel is that the bandwidth of the tunnel needs to be specified, as shown in the following exemplary commands:
configure terminal
interface tunnel 1
tunnel destination 17.17.17.17
tunnel mode mpls traffic-eng
tunnel mpls traffic-eng autoroute announce
tunnel mpls traffic-eng bandwidth 100
tunnel mpls traffic-eng priority 1 1
tunnel mpls traffic-eng path-option 1 dynamic
Stated differently, the operator specifies the amount of bandwidth a TE tunnel requires prior to enabling the tunnel, as this is a factor in constraint-based routing. However, where the traffic flow constantly changes within the tunnel, such bandwidth constraints may not necessarily result in efficient usage of the routers, an example of which is shown below.
FIG. 4 shows various routers connected together by MPLS TE tunnels. Once the operator has configured the TE tunnels, the TE module, along with the link management module, establishes and maintains the tunnels. Tunnel paths are calculated at the tunnel head based on the required resources and the available resources, such as bandwidth. Router C is an example in which several TE tunnels enter and exit the router. By carefully monitoring the traffic flow, the operator may finely balance the allocation of bandwidth to the TE tunnels in view of the available bandwidth of the router. If the operator configures the tunnels based on peak traffic flow, it is possible that the aggregate peak traffic flow of the tunnels may be greater than the bandwidth of the router. This forces the operator to configure one or more TE tunnels elsewhere to share the traffic load. In certain instances, more routers may be needed to provide the load share. In many instances, however, it is quite possible that the peak traffic in the tunnels occurs at different time intervals and the router C has sufficient bandwidth to handle the traffic flow among the plurality of TE tunnels. In a network of complex topology where traffic patterns are constantly changing, it is generally difficult for the operator to predict the traffic flow and adjust the bandwidth of the individual tunnels accordingly. Further, because of the numerous routers involved, it is generally burdensome for the operator to continuously monitor traffic patterns and update the various bandwidth constraints.
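The point that per-tunnel peaks may occur at different times can be illustrated numerically; the traffic figures below are invented solely for illustration.

```python
# Hypothetical per-interval traffic (Kbps) for three TE tunnels through
# one router. Each tunnel peaks at 300 Kbps, but in a different interval.
tunnel_a = [300, 100, 100]
tunnel_b = [100, 300, 100]
tunnel_c = [100, 100, 300]

# Configuring each tunnel for its own peak reserves the sum of the peaks...
sum_of_peaks = max(tunnel_a) + max(tunnel_b) + max(tunnel_c)

# ...yet the traffic actually offered to the router in any one interval
# never exceeds the largest per-interval aggregate.
actual_peak = max(a + b + c for a, b, c in zip(tunnel_a, tunnel_b, tunnel_c))
```

Here the static reservations total 900 Kbps while the router never carries more than 500 Kbps, which is the inefficiency that dynamic bandwidth adjustment addresses.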
`
SUMMARY OF THE INVENTION
The invention comprises an improved Multiprotocol Label Switching (MPLS) system within a network device for traffic engineering that determines an actual traffic flow within a traffic engineering (TE) tunnel and dynamically adjusts the bandwidth to reflect the actual traffic flow. The actual traffic flow may be ascertained by accessing an average byte counter in the physical link management module that keeps track of bytes flowing through the TE tunnel. The TE tunnel configuration information is usually stored at the head-end network device of the tunnel. According to one example, the improved MPLS system signals a path from the head-end network device to the tail-end network device using the adjusted bandwidth as one of the constraints for establishing the path. If the new path is different from the previous path, then the previous path is "torn down" and replaced by the new path as the TE tunnel.
As an example, the MPLS system, among others, comprises a TE module, a routing module, a TE path calculation module, a TE topology database and a TE link management module. The TE module is responsible for initiating a tunnel setup once the tail-end router and the bandwidth have been selected (assuming the current MPLS system is at the head router) under "constraint-based" routing. Using the TE topology database and the TE path calculation module, a constrained path calculation is performed to determine a path for the tunnel. Once tunnels are set up in the router, the TE link management module keeps track of the status of the tunnels, such as whether a tunnel is being maintained or is being "torn down". The TE module supplements the routing table built by the routing module with labels to be used for label switching in a particular tunnel. The link management module keeps track of all tunnels through the router including the resources used, such as allocated bandwidth and available bandwidth. In addition, there is a configuration table, which is stored in a memory and contains tunnel configurations specified by the operator or specified by default. Configuration information may include bandwidth requirements, media requirements, a priority versus other flows and so forth. The router may also have a physical link management module to allocate available resources in accordance with the configuration table. The physical link management module includes byte counters for each TE tunnel set up to monitor the traffic flow through the tunnel.
The improved MPLS system uses the byte counters to determine the actual traffic flow through the configured TE tunnels and dynamically re-configures the required bandwidth to reflect the traffic flow. This may be performed as a sub-module within the TE link management module or as a separate "autobandwidth" module. The autobandwidth module has a global timer, which is global to the networking device. The global timer may be accessed by a supplemental MPLS command "auto-bw timer", which at predetermined intervals set by the operator or by default, causes the global timer to take the average byte count for each tunnel for that interval and may store each average count in a register. Another supplemental MPLS command "auto-bw" determines the frequency at which the byte count for a particular tunnel is to be sampled to determine if a bandwidth adjustment is required. Each enabled "auto-bw" command is specific to an established TE tunnel. If a tunnel requires a bandwidth adjustment, the auto-bw command causes the autobandwidth module to update the bandwidth. This may be in the form of the autobandwidth module notifying the TE module that the bandwidth needs to be changed and the TE module changing the configuration table and performing a setup procedure. Alternatively, the autobandwidth module may change the configuration table and notify the TE module of the change, causing the TE module, in turn, to perform the setup procedure. The features described above allow bandwidth resources of a network to be efficiently utilized, thereby maximizing its capacity while minimizing operator intervention. Further details and advantages will be apparent in the detailed description to follow.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
The invention description below refers to the accompanying drawings, of which:
FIG. 1 is a router supporting a Multiprotocol Label Switching (MPLS) system;
FIG. 2 is an MPLS system supporting unicast traffic engineering (TE) tunnel routing;
`FIG. 3 is a network with a plurality of routers in which TE
`tunnels may be used to solve traffic congestion and to
`maximize resources;
`FIG. 4 is a network with a plurality of label switching
`routers (LSRs) interconnected with TE tunnels;
`FIG. 5 is a network that is managed by a network
`management system;
`FIG. 6 is a network device using an MPLS system;
`FIG. 7 is an improved MPLS system in accordance with
`an embodiment of the invention;
`FIG. 8 is a flow chart showing a method for dynamically
`adjusting bandwidth of TE tunnels in accordance with an
`embodiment of the invention;
`FIG. 9 is a flow chart showing a method for dynamically
`adjusting bandwidth of TE tunnels in accordance with an
`embodiment of the invention;
`FIG. 10 is a graph showing a surge effect in a traffic flow
`of a TE tunnel; and
`FIG. 11 is a flow chart showing a method for dynamically
`adjusting bandwidth of TE tunnels and accounting for surges
`within traffic in accordance with an embodiment of the
`invention.
`
`DETAILED DESCRIPTION OF AN
`ILLUSTRATIVE EMBODIMENT
`
The present invention pertains to dynamically adjusting a bandwidth of a Multiprotocol Label Switching (MPLS) system traffic engineering (TE) tunnel based on actual traffic flow through the tunnel. Generally, the network devices using the MPLS system keep track of byte counts through the TE tunnel. Knowledge of the actual traffic flow through a tunnel enables dynamic adjustment of the bandwidth, which in turn allows for allocation of sufficient resources to service the traffic. In one instance, excess bandwidth is reallocated elsewhere by the network devices. The invention is now described using an illustrative embodiment to aid in the understanding of the invention and should not be construed as limiting the invention.
As shown in FIG. 5, a network 500 is generally controlled by a network management system 520 that monitors and controls the network. The network management system 520 implements/executes one or more algorithms to communicate and interact with various network devices within the network. The network devices, in turn, provide feedback to the network management system 520 as to the status of the network. The feedback information provides the network management system 520 with an overview of the network structure, addresses and labels assigned to each device, and attributes of the devices and links within the network, among others. A monitor 540 allows an operator to interface with the network to perform network management tasks. Such tasks may involve setting up operation parameters of the network devices, collecting statistics on communication and activities of the network, requesting network status information, monitoring traffic flow and so forth.
FIG. 6 illustrates a network device 600 such as a router. The router comprises a processing unit 612 and a memory unit 614 coupled together by a bus 616. Further coupled to the bus may be a plurality of input/output (I/O) interfaces 618 that interact with other routers and network devices in the network. In one example, an operating system (OS) 630 resides in the memory unit 614 along with an MPLS system 700 as processor executable instructions. Together, they facilitate the operation of the router when executed by the processing unit 612. The memory unit 614 may be a volatile memory such as a Dynamic Random Access Memory (DRAM). The MPLS system 700 may also reside in a non-volatile memory such as a Read Only Memory (ROM) or a Flash memory. Further, the MPLS system may be stored in a storage medium such as magnetic or optical disks. Collectively, the mentioned memories, storage mediums and the like will be referred to as a processor readable medium. Additionally, portions of the MPLS system may be configured in hardware such as an application specific integrated circuit (ASIC).
The exemplary MPLS system 700 of FIG. 7, among others, comprises a traffic engineering (TE) module 710, a routing module 720, a TE path calculation module 730, a TE topology database 740 and a TE link management module 750. As an overview, the TE module 710 is responsible for initiating a tunnel setup once the tail-end router and the bandwidth have been selected (assuming the current MPLS system is at the head router) under constraint-based routing. Using the TE topology database 740 and the TE path calculation module 730, a constrained path calculation is performed to determine a path for the tunnel. Once tunnels are set up in the router, the TE link management module 750 keeps track of the status of the tunnels, such as whether a tunnel is being maintained or is being "torn down". The TE module 710 supplements the routing table built by the routing module 720 with labels 724 to be used for label switching in a particular tunnel. The link management module 750 keeps track of all tunnels through the router including the resources used, such as allocated bandwidth and available bandwidth. In addition, there is a configuration table 760, which is stored in a memory and contains tunnel configurations specified by the operator or specified by default. Configuration information may include bandwidth requirements, media requirements, a priority versus other flows and so forth. The router 700 causes the physical link management unit 770 to allocate available resources in accordance with the configuration table 760. The physical link management unit 770 includes byte counters 772 for each tunnel to monitor the traffic flow through the tunnel.
According to the invention, an improved MPLS system determines the actual traffic that flows through the configured TE tunnels and dynamically re-configures the tunnel bandwidth to reflect the traffic flow. The TE module 710, notified of the change, initiates a path setup procedure to find a path that is able to accommodate the adjusted bandwidth. If the calculated path is the same as the current path, the setup procedure may terminate and the current path is used with the new adjusted bandwidth. Alternatively, the setup procedure is initiated as described with respect to FIG. 2, where the newly established tunnel that meets the adjusted bandwidth and other constraints in the configuration table replaces the old tunnel, and the old tunnel is torn down. The actual traffic may be determined by accessing the byte counters 772 kept within the physical link management module 770.
The monitoring of the byte counters may be performed by a sub-module within the link management module 750 or, as shown in the figure, it may be performed by a separate "autobandwidth" module 780. The autobandwidth module has a global timer 782, which is global to the router. At predetermined intervals set by the operator or by default, the global timer 782 triggers a scan of the byte counters associated with the TE tunnels set up in the router. The scan may be performed by the processing unit 612 or by a separate logical unit within the router. Scanning generally includes determining an average byte count for that interval for each tunnel and storing each average byte count in a register (such as in the memory unit 614 in FIG. 6). If there is a previously stored average byte count in the register, then the current average byte count is compared with the previously stored byte count. In one example, the greater of the two values is stored in the register. In this manner, the peak average byte count is stored in the register.
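The scan-and-retain logic above can be sketched as follows; the class and method names are illustrative, not taken from the patent, and the registers are modeled as a simple dictionary.

```python
class AutoBandwidth:
    """Minimal sketch of the peak-average tracking done per scan interval."""

    def __init__(self):
        self.registers = {}  # tunnel id -> stored peak average byte count

    def scan(self, averages):
        """Triggered by the global timer with this interval's averages."""
        for tunnel, avg in averages.items():
            prev = self.registers.get(tunnel, 0)
            self.registers[tunnel] = max(prev, avg)  # keep the peak

    def read_and_reset(self, tunnel):
        """Per-tunnel timer: retrieve the stored peak and reset the register."""
        peak = self.registers.get(tunnel, 0)
        self.registers[tunnel] = 0
        return peak

ab = AutoBandwidth()
ab.scan({"tunnel1": 120})  # first scan interval
ab.scan({"tunnel1": 250})  # a higher average replaces the stored value
ab.scan({"tunnel1": 180})  # a lower average is ignored
peak = ab.read_and_reset("tunnel1")
```

After the three scans, the register holds the peak average (250) for the per-tunnel reading, and reading it resets the register so the next measurement period starts fresh.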
In one embodiment, the autobandwidth module 780 further includes a per tunnel timer 784 that is associated with a particular tunnel and triggers a reading of the register at another predetermined interval to retrieve the stored peak average byte count. The reading may be performed by the processing unit or by the separate logical unit. The retrieved peak average byte count is compared with the current bandwidth of the tunnel and, if an adjustment is required, the autobandwidth module causes the bandwidth to be modified. This may be in the form of the autobandwidth module notifying the TE module that the bandwidth needs to be changed and the TE module changing the configuration table and performing a setup procedure. Alternatively, the autobandwidth module may change the configuration table and notify the TE module of the change, which in turn performs the setup procedure. Once a reading of the register has taken place, the register is reset and the above mentioned procedure is repeated. Access to the autobandwidth module 780 may be performed by supplemental MPLS commands, as will be described.
Note that the reading need not be performed at predetermined intervals; rather, it may be performed according to an event. For example, instead of a timer, the autobandwidth module 780 may have a comparator 785 that compares the difference between the current set bandwidth and the value of the average byte counter with a threshold value. If the absolute value of the difference exceeds the threshold, the autobandwidth module causes the TE module to adjust the bandwidth. If the difference is a positive value, this indicates to the autobandwidth module that the bandwidth needs to be upwardly adjusted. Conversely, if the difference is negative, the bandwidth is downwardly adjusted.
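The comparator's decision can be sketched as follows. One assumption is made explicit: the difference is taken as the measured average minus the configured bandwidth, which is the ordering consistent with the adjustment directions described (positive difference means traffic exceeds the reservation, so the bandwidth is adjusted upward).

```python
def check_adjustment(current_bw, average, threshold):
    """Return the adjustment direction, or None if within the threshold.

    Assumption: the difference is measured traffic minus configured
    bandwidth, matching the up/down directions in the text.
    """
    diff = average - current_bw
    if abs(diff) <= threshold:
        return None  # no adjustment event is raised
    return "up" if diff > 0 else "down"

# Hypothetical values, in the same units as the configured bandwidth.
direction = check_adjustment(current_bw=100, average=180, threshold=50)
```

Compared with the timer-driven approach, this event-driven check reacts as soon as the measured flow drifts far enough from the reservation, rather than waiting for the next per-tunnel interval.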
Referring to the flowchart in FIG. 8, in block 802, a bandwidth for each configured TE tunnel is specified in accordance with constraint-based routing. The bandwidth may be specified individually by an operator, or the bandwidth may be assigned to the tunnels by an algorithm based on a predetermined approximated traffic flow expected to flow in the configured tunnels. In block 804, the tunnels to be selected for dynamic bandwidth adjustment are tagged with a tunnel flag. The tagging may be performed using the "auto-bw" command. The auto-bw command is a supplemental command to the existing MPLS commands and may be as follows:
tunnel mpls traffic-eng auto-bw {frequency <number>} {max-bw <kbs>}
where the value {frequency <number>} is the time interval in which the register that stores the peak average byte count is read (per tunnel timer) and the value {max-bw <kbs>} defines the maximum bandwidth limit of the tagged tunnel. In block 806, a "global timer", which may be in the form of another supplemental MPLS command, is configured to set the fixed interval in which each average byte counter associated with a tagged tunnel is scanned. An example of a global timer command is shown below:
mpls traffic-eng auto-bw timers {frequency <seconds>}
where the value {frequency <seconds>} determines the periodic interval in which the global timer triggers a scan of all the tagged tunnels marked for dynamic bandwidth adjustment. For example, if {frequency <seconds>} is given a value of 300, the global timer triggers a scan of the tunnels at five-minute intervals. Stated differently, once the global timer reaches the configured interval, the timer causes a processing unit to read the average byte counters. As reflected in blocks 808-816, using a conventional polling technique, the processing unit polls the auto-bw marked tunnels and, for each marked tunnel, the stored counter value in a corresponding average counter is retrieved and compared with a previous counter value stored in a register. If the current counter value is greater than the previously stored counter value, then the greater value replaces the previously stored value in the register. Conversely, if the current counter value is less than the previously stored counter value, then the current value is ignored and the register maintains the previously stored value. In this manner, the peak average byte count is retained in the register. In block 818, a determination is made as to whether the per tunnel timer has reached the set time interval. If so, then in block 820, the auto-bw command accesses the TE tunnel configuration table and compares the stored counter value with the con-