Kodialam et al.

US006584071B1

(10) Patent No.: US 6,584,071 B1
(45) Date of Patent: Jun. 24, 2003
`
(54) ROUTING WITH SERVICE LEVEL GUARANTEES BETWEEN INGRESS-EGRESS POINTS IN A PACKET NETWORK

(75) Inventors: Muralidharan S. Kodialam, Marlboro, NJ (US); T. V. Lakshman, Eatontown, NJ (US)

(73) Assignee: Lucent Technologies Inc., Murray Hill, NJ (US)
`
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/366,620
(22) Filed: Aug. 3, 1999
`
(51) Int. Cl.7 ................ H04L 12/56
(52) U.S. Cl. ................ 370/238; 370/238.1; 370/397; 370/399; 370/252; 709/241
(58) Field of Search ................ 370/235-238.1, 370/354, 355, 395, 397, 399, 252, 359; 379/221.06-221.07; 709/235-241
`
`(56)
`
`References Cited
`U.S. PATENT DOCUMENTS
`4,870,639 A * 9/1989 Hayashi et al. ............. 370/415
`5,233,604 A * 8/1993 Ahmadi et al. ............. 370/238
`5,999,286 A * 12/1999 Venkatesan ................. 359/117
`6,301,244 Bl * 10/2001 Huang et al. ............... 370/238
`6,321,271 Bl * 11/2001 Kodialam et al.
`.......... 370/351
`6,347,078 Bl * 2/2002 Narvaez-Guarnieri et al. ... 370/
`230
`6,377,551 Bl * 4/2002 Luo et al. ................... 370/238
`FOREIGN PATENT DOCUMENTS
`0753979 Al * 1/1997
`EP
`* cited by examiner
`
`........... H04Q/11/04
`
`Primary Examiner-Hassan Kizou
`Assistant Examiner-John M. Waxman
`
`(57)
`
`ABSTRACT
`
A packet network of interconnected nodes employs a method of routing with service level guarantees to determine a path through the network for a requested label-switched path (LSP). Each of the nodes includes one or more routers that forward packets based on a forwarding table constructed from paths determined in accordance with the method of routing with service level guarantees. The method of routing with service level guarantees determines the path of the requested LSP based on the effect that routing those packets of the requested LSP may have on current and/or future demands on the capacity of network nodes for currently provisioned LSPs. Such method of routing with service level guarantees may not necessarily route packets of a requested LSP along the shortest path, or minimum number of hops, through the network. Given the packet network and LSP request, a linear programming system may be defined by a set of linear programming equations for a non-split demand case. The linear programming system is based on the network topology, the values of the ingress-egress point pair o and t and demand bd of the LSP request, and the total maxflow values of the existing ingress-egress point pairs for currently provisioned LSPs. To estimate the solution for the linear programming system, a subnetwork is formed using link weights, with links removed that cannot support the requested demand. Link weights are calculated based on the critical links of a pseudo-network in which increased maximum flow along existing paths between ingress-egress point pairs is maintained. A shortest path routing algorithm may then be employed to generate a path, if available, for the LSP request using the subnetwork with the calculated link weights.
`
`12 Claims, 7 Drawing Sheets
`
FIG. 4 (Sheet 2 of 7)

401  EXTRACT ELEMENT (o, t) AND DEMAND bd FROM LSP REQUEST
402  RECEIVE VALUES FOR NETWORK G(N, L, B): NODE INCIDENCE MATRIX M, RESIDUAL CAPACITY VECTOR R, SERVICE LEVEL VECTOR, ELEMENT VECTOR
403  CALL SUBROUTINE CRITICAL LINKS: DETERMINE SET OF CRITICAL LINKS CP
404  COMPUTE LINK WEIGHTS USING SET OF CRITICAL LINKS CP
405  FORM SUBNETWORK OF G ONLY WITH LINKS HAVING RESIDUAL CAPACITY rij FOR DEMAND bd AND THE COMPUTED LINK WEIGHTS
406  CALCULATE PATH THROUGH SUBNETWORK OF G FOR LSP REQUEST USING (E.G.) SHORTEST PATH ROUTING ALGORITHM
407  PROVIDE PATH FOR LSP REQUEST
`
`
`
FIG. 5 (Sheet 3 of 7)

501  INPUT NETWORK VALUES G(N, L, B)
502  COMPUTE MAXFLOW VALUES FOR EXISTING TRAFFIC
503  SET LINK CAPACITY AS THE RESIDUAL CAPACITY OF THE LINKS AND FORM PSEUDO-NETWORK Gb
504  COMPUTE PSEUDO-CAPACITY OF THE LINKS
505  DETERMINE THE SET OF CRITICAL LINKS CP: lij IS IN SET CP IF EITHER THE PSEUDO-CAPACITY OF THE LINK IS 0; NODE nj IS NOT IN SET A; NODE ni IS NOT IN SET B; OR THERE IS NO PATH FROM ni TO nj IN Gb
506  PROVIDE THE SET OF CRITICAL LINKS CP
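Read as pseudocode, FIG. 5 flags links whose use would reduce the maximum flow available to an ingress-egress pair. Purely as an illustration, the following Python sketch tests each saturated link for min-cut membership in the residual graph left behind by the pair's maxflow computation (steps 501-504); the dict-of-dicts residual format is an assumption of this example, and the figure's additional screening against the sets A and B is omitted for brevity.

from collections import deque

def reachable(residual, src):
    # Nodes reachable from src over links with positive residual capacity.
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v, cap in residual.get(u, {}).items():
            if cap > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def critical_links(residual, links):
    # Treat link l_ij as critical when its pseudo-capacity (residual
    # capacity after the pair's maxflow) is 0 and no alternative
    # residual path from n_i to n_j remains, i.e., the link lies on
    # some minimum cut for the pair.
    crit = set()
    for i, j in links:
        if residual.get(i, {}).get(j, 0) == 0 and j not in reachable(residual, i):
            crit.add((i, j))
    return crit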
`
`
`
`
`
`
`
`
`
`
`
ROUTING WITH SERVICE LEVEL GUARANTEES BETWEEN INGRESS-EGRESS POINTS IN A PACKET NETWORK
`
CROSS-REFERENCE TO RELATED APPLICATIONS
`
This application is related to U.S. patent application Ser. No. 09/366,619, filed on the same date as this application, the teachings of which are incorporated herein by reference.
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
The present invention relates to packet routing in telecommunication systems, and, more particularly, to determining paths through nodes of a packet network for routing of packets with guaranteed service levels.
`2. Description of the Related Art
In interconnected communications packet networks, such as the Internet, users establish connections between a source and a destination with a stream of data packets, called packet flows, that are transferred through the network over a network path. The network path is defined by a set of nodes interconnected by a set of links. Packet networks may have a hierarchical structure in which smaller networks are interconnected by larger networks, and a peer structure in which equivalent networks are interconnected. At the highest levels, packet networks with high capacity that route packets transferred between other, outside packet networks are commonly referred to as "backbone" networks. A packet network connects to one or more other packet networks through ingress and egress points (routers) of the backbone network.
FIG. 1 shows a backbone network 100 of the prior art having nodes N1-N9 interconnected through links 101 that allow communication between packet networks 102-104. An ingress point is a router of node N1 that transfers packets to the backbone network 100 from a source (packet network 102), and an egress point is a router of node N4 that transfers packets from the backbone network 100 to a destination (packet network 104). The backbone network 100 may support an interior routing protocol to distribute network topology information and route packets between ingress and egress points based on best-effort routing (e.g., destination-based shortest path) through the nodes N1-N9. A centralized network management system 105 may be employed to 1) provision virtual circuits, or packet flows, through the backbone network 100; 2) monitor capacity and utilization of links 101; and 3) coordinate calculation and installation of provisioned paths. Forwarding tables are used by each node's router to forward each received packet to the next node toward its destination. In addition, the centralized network management system 105 may also be employed to collect and distribute network topology information.
Interior routing protocols are employed by routers to determine forwarding of packets between a source and destination pair along a path through the nodes of the interconnected packet network. Packets received by a node's router are forwarded to other nodes based on a forwarding table constructed in accordance with the interior routing protocol or routes installed with explicit route provisioning. Interior routing protocols may also specify the exchange of network topology and link-state information ("network topology information") among routers to allow a node's router to construct the corresponding forwarding table. An example of a widely used interior routing protocol for "best effort" routing is the Open Shortest Path First (OSPF) protocol. In addition, some routing protocols associate a link "cost" with each link between nodes. This link cost may be associated with, for example, average link utilization or revenue generated by the link, as well as link importance in the network. When link-state or link-bandwidth information (e.g., connectivity or available bandwidth) is exchanged between routers, each router in the network has a complete description of the network's topology.
Since routing of packets at the higher levels is desirably performed at high speed, each higher-level packet network may use its own interior routing protocol in addition to the interior routing protocol of the lower-level packet network. Routing protocols, in addition to providing connectivity, may also enable traffic management. The Multi-Protocol Label Switching (MPLS) standard, for example, allows such routing protocols in backbone networks. The MPLS standard may be employed for networks having virtual circuits (packet flows) with provisioned service levels (also known as guaranteed quality-of-service (QoS)).
Provisioned service levels may be, for example, a guaranteed minimum bandwidth for the path of a packet flow through the backbone network. This path having a guaranteed level of service between ingress and egress points may be referred to as a Network Tunnel Path (NTP). As would be apparent to one skilled in the art, specific implementations of NTPs exist for different types of networks. As examples of NTPs, virtual circuits may be established for packet flows in TCP/IP networks, virtual circuits may be established for cells in Asynchronous Transfer Mode (ATM) networks, and label-switched paths (LSPs) may be established for packets in MPLS networks. Packets of a signaling protocol, such as RSVP (Reservation Protocol for IP and MPLS networks) or LDP (Label Distribution Protocol for MPLS networks), may be used to reserve link bandwidth and establish an NTP once routing for the NTP is calculated. NTPs may be provisioned as an explicit route along specific paths between nodes of the backbone network (i.e., when an NTP is provisioned, all intermediate points may be specified through which a packet passes between the ingress and egress points of the NTP).
In MPLS networks, packets are encapsulated by appending to the packet, or forming from the packet, additional information when the packet is received at an ingress point. The additional information, called a label, is used by routers of the backbone network to forward the packets. FIG. 2 shows such an encapsulated packet 200 having a label 201 appended to packet 202. The label summarizes information in the packet header. The summary may be based on the header field and include an origination (source) address field (o) 210 identifying the address of the ingress point and a termination (destination) address field (t) 211 identifying the address of the egress point(s). In some cases, the label may simply be a pointer that identifies or is otherwise related to specific origination and termination address fields in the header of the received packet. The label also includes one or more service-level fields (bd) 212. Service level field 212 may identify a desired service level for the virtual circuit (called a "demand"), such as minimum bandwidth required. In some networks, the service level field is implied from the label itself. Other fields 213 may be included in the label 201, such as MPLS standard version, interior routing protocol version, maximum delay, or other types of service-level parameters. The label 201 may alternatively be inserted into the packet header (PH) 214 of the packet 202, so the order of fields shown in FIG. 2 is exemplary only. Backbone networks may employ labels to group encapsulated packets having similar LSPs into classes (equivalence classes), and methods for forwarding equivalence classes may be employed to simplify calculation of routing for LSPs.
To generate a forwarding table, each router computes a set of preferred paths through the network nodes, and may use the weights to calculate the set of preferred paths. Each preferred path has a minimum total weight between nodes as well as minimum summed weight through nodes of the path, which is known in the art as shortest-path routing. This set of preferred paths may be defined with a shortest-path tree (SPT). The forwarding table with routing information (e.g., source-destination pair, source ports, and destination ports) is generated from the SPT. The router uses the routing information to forward a received packet to its destination along the shortest path of the SPT. The SPT may be calculated using an algorithm such as Dijkstra's algorithm, described in E. Dijkstra, "A Note on Two Problems in Connexion with Graphs," Numerische Mathematik, vol. 1, 1959, pp. 269-271.
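By way of illustration, the SPT computation just described can be rendered compactly in Python; the adjacency-map format and function name below are choices made for this sketch, not anything prescribed by the text.

import heapq

def dijkstra_spt(graph, source):
    """Shortest-path tree from `source`; `graph` maps each node to a
    dict {neighbor: link weight}. Returns (dist, parent); the parent
    map encodes the SPT from which a forwarding table toward each
    destination can be read off."""
    dist = {source: 0}
    parent = {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry left behind by a later improvement
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent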
A common shortest-path routing algorithm employed by routers to generate routing of an LSP is the min-hop algorithm. In the min-hop algorithm, each router calculates a path through the backbone network for the stream of packets (packet flows) between the ingress and egress point. Each router constructs a path for routing the packet flow from the ingress point to the egress point with the least number ("min") of feasible links ("hops") (a feasible link is a link that exists and has sufficient capacity to route the packet flow). Routing schemes of the prior art, such as shortest-path routing, forward packets based only on destination addresses and use only static and traffic-characteristic-independent link weights to calculate paths for routing tables. Some links on the shortest path between certain pairs of ingress and egress points may be congested while other links on alternate paths are under-utilized.

A signaling mechanism, such as RSVP or LDP, may be employed to both reserve and establish a connection through the network for a packet flow. The signaling mechanism may specify quality-of-service attributes for the LSP traversing the backbone network. Link congestion caused by shortest-path routing of multiple LSPs may cause rejection of reservation requests by signaling mechanisms, even though sufficient levels of service (quality-of-service guarantees) for the LSP may have existed in alternate, under-utilized paths that are only slightly longer. Available network resources are not efficiently utilized when shortest-path routing is employed.
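Because min-hop counts only feasible links, it reduces to a breadth-first search over links with sufficient residual capacity. The following is a minimal sketch under that reading; `residual` and `demand` are illustrative names for this example.

from collections import deque

def min_hop_path(graph, residual, src, dst, demand):
    """Breadth-first search over feasible links (residual >= demand).
    `graph` maps node -> iterable of neighbors; `residual` maps
    (u, v) -> remaining bandwidth on directed link u->v.
    Returns the node list of a fewest-hops feasible path, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph.get(u, ()):
            if v not in parent and residual.get((u, v), 0) >= demand:
                parent[v] = u
                queue.append(v)
    return None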
`
SUMMARY OF THE INVENTION

In accordance with embodiments of the present invention, data is routed along paths through a network of nodes interconnected by links and having a plurality of ingress-egress point pairs. Routing data along paths includes receiving a request for a path with a service demand for routing data between one of the ingress-egress point pairs of the network. A subnetwork of nodes and links is defined from the network based on the service demand. A set of weights is defined for the links of the subnetwork taking into account impacts to existing service levels of paths corresponding to the other ingress-egress point pairs of the network. A shortest path through the subnetwork is determined in accordance with the set of weights for the links, the shortest path being the path with the service demand between the one of the ingress-egress point pairs of the network.

For some exemplary embodiments, the routing method determines a preferred path for a new network tunnel path (NTP) request. The preferred path is defined between a pair of ingress and egress points through the nodes of a packet network and maintains provisioned and requested levels of Quality of Service (QoS). The preferred path through the nodes is selected so as to reduce impact to service levels of paths of other currently provisioned NTPs passing through each particular node that is selected for the preferred path. The routing method determines the preferred path based on shortest path routing through a subnetwork derived from the network topology, the subnetwork defined with link weights determined from critical links. The critical links are links identified based on remaining bandwidth on the links, position in the network relative to ingress-egress point pairs, and a criterion for generally maintaining relatively high QoS or other service levels over all paths between ingress-egress point pairs. Generally, the routing method maintains relatively high service levels (e.g., bandwidth) available in existing paths for future NTP requests between all pairs of ingress and egress points in the network after routing the new NTP request.

BRIEF DESCRIPTION OF THE DRAWINGS

Other aspects, features, and advantages of the present invention will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which:

FIG. 1 shows a backbone network of the prior art having nodes interconnected through links that allow communication between lower-level packet networks;

FIG. 2 shows an encapsulated packet employed by the backbone network of FIG. 1 to route packets from an ingress point to an egress point;

FIG. 3 shows a network of interconnected nodes that employs a method of routing with service level guarantees for routing label-switched paths in accordance with the present invention;

FIG. 4 is a flow chart showing an exemplary method of routing packets in accordance with the present invention;

FIG. 5 is a flow chart corresponding to the step of determining critical links shown in FIG. 4;

FIG. 6 is a block diagram of a routing processor that may employ the exemplary method of routing packets shown in FIGS. 4 and 5;

FIG. 7 shows an exemplary network of fourteen nodes having four ingress-egress point pairs and employed for simulations of exemplary implementations of the present invention;

FIG. 8 is a graph of simulation results showing decrease in available flow between an ingress and egress point pair for increasing demand of LSP requests;

FIG. 9 is a graph of simulation results showing total available flow between an ingress and egress point pair for increasing demand of LSP requests;

FIG. 10 is a graph of simulation results showing total available flow between all ingress-egress point pairs as a function of demand for the MHA and WSP routing algorithms and MIRA;

FIG. 11 is a graph of simulation results showing acceptance of new LSP requests by a network for several trials of a static case when MHA, WSP, and MIRA are employed;

FIG. 12 is a graph of simulation results showing acceptance of new LSP requests by a network for several trials of a dynamic case when MHA, WSP, and MIRA are employed; and

FIG. 13 is a graph of simulation results showing the number of rejected LSP requests by a network employing MHA, WSP, and MIRA when there is a link failure.
`
`
`
`
`DETAILED DESCRIPTION
`
FIG. 3 shows a network 300 of interconnected nodes N1-N11 that employs an exemplary implementation of the routing method with service level guarantees in accordance with the present invention. The routing method determines a path through network 300 for a request for a network tunnel path, such as a label-switched path (LSP). Each of the nodes N1-N11 includes one or more routers that forward packets based on a forwarding table constructed from paths determined in accordance with the routing method. The routing method may route packets of the requested LSP based on the effect that routing those packets of the requested LSP may have on current and/or future demands on the capacity of network nodes for currently provisioned LSPs. The effect is also referred to as "interference" by routing packets of the LSP request along a path. Such routing method may not necessarily route packets of a requested LSP along the shortest path, or minimum number of hops, through the network. Such routing should maintain as much available capacity on the links as possible, based on a predefined criterion, after routing of the LSP request.
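The flow of FIG. 4 can be restated compactly in code. The following Python skeleton is a minimal sketch, not the patent's exact procedure: weighting a link by how many other pairs count it as critical is an illustrative rule, `critical_sets` is a hypothetical input mapping each ingress-egress pair to its critical links (cf. FIG. 5), and `dijkstra_spt` is the routine sketched earlier.

def route_lsp_request(graph, residual, critical_sets, request):
    """Skeleton of FIG. 4 (steps 401-407).
    graph:         node -> iterable of neighbor nodes
    residual:      (i, j) -> residual capacity r_ij
    critical_sets: (s, d) -> set of critical links C_P for that pair
    request:       (o, t, bd) extracted from the LSP request"""
    o, t, bd = request
    # Step 404: weight each link by how many other pairs' critical
    # sets contain it (an illustrative weighting, not the patent's
    # exact computation).
    weights = {}
    for u, nbrs in graph.items():
        for v in nbrs:
            weights[(u, v)] = 1 + sum(
                (u, v) in crit
                for pair, crit in critical_sets.items()
                if pair != (o, t)
            )
    # Step 405: subnetwork of G restricted to feasible links
    # (residual capacity at least the demand bd).
    sub = {
        u: {v: weights[(u, v)] for v in nbrs if residual.get((u, v), 0) >= bd}
        for u, nbrs in graph.items()
    }
    # Step 406: shortest path on the weighted subnetwork.
    _, parent = dijkstra_spt(sub, o)
    if t not in parent:
        return None  # no feasible path; reject the request
    path, node = [], t
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]  # step 407: provide the path for the LSP request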
While the exemplary embodiments of the present invention are described herein for networks employing the MPLS standard with path requests with associated service levels, such as LSP requests, the present invention is not so limited. The present invention may be employed where requests are received for Network Tunnel Paths (NTPs) having a guaranteed level of service between ingress and egress points. NTPs may be, for example, virtual circuits for packet flows in TCP/IP networks, connections of cells in Asynchronous Transfer Mode (ATM) networks, and LSPs for packets in MPLS networks. In accordance with the present invention, routing for a new NTP request is calculated based on network topology, bandwidth (or other service level) of the links, and remaining bandwidth (or other service level) of the links when existing NTPs are routed and a request for a new NTP is received.
A network of interconnected nodes such as network 300 is defined as G(N, L, B), where N is the set of routers (nodes), L is the set of links (arcs), and B is the set of available resources for corresponding links in the set L (referred to herein as the set of link bandwidths, which may be residual bandwidths described below). For the exemplary embodiments described herein, the value for available resources such as service level is bandwidth capacity of a link or path, but the service level values may be link parameters such as delay, probability of packet loss, revenue, or other quality-of-service parameter. As known in the art, these other service level values may be converted to a quantity termed effective bandwidth. A link lij in the set of links L has two subscripts, i and j (0 < i, j ≤ N), representing the nodes ni and nj connected by link lij. Without loss of generality, each link lij is directional (packet flows go from node ni to node nj).
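As an illustrative aside, the triple G(N, L, B) maps directly onto a small data structure; the class and field names below are choices made for this sketch.

from dataclasses import dataclass, field

@dataclass
class Network:
    """G(N, L, B): nodes N, directed links l_ij, and bandwidths b_ij in B."""
    nodes: set = field(default_factory=set)        # the set N
    bandwidth: dict = field(default_factory=dict)  # (i, j) -> b_ij, the set B

    def add_link(self, i, j, b_ij):
        """Add directed link l_ij (packet flows go from node n_i to n_j)."""
        self.nodes.update((i, j))
        self.bandwidth[(i, j)] = b_ij

    def links(self):
        """The set L of directed links."""
        return set(self.bandwidth)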
Sources S1, S2, and S3 in FIG. 3 may be packet networks providing packet flows to routers in nodes N1, N6, and N10, respectively. Similarly, destinations D1, D2, and D3 may be packet networks receiving packet flows from routers in nodes N5, N9, and N11, respectively. Sources S1, S2, and S3 are defined as ingress points, while destinations D1, D2, and D3 are defined as egress points. Ingress-egress point pairs are defined as (S1, D1), (S2, D2), and (S3, D3), and each node may support one or more sources, or one or more destinations. Nodes N1-N11 may also have, or have access to, current network topology and link status information (hereinafter referred to as "network topology"), which may be provided and distributed through the network using a distributed protocol (e.g., by control packets conforming to the OSPF protocol).
Sources S1, S2, and S3 generate packets for new or currently provisioned LSPs in network 300 that include fields identifying the ingress-egress point pair (e.g., address of either source S1, S2, or S3 and address of destination D1, D2, or D3). Signaling packets of, for example, RSVP or LDP may be used to communicate quality-of-service (QoS) attributes or guarantees, such as bandwidth, to network elements (routers of nodes); however, packets of LSPs may also include values for one or more service-level parameters corresponding to QoS attributes or guarantees. These packets of LSPs transferred through network 300 may conform to the MPLS standard and may have a format similar to that shown and described with respect to FIG. 2.
For network 300 shown in FIG. 3, three potential ingress-egress point pairs (source-destination pairs) (S1, D1), (S2, D2), and (S3, D3) are shown. For the following discussion, each link lij interconnecting adjacent nodes ni and nj has an associated available capacity, termed residual bandwidth. Residual bandwidth of a link lij is the difference between the total bandwidth of the link and the sum of the bandwidth demands of LSPs that are currently assigned to that link. Networks may exchange information regarding residual capacity of links (such as in QoS shortest path first (QoSPF) networks), which may be employed for distributed calculation of routes. Residual bandwidth may commonly be expressed in, for example, kbits/sec or Mbits/sec, or may be expressed as a percentage of the link's total capacity. Each link lij interconnecting adjacent nodes ni and nj may also have an associated link cost. Link cost may be assigned to a particular link to allow routing algorithms to favor or disfavor routing through the particular link because of, for example, delay, cost to provide bandwidth, other traffic engineering considerations, or other physical link-layer considerations.
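The residual-bandwidth definition above is straightforward to compute; a minimal sketch, assuming each provisioned LSP is represented as a (path, demand) pair (an illustrative format, not one given in the text):

def residual_bandwidth(bandwidth, provisioned_lsps):
    """r_ij = b_ij minus the summed demands of LSPs routed on l_ij.
    `bandwidth` maps (i, j) -> total b_ij; each LSP is (path, demand),
    where `path` is a list of nodes."""
    residual = dict(bandwidth)
    for path, demand in provisioned_lsps:
        for i, j in zip(path, path[1:]):  # consecutive links on the path
            residual[(i, j)] -= demand
    return residual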
In general, a request arrives at network 300 to provision and route a path between an ingress point o and egress point t, and also to provide a route having a requested service level of bd (a "demand" bd). For the exemplary network of FIG. 3, this may be an LSP or other form of NTP request to provision a path between source-destination pair (S1, D1) with a requested bandwidth of bd Mb/sec. LSP requests may arrive one at a time, with no a priori knowledge of the characteristics of demands for bandwidth by future LSP requests. In addition, no a priori knowledge of the characteristics of QoS attributes or guarantees; connection arrivals, hold times, or departures; and other traffic engineering information is necessarily available. The demand bd may be an "equivalent" or "effective" bandwidth value since the packets of a packet flow may represent a stochastic process with varying bandwidth needs. As is known in the art, service-level (e.g., QoS) attributes or requirements may be translated into an equivalent or effective bandwidth value. The equivalent or effective bandwidth value is a deterministic value approximating the stochastic variable based on peak and average packet rate, arrival and hold times, and connection duration.
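The text leaves the effective-bandwidth translation to known art. Purely as an illustration, one common style of estimator caps a mean-plus-burstiness term at the peak rate; the formula below is an assumption of this sketch, not the patent's definition.

def effective_bandwidth(peak_rate, mean_rate, burst_allowance=0.0):
    """Illustrative deterministic stand-in for a stochastic demand:
    never more than the peak, never less than the mean, with an
    adjustable allowance for burstiness in between."""
    return min(peak_rate, mean_rate + burst_allowance)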
Routing in accordance with the present invention evaluates and routes the LSP along paths through the network between ingress-egress point pairs. The set P is the set of specific (distinguished) node ingress-egress point pairs included in the network G(N, L, B) that are the potential ingress-egress point pairs (i.e., source-destination pairs (S1, D1), (S2, D2), . . . , (Sk, Dk)). An element of the set P is denoted as (s, d) (i.e., (s, d) ∈ P), where s and d correspond to a source and a destination node. Multiple LSPs may be provisioned between an element (s, d).
Each element (s, d) has an associated maximum packet flow (maxflow) value θsd that corresponds to an upper bound for the total bandwidth (or other service level) in a path that may be routed between an ingress-egress point pair, or element, (s, d), after routing with demands of currently provisioned LSP requests. The value θsd changes as bandwidth across links in the set L is assigned and re-assigned when provisioning and deleting LSPs. In addition, each element (s, d) of the set P may have an associated scalar weight αsd that may correspond to a relative usage, importance, or cost of the particular ingress-egress point pair. The maxflow value θsd may decrease (increase) when bandwidth is allocated (released) for LSPs between the ingress-egress point pair (s, d). The maxflow value θsd may also decrease (increase) when bandwidth is allocated (released) for LSPs between other ingress-egress point pairs in the set P whose paths share common links between nodes with the path of the element (s, d).
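The maxflow value θsd can be computed with any standard maximum-flow routine; by way of illustration, a compact Edmonds-Karp sketch follows (the algorithm choice and data format are assumptions of this example).

from collections import deque

def max_flow(capacity, s, d):
    """Edmonds-Karp maximum flow from s to d. `capacity` maps
    u -> {v: capacity}; it is consumed in place, so what remains
    afterwards is the residual graph (as used in the FIG. 5 sketch)."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and d not in parent:  # BFS for an augmenting path
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if d not in parent:
            return flow  # no augmenting path remains
        bottleneck, v = float("inf"), d
        while parent[v] is not None:  # find the path bottleneck
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v])
            v = u
        v = d
        while parent[v] is not None:  # augment along the path
            u = parent[v]
            capacity[u][v] -= bottleneck
            rev = capacity.setdefault(v, {})
            rev[u] = rev.get(u, 0) + bottleneck
            v = u
        flow += bottleneck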
An LSP request to network 300 may be made either through a centralized network management system (not shown in FIG. 3) or by control messages provided to nodes N1-N11 of the network 300 in accordance with a distributed protocol. Either a centralized network management system and/or each network router implements an exemplary routing method for the LSP request and determines a path to be provisioned through the network corresponding to the requested LSP. Provisioning by either the centralized network management system and/or each network router allows RSVP control (e.g., QoS requests of the RSVP signaling protocol) to establish connections (packet flows) with, for example, a demanded bandwidth or other type of service level.
The node-arc incidence matrix M is defined as an f×g matrix in which each row corresponds to a different node n of the set N, and each column corresponds to a different link l of the set L. The value for f is equal to the number of elements of the set N, and the value of g is equal to the number of elements of the set L. Each column has two non-zero entries (i, j) for the corresponding link lij between nodes ni and nj. The column corresponding to link lij has a "+1" value in row i, a "-1" value in row j, and a "0" value in each position corresponding to all other rows.
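The matrix M can be constructed directly from an ordered node list and link list; a minimal sketch (the row and column ordering is an arbitrary choice made for the example):

def incidence_matrix(nodes, links):
    """Build the f x g node-arc incidence matrix M: the column for
    link l_ij has +1 in row i, -1 in row j, and 0 elsewhere.
    `nodes` and `links` are ordered lists; returns a list of rows."""
    row = {n: r for r, n in enumerate(nodes)}
    M = [[0] * len(links) for _ in nodes]
    for col, (i, j) in enumerate(links):
        M[row[i]][col] = +1
        M[row[j]][col] = -1
    return M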
The residual capacity vector R of length g is the vector including the residual capacities for the set of links L. The values in the vector R are initialized to the corresponding link bandwidth values bij of the set B. After all current LSP requests are routed, the residual capacity rij of link lij is defined as the difference between 1) the total link service level (e.g., bandwidth) bij of link lij and 2) the sum of the service levels (e.g., bandwidth demands) of LSP requests currently routed on link lij. For the network, if the residual bandwidth of a link is greater than or equal to the bandwidth value of the requested LSP, the link is referred to herein as a feasible link (with respect to the given demand of the LSP request).
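The feasible-link test above amounts to a filter over the links, which is one way the subnetwork of step 405 in FIG. 4 can be formed; a minimal sketch (the dict formats are assumptions of this example):

def feasible_subnetwork(links, residual, weights, bd):
    """Keep only links whose residual capacity r_ij can carry demand bd,
    attaching the computed link weights; returns node -> {neighbor: weight}."""
    sub = {}
    for i, j in links:
        if residual.get((i, j), 0) >= bd:
            sub.setdefault(i, {})[j] = weights.get((i, j), 1)
    return sub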
The service level vector xsd is a vector of length g corresponding to the element (s, d). The values in the vector xsd correspond to the route of packet flows in the set of links L to achieve the maxflow values αsd·θsd, as defined and described subsequently, between element (s, d) along the corresponding links of the ingress-egress point pair. Each position in the vector has a value that corresponds to a link.
The element vector esd is a vector of length f for elements (s, d) having a "+1" value in position s, a "-1" value in position d, and a "0" value in the other positions.
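For completeness, the element vector e_sd is a one-line construction:

def element_vector(nodes, s, d):
    """e_sd: +1 in position s, -1 in position d, 0 elsewhere (length f)."""
    return [+1 if n == s else -1 if n == d else 0 for n in nodes]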
`
An integer programming system may be formed for a non-split demand added to the existing network. For the non-split demand case, LSP requests arrive one at a time, and the demand bd (i.e., packets of the packet flow) of the LSP request may only be allocated along one path within the network G(N, L, B). The solution for the non-split demand is a set of paths for all LSPs (new and currently provisioned) that desirably maximizes the maxflow values θsd and θot, for all ingress-egress point pairs (all elements in the set P, including the element (o, t)). The integer programming system for the non-split demand implementation may be represented by equations (1) through (6), beginning with the objective

    max Σ(s,d)∈P αsd θsd    (1)
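The remainder of the system, equations (2) through (6), is cut off in this copy. Purely as a hedged reconstruction consistent with the quantities defined above (M, x_sd, e_sd, R, θsd, αsd), a formulation of this general shape would be expected, with the non-split requirement additionally forcing the new demand bd for (o, t) onto a single path; the patent's exact equations may differ:

\begin{aligned}
\text{maximize}\quad & \textstyle\sum_{(s,d)\in P} \alpha_{sd}\,\theta_{sd} \qquad (1)\\
\text{subject to}\quad & M\,x_{sd} = \theta_{sd}\,e_{sd}, \quad \forall\,(s,d)\in P \quad \text{(flow conservation)}\\
& \textstyle\sum_{(s,d)\in P} x_{sd} \le R \quad \text{(residual link capacities)}\\
& x_{sd} \ge 0, \quad \forall\,(s,d)\in P.
\end{aligned}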