[FIG. 1A (Prior Art): segment of a conventional LSP based network 100 — label switches, a 1st LSP 108, a 2nd LSP 110a, and aggregate flows 112a]

[FIG. 1B (Prior Art): the conventional LSP based network 100 with an unusable LSP, showing label switch 104]

[FIG. 2B: detailed view of a micro-flow — 1st packet 202 (label 208, QoS 210, data 212), data packet 204 (label 208, data 212), request RMV packet 205a, RMU packet 205b with rate indicator, RMI packet 205c with internal resource utilization, RME packet 205d with external resource utilization, and close packet 206, each carrying a label field 208]

[FIG. 4D: high level block diagram of a network switch 302/306 — linecards 410A, 410B, 410C, and 410D, each connected to a trunk line, interconnected through switch core 430]

[FIG. 5: block diagram of a linecard — Ingress Micro-Flow Manager 505 (network trunk line interface 510, micro-flow recognizer 520, micro-flow classifier 530, policing scheduler 540); Memory 550 (storage block table 560, flow block table 570, policy table 580, layers table 590, forwarding table 595, routing table 597); an egress micro-flow manager section (network trunk line interface 515, QoS scheduler 525, micro-flow recognizer 535); connections to the trunk lines and to switch core 430]

[FIG. 7: flowchart of a method 700 — START (702); define a plurality of pre-defined label switched paths (LSPs) (704); generate micro-flow from received data (706); select a LSP based on the QoS service type of the generated micro-flow (708); transmit micro-flow data along the selected LSP (710); DONE (712)]

[FIG. 8: flowchart of a method 800 — START (802); determine the destination of the micro-flow (804); create a destination set of LSPs providing access to the destination of the micro-flow (806); obtain resource utilization information for each LSP in the destination set of LSPs (808); generate a QoS set of LSPs from the destination set of LSPs that is capable of providing desired QoS (810); select a LSP from the QoS set of LSPs having the least utilization of resources (812); DONE (814)]

MICRO-FLOW LABEL SWITCHING
`
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part (CIP) of U.S. patent application Ser. No. 09/552,278, filed on Apr. 19, 2000, now U.S. Pat. No. 6,574,195, entitled "MICRO-FLOW MANAGEMENT," which is hereby incorporated by reference in its entirety.
This application also is related to U.S. patent application Ser. No. 09/699,199, filed Oct. 27, 2000, now abandoned and entitled "SYSTEM AND METHOD FOR UTILIZATION BASED MICRO-FLOW LABEL SWITCHING," which is hereby incorporated by reference in its entirety.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to computer networking, and more particularly to micro-flow based label switched path utilization over a computer network.

2. Description of the Related Art

Due to high customer demand for increasingly reliable and differentiated services, today's Internet Service Providers (ISPs) are constantly faced with the challenge of adapting their networks to support increased customer demand and growth. As a result, many ISPs rely upon conventional switches and network servers to connect dial-in port concentrators to backbone networks, such as the Internet. In the past, these servers and port concentrators typically communicated with each other through the use of an Internet Protocol (IP), while the port concentrators typically communicated with the network backbone through asynchronous transfer mode (ATM) protocol.

The previously described configuration is often referred to as an IP over ATM model. IP over ATM uses an overlay model in which a logical IP routed topology runs over, and is independent of, an underlying Open Systems Interconnection (OSI) Layer 2 switched ATM topology. The OSI Reference Model is the International Organization for Standardization's (ISO) layered communication protocol model. The Layer 2 switches provide high-speed connectivity, while the IP routers at the edge, interconnected by a mesh of Layer 2 virtual circuits, provide the intelligence to forward IP datagrams.

Although ATM switches provide high bandwidth, the requirement of costly network interface cards, a 10% "cell tax" overhead, numerous system interrupts, and poor routing stability reduce their effectiveness. Moreover, the growth of Internet services and Wavelength Division Multiplexing (WDM) technology at the fiber level has provided a viable alternative to ATM for multiplexing multiple services over individual circuits. In addition, ATM switches currently are being out-performed by Internet backbone routers, and multilayer switching paradigms, such as Multiprotocol Label Switching (MPLS), offer simpler mechanisms for packet-oriented traffic engineering (TE) and multiservice functionality. Hence, many ISPs currently utilize MPLS technology, instead of ATM technology, to provide label path switching as the method to interconnect multiple transit devices.

The basic function of MPLS is to provide a Layer 2 Label Switched Path (LSP), which is similar to ATM, to transport one or more traffic flows over a predetermined path. The path is generally traffic engineered (TE) to maximize the usage of the physical links within the network that were under-utilized by the existing routing algorithm.
Once an LSP is established, the LSP becomes a logical link and is integrated into the Layer 3 routing topology. When a packet is transported over the LSP, a Layer 2 switching function is performed for fast packet forwarding.

A conventional MPLS protocol includes a signaling component and a forwarding component. The signaling component is used to establish LSPs based on either traffic engineered information or dynamic routing information. Once a LSP is established, the associated incoming and outgoing labels at each label switched router (LSR) form a forwarding entry in a MPLS forwarding table, which is used by the forwarding component to perform fast packet forwarding on the labeled MPLS packets.

When packets arrive, the forwarding component searches the forwarding table to make a routing decision for each packet. Specifically, the forwarding component examines the incoming MPLS label and searches for a match. If there is a match, the packet is directed to the appropriate outgoing interface across the system's switching fabric.

The header of each packet is generally given a label, which is a short, fixed-length value that identifies a Forwarding Equivalence Class (FEC) for the packet. Each FEC is a set of packets that are forwarded over the same path through a network, even if the individual packets' ultimate destinations are different. Label switches use the FEC to determine which LSP to utilize for transmitting the packet. It should be noted that a plurality of FECs may be mapped to the same LSP, and likewise, more than one LSP may be mapped to each FEC. The packet is then transmitted using the selected LSP, which defines an ingress-to-egress path through the network that is followed by all packets assigned to a specific FEC.

In the core of the network, label switches ignore the packet's network layer header and simply forward the packet using the packet's label. Basically, when a labeled packet arrives at a label switch, the forwarding component uses the input port number and label to perform an exact-match search of its forwarding table. When a match is found, the forwarding component retrieves the next hop address from the forwarding table and directs the packet to the outbound interface for transmission to the next hop in the LSP.
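
By way of illustration only, the exact-match lookup just described might be sketched as follows; the table layout, the choice of (input port, incoming label) as the key, and all names here are assumptions of this sketch rather than any particular LSR's implementation.

from typing import NamedTuple, Optional

class ForwardingEntry(NamedTuple):
    out_label: int       # outgoing label installed when the LSP was signaled
    next_hop: str        # address of the next hop in the LSP
    out_interface: str   # outbound interface toward the next hop

# Keyed on (input port, incoming label), as described above; entries are
# installed by the signaling component when an LSP is established.
forwarding_table: dict[tuple[str, int], ForwardingEntry] = {
    ("port1", 17): ForwardingEntry(42, "10.0.0.2", "port3"),
}

def forward(in_port: str, in_label: int) -> Optional[ForwardingEntry]:
    # Exact-match search; a hit directs the packet across the switching
    # fabric to the outbound interface carrying the outgoing label.
    return forwarding_table.get((in_port, in_label))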
This allows the OSI interconnection layer to bypass examining the individual Layer 3 destinations and to simply route based upon the MPLS labels. Moreover, a MPLS topology allows a network control application to monitor the LSPs that are established in the network, and allows the application to create new LSPs as the traffic between routers changes, either in direction or in size.

However, traffic from users is not always predictable, and hence excess capacity must be provided in each LSP to ensure available bandwidth during rerouting. High levels of flow aggregation require a higher amount of excess bandwidth to be provisioned on each LSP to support transient "burstiness" of the traffic load, which is an extremely inefficient use of network resources. Moreover, routing with aggregated flows through the network causes significant strain and shift as the network re-aligns during failure, as shown next with reference to FIGS. 1A and 1B.

FIG. 1A is an illustration showing a conventional LSP based network 100. The conventional LSP based network 100 includes label switches 102, 104, and 106, a first LSP 108 connecting label switch 102 to label switch 106, and a second LSP 110a connecting label switch 102 to label switch 104. Finally, each LSP includes a large aggregate flow 112a and 112b, comprised of a plurality of individual data flows.
Each individual flow is a group of IP data packets from a single data transmission, wherein each IP data packet in the flow includes the same source address, destination address, source port, destination port, and IP protocol type. In addition, each packet of the flow follows the preceding packet by no more than a predetermined amount of time, for example, 2 milliseconds (ms).

During normal operation the conventional LSP based network 100 functions satisfactorily. Specifically, the individual flows included in the aggregate flows 112a and 112b of each LSP 108 and 110a reach their respective destinations in a satisfactory manner. However, when unexpected rerouting occurs, problems arise, as shown next with reference to FIG. 1B.

FIG. 1B is an illustration of a conventional LSP based network 100 having an unusable LSP. In FIG. 1B, the second LSP 110a connecting label switch 102 to label switch 104 is no longer usable, for example, because of congestion or physical wire failure. In this case, the aggregate flow 112b included in the second LSP 110a must be rerouted along a third LSP 110b, which connects label switch 102 to label switch 104 via label switch 106. However, as shown in FIG. 1B, the third LSP 110b and the first LSP 108 share a common path, specifically, the connection between label switch 102 and label switch 106. Hence, the first LSP 108 must include enough bandwidth to accommodate the entire bandwidth of the second LSP 110a, which was rerouted to the third LSP 110b.

FIG. 1B illustrates the difficulty of rerouting large aggregate flows. To ensure available bandwidth for rerouting, LSP 108 must always reserve enough bandwidth to accommodate adjacent LSPs in the event of a reroute, such as rerouted LSP 110b shown in FIG. 1B. Since such a large bandwidth reserve, or threshold, is needed to accommodate unexpected flow increases, link utilization is low. For example, if 50% of a link's bandwidth is utilized by its aggregate flow, the other 50% may have to be reserved in case of an unexpected flow reroute. Hence, 50% of the link bandwidth is not utilized during general operation.

Moreover, fault recovery in a conventional LSP based network is slow, typically taking 20 seconds to 2 minutes. This slow fault recovery time results from a lack of control over the individual flows within each LSP. As stated previously, a conventional LSP based network routes each individual flow to a particular LSP based on the FEC associated with each individual flow. However, once an individual flow is routed to a particular LSP, the network can no longer efficiently alter the path of the individual flow. As a result, if a particular switch or area of an LSP is disabled, local repair can occur; however, the new path will not be the most efficient at transporting the micro-flows. In addition, when a fault at a particular switch occurs, the failure indication is not communicated to the source node in real time. Hence, end-to-end recovery is delayed, typically resulting in network congestion.

In view of the foregoing, there is a need for an intelligent traffic engineering protocol that provides load balancing based on the utilization of individual LSPs. In addition, the protocol should integrate OSI based Layer 2 and OSI based Layer 3 switching, provide good traceability of data flows, and allow for fast fault recovery.

SUMMARY OF THE INVENTION

Broadly speaking, the present invention fills these needs by providing an intelligent traffic engineering protocol that manages micro-flows within dynamically selected LSPs. In one embodiment, a method for providing an aggregate micro-flow having intelligent load balancing is disclosed. Initially, a set of label switched paths (LSPs) is defined for a network domain. Then, as the network receives a set of data packets, a micro-flow comprising the set of data packets is defined. In addition to the information included in each received data packet, the micro-flow includes a quality of service (QoS) type. A particular label switched path (LSP) is selected from the defined set of LSPs, based on the QoS type of the micro-flow, and the micro-flow is transmitted along the selected LSP.
In another embodiment, a micro-flow wrapper logical unit is described. The micro-flow wrapper logical unit includes a predefined LSP, which defines a physical path along a set of network switches for transmission of a network data packet. In addition, the LSP is preferably capable of supporting a first QoS type for data packets transmitted along the LSP. Also included in the micro-flow wrapper logical unit is a micro-flow, which comprises a plurality of data packets transmitted along the predefined LSP, and includes a second QoS type. To ensure good QoS, the first QoS type preferably is not a lower QoS type than the second QoS type. Generally, QoS types having less stringent requirements for delay, jitter, and loss are considered lower QoS types than those having more strict requirements for delay, jitter, and loss. Thus, the QoS type capable of being supported by the defined LSP preferably is greater than, or equal to, the QoS type of the micro-flow.
A network switch for routing a micro-flow is disclosed in yet a further embodiment of the present invention. The network switch includes a database having a predefined set of LSPs, and an internal routing fabric capable of internally routing a micro-flow. As discussed previously, the micro-flow comprises a set of data packets, and also has a QoS type associated with it. The network switch further includes logic that selects a particular LSP from the defined set of LSPs included in the database. The logic selects the particular LSP based on the QoS type of the micro-flow. Finally, the network switch includes an egress line card that is capable of transmitting the micro-flow along the selected LSP.
In yet a further embodiment, a method for providing an aggregate micro-flow using LSP utilization information is disclosed. A set of label switched paths and a micro-flow comprising a set of data packets are defined. Next, a particular label switched path is selected from the set of label switched paths based on a utilization value of the particular label switched path. The micro-flow then is transmitted along the selected label switched path.
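
By way of illustration only, the selection just summarized (and detailed later with reference to FIG. 8) might be sketched as follows; the data structures, field names, and numeric QoS ordering are assumptions of this sketch, not a definitive implementation of the claimed method.

from dataclasses import dataclass

@dataclass
class CandidateLSP:
    destinations: set[str]   # egress points reachable over this LSP
    qos_type: int            # QoS type the LSP can support (higher = stricter)
    utilization: float       # current resource utilization, 0.0 to 1.0

def select_lsp(lsps: list[CandidateLSP], dest: str,
               required_qos: int) -> CandidateLSP | None:
    # Destination set: LSPs providing access to the micro-flow's destination.
    dest_set = [p for p in lsps if dest in p.destinations]
    # QoS set: LSPs whose supported QoS type is not lower than the
    # micro-flow's QoS type.
    qos_set = [p for p in dest_set if p.qos_type >= required_qos]
    # Choose the LSP having the least utilization of resources.
    return min(qos_set, key=lambda p: p.utilization, default=None)

The qos_set filter mirrors the preference stated above for the micro-flow wrapper: the QoS type supported by the LSP is greater than, or equal to, the QoS type of the micro-flow.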
Advantageously, embodiments of the present invention use predetermined LSPs to route micro-flows. Because many network providers currently use LSP based networks, a protocol that shares common characteristics with LSP based networks integrates more easily with existing equipment. In addition, embodiments of the present invention provide intelligent load balancing using individual micro-flows to better manage the network automatically. Further, micro-flow traceability is enhanced because of the use of LSPs. Moreover, because of load balancing, congestion risk is reduced in the embodiments of the present invention. In addition, embodiments of the present invention afford the ability to manage data flow at the micro-flow level. This low level management allows enhanced fault recovery since the micro-flows can be rerouted from the point of failure, rather than at the beginning of the network domain as is the case with conventional LSP based networks.

Finally, it will become apparent to those skilled in the art that the ability to manage individual micro-flows allows the use of reduced LSP thresholds, resulting in increased resource utilization. Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:

FIG. 1A is an illustration showing a segment of a conventional LSP based network;

FIG. 1B is an illustration of a conventional LSP based network having an unusable LSP;

FIG. 2A is an illustration showing a micro-flow, in accordance with an embodiment of the present invention;

FIG. 2B is an illustration showing a detailed view of a micro-flow, in accordance with an embodiment of the present invention;

FIG. 2C is a block diagram showing the QoS field of the first micro-flow data packet of a micro-flow, in accordance with an embodiment of the present invention;

FIG. 3 is an illustration showing an exemplary segment of a micro-flow LSP network domain, in accordance with an embodiment of the present invention;

FIG. 4A is a high-level block diagram showing the functional components of a core label switch, in accordance with an embodiment of the present invention;

FIG. 4B is a block diagram of an ingress label switch, in accordance with an embodiment of the present invention;

FIG. 4C-1 shows an exemplary segment of a micro-flow LSP network domain experiencing a failure, in accordance with an embodiment of the present invention;

FIG. 4C-2 shows an exemplary segment of a micro-flow LSP network domain performing a local repair, in accordance with an embodiment of the present invention;

FIG. 4C-3 is an illustration showing the exemplary segment of a micro-flow LSP network domain performing a source reroute, in accordance with an embodiment of the present invention;

FIG. 4D is a high level block diagram of a network switch, in accordance with an embodiment of the present invention;

FIG. 5 is a block diagram of a line card, in accordance with an embodiment of the present invention;

FIG. 6A is an illustration showing an exemplary segment of a micro-flow LSP network domain, in accordance with an embodiment of the present invention;

FIG. 6B is an illustration showing an exemplary micro-flow LSP network domain having a disabled LSP, in accordance with an embodiment of the present invention;

FIG. 7 is a flowchart showing a method for transmitting a micro-flow utilizing a micro-flow LSP network domain, in accordance with an embodiment of the present invention;

FIG. 8 is a flowchart showing a method for selecting a predefined LSP for transmission of a micro-flow, in accordance with an embodiment of the present invention; and

FIG. 9 is a flowchart showing an alternate embodiment method for selecting a predefined LSP for transmission of a micro-flow having an associated FEC, in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS OF THE PRESENT INVENTION

The present invention provides enhanced dynamic network resource traffic engineering in an LSP-based network environment. In particular, the present invention provides increased link utilization, enhanced load balancing, and efficient fault recovery, by providing intelligent link management at the micro-flow level within an LSP-based environment. In the following description, numerous specific details are set forth in the description of various embodiments of the present invention in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some or all of these specific details.

In accordance with one embodiment of the present invention, the aforementioned problems of low link utilization, poor load balancing, and slow fault recovery are addressed by the use of an intelligent and dynamic network resource traffic engineering mechanism. The intelligent and dynamic network resource traffic engineering mechanism, as described throughout this specification, includes a method of forwarding micro-flows to selected label switched paths (LSPs) based on quality of service descriptors, additional FEC requirements, and current link or path utilization characteristics.

In the following description, the term switch will be used as synonymous with router. In particular, it should be noted that the reference to a switch is intended to merely refer to any type of device that assists in the transporting of data signals from one point in a network to another point in the network. Further, the term quality of service (QoS) will be used to refer to any service definition related information that can be associated with a data packet, micro-flow, or Label Switched Path. For example, QoS can refer to transmission rate information, delay variation information, and jitter information. In addition, QoS can refer to the ability to define a level of performance in a data communications system. For example, networks often specify modes of service that ensure optimum performance for traffic such as real-time voice and video. It should be borne in mind that QoS has become a major issue on the Internet as well as in enterprise networks, because voice and video are increasingly traveling over IP-based data networks.

By way of background, when data is sent from a host computer to a target computer via a computer network, the data is divided into individual data packets that are individually routed through the network. When eventually received by the target computer, the data packets are collated back into the original form of the data.

To route data at increased transfer speeds, an embodiment of the present invention converts received data packets into micro-flows and forwards them via the micro-flow LSP network domain. FIG. 2A is an illustration showing a micro-flow 200, in accordance with an embodiment of the present invention. The micro-flow 200 includes a first micro-flow data packet 202, optional subsequent data packets 204, and a close packet 206. The micro-flow 200 may include any number of subsequent data packets 204, including zero subsequent data packets 204; however, each micro-flow preferably includes a first micro-flow packet 202 and a close packet 206.

Each micro-flow 200 is a group of data packets (e.g., IP data packets) from a single transmission, wherein each data packet in a single micro-flow includes the same source
address, destination address, source port, destination port, and protocol type. In addition, each packet in the micro-flow 200 follows the preceding packet by no more than a predetermined amount of time, for example 2 milliseconds (ms).
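
As an illustration of this definition only, the grouping rule might be sketched as follows; the Packet type and all names here are assumptions of the sketch, not structures defined by this specification.

from typing import NamedTuple

class Packet(NamedTuple):
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: str
    arrival_ms: float

MAX_GAP_MS = 2.0  # example inter-packet gap from the text

def flow_key(p: Packet) -> tuple:
    # All packets of one micro-flow share this five-tuple.
    return (p.src_addr, p.dst_addr, p.src_port, p.dst_port, p.protocol)

def same_micro_flow(prev: Packet, cur: Packet) -> bool:
    # Same five-tuple, and the packet follows its predecessor by no more
    # than the predetermined amount of time.
    return (flow_key(prev) == flow_key(cur)
            and cur.arrival_ms - prev.arrival_ms <= MAX_GAP_MS)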
Each micro-flow 200 also has a maintained state associated with it, provided by an intelligent feedback mechanism, describing link utilization characteristics of the LSP used to transmit the micro-flow 200. In this manner, the source can make an intelligent selection of the micro-flow LSP based on utilization information. This allows the label switch to continuously manage the traffic within the micro-flow LSP network domain in an efficient manner.

Although the following description focuses on IP data packets for illustrative purposes, the embodiments of the present invention may apply equally to other protocols, such as ATM, frame relay, etc.

The micro-flow 200 is routed through the micro-flow LSP network domain using a micro-flow label and QoS descriptors that are created upon arrival of the micro-flow at the micro-flow LSP network domain.
FIG. 2B is an illustration showing a detailed view of a micro-flow 200, in accordance with an embodiment of the present invention. The micro-flow 200 includes a first micro-flow data packet 202, an optional subsequent data packet 204, and a close packet 206. In addition, a request RMU packet 205a, a RMU packet 205b, a RMI packet 205c, and a RME packet 205d are shown. As stated previously, the micro-flow 200 may include any number of subsequent data packets 204, including zero subsequent data packets 204; however, each micro-flow preferably includes a first micro-flow packet 202 and a close packet 206. The use of the request RMU packet 205a, the RMU packet 205b, the RMI packet 205c, and the RME packet 205d will be described subsequently.

The first data packet 202 of the micro-flow 200 includes a label field 208, a QoS field 210, and a data field 212. Thereafter, each subsequent data packet 204 includes a label field 208 and a data field 212. Finally, the close packet 206 includes a label field 208 and a close field 214. As described in greater detail below, the close field 214 of the close packet 206 is used to instruct a switch to terminate an already established micro-flow that is present in the network.
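
The three packet forms just described might be modeled as below; the specification does not give a byte-level layout here, so the representation is an illustrative assumption.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MicroFlowPacket:
    label: bytes                  # label field 208: identifies the micro-flow
    qos: Optional[bytes] = None   # QoS field 210: first packet 202 only
    data: Optional[bytes] = None  # data field 212: received packet contents
    close: bool = False           # close field 214: terminates the micro-flow

first_pkt = MicroFlowPacket(label=b"\x2a", qos=b"\x01", data=b"payload")
data_pkt = MicroFlowPacket(label=b"\x2a", data=b"more payload")
close_pkt = MicroFlowPacket(label=b"\x2a", close=True)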
Within the above described packets, the data field 212 generally includes the entire content of the data packet received at the ingress label switch of the micro-flow LSP network domain. However, in one embodiment, the data field 212 may include only a portion of the data packet. To create a micro-flow data packet, an embodiment of the present invention adds the label field 208 and the QoS field 210 to the first data packet received.

The label field 208 is used by the micro-flow LSP network domain to differentiate data packets of one micro-flow from data packets of another micro-flow, and to associate each data packet in a micro-flow with its assigned QoS characteristics. Generally, the label field 208 represents the OSI network and transport layer characteristics of the data packets from a single micro-flow 200. In one embodiment, the characteristics include the protocol type, the source address, the destination address, the source port, and the destination port associated with each data packet. It should be noted that the information used to differentiate data packets of one micro-flow from another can be based on other information types, including real time protocol (RTP), MPLS or Differentiated Services (DiffServ) identifiers, or other information relating to a characteristic that is unique to the data packets of a specific micro-flow.
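
As an illustration only, a label might be derived from these characteristics as sketched below; the hash-based derivation and the 20-bit width are assumptions of the sketch, and an actual switch would assign label values in its own manner.

import zlib

def micro_flow_label(protocol: str, src_addr: str, dst_addr: str,
                     src_port: int, dst_port: int) -> int:
    # Fold the characteristics unique to one micro-flow into a short,
    # fixed-length value usable as a label (20 bits here, for illustration).
    key = f"{protocol}|{src_addr}|{dst_addr}|{src_port}|{dst_port}"
    return zlib.crc32(key.encode()) & 0xFFFFF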
FIG. 2C is a block diagram showing the QoS field 210 of the first micro-flow data packet of a micro-flow, in accordance with an embodiment of the present invention. The QoS field 210 includes a set of QoS descriptors that describe QoS constraints of the related micro-flow. Specifically, the QoS field 210 can include a packet discard time limit (D) value 216, a weighing factor (W) 218, a guaranteed rate (GR) value 220, a micro-flow timeout period (DT) value 222, an available rate (AR) value 224, and a delay variation value (Q). Based upon these QoS descriptors, the behavior of the micro-flow can be characterized as one of three basic service types, specifically, available rate (AR) traffic, maximum rate (MR) traffic, or guaranteed rate (GR) traffic. Of course, other service types may also be incorporated, as will be apparent to those skilled in the art.

Available Rate (AR) traffic is micro-flow traffic that does not have real-time requirements, resulting in loose delay and jitter characteristics. In addition, due to the connection-oriented nature of AR traffic on the transport layer, AR traffic has relatively relaxed loss prerequisites. Most transmission control protocol (TCP) micro-flows are examples of AR traffic.

Maximum Rate (MR) traffic is micro-flow traffic that has real-time characteristics, resulting in rigid delay and jitter requirements. Further, MR traffic is sensitive to traffic loss. An example of MR traffic is user datagram protocol (UDP) micro-flows, particularly when carrying voice or video (e.g., Real-Time Protocol (RTP)). MR traffic QoS generally is determined at the time of arrival at the switch. MR traffic may represent real-time intensive traffic wherein the source and destination are unknown, such that it cannot be preconfigured ahead of time. Thus, to determine the QoS service type for MR traffic, the arrival time of the MR traffic is monitored at the ingress portion of the switch. Thereafter, a determination is made, based on the arrival time of the MR traffic, as to what the QoS service type should be for a particular micro-flow.

Guaranteed Rate (GR) traffic is similar to MR traffic in its characteristics, and has strict requirements on delay, jitter, and loss. However, GR traffic has the desired rate communicated to the micro-flow LSP network by the user. This communication can be done by either explicit signaling or by user-defined traffic profiles. Thus, the guaranteed rate is well specified.
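
A compact way to picture these descriptors and service types is sketched below; the field names mirror the text, while the types, units, and enum encoding are assumptions of the sketch rather than the representation used in this specification.

from dataclasses import dataclass
from enum import Enum

class ServiceType(Enum):
    AR = "available rate"   # no real-time requirements; loose delay/jitter
    MR = "maximum rate"     # real-time; rigid delay/jitter, loss-sensitive
    GR = "guaranteed rate"  # like MR, with the desired rate signaled by the user

@dataclass
class QoSField:                  # QoS field 210
    discard_time_ms: int         # packet discard time limit (D) value 216
    weight: float                # weighing factor (W) 218
    guaranteed_rate: float       # guaranteed rate (GR) value 220
    timeout_ms: int              # micro-flow timeout period (DT) value 222
    available_rate: float        # available rate (AR) value 224
    delay_variation: float       # delay variation value (Q)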
Referring back to FIG. 2C, the QoS descriptors of the QoS field 210 will now be described. The packet discard time limit (D) value 216 is used to ensure buffer availability within a label switch. This value is a parameter that can operate like a burst tolerance that allows the switches of the micro-flow LSP network domain to have a basis for policing micro-flows. In one embodiment, the D value can be between 10 ms and 500 ms. The D value typically is set to a large value, such as 400 ms, for AR traffic. For MR traffic and GR traffic, such as real-time voice or video, the D value 216 typically is s