US007366101B1
`
(12) United States Patent
Varier et al.

(10) Patent No.: US 7,366,101 B1
(45) Date of Patent: Apr. 29, 2008
`
`(54) NETWORK TRAFFIC SYNCHRONIZATION
`MECHANISM
`
`(75)
`
`Inventors: Roopesh R. Varier, Sunnyvale, CA
`(US); David Jacobson, Durham, NC
`(US); Guy Riddle, Los Gatos, CA (US)
`
`(73) Assignee: Packeteer, Inc., Cupertino, CA (US)
`
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 916 days.
`
`(21) Appl. No.: 10/611,573
`
(22) Filed: Jun. 30, 2003
`
`(51)
`
`Int. Cl.
`H04J 3/14
`(2006.01)
`H04J 3/06
`(2006.01)
`H04L 12126
`(2006.01)
`H04L 12128
`(2006.01)
`H04L 12156
`(2006.01)
`(52) U.S. Cl. ....................... 370/241; 370/401; 370/503
`(58) Field of Classification Search ........ 370/216-218,
`370/241, 242, 401, 503
`See application file for complete search history.
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
2003/0043792 A1* 3/2003 Carpini et al. .............. 370/386
`
`* cited by examiner
`
`Primary Examiner-Kevin C. Harper
`(74) Attorney, Agent, or Firm-Mark J. Spolyar
`
`(57)
`
`ABSTRACT
`
Methods, apparatuses and systems directed to a network traffic synchronization mechanism facilitating the deployment of network devices in redundant network topologies. In certain embodiments, when a first network device directly receives network traffic, it copies the network traffic and transmits it to at least one partner network device. The partner network device processes the copied network traffic, just as if it had received it directly, but, in one embodiment, discards the traffic before forwarding it on to its destination. In one embodiment, the partner network devices are operative to exchange directly received network traffic. As a result, the present invention provides enhanced reliability and seamless failover. Each unit, for example, is ready at any time to take over for the other unit should a failure occur. As discussed below, the network traffic synchronization mechanism can be applied to a variety of network devices, such as firewalls, gateways, network routers, and bandwidth management devices.
`
2002/0167960 A1* 11/2002 Garcia-Luna-Aceves .... 370/442
`
`34 Claims, 12 Drawing Sheets
`
`50
`
`30a
`
`~ 42
`
`42
`
`Cloudflare - Exhibit 1018, page 1
`
`
`
`
`
`
`
[Sheet 3 of 12, Apr. 29, 2008: FIG. 2A (reference numerals 50, 30a, 42, 44, 140)]
`
`
`
[Sheet 4 of 12, Apr. 29, 2008: FIG. 2B (reference numerals 50, 42, 140)]
`
`
`
`
`
`
`
`
`
`
`
[Sheet 9 of 12, Apr. 29, 2008: FIG. 2G (reference numerals 50, 430a, 42, 140)]
`
`
`
`
`
`
`
`
`
`NETWORK TRAFFIC SYNCHRONIZATION
`MECHANISM
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
`
This application makes reference to the following commonly owned U.S. patent applications and patents, which are incorporated herein by reference in their entirety for all purposes:
U.S. patent application Ser. No. 08/762,828, now U.S. Pat. No. 5,802,106, in the name of Robert L. Packer, entitled "Method for Rapid Data Rate Detection in a Packet Communication Environment Without Data Rate Supervision;"
U.S. patent application Ser. No. 08/970,693, now U.S. Pat. No. 6,018,516, in the name of Robert L. Packer, entitled "Method for Minimizing Unneeded Retransmission of Packets in a Packet Communication Environment Supporting a Plurality of Data Link Rates;"
U.S. patent application Ser. No. 08/742,994, now U.S. Pat. No. 6,038,216, in the name of Robert L. Packer, entitled "Method for Explicit Data Rate Control in a Packet Communication Environment without Data Rate Supervision;"
U.S. patent application Ser. No. 09/977,642, now U.S. Pat. No. 6,046,980, in the name of Robert L. Packer, entitled "System for Managing Flow Bandwidth Utilization at Network, Transport and Application Layers in Store and Forward Network;"
U.S. patent application Ser. No. 09/106,924, now U.S. Pat. No. 6,115,357, in the name of Robert L. Packer and Brett D. Galloway, entitled "Method for Pacing Data Flow in a Packet-based Network;"
U.S. patent application Ser. No. 09/046,776, now U.S. Pat. No. 6,205,120, in the name of Robert L. Packer and Guy Riddle, entitled "Method for Transparently Determining and Setting an Optimal Minimum Required TCP Window Size;"
U.S. patent application Ser. No. 09/479,356, now U.S. Pat. No. 6,285,658, in the name of Robert L. Packer, entitled "System for Managing Flow Bandwidth Utilization at Network, Transport and Application Layers in Store and Forward Network;"
U.S. patent application Ser. No. 09/198,090, now U.S. Pat. No. 6,412,000, in the name of Guy Riddle and Robert L. Packer, entitled "Method for Automatically Classifying Traffic in a Packet Communications Network;"
U.S. patent application Ser. No. 09/198,051, in the name of Guy Riddle, entitled "Method for Automatically Determining a Traffic Policy in a Packet Communications Network;"
U.S. patent application Ser. No. 09/206,772, in the name of Robert L. Packer, Brett D. Galloway and Ted Thi, entitled "Method for Data Rate Control for Heterogeneous or Peer Internetworking;"
U.S. patent application Ser. No. 09/966,538, in the name of Guy Riddle, entitled "Dynamic Partitioning of Network Resources;"
U.S. patent application Ser. No. 10/039,992, in the name of Michael J. Quinn and Mary L. Laier, entitled "Method and Apparatus for Fast Lookup of Related Classification Entities in a Tree-Ordered Classification Hierarchy;"
U.S. patent application Ser. No. 10/015,826, in the name of Guy Riddle, entitled "Dynamic Tunnel Probing in a Communications Network;"
U.S. patent application Ser. No. 10/104,238, in the name of Robert Purvy and Mark Hill, entitled "Methods and Systems Allowing for Non-Intrusive Network Management;"
U.S. patent application Ser. No. 10/108,085, in the name of Wei-Lung Lai, Jon Eric Okholm, and Michael J. Quinn, entitled "Output Scheduling Data Structure Facilitating Hierarchical Network Resource Allocation Scheme;"
U.S. patent application Ser. No. 10/155,936, in the name of Guy Riddle, Robert L. Packer and Mark Hill, entitled "Method for Automatically Classifying Traffic with Enhanced Hierarchy in a Packet Communications Network;"
U.S. patent application Ser. No. 10/177,518, in the name of Guy Riddle, entitled "Methods, Apparatuses and Systems Allowing for Progressive Network Resource Utilization Control Scheme;"
U.S. patent application Ser. No. 10/178,617, in the name of Robert E. Purvy, entitled "Methods, Apparatuses and Systems Facilitating Analysis of Network Device Performance;" and
U.S. patent application Ser. No. 10/236,149, in the name of Brett Galloway and George Powers, entitled "Classification Data Structure enabling Multi-Dimensional Network Traffic Classification and Control Schemes."
`
`FIELD OF THE INVENTION
`
The present invention relates to computer networks and, more particularly, to methods, apparatuses and systems facilitating the synchronization of monitoring and/or management tasks associated with network devices deployed in redundant network topologies.
`
`BACKGROUND OF THE INVENTION
`
Efficient allocation of network resources, such as available network bandwidth, has become critical as enterprises increase reliance on distributed computing environments and wide area computer networks to accomplish critical tasks. The widely-used TCP/IP protocol suite, which implements the world-wide data communications network environment called the Internet and is employed in many local area networks, omits any explicit supervisory function over the rate of data transport over the various devices that comprise the network. While there are certain perceived advantages, this characteristic has the consequence of juxtaposing very high-speed packets and very low-speed packets in potential conflict and produces certain inefficiencies. Certain loading conditions degrade performance of networked applications and can even cause instabilities which could lead to overloads that could stop data transfer temporarily. The above-identified U.S. patents and patent applications provide explanations of certain technical aspects of a packet based telecommunications network environment, such as Internet/Intranet technology based largely on the TCP/IP protocol suite, and describe the deployment of bandwidth management solutions to monitor and manage network environments using such protocols and technologies.
An important aspect of implementing enterprise-grade network environments is provisioning mechanisms that address or adjust to the failure of systems associated with or connected to the network environment. For example, FIG. 1A illustrates a computer network environment including a bandwidth management device 130 deployed to manage network traffic traversing an access link 21 connected to an open computer network 50, such as the Internet. As one skilled in the art will recognize, the failure of bandwidth management device 130 will prevent the flow of network traffic between end systems connected to LAN 40 and
`
`
`
`3
computer network 50. To prevent this from occurring, one prior art mechanism is to include a relay that actuates a switch to create a direct path for electrical signals across the bandwidth management device 130 when a software or hardware failure disables bandwidth management device 130. In this manner, the bandwidth management device 130 essentially acts as a wire, allowing network traffic to pass and thereby preserving network access. The problem with this approach is that, while network access is preserved, there is no failover mechanism to control or optimize network traffic while the bandwidth management device 130 remains down.
To provide failover support that addresses this circumstance, the prior art included a "hot standby" mechanism offered by Packeteer, Inc. of Cupertino, Calif., for use in shared Ethernet network environments employing the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. As FIG. 1B illustrates, redundant bandwidth management devices 230a, 230b are deployed between router 22 and LAN 40. The inherent properties of the shared Ethernet LANs 40 and 41 meant that all inbound and outbound packets were received at both bandwidth management devices 230a, 230b. According to the hot standby mechanism, one bandwidth management device 230a (for instance) operated in a normal mode classifying and shaping network traffic, while the other bandwidth management device 230b operated in a monitor-only mode where the packets were dropped before egress from the device. The bandwidth management devices 230a, 230b were also configured to communicate with each other over LAN 40 and/or 41 to allow each device to detect when the other failed. When such a failure occurred, bandwidth management device 230b, previously operating in a monitor-only mode, could provide failover support in a substantially seamless fashion since its data structures were already populated with the information required to perform its function.
While the hot standby feature is suitable in shared Ethernet environments, the use of Ethernet LAN switches in more modern enterprise networks has presented further challenges. In switched Ethernet environments, an end system only sees the packets intended for it, rendering invalid the assumption upon which the hot standby mechanism is based. FIG. 2A illustrates a computer network environment implemented by LAN switches 23, where the end systems such as computers 42 and servers 44 are connected to different ports of a LAN switch 23. In the network environment of FIG. 2A, LAN switches 23 connect bandwidth management devices 30a, 30b to router 22, as well as the end systems associated with an enterprise network. While the bandwidth management devices 30a, 30b are deployed in a redundant topology, without the present invention, there is no mechanism that ensures that one device can seamlessly take over for the other device should one fail.
Furthermore, many enterprise network architectures feature redundant topologies for such purposes as load-sharing and failover. For example, as FIG. 2B illustrates, a typical enterprise network infrastructure may include a plurality of access links (e.g., 21a, 21b) connecting an enterprise LAN or WAN to an open computer network 50. In these network topologies, network traffic may be directed completely through one route or may be load-shared between alternative routes. According to these deployment scenarios, a given bandwidth management device 30a or 30b during a given span of time may see all network traffic, part of the network traffic, or no network traffic. This circumstance renders control of network traffic on a network-wide basis problematic, especially when the bandwidth management devices 30a, 30b each encounter only part of the network traffic.
That is, each bandwidth management device 30a, 30b, without the invention described herein, does not obtain enough information about the network traffic associated with the entire network to be able to accurately monitor network traffic and make intelligent decisions to control or shape the network traffic flowing through the corresponding access links 21a, 21b. In addition, if a given bandwidth management device 30a, 30b sees no traffic for a period of time and the active route fails (for example), the bandwidth management device deployed on the alternate route essentially becomes the master controller but possesses no prior information about existing flows or other network statistics. This circumstance often renders it impossible to adequately classify data flows associated with connections active at the time of a change or failover in the active bandwidth management device.
In light of the foregoing, a need in the art exists for methods, apparatuses, and systems that allow two or more network devices to synchronize as to network traffic individually encountered by each network device. A need further exists for methods, apparatuses and systems facilitating the monitoring and management of network traffic in redundant network topologies. Embodiments of the present invention substantially fulfill these needs.
`
`SUMMARY OF THE INVENTION
`
The present invention provides methods, apparatuses and systems directed to a network traffic synchronization mechanism facilitating the deployment of network devices in redundant network topologies. In certain embodiments, when a first network device directly receives network traffic, it copies the network traffic and transmits it to at least one partner network device. The partner network device processes the copied network traffic, just as if it had received it directly, but, in one embodiment, discards the traffic before forwarding it on to its destination. In one embodiment, the partner network devices are operative to exchange directly received network traffic. As a result, the present invention provides enhanced reliability and seamless failover. Each unit, for example, is ready at any time to take over for the other unit should a failure occur. As discussed below, the network traffic synchronization mechanism can be applied to a variety of network devices, such as firewalls, gateways, network routers, and bandwidth management devices.
`
`DESCRIPTION OF THE DRAWINGS
`
FIG. 1A is a functional block diagram illustrating a computer network environment including a bandwidth management device deployed in a non-redundant network environment including a single access link.
FIG. 1B is a functional block diagram showing the deployment of redundant network devices in a CSMA/CD network environment.
FIG. 2A is a functional block diagram illustrating a computer network environment including first and second network devices 30a, 30b and LAN switches 23.
FIG. 2B is a functional block diagram illustrating a computer network environment including first and second network devices 30a, 30b deployed to control traffic across redundant access links 21a, 21b.
FIG. 2C is a functional block diagram illustrating the network interfaces and other functionality associated with a network device configured according to an embodiment of the present invention.
`
`
`
`
FIG. 2D is a functional block diagram illustrating an alternative connection scheme between the first and second network devices for the exchange of network traffic synchronization data.
FIG. 2E is a functional block diagram illustrating the network interfaces and other functionality associated with a network device configured according to another embodiment of the present invention.
FIG. 2F is a functional block diagram illustrating the functionality associated with a network device including third and fourth non-synchronization network interfaces.
FIG. 2G is a functional block diagram illustrating a computer network environment including first, second and third network devices 430a, 430b and 430c deployed to control traffic across redundant access links 21a, 21b.
FIG. 3 is a functional block diagram setting forth the functionality in a bandwidth management device according to an embodiment of the present invention.
FIG. 4 is a flow chart diagram illustrating a method directed to the synchronization of network traffic data and the enforcement of bandwidth utilization control on network traffic data traversing an access link.
FIG. 5 is a flow chart diagram illustrating a method directed to the synchronization of network traffic between two or more network devices.
`
`DESCRIPTION OF PREFERRED
`EMBODIMENT(S)
`
`6
`nected to computer network 50 via router 22. Access link 21
`is a physical and/or logical connection between two net(cid:173)
`works, such as computer network 50 and network 140. The
`computer network environment, including computer net(cid:173)
`works 140, 50 is a packet-based communications environ(cid:173)
`ment, employing TCP/IP protocols, and/or other suitable
`protocols, and has a plurality of interconnected digital
`packet transmission stations or routing nodes. As FIG. 2A
`illustrates, network devices 30a, 30b, in one embodiment,
`are provided between router 22, respectively, and computer
`network 140. As discussed in more detail below, network
`devices 30a, 30b, can be bandwidth management devices
`that are each operative to classify data flows and, depending
`on the classification, enforce respective bandwidth utiliza-
`15 tion controls on the data flows to control bandwidth utili(cid:173)
`zation across and optimize network application performance
`across access links 21. In the network environment of FIG.
`2A, bandwidth management device 30b, in one embodi-
`ment, may be deployed solely to provide failover support in
`case of the failure of bandwidth management device 30a.
`Other operational configurations, however, are possible. In
`the network environment of FIG. 2B, bandwidth manage(cid:173)
`ment devices 30a, 30b may operate concurrently to control
`bandwidth utilization across respective access links 21a, 21b
`25 with one unit able to seamless take over for the other should
`either unit itself, a LAN switch 23, a router 22a or 22b,
`and/or access links 21a or 21b fail. In such an embodiment,
`LAN switches 23 include the capability or re-directing
`traffic to alternate ports upon the detection of a network
`30 failure.
`Network devices 30a, 30b are operably connected to
`transmit packet data to synchronize network traffic between
`each other. As the following provides, network devices 30a,
`30b can be connected to synchronize network traffic in a
`variety of configurations. As FIGS. 2A and 2B illustrate,
`transmission line 99 interconnects network devices 30a, 30b
`to allow for sharing of network traffic data in the form of
`synchronization packets. FIG. 2C further illustrates the
`configuration of network device 30a according to an
`embodiment of the present invention. As FIG. 2C shows,
`network device 30a comprises control module 75, and, in
`one embodiment, network interfaces 71, 72, and synchro(cid:173)
`nization network interface 74. As FIG. 2C illustrates, net(cid:173)
`work interfaces 71 and 72 operably connect network device
`30a to the communications path between computer network
`140 and router 22a and are the interfaces at which regular
`network traffic is received. Synchronization network inter(cid:173)
`face 74 connects network device 30a, via transmission line
`99, to one or more partner network devices, such as network
`device 30b. In one embodiment, all synchronization packets
`whether arriving at network interface 71 or network inter-
`face 72 are exchanged over network interface 74. In one
`such embodiment, the interface at which the packet arrived
`is indicated in a reserved header field in the synchronization
`55 header encapsulating the copied packet. As discussed below,
`in another embodiment not using synchronization headers,
`the network interface identifier overwrites a portion of the
`copied packet data. In another embodiment, the packet flow
`direction can be determined by examining the host database
`60 134 containing the inside and outside host IP addresses (see
`below).
`Network devices 30a, 30b may be connected via network
`interface 74 in a variety of manners. As discussed above,
`network interfaces 74 of network devices 30a, 30b may be
`65 connected via a direct line, such as a CAT-5 cable. In one
`embodiment, network interface 74 is a wired Ethernet-based
`interface. In another embodiment, network interface 74 may
`
`FIGS. 2A and 2B illustrate two possible network envi(cid:173)
`ronments in which embodiments of the present invention
`may operate. FIG. 2A illustrates a computer network envi(cid:173)
`ronment where access link 21 and router 22 connect LAN 40
`to computer network 50. As FIG. 2A shows, the network
`environment includes redundant network devices 30a, 30b 35
`operatively connected to communication paths between
`LAN 40 and router 22 via LAN switches 23. FIG. 2B
`illustrates a computer network environment featuring a
`redundant network topology, that includes first and second
`access links 21a, 21b; routers 22a, 22b; and network devices 40
`30a, 30b. Access links 21a, 21b operably connect computer
`network 140 to computer network 50. In one embodiment,
`computer network 140 is an enterprise WAN comprising a
`plurality of LAN segments. In one embodiment, computer
`network 50 is an open computer network, such as the 45
`Internet. As one skilled in the art will recognize, the network
`topology can be expanded to include additional access links
`and associated network devices. LAN switches 23 include a
`plurality of ports to which end systems, such as client
`computers 42 and servers 44, and intermediate systems, such 50
`as routers and other switches, are connected. LAN switches
`23 receive packets on a given port and forward the packets
`to other network devices on selected ports. In one embodi(cid:173)
`ment, LAN switch 23 is an Ethernet-based (IEEE 802.3)
`switch.
`
`A. Packet Synchronization Functionality
`As discussed above, FIG. 2A sets forth a packet-based
`computer network environment including network devices
`30a, 30b deployed to perform a network function on data
`flows traversing access links 21. In the network environment
`of FIG. 2B network devices 30a, 30b by operation of LAN
`switches 23 are operative to perform a network function on
`data flows traversing access links 21a, 21b respectively. As
`FIG. 2A shows, computer network 140 interconnects several
`TCP/IP end systems, including client devices 42 and server
`device 44, and provides access to resources operably con-
`
`
`
`
be a wireless network interface allowing for wireless transmission of synchronization packets between network devices 30a and 30b. In such an embodiment, the synchronization packets are preferably encrypted to guard against unauthorized detection or analysis of the network traffic. In another embodiment, network interface 74 may be directly connected to computer network 140 to allow for transmission of copied packets over computer network 140. In such an embodiment, the copied packets are encapsulated in synchronization headers (see below) to allow LAN switch 23 or other network device to transmit the synchronization packets to the proper destination.
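The synchronization-header encapsulation described above might be sketched as follows. The header layout, field names, and marker value are illustrative assumptions; the patent does not specify an on-the-wire format, only that a reserved header field carries the arrival-interface identifier:

```python
import struct
import time

# Hypothetical synchronization-header layout (assumed, not from the
# patent): a 4-byte marker, the identifier of the network interface
# at which the packet arrived (e.g., 71 or 72), and a receipt time.
SYNC_HDR_FMT = "!4sBd"
SYNC_MAGIC = b"SYNC"

def encapsulate(packet, interface_id, receipt_time=None):
    """Prefix a copied packet with a synchronization header recording
    its arrival interface, for transmission to a partner device."""
    if receipt_time is None:
        receipt_time = time.time()
    header = struct.pack(SYNC_HDR_FMT, SYNC_MAGIC, interface_id, receipt_time)
    return header + packet

def decapsulate(sync_packet):
    """Recover the arrival interface, receipt time and original packet
    from a received synchronization packet."""
    hdr_len = struct.calcsize(SYNC_HDR_FMT)
    magic, interface_id, receipt_time = struct.unpack(
        SYNC_HDR_FMT, sync_packet[:hdr_len])
    if magic != SYNC_MAGIC:
        raise ValueError("not a synchronization packet")
    return interface_id, receipt_time, sync_packet[hdr_len:]
```

The partner device would decapsulate each received synchronization packet and process the inner packet as though it had arrived at the indicated interface.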
In another embodiment, network interface 73 may also be used as a synchronization interface that connects network device 30a, via a direct transmission line or indirect connection (e.g., through computer network 140), to one or more partner network devices, such as network device 30b, for the exchange of synchronization packets. See FIGS. 2D and 2E. Such an embodiment allows regular network interfaces 71 or 72 to be paired with a synchronization interface 73 or 74 to obviate the need for flagging or explicitly determining the interface at which the packet was received. As FIG. 2E illustrates, assuming network interface 73 is used as a synchronization interface, network interface 73 can be directly connected to a corresponding synchronization interface on a partner network device via transmission line 98. As FIGS. 2D and 2E illustrate, synchronization network interface 74 of network device 330a is connected to a corresponding network interface on a partner network device 330b via transmission line 98. Alternatively, network interface 73 and/or network interface 74 may be connected to computer network 140 such that copied packets are transmitted over computer network 140 between network devices 30a, 30b. Connecting the synchronization interfaces over a LAN or other computer network supporting regular network traffic, however, requires encapsulation of the packets with the appropriate MAC or other network address to ensure that they reach their intended destination. Such a configuration, however, allows synchronization packets to be multicast (or broadcast in a VLAN) to more than one partner network device to allow for synchronization of more than two network devices. As FIG. 2G illustrates, in embodiments involving more than two partner network devices 430a, 430b, 430c, the synchronization interfaces 74 may be connected to a computer network via lines 91, 92, and 93. In the embodiment shown including LAN switches 23, the synchronization packets require encapsulation and the use of a multicast MAC address to ensure that each synchronization interface 74 receives all synchronization packets. In one embodiment, the synchronization network interfaces may be connected to the same virtual Local Area Network (VLAN) to facilitate layer 2 discovery mechanisms and the automatic configuration of network devices for the transmission of synchronization packets. In one embodiment, synchronization packets may then be broadcast or multicast to all the network devices in the VLAN. In another embodiment, the synchronization interfaces 74 of network devices 430a, 430b, 430c can be connected to an Ethernet hub, obviating the need to encapsulate the packets and/or use a multicast MAC address, since a given synchronization interface will, by operation of the hub, see the packets sent from the other two interfaces.
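As a rough illustration of the multicast encapsulation discussed above, the sketch below wraps a synchronization packet in an Ethernet frame addressed to a multicast group MAC so that every partner's synchronization interface receives it. The group address and EtherType are assumptions chosen for illustration (a locally administered multicast MAC and the IEEE 802 local experimental EtherType), not values taken from the patent:

```python
import struct

# Assumed values for illustration only.
SYNC_GROUP_MAC = bytes.fromhex("031122334455")  # multicast bit set
SYNC_ETHERTYPE = 0x88B5  # IEEE 802 local experimental EtherType

def sync_frame(src_mac, sync_packet):
    """Build an Ethernet frame carrying a synchronization packet,
    addressed to the multicast group so that every partner device's
    synchronization interface 74 receives a copy."""
    return (SYNC_GROUP_MAC + src_mac
            + struct.pack("!H", SYNC_ETHERTYPE) + sync_packet)

def is_sync_frame(frame):
    """A receiving synchronization interface can recognize group
    traffic by the destination MAC address."""
    return frame[:6] == SYNC_GROUP_MAC
```

Because the first octet of the destination address has its least-significant bit set, a LAN switch floods the frame to the group rather than to a single port, which is what lets three or more devices stay synchronized over switched infrastructure.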
In yet another embodiment, VPN servers or similar functionality may be employed to tunnel and encrypt the synchronization packets transmitted between network devices. For example, network devices 530a and 530b are connected to VPN servers 43 via transmission lines 94, 95 (see FIG. 2H). In such an embodiment, VPN servers 43 (labeled VPN 1 and VPN 2) encrypt and tunnel the synchronization packets between network devices 530a, 530b. In yet another embodiment, all regular network traffic and the synchronization packets can be transmitted over transmission lines 94, 95 and through VPN servers 43, obviating the second connections between network devices 530a, 530b and LAN switches 23. Furthermore, VPN functionality could also be implemented across transmission line 99 between network devices 30a, 30b (see FIG. 2B).
Control module 75 generally refers to the functionality implemented by intermediate network device 30. In one embodiment, control module 75 is a combination of hardware and software, such as a central processing unit, memory, a system bus, an operating system and an application, such as a firewall, gateway, proxy, or bandwidth management application. In one embodiment, network interfaces 71, 72, 73 and 74 are implemented as a combination of hardware and software, such as network interface cards and associated software drivers. In addition, the network interfaces 71, 72, 73 and 74 can be wired network interfaces, such as Ethernet interfaces, or wireless network interfaces, such as 802.11, BlueTooth, satellite-based interfaces, and the like. Partner network device 30b is similarly configured.
Other configurations are possible. For example, as discussed above, one or more physical network interfaces 73 or 74 may be omitted. In a less preferred embodiment, for example, if network interface 74 is omitted, copied packets can be transmitted between network devices 30a and 30b over LAN switches 23 via respective network interfaces 71. Still further, network device 30a may include more than two non-synchronization network interfaces. As FIG. 2F illustrates, network device 530a may include network interfaces 71a, 71b, 72a, and 72b. In such an embodiment, the synchronization packets transmitted to the partner network device are marked as discussed above with the network interface identifier corresponding to the network interface at which the original packet was received. In one embodiment, the network device 530a can be configured only to copy and transmit synchronization packets on a particular subset of its non-synchronization network interfaces.
According to an embodiment of the present invention, packets received from computer network 50 by network device 30a at network interface 72 are copied and transmitted to partner network device 30b from network interface 74. Similarly, partner network device 30b copies and transmits packets received from computer network 50 at its corresponding network interface 72 to network device 30a. Similarly, network device 30a copies and transmits packets received at network interface 71 to partner network device 30b via network interface 74. Network device 30a also receives, at network interface 74, packets copied and transmitted by network device 30b. In one embodiment, the partner network devices 30a, 30b do not copy and transmit as synchronization packets any multicast or broadcast packets.
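The copy-and-transmit behavior just described, including the exclusion of multicast and broadcast packets, might be sketched as follows; the function names and the send_to_partner callback are illustrative assumptions rather than anything specified in the patent:

```python
def is_multicast_or_broadcast(frame):
    """On Ethernet, multicast and broadcast frames carry a destination
    MAC whose first octet has its least-significant bit set."""
    return bool(frame[0] & 0x01)

def handle_packet(frame, arrival_interface, send_to_partner, process):
    """Copy each directly received unicast frame to the partner device,
    then process the original normally (classification, shaping, etc.)."""
    if not is_multicast_or_broadcast(frame):
        send_to_partner(arrival_interface, bytes(frame))  # the copy
    process(frame)  # normal handling of the original packet
```

Multicast and broadcast frames are already delivered to both partners by the network itself, which is presumably why copying them as synchronization packets is unnecessary.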
In the embodiment discussed above, network devices 30a, 30b may exchange synchronization packets corresponding to packets received at network interfaces 71 and 72 over a single transmission line 99 or other connection, using a single network interface (e.g., 74) at each network device. In such an embodiment, the synchronization packets may take many forms. In one embodiment, the copied packets are overwritten in predetermined locations with state information, such as an identifier for the interface on which the packet was received, a packet receipt time, etc. This state information, in one embodiment, overwrites data not implicated in or affecting the network function performed by the network devices 30a, 30b. For example, certain state information, such as an interf