Exhibit 1004
US007069324B1

(12) United States Patent          (10) Patent No.:     US 7,069,324 B1
     Tiwana et al.                 (45) Date of Patent:  Jun. 27, 2006
(54) METHODS AND APPARATUS
     SLOW-STARTING A WEB CACHE SYSTEM

(75) Inventors: Gurumukh S. Tiwana, Cupertino, CA (US);
     Danny Kwok, Los Altos, CA (US);
     James A. Aviani, Jr., Santa Barbara, CA (US);
     Martin Cieslak, Fremont, CA (US);
     Martin A. Kagan, Burlingame, CA (US);
     Stewart L. Forster, Bentleigh (AU)

(73) Assignee: Cisco Technology, Inc., San Jose, CA (US)

(*) Notice: Subject to any disclaimer, the term of this
     patent is extended or adjusted under 35
     U.S.C. 154(b) by 732 days.

(21) Appl. No.: 09/608,549

(22) Filed: Jun. 30, 2000
(51) Int. Cl.
     G06F 15/16  (2006.01)
     G06F 15/167 (2006.01)

(52) U.S. Cl. ........... 709/226; 709/215

(58) Field of Classification Search ........... 709/215,
     709/217, 218, 219, 231, 203, 226, 232, 233,
     709/234, 235; 711/170, 171, 172, 173, 133,
     711/135, 136, 159, 160
     See application file for complete search history.
(56) References Cited

U.S. PATENT DOCUMENTS

5,014,265 A *   5/1991  Hahne et al. .......... 370/236
5,210,829 A *   5/1993  Bitner ................ 710/57
5,414,704 A     5/1995  Spinney
5,488,412 A     1/1996  Majeti et al.
5,506,987 A     4/1996  Abramson et al.
5,581,736 A *  12/1996  Smith ................. 711/170
5,586,121 A    12/1996  Moura et al.
5,634,125 A *   5/1997  Li .................... 707/203
5,687,369 A *  11/1997  Li .................... 707/203
RE35,774 E      4/1998  Moura et al.
5,818,845 A    10/1998  Moura et al.
[number illegible] A *  Chen et al. .......... 707/10
[number illegible] A    Moura et al.
5,859,852 A     1/1999  Moura et al.
FOREIGN PATENT DOCUMENTS

WO    WO98/31107    7/1998
OTHER PUBLICATIONS

Cisco Systems, Inc., Release Notes for Cisco Cache Engine 500 Series, Software Version 2.1.0.

Primary Examiner: Jason Cardone
Assistant Examiner: Thomas Duong
(74) Attorney, Agent, or Firm: Beyer Weaver & Thomas LLP
(57) ABSTRACT

Methods and apparatus are described for intelligently assigning a portion of a cluster's traffic (e.g., buckets) to a cache system to minimize overloading of such cache system. In general terms, when a new cache system enters a cache cluster and/or starts up, the new cache system's full bucket allocation is not immediately assigned to the new cache system. Instead, only a portion of the full bucket allocation is initially assigned to the new cache system. In one embodiment, the new cache system's bucket assignment is gradually increased until the cache system is handling its full bucket allocation or it becomes overloaded. The cache system's load is also checked periodically to determine whether it has become overloaded. When the cache system becomes overloaded, buckets are immediately shed from the cache system. In sum, the new cache system's load is adjusted until it is handling an optimum number of buckets.

34 Claims, 6 Drawing Sheets
Petitioner Ex. 1004 Page 1
U.S. PATENT DOCUMENTS (Continued)

5,872,773 A     2/1999  Katzela et al.
5,892,903 A     4/1999  Klaus
5,946,047 A     8/1999  Levan
5,946,048 A     8/1999  Levan
5,950,205 A     9/1999  Aviani, Jr.
5,953,335 A     9/1999  Erimli et al.
5,956,346 A     9/1999  Levan
5,959,660 A     9/1999  Levan
5,959,968 A     9/1999  Chin et al.
5,959,997 A     9/1999  Moura et al.
5,989,060 A    11/1999  Coile et al.
6,006,266 A    12/1999  Murphy et al.
6,016,388 A     1/2000  Dillon
6,052,718 A     4/2000  Gifford
6,345,294 B1    2/2002  O'Toole et al.
6,370,614 B1*   4/2002  Teoman et al. ........ 711/113
6,385,642 B1*   5/2002  Chlan et al. ......... 709/203
6,405,256 B1*   6/2002  Lin et al. ........... 709/231
6,442,661 B1*   8/2002  Dreszer .............. 711/170
6,463,454 B1*  10/2002  Lumelsky et al. ...... 709/105
6,463,509 B1*  10/2002  Teoman et al. ........ 711/137
OTHER PUBLICATIONS

Eager et al., "Adaptive Load Sharing in Homogeneous Distributed Systems," IEEE Transactions on Software Engineering, vol. SE-12, No. 5, May 1986, pp. 662-675.
Akamai Technologies, Inc., Global Internet Content Delivery, "How FreeFlow Works," webmaster@akamai.com, 1999-2000.
Digital Island, Inc., e-Business Without Limits, "Enabling Technologies," http://www.digisle.net. No date.
Internap, "Preferred Collocation Services," http://www.internap.com, Copyright © 2001 Internap Network Services Corporation.
Meyer, et al., Request For Comments No. 2026, entitled "Generic Routing Encapsulation (GRE)," Jan. 2000, Internet Engineering Task Force, 9 pages.
Mockapetris, P., Request For Comments No. 1034, entitled "Domain Names - Concepts and Facilities," Nov. 1987, Internet Engineering Task Force, 31 pages.
Information Sciences Institute, Request for Comments No. 793, entitled "Transmission Control Protocol - DARPA Internet Program - Protocol Specification," Sep. 1981, Internet Engineering Task Force, 49 pages.
David M. Gifford, "Replica Routing," U.S. Appl. No. 09/472,964, filed Dec. 28, 1999, 37 pages.
Johnson et al., "Dynamic Server Organization," U.S. Appl. No. 09/294,837, filed Apr. 19, 1999, 42 pages.
Lu et al., "Automatic Network Addresses Assignment and Translation Interference," U.S. Appl. No. 60/160,535, filed Oct. 20, 1999, 127 pages.
Lu et al., "Method and Apparatus for Automatic Network Address Assignment," U.S. Appl. No. 60/178,063, filed Jan. 24, 2000, 74 pages.
Johnson et al., "Method and Apparatus for Determining a Network Topology in the Presence of Network Address Translation," U.S. Appl. No. 60/178,062, filed Jan. 24, 2000, 32 pages.
Toole et al., "Fast-Changing Network Status and Load Monitoring and Feedback," U.S. Appl. No. 60/177,985, filed Jan. 25, 2000, 20 pages.
Kirk Johnson, "A Method and Apparatus for Minimalist Approach to Implementing Server Selection," U.S. Appl. No. 60/177,415, filed Jan. 21, 2000, 39 pages.

* cited by examiner
`

U.S. Patent    Jun. 27, 2006    Sheet 1 of 6    US 7,069,324 B1

[FIG. 1: network diagram of cache clusters, showing client machines, routers, cache systems, and a destination platform; the figure labels are rotated in this scan and not further recoverable.]
`

U.S. Patent    Jun. 27, 2006    Sheet 2 of 6    US 7,069,324 B1

[FIG. 2: flowchart of a bucket assignment process (200) for a CS joining a cluster. CS joins cluster; new CS announces presence (202); determine full bucket allocation for each CS (204); is cluster at max load? (208); if so, assign full bucket allocation to each cluster CS; otherwise, assign buckets to new CS using a slow-start technique; then assign load to each cluster CS using a tracking technique; end.]

`
U.S. Patent    Jun. 27, 2006    Sheet 3 of 6    US 7,069,324 B1

[FIG. 3: flowchart of the slow-start procedure (210). Assign half of full bucket allocation to new CS (302); is current bucket count assigned/shed less than or equal to 1? (304); wait a predetermined time; is new CS overloaded? If overloaded: were buckets shed? If so, shed half of the last buckets previously shed from new CS (309); if not overloaded and no buckets were shed, assign half of remaining buckets to new CS; if not overloaded and buckets were shed (316), assign half of previously shed buckets to new CS.]
`

U.S. Patent    Jun. 27, 2006    Sheet 4 of 6    US 7,069,324 B1

[FIG. 4: flowchart of tracking mode (212). Wait a predetermined time; is any cluster CS overloaded? If so, shed 1 bucket from the overloaded cluster CS; is any cluster CS underloaded? Is the underloaded CS assigned its full bucket allocation? If not, add 1 bucket to the underloaded CS.]
`

U.S. Patent    Jun. 27, 2006    Sheet 5 of 6    US 7,069,324 B1

[FIG. 5: flowchart (500) of bucket reassignment when a CS leaves a cluster: determine a new full bucket allocation for each CS; assign buckets to each remaining CS using the tracking technique.]
`

U.S. Patent    Jun. 27, 2006    Sheet 6 of 6    US 7,069,324 B1

[FIG. 6: diagrammatic representation of a router; the block labels (e.g., processor, memory, interfaces) are rotated in this scan.]

METHODS AND APPARATUS
SLOW-STARTING A WEB CACHE SYSTEM

BACKGROUND OF THE INVENTION
The present invention relates to transmission of data in a network environment. More specifically, the present invention relates to methods and apparatus for redirecting network traffic. Still more specifically, techniques are described herein for redirecting packet flows from a device that does not own the flows.

Generally speaking, when a client platform communicates with some remote server, whether via the Internet or an intranet, it crafts a data packet which defines a TCP connection between the two hosts, i.e., the client platform and the destination server. More specifically, the data packet has headers which include the destination IP address, the destination port, the source IP address, the source port, and the protocol type. The destination IP address might be the address of a well known World Wide Web (WWW) search engine such as, for example, Yahoo, in which case the protocol would be TCP and the destination port would be port 80, a well known port for http and the WWW. The source IP address would, of course, be the IP address for the client platform and the source port would be one of the TCP ports selected by the client. These five pieces of information define the TCP connection.

Given the increase of traffic on the World Wide Web and the growing bandwidth demands of ever more sophisticated multimedia content, there has been constant pressure to find more efficient ways to service data requests than opening direct TCP connections between a requesting client and the primary repository for the desired data. Interestingly, one technique for increasing the efficiency with which data requests are serviced came about as the result of the development of network firewalls in response to security concerns. In the early development of such security measures, proxy servers were employed as firewalls to protect networks and their client machines from corruption by undesirable content and unauthorized access from the outside world. Proxy servers were originally based on Unix machines because that was the prevalent technology at the time. This model was generalized with the advent of SOCKS, which was essentially a daemon on a Unix machine. Software on a client platform on the network protected by the firewall was specially configured to communicate with the resident daemon, which then made the connection to a destination platform at the client's request. The daemon then passed information back and forth between the client and destination platforms, acting as an intermediary or "proxy".

Not only did this model provide the desired protection for the client's network, it gave the entire network the IP address of the proxy server, therefore simplifying the problem of addressing of data packets to an increasing number of users. Moreover, because of the storage capability of the proxy server, information retrieved from remote servers could be stored rather than simply passed through to the requesting platform. This storage capability was quickly recognized as a means by which access to the World Wide Web could be accelerated. That is, by storing frequently requested data, subsequent requests for the same data could be serviced without having to retrieve the requested data from its original remote source. Currently, most Internet service providers (ISPs) accelerate access to their web sites using proxy servers.

Unfortunately, interaction with such proxy servers is not transparent, requiring each end user to select the appropriate proxy configuration in his or her browser to allow the browser to communicate with the proxy server. For the large ISPs with millions of customers there is significant overhead associated with handling tech support calls from customers who have no idea what a proxy configuration is. Additional overhead is associated with the fact that different proxy configurations must be provided for different customer operating systems. The considerable economic expense represented by this overhead offsets the benefits derived from providing accelerated access to the World Wide Web.

Another problem arises as the number of WWW users increases. That is, as the number of customers for each ISP increases, the number of proxy servers required to service the growing customer base also increases. This, in turn, presents the problem of allocating packet traffic among multiple proxy servers.

Network caching represents an improvement over the proxy server model. Network caching is transparent to end users, high performance, and fault tolerant. By altering the operating system code of an existing router, the router is enabled to recognize and redirect data traffic having particular characteristics such as, for example, a particular protocol intended for a specified port (e.g., TCP with port 80), to one or more network caches connected to the router via an interface having sufficient bandwidth. If there are multiple caches connected to the cache-enabled router, the router selects from among the available caches for a particular request based on the destination IP address specified in the packet.

The network cache to which the request is re-routed "spoofs" the requested destination platform and accepts the request on its behalf via a standard TCP connection established by the cache-enabled router. If the requested information is already stored in the cache, it is transmitted to the requesting platform with a header indicating its source as the destination platform. If the requested information is not in the cache, the cache opens a direct TCP connection with the destination platform, downloads the information, stores it for future use, and transmits it to the requesting platform. All of this is transparent to the user at the requesting platform, which operates exactly as if it were communicating with the destination platform. Thus, the need for configuring the requesting platform to suit a particular proxy configuration is eliminated along with the associated overhead. An example of such a network caching technique is embodied in the Web Cache Coordination Protocol (WCCP) provided by Cisco Systems, Inc., a specific embodiment of which is described in copending, commonly assigned, U.S. patent application Ser. No. 08/946,867 for METHOD AND APPARATUS FOR FACILITATING NETWORK DATA TRANSMISSIONS filed Oct. 8, 1997, the entirety of which is incorporated herein by reference for all purposes.

Each cache system has a particular capacity. For example, a cache system may be configured to handle four buckets of traffic. A bucket is generally defined as 1/256th of the total amount of traffic (e.g., IP address space) being handled by a particular group of associated cache systems (commonly referred to as a "cache cluster" or "cache farm"). For example, each bucket represents 1/256th of the IP addresses of web servers being spoofed by the cache systems within a particular cache cluster. Conventionally, the buckets are evenly apportioned between the cache systems of a cache cluster. Unfortunately, the capacity may vary from cache system to cache system. When the particular cache cluster has a relatively large amount of traffic (e.g., a fat pipe) and a cache system's capacity is less than its assigned load, the cache system may become quickly overwhelmed. Also,
when the number of cache systems within a cluster is reduced, for example, to a single cache system, the remaining cache system may become overwhelmed when it is assigned the full 256 buckets.

When a particular cache system becomes overloaded, the network traffic may become disrupted as the cache system fails to handle the traffic in a timely manner. For example, the cache system may block traffic for a minute or more when it becomes overwhelmed with more packets than even its bypass mechanism can handle. As a result, the cache system may become a bottleneck for the cache cluster's traffic. Therefore, there is a need for improving traffic handling procedures within a cache cluster so that occurrences of cache system overload are minimized.
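To make the bucket notion above concrete, the division of traffic into 256 buckets can be sketched as follows. This is only an illustration: the patent does not specify the mapping function, and the modulo-based hash, the function names, and the example two-cache assignment below are all assumptions.

```python
# Illustrative sketch: mapping a destination IP address to one of the
# 256 traffic buckets described above, then to the cache system that
# currently owns that bucket. The modulo hash is an assumption; the
# patent does not prescribe a particular bucket function.
import ipaddress

NUM_BUCKETS = 256  # a bucket is 1/256th of the cluster's traffic


def bucket_for(dest_ip: str) -> int:
    """Map a destination IP address to a bucket index in [0, 255]."""
    return int(ipaddress.ip_address(dest_ip)) % NUM_BUCKETS


def cache_for(dest_ip: str, assignment: dict) -> str:
    """Look up which cache system handles this destination address."""
    return assignment[bucket_for(dest_ip)]


# Hypothetical assignment: buckets 0-127 on cache "112a", 128-255 on "112b".
assignment = {b: ("112a" if b < 128 else "112b") for b in range(NUM_BUCKETS)}
```

Under this sketch, all packets destined for a given server always reach the same cache, which is what lets a cache usefully accumulate that server's objects.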
SUMMARY OF THE INVENTION
Accordingly, the present invention provides an apparatus and method for intelligently assigning a portion of a cluster's traffic (e.g., buckets) to a cache system to minimize overloading of such cache system. In general terms, when a new cache system enters a cache cluster and/or starts up, the new cache system's full bucket allocation is not immediately assigned to the new cache system. Instead, only a portion of the full bucket allocation is initially assigned to the new cache system. Thus, the new cache system is less likely to be immediately overwhelmed as it enters a cache cluster.

In one embodiment, the new cache system's bucket assignment is gradually increased until the cache system is handling its full bucket allocation or it becomes overloaded. The cache system's load is also checked periodically (e.g., every 30 seconds) to determine whether it has become overloaded. When the cache system becomes overloaded, buckets are immediately shed from the cache system. As a result, if the new cache becomes overloaded, it is unlikely to remain overloaded for a significant period of time. Thus, the new cache system is unlikely to cause a bottleneck for the cluster's network traffic. In sum, the new cache system's load is adjusted until it is handling an optimum number of buckets (e.g., the cache is not underloaded or overloaded). In other embodiments, each cache system's load within the cache cluster continues to be monitored and adjusted so as to facilitate efficient use of each cache system.

In one aspect, the invention pertains to a method for assigning traffic buckets to a cache system. When a new cache system starts up in a cache cluster having a plurality of total buckets, a full bucket allocation is determined for the new cache system. Buckets are assigned to the new cache system using a first technique when the cache cluster is not operating at a maximum load. Buckets are assigned to the new cache system using a second technique that differs from the first technique. The second technique is performed after the first technique. Preferably, the full bucket allocation is assigned to the new cache system when the cache cluster is operating at a maximum load.

In one implementation, the first technique includes periodically monitoring a load of the new cache system. When the new cache system is overloaded, a minimum number of buckets are shed from the new cache system. When the new cache system is underloaded, the minimum number of buckets are added to the new cache system. In a more specific implementation, the minimum number equals a single bucket.

In a specific implementation, the second technique is performed until the full allocation has been assigned to the new cache system or a minimum number of buckets have been added to or shed from the new cache system. In this implementation, a portion of the full bucket allocation is initially assigned to the new cache system. When the new cache system is overloaded and when no buckets have been previously shed, a portion of the assigned buckets are periodically shed from the new cache system. When the new cache system is overloaded and when buckets have been previously shed, a portion of a number of buckets that were previously shed are periodically shed from the new cache system. When the new cache system is not overloaded and when no buckets have been previously shed, a portion of the unassigned buckets are periodically assigned to the new cache system. When the new cache system is not overloaded and when buckets have been previously shed, a portion of a number of buckets that were previously shed are periodically assigned to the new cache system. In a more specific embodiment, the portion of the number of buckets that were previously shed from the new cache system, the portion of the unassigned buckets, and the portion of the assigned buckets are equal to a half portion.

In another implementation of the above method, when an existing cache system leaves the cache cluster or shuts down, a new bucket allocation is determined for each of the remaining cache systems and buckets are assigned to the remaining cache system using the first technique.

In another embodiment, the invention pertains to a computer system operable to assign traffic buckets to a cache system. The computer system includes a memory and a processor coupled to the memory. The memory and the processor are adapted to provide at least some of the above described method operations. In yet a further embodiment, the invention pertains to a computer program product for assigning traffic buckets to a cache system. The computer program product has at least one computer readable medium and computer program instructions stored within the at least one computer readable product configured to cause a processing device to perform at least some of the above described method operations.

These and other features and advantages of the present invention will be presented in more detail in the following specification of the invention and the accompanying figures which illustrate by way of example the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a network diagram illustrating cache clusters according to a specific embodiment of the present invention;
FIG. 2 is a flowchart representing a bucket assignment process for a cache system (CS) that is joining a cluster or starting up in accordance with one embodiment of the present invention;
FIG. 3 is a flowchart illustrating the slow-start procedure of FIG. 2 in accordance with one embodiment of the present invention;
FIG. 4 is a flowchart illustrating the tracking procedure of FIG. 2 in accordance with one embodiment of the present invention;
FIG. 5 is a flowchart illustrating a procedure for assigning buckets when a CS leaves a cluster in accordance with one embodiment of the present invention;
FIG. 6 is a diagrammatic representation of a router in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Reference will now be made in detail to a specific embodiment of the invention. An example of this embodiment is illustrated in the accompanying drawings. While the invention will be described in conjunction with this specific embodiment, it will be understood that it is not intended to limit the invention to one embodiment. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the
invention as defined by the appended claims. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.

FIG. 1 is a simplified network diagram which will be used in conjunction with the flowcharts of FIGS. 2 and 3 to describe a specific embodiment of the present invention. According to this embodiment, a plurality of client machines 102 which are resident on a local area network (LAN) 104 communicate via router 106 and wide area network (WAN) 108, e.g., the Internet, with server 110. Of course, some or all of the clients 102 may communicate with the router 106 through various other configurations, rather than through a LAN. For example, a client may be coupled directly to the router 106 or there may be one or more intermediate routers between a client 102 and the router 106.

As discussed above, the router 106 may direct certain traffic, e.g., destined for port 80, to a cache system, such as 112a, which is configured to "spoof" server 110. If there are multiple caches connected to the cache-enabled router, the router selects from among the available caches for a particular request based on the destination IP address specified in the packet. For example, a first set of destination IP addresses may be assigned to cache system 112a; a second set of IP addresses to cache system 112b; and a third set of IP addresses to cache system 112c.

Before sending the packet to one of its associated cache systems, the cache-enabled router 106 encapsulates the packet for transmission to the selected cache system by adding another IP header which designates the router as the source of the packet and the cache system as the destination. That is, the router encapsulates the packet for transmission to a cache system which might be several "hops" away. So, for example, router 106 might encapsulate the packet for transmission to cache system 112d which is connected to router 106 via router 114. Thus, not only may multiple cache systems be associated with a particular router, but multiple routers may be supported by an individual cache system or a group of cache systems. This allows a tremendous amount of flexibility in where the cache system and router need to be in relation to each other.
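The encapsulation step just described can be sketched as follows. The structure and field names below are illustrative assumptions, not the actual WCCP or GRE packet format: the point is only that an outer header redirects the packet to the cache while the inner packet rides along unmodified.

```python
# Illustrative sketch of the router's encapsulation step: wrap the
# intercepted packet in a new outer header naming the router as source
# and the selected cache system as destination, so the inner packet can
# cross several hops untouched. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class Packet:
    src: str            # source IP address in this header
    dst: str            # destination IP address in this header
    inner: object = None  # encapsulated packet, if any


def encapsulate(pkt: Packet, router_ip: str, cache_ip: str) -> Packet:
    """Wrap pkt in an outer header addressed from the router to the cache."""
    return Packet(src=router_ip, dst=cache_ip, inner=pkt)


def decapsulate(pkt: Packet) -> Packet:
    """Recover the original packet at the cache system."""
    return pkt.inner
```

A usage sketch: the router receives a packet for a spoofed web server, wraps it for a cache two hops away, and the cache unwraps it to see the original destination it must spoof.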
The selected cache system 112a responds to a request from a client 102 to obtain objects from destination platform 110. The cache system 112a either retrieves objects from destination platform 110 to then present to one of the clients or retrieves objects from its own cache (which objects were previously retrieved from the destination platform 110).

It will be understood that the network cache systems described herein may employ any of a variety of existing file systems and remain within the scope of the invention. For example, the invention may be implemented using a Unix general purpose file system or the equivalent. A particular embodiment of the invention employs the file system described in commonly assigned U.S. Pat. No. 5,950,205 for DATA TRANSMISSION OVER THE INTERNET USING A CACHE MEMORY FILE SYSTEM issued on Sep. 7, 1999, the entire specification of which is incorporated herein by reference for all purposes.

In the illustrated embodiment, cache systems 112a, 112b, 112c, and 112d form a cache cluster or farm 120 and cache system 112e forms a cache cluster 122. Traffic is typically allocated to each cache system within the same cache cluster. Traffic may be allocated based on any suitable factor.
In the illustrated embodiment, traffic is allocated based on IP destination address. That is, each cache system is assigned to handle requests for objects from a particular set of destination addresses. The present invention provides mechanisms for intelligently assigning buckets to each cache system within a cluster so that the cache system is not immediately overwhelmed by object requests, for example. Any suitable mechanism may be utilized for preventing cache system overload. For example, buckets may be slowly assigned to a new cache system upon startup. Additionally, the cache systems may be continuously monitored for overload, as well as underload, and the bucket assignment is then appropriately adjusted. As one or more cache systems are pulled from a cluster or shut down, buckets can also be intelligently re-assigned to minimize traffic bottlenecks.
FIG. 2 is a flowchart representing a bucket assignment process 200 for a cache system (CS) that is joining a cluster or starting up in accordance with one embodiment of the present invention. Initially, the new CS announces its presence in operation 202 to the other CS's and/or router(s) of the cluster. In response to this announcement, the full bucket allocation for each CS is then determined in operation 204. This bucket allocation procedure may be implemented in any of the routers or CS's associated with the cluster. When allocation is implemented within a CS, that CS is commonly referred to as the "lead CS." The CS with the lowest assigned IP address typically functions as the lead CS.
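The lead-CS convention noted above (lowest assigned IP address) can be illustrated with a short sketch. The function name and the use of Python's ipaddress module for numeric comparison are assumptions for illustration; the point is that the comparison must be numeric, since a plain string comparison would order "10.0.0.100" before "10.0.0.20".

```python
# Illustrative sketch: electing the lead CS as the cache system with
# the numerically lowest IP address, per the convention described above.
import ipaddress


def elect_lead(cs_ips):
    """Return the IP of the CS that will perform bucket allocation."""
    return min(cs_ips, key=lambda ip: int(ipaddress.ip_address(ip)))
```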
In one embodiment, the buckets are evenly divided among the cache systems of the cluster. For the cache cluster 120, 64 buckets (i.e., 256/4) are allocated for each CS since there are four CS's (112a-112d). On the other hand, since there is currently only a single CS 112e within the cluster 122, 256 buckets are allocated for CS 112e. When a new CS is associated with a particular cluster, buckets from existing CS's are allocated to the new CS in a roughly even manner, i.e., about the same number from each. The router may also attempt to preserve the utility of data already stored in the existing CS's while filling up the new caching engine with new information. Buckets may also be assigned based on any suitable load balancing techniques. That is, buckets are assigned so that traffic is evenly distributed among the CS's of the cluster. For example, the high traffic buckets would not be assigned to the same CS. A weighted load balancing assignment technique is further described below.
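The even division described above (e.g., 256/4 = 64 buckets per CS in cluster 120, and all 256 for the lone CS in cluster 122) can be sketched as follows. The remainder-spreading rule for cluster sizes that do not divide 256 evenly is an assumption for illustration; the patent only describes the even case.

```python
# Illustrative sketch of computing each CS's full bucket allocation by
# evenly dividing the 256 buckets, spreading any remainder one bucket
# at a time across the first few cache systems (an assumed tie-break).
def full_allocations(num_cs: int, total: int = 256) -> list:
    """Return the full bucket allocation for each of num_cs cache systems."""
    base, extra = divmod(total, num_cs)
    return [base + (1 if i < extra else 0) for i in range(num_cs)]
```

For the clusters of FIG. 1 this yields 64 buckets apiece for four CS's and 256 for a single CS, matching the text above.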
After the full bucket allocation is determined for each CS, it is then determined whether the cluster is at maximum load in operation 208. If the cluster is currently handling a maximum load (e.g., prior to start-up of the new CS), the 256 buckets are already apportioned among the existing CS's in chunks that are manageable for the existing CS's. Since the new allocation for each CS is a lower number of buckets than the number of buckets currently assigned to the existing CS's, the new CS is likely to be able to handle its full allocation of buckets. Thus, if the cluster is at maximum load, the full bucket allocation is then assigned to each cluster CS in operation 214. Of course, it is assumed that the new CS does not have a radically lower capacity than the existing CS's. Typically, when one adds a CS to a cluster, the CS will be an upgraded, higher capacity CS. At the very least, the new CS has the same capacity as the other CS's in the cluster. Operations 208 and 214 may be skipped to allow for a more diverse arrangement of CS's with widely variable capacities.
If the cluster is not at maximum load, a relatively large number of buckets may not be assigned to the old CS's. In other words, a large number of buckets may be allocated to the new CS. Thus, the full bucket allocation may then be
gradually assigned to the new CS using a slow-start technique in operation 210 so as to not overload the new CS. In general terms, buckets are slowly assigned to the new CS until the full allocation is assigned or the new CS is overloaded. Buckets are then continuously assigned to each cluster CS using a tracking technique in operation 212. In tracking mode, the CS's are continuously monitored for overloading and underloading. The underloaded or overloaded CS's bucket assignment is then appropriately adjusted.
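One tracking-mode pass, as just described, can be sketched as follows. The load probe, the underload threshold, and the attribute names are assumptions for illustration, and the periodic wait between passes (see FIG. 4) is omitted; only the single-bucket adjustment per CS per pass comes from the text above.

```python
# Illustrative sketch of tracking mode: each cluster CS is checked and
# its bucket assignment is nudged by a single bucket per pass. The
# thresholds and the CacheSystem fields are assumptions.
class CacheSystem:
    def __init__(self, full_allocation, assigned, load, capacity):
        self.full_allocation = full_allocation  # buckets it should own
        self.assigned = assigned                # buckets it owns now
        self.load = load                        # observed load
        self.capacity = capacity                # load it can sustain

    def overloaded(self):
        return self.load > self.capacity

    def underloaded(self):
        return self.load < 0.8 * self.capacity  # assumed threshold


def tracking_pass(cluster):
    """One periodic pass: shed or add a single bucket per CS as needed."""
    for cs in cluster:
        if cs.overloaded() and cs.assigned > 0:
            cs.assigned -= 1    # shed one bucket from the overloaded CS
        elif cs.underloaded() and cs.assigned < cs.full_allocation:
            cs.assigned += 1    # add one bucket toward the full allocation
```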
FIG. 3 is a flowchart illustrating the slow-start procedure 210 of FIG. 2 in accordance with one embodiment of the present invention. For exemplary purposes, CS 112d has entered cluster 120, which already has existing CS's 112a, 112b, and 112c. When the slow-start procedure 210 is implemented for new CS 112d, half of the full bucket allocation is assigned to the new CS 112d in operation 302. In this example, the full allocation is 64 buckets; therefore, CS 112d is initially assigned 32 buckets.

It is then determined whether the current bucket count that was assigned/shed is less than or equal to one bucket in operation 304. If the current bucket count assigned/shed is less than or equal to one, then slow-start mode is exited. Tracking mode is then implemented on the new CS 112d, as well as the other CS's (112a through 112c). In the illustrated tracking mode embodiment, the CS is monitored for underload or overload and a single bucket may then be shed/assigned accordingly. Of course, tracking mode may be entered upon reaching any other suitable bucket count that was assigned or shed.

If the count is not less than one, the slow-start mode is not exited (and the tracking mode is not entered). Instead, the process waits for a predetermined time.
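Taken together, the halving steps of the slow-start procedure behave like a binary search for a bucket count the new CS can sustain. The sketch below is an illustration under assumptions (an is_overloaded probe in place of real load measurement, no waiting between probes), not the patented implementation; it returns the count at which the procedure would hand off to tracking mode, which then adjusts by single buckets.

```python
# Illustrative sketch of the slow-start procedure of FIG. 3: start at
# half the full allocation, then repeatedly assign or shed half of the
# relevant pool (remaining buckets, or buckets previously shed) until
# the adjustment size reaches a single bucket.
def slow_start(full_allocation, is_overloaded):
    """Return the bucket count settled on before entering tracking mode."""
    assigned = full_allocation // 2      # operation 302: assign half
    last_shed = 0                        # size of the most recent shed
    while True:
        if is_overloaded(assigned):
            # Shed half of the assigned buckets, or half of what was
            # last shed if a shed has already occurred.
            count = (assigned if last_shed == 0 else last_shed) // 2
            assigned -= count
            last_shed = count
        else:
            # Assign half of the remaining buckets, or half of the
            # previously shed buckets.
            pool = (full_allocation - assigned) if last_shed == 0 else last_shed
            count = pool // 2
            assigned += count
            if last_shed:
                last_shed = count
        if count <= 1:                   # operation 304: hand off to tracking
            return assigned
```

For example, with a 64-bucket full allocation and a CS that is never overloaded, the assignment grows 32, 48, 56, 60, 62, 63 before tracking mode tops off the final bucket, which is consistent with the text's note that slow-start ends once the adjustment reaches a single bucket.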
