United States Patent [19]
Aggarwal et al.

US006012126A

[11] Patent Number: 6,012,126
[45] Date of Patent: Jan. 4, 2000

[54] SYSTEM AND METHOD FOR CACHING OBJECTS OF NON-UNIFORM SIZE USING MULTIPLE LRU STACKS PARTITIONED INTO A RANGE OF SIZES

[75] Inventors: Charu Chandra Aggarwal, Ossining, N.Y.; Marina Aleksandrovna Epelman, Cambridge, Mass.; Joel Leonard Wolf, Katonah; Philip Shi-lung Yu, Chappaqua, both of N.Y.

[73] Assignee: International Business Machines Corporation, Armonk, N.Y.

[21] Appl. No.: 08/741,412
[22] Filed: Oct. 29, 1996

[51] Int. Cl.7 .......... G06F 12/00
[52] U.S. Cl. .......... 711/133; 711/134; 711/136; 711/129; 711/132; 709/203
[58] Field of Search .......... 711/113, 118, 132, 133, 119, 122, 129, 135, 136, 134, 159, 160; 709/203
[56] References Cited

U.S. PATENT DOCUMENTS

4,503,501   3/1985  Coulson et al. .......... 711/129
5,606,688   2/1997  McNutt et al. .......... 711/170
5,809,250   9/1998  Kisor .......... 395/200.57
5,842,216  11/1998  Anderson et al. .......... 395/200.33
OTHER PUBLICATIONS

Marc Abrams, Charles Standridge, Ghaleb Abdulla, Stephen Williams, and Edward Fox, "Caching Proxies: Limitations and Potentials", Oct. 1995.
Hypertext Transfer Protocol, HTTP/1.1, J. Gettys, J. Mogul (DEC), H. Frystyk, T. Berners-Lee (MIT/LCS), Jan. 1997, http://www.pmg.lcs.mit.edu.
Timos K. Sellis, "Intelligent Caching And Indexing Techniques For Relational Database Systems", Inform. Systems, vol. 13, no. 2, pp. 175-185, 1988.
Web Server Technology, The Advanced Guide for World Wide Web Information Providers, N. Yeager, et al., pp. 200-201.
Abrams M., Standridge et al., "Caching Proxies: Limitations and Potentials", Fourth International World Wide Web Conference Proceedings, p. 119, (1995).
Abrams M., Standridge et al., "Caching Proxies: Limitations and Potentials", Fourth International World Wide Web Conference Proceedings, pp. 120-133, (1995).
Primary Examiner: Eddie P. Chan
Assistant Examiner: Kimberly N. McLean
Attorney, Agent, or Firm: Kevin M. Jordan
[57] ABSTRACT

A system and method for caching objects of non-uniform size. A caching logic includes a selection logic and an admission control logic. The admission control logic determines whether an object not currently in the cache may be cached at all when it is accessed. The admission control logic uses an auxiliary LRU stack which contains the identities and time stamps of the objects which have been recently accessed. Thus, the memory required is relatively small. The auxiliary stack serves as a dynamic popularity list, and an object may be admitted to the cache if and only if it appears on the popularity list. The selection logic selects one or more of the objects in the cache which have to be purged when a new object enters the cache. The order of removal of the objects is prioritized based both on the size as well as the frequency of access of the object, and may be adjusted by a time-to-obsolescence (TTO) factor. To reduce the time required to compare the space-time product of each object in the cache, the objects may be classified in ranges having geometrically increasing intervals. Specifically, multiple LRU stacks are maintained independently, wherein each LRU stack contains only objects in a predetermined range of sizes. In order to choose candidates for replacement, only the least recently used objects in each group need be considered.

37 Claims, 7 Drawing Sheets
[Front-page figure: caching-logic flowchart excerpt, including the step "Select tentative purges P1, P2, ... Pk (selection)" (see FIG. 2).]

Petitioner Apple Inc. - Exhibit 1069, p. 1
[Sheet 1 of 7 — FIG. 1A: multiple clients 95 connect through a network 130 to a server 100. The server's main memory 150 holds the multiple LRU stacks, the auxiliary stack 125, the disk layout 115, and the caching logic 160; the main cache 110 and disk 105 hold the cached objects.]
[Sheet 2 of 7 — FIG. 1B: four LRU stacks 170, 175, 180, and 185, holding objects of size 1, sizes 2 to 3, sizes 4 to 7, and sizes 8 to infinity, respectively; items 191-194 mark the least recently used object in each stack.]
[Sheet 3 of 7 — FIG. 2: caching-logic flowchart. Step 500: i=1. Step 505: object O(i) is demanded from cache. Step 510: is O(i) in the cache? Step 530: return O(i) from cache. Step 515: select tentative purges P1, P2, ... Pk (selection). Step 520: admission control; is an exchange of O(i) and P1...Pk viable? Step 525: bring in O(i), purge P1...Pk. Step 535: i=i+1. Step 540: add identity and timestamp of O(i) to auxiliary (popularity) stack using LRU rules.]
[Sheet 4 of 7 — FIG. 3: selection-logic flowchart. Step 550: select the object in the main cache not yet selected which has the largest product of size and time since last access, taking into account time to obsolescence (TTO). Step 560: do the selected objects constitute at least S units of space? If yes, step 570: the picked objects form the tentative purges P1...Pk.]
[Sheet 6 of 7 — FIG. 5: selection flowchart using multiple LRU stacks. Step 610: j=1, Z=0. Step 615: if Z is at least S, output. Step 620: pick the least recently used object in each of the stacks and, among this set of objects, pick the one with the largest value of the product of size and time since last access, while taking into account time to obsolescence (TTO). Step 625: Z = Z + size of chosen object. Step 630: Pj = current selected object. Step 635: j = j + 1. Step 640: output.]
SYSTEM AND METHOD FOR CACHING OBJECTS OF NON-UNIFORM SIZE USING MULTIPLE LRU STACKS PARTITIONED INTO A RANGE OF SIZES
FIELD OF THE INVENTION

The present invention relates to the caching of objects of non-uniform size. A particular aspect of the present invention is applicable to the caching of objects on the World Wide Web.

Glossary of Terms

While dictionary meanings are also implied by certain terms used herein, the following glossary of some terms may be useful.

Internet
The network of networks and gateways that use the TCP/IP suite of protocols.

Client
A computer which issues commands to the server which performs the task associated with the command.

Server
Any computer that performs a task at the command of another computer is a server. A Web server typically supports one or more clients.

World Wide Web (WWW or Web)
The Internet's application that lets people seeking information on the Internet switch from server to server and database to database by clicking on highlighted words or phrases of interest (hyperlinks). An Internet WWW server supports clients and provides information. The Web can be considered as the Internet with all of the resources addressed as URLs and which uses HTML to display the information corresponding to URLs and provide a point-and-click interface to other URLs.

Universal Resource Locator (URL)
A way to uniquely identify or address information on the Internet. Can be considered to be a Web document version of an e-mail address. They can be accessed with a hyperlink. An example of a URL is "http://www.philyu.com:80/table.html". A URL has four components. Starting from the left, the first specifies the protocol to use, separated from the rest of the locator by a ":". Next is the hostname or IP address of the target host; this is delimited by the "//" on the left and on the right by a "/" or optionally a ":". The port number is optional, and is delimited on the left from the hostname by a ":" and on the right by a "/". The fourth component is the actual file name or program name. In this example, the ".html" extension means that this is an HTML file.

HyperText Markup Language (HTML)
HTML is the language used by Web servers to create and connect documents that are viewed by Web clients. HTML uses hypertext documents.

Hypertext Transfer Protocol (http)
http is an example of a stateless protocol, which means that every request from a client to a server is treated independently. The server has no record of previous connections. At the beginning of a URL, "http:" indicates the file contains hyperlinks.

Internet Browser or Web browser
A graphical interface tool that runs Internet protocols such as http, and displays results on the user's screen. The browser can act as an Internet tour guide, complete with pictorial desktops, directories and search tools used when a user "surfs" the Internet. In the present invention the Web browser is a client service which communicates with the World Wide Web.
Client Caches
A cache which is built into Web browsers, and may either store only the document accesses during the current invocation (nonpersistent cache such as Mosaic) or may cache documents across invocations.

Caching Proxies
Specialized servers in the network which act as agents on behalf of the client in order to locate the cached copy of a document, if possible. Usually caching proxies serve as secondary or higher-level caches, because they are concerned only with cache misses not available on client caches.

Http Daemon (httpd)
A server having Hypertext Markup Language and Common Gateway Interface capability. The HTTPD is typically supported by an access agent which provides the hardware connections to machines on the intranet and access to the Internet, such as TCP/IP couplings.
BACKGROUND

The recent increase in popularity of the World Wide Web (WWW or Web) has led to a considerable increase in the amount of traffic over the Internet. As a result, the Web has now become one of the primary bottlenecks on network performance. When documents are requested by a user who is connected to a server via a slow network link, there can be noticeable latency at the user end. Further, transferring the document over the network leads to an increase in the level of traffic over the network. This reduces the bandwidth available for other requests. In order to reduce access latencies, it is desirable to store (cache) copies of popular documents closer to the user, from which the access latencies are more acceptable.

The cache can be implemented at various points on the network. For example, a large university or corporation may have its own local cache, from which all the users subscribing to that network may fetch documents. It is often desirable to implement specialized servers in the network, called caching proxies, which act as agents on behalf of the client in order to locate the cached copy of a document. Usually caching proxies serve as secondary or higher-level caches, because they are concerned only with misses not available on client caches. Client caches are built into the Web browsers, and may either store only the document accesses during the current invocation (nonpersistent cache such as Mosaic) or may cache documents across invocations.

One prior-art caching method is called the least recently used (LRU) method. In LRU, a list of the objects in the cache is maintained, ordered by recency of access. In other words, the most recently accessed object is the first in the list, while the least recently accessed object is at the bottom. When a new object is accessed, one or more objects may have to be pruned from the bottom of the list to make room in the cache for the most recently accessed object. Thus, this method does not take size into account in its cache replacement process.

Another important issue is admission control; i.e., when to allow an object to enter the cache at all. It may not always be favorable to insert an object into the cache, because doing so may lower the probability of a hit to the cache. For two objects which are equally frequently accessed, the cache hit ratio is maximized when the replacement policy is biased towards documents of smaller size. This is because it is possible to store a greater number of smaller-size documents. When deciding which documents to replace as a new object enters, both the relative frequency of access and the size of the objects should be taken into account.
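The prior-art LRU scheme described above can be sketched in a few lines of Python (an illustrative sketch only, not part of the patent disclosure; the class and method names are ours, and capacity is counted in objects precisely because plain LRU ignores size):

```python
from collections import OrderedDict

class SimpleLRUCache:
    """Prior-art LRU sketch: evicts the least recently accessed object,
    ignoring object size entirely."""

    def __init__(self, capacity):
        self.capacity = capacity      # number of objects; size plays no role
        self.items = OrderedDict()    # most recently used entry at the end

    def access(self, key, value=None):
        if key in self.items:
            self.items.move_to_end(key)        # mark as most recently used
            return self.items[key]
        if value is not None:
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False) # prune least recently used
            self.items[key] = value
        return value
```

Accessing "a" again before inserting "c" is what saves it from eviction; the untouched "b" is pruned instead.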
In Abrams M., Standridge C. R., Abdulla G., Williams S., and Fox E. A., "Caching Proxies: Limitations and Potentials", Fourth International World Wide Web Conference Proceedings [ABRAMS], the authors discuss some cache replacement policies which do take size into account in the decision-making process. The following are two of the policies discussed:

(1) LRU thresholding (LRU-THOLD) policy: In this case, a threshold size is specified, above which no object may be cached. For the purpose of replacement, a pure LRU policy may be used. Thus, this scheme uses size for the purpose of admission control only, while the replacement policy still follows pure LRU replacement; i.e., when an object is to be admitted to the cache, the objects in the list which were the least recently accessed are pruned to make room for the incoming object.

(2) LRU-MIN policy: This policy tries to minimize the number of documents replaced. Let the size of the incoming object be S. If there are any objects in the cache which have a size of at least S, then remove such an object from the cache based on LRU order. On the other hand, if there are no objects with size at least S, then consider objects of size at least S/2, and so on, until enough free cache space has been created.

ABRAMS concludes that LRU-MIN is preferable over most workloads, since (unlike LRU-THOLD) there are no parameters to be chosen. It achieves generally better performance, even when optimal thresholds are used for the LRU-THOLD policy. In fact, the optimal value of the threshold to be used for the LRU-THOLD policy depends upon the nature and type of the underlying data, which may vary from one application to another.
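The LRU-MIN policy can be sketched as follows (a hypothetical helper; the list-of-pairs cache representation and the halving loop are our reading of the policy as summarized above, not code from ABRAMS):

```python
def lru_min_purge(cache, incoming_size):
    """LRU-MIN sketch. `cache` is a list of (name, size) pairs in LRU
    order (least recently used first). Returns the names purged to free
    at least `incoming_size` units, preferring few, large victims."""
    purged, freed = [], 0
    threshold = incoming_size
    while freed < incoming_size and threshold >= 1:
        # Scan in LRU order for objects at or above the current threshold.
        for name, size in cache:
            if size >= threshold and name not in purged:
                purged.append(name)
                freed += size
                if freed >= incoming_size:
                    break
        threshold //= 2        # no candidate large enough: halve and retry
    return purged
```

With one size-4 object available, an incoming size-3 object displaces just that one victim; when only size-1 objects exist, the threshold halves until several of them are taken in LRU order.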
SUMMARY

The present invention is directed to an improved computer system and method for caching objects of non-uniform size. The caching logic may be organized as two logical components, although it is clearly not so limited.

(1) The selection logic: When a new object enters the cache, one or more of the objects which are in the cache may have to be purged. Consequently, the object(s) to be purged must be selected. The order of removal of the objects is preferably prioritized based on object size, frequency of object access, and a time to obsolescence (TTO) of the object. According to one aspect of the present invention, the selection logic assigns the removal of each object a priority value which is the product of the object size and the time since last requested (the space-time product); the time since last requested can be considered to be an inverse of the frequency of access. The frequency of access may be adjusted by the TTO, which recognizes that the benefit of caching decreases as the object approaches obsolescence.

According to another aspect of the selection logic, the objects may be classified in ranges having geometrically increasing intervals. Specifically, multiple LRU stacks can be independently maintained, wherein each LRU stack contains only objects in a predetermined range of sizes. Thus, only the least recently used object in each stack need be considered by the selection logic, thereby eliminating the need to compare the space-time product of each object in the cache.
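The geometric size-range classification can be illustrated with a small helper that maps an object size to a stack index (the base-2 ranges and four-stack count mirror FIG. 1B, but the patent does not fix these constants; the helper is ours):

```python
def stack_index(size, num_stacks=4):
    """Map an object size to one of `num_stacks` LRU stacks whose size
    ranges double: sizes 1, 2-3, 4-7, and 8 and up for num_stacks=4.
    Sizes beyond the last range fall into the final, open-ended stack."""
    if size < 1:
        raise ValueError("size must be >= 1")
    # bit_length()-1 is floor(log2(size)), computed exactly on integers.
    return min(size.bit_length() - 1, num_stacks - 1)
```

A size-1 object lands in the first stack and a size-100 object in the last, so a replacement pass inspects at most four LRU heads instead of every cached object.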
(2) The admission control logic: When an object not currently in the cache is accessed, it may or may not be cached at all. The decision whether or not to cache the object is made by the admission control logic. The admission control logic preferably uses a popularity criterion for the objects accessed. The popularity criterion may be implemented through an auxiliary stack which contains the identities or URLs (universal resource locators) and time stamps of the objects which have been recently accessed. The auxiliary stack is preferably maintained in least recently used (LRU) order, and since it contains the identities of objects rather than the objects themselves, the memory required is relatively small. The auxiliary stack serves as a dynamic popularity list, and an object may be admitted to the cache if and only if it appears on the popularity list and satisfies certain other requirements which we shall discuss in the detailed description.
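The auxiliary popularity stack might be sketched as follows (identities and timestamps only, as the text describes; the fixed capacity and the rule for dropping old entries are our assumptions, since the summary does not specify when entries leave the stack):

```python
from collections import OrderedDict

class PopularityStack:
    """Auxiliary LRU stack mapping object identity (e.g. URL) to its
    last-access timestamp. Storing identities only keeps memory small."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # most recently seen at the end

    def record_access(self, url, timestamp):
        self.entries.pop(url, None)
        self.entries[url] = timestamp
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # drop least recently seen

    def is_popular(self, url):
        # An object may be admitted to the cache only if it appears here.
        return url in self.entries
```

With capacity 2, a third distinct access pushes the oldest identity off the popularity list, so a once-seen object must be requested again while still on the list to become cacheable.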
BRIEF DESCRIPTION OF THE DRAWINGS

For further features and advantages of the present invention, refer to the detailed description and accompanying drawings, wherein:

FIG. 1A illustrates a system having features of the present invention;
FIG. 1B depicts an example of multiple range-based LRU stacks in accordance with the present invention;
FIG. 2 depicts an example of a flowchart for the caching logic of FIG. 1A;
FIG. 3 depicts an example of a flowchart for the selection logic of step 515 in FIG. 2;
FIG. 4 depicts an example of an admission control flowchart in accordance with the present invention and is a more detailed description of step 520 of FIG. 2;
FIG. 5 depicts a flowchart of another version of the selection logic, using multiple LRU stacks; and
FIG. 6 depicts a flowchart of another version of the selection logic, for disk caching.
DETAILED DESCRIPTION

FIG. 1A depicts a computer system and network having features of the present invention. Those skilled in the art will appreciate that this architecture could support both disk caching via disk 105 as well as caching via main memory 150. Furthermore, although the present invention will be described in terms of a server cache, the caching logic can also (or alternatively) be implemented on the client 95 and/or a proxy server. In the case of a network 130 comprising the Web, a client 95 is running a conventional Web browser including a cache. As is conventional, the browser can communicate an http request for an object (such as a document, graphic, or program) by following hyperlinks or explicitly invoking a URL. Assume, by way of example only, that a requested document is not contained in the client cache. The request is then hyperlinked to the appropriate httpd server.

According to the present invention, a caching logic 160 uses cache meta-information contained in LRU stacks 120 to determine if the document is in the main cache 110. An example of the caching logic will be described with reference to FIGS. 2-6. An example of the LRU stacks 120 and their maintenance will be described with reference to FIG. 1B and FIG. 5, respectively. If disk caching is employed, meta-information available in a disk layout data structure 115 is similarly used to determine if the document is cached on disk 105. If found in the main cache 110 or cached on disk 105, the document is served therefrom to the requesting client 95. The auxiliary stack 125 is also updated, using LRU rules, with the identity and time stamp of the requested object.
If the document is not in the cache, then a method is needed for cache admission control (FIG. 4) and, if insufficient cache space exists, selection and replacement of cached objects (FIGS. 2-3 and 5-6). The main memory 150 contains several data structures which are relevant to the caching logic 160 in accordance with the present invention. The multiple LRU stacks 120 in the main memory 150 are used to implement the replacement logic policy. The LRU stacks 120 are illustrated in more detail in FIG. 1B.

By way of overview, when a new object enters the cache, the selection logic (FIGS. 2-3 and 5-6) determines which one or more of the objects in the cache should be purged. Preferably, the order of removal of the objects is prioritized based on the product of the object size and the time elapsed since last access. The time elapsed since the last access may be adjusted by a factor which accounts for time to obsolescence (TTO). The TTO factor recognizes that each object can become obsolete and thus the benefit of caching decreases with the TTO.

As depicted in FIG. 1B, the LRU stacks 120 may be classified in ranges having geometrically increasing size intervals. Thus, only the least recently used objects 191-194 in each range need be considered when choosing candidates for replacement. The multiple LRU stacks 170-185 can therefore advantageously reduce the number of space-time product comparisons of cached objects.

Returning again to FIG. 1A, the caching logic 160 preferably also includes admission control logic (to be described with reference to FIG. 4) which makes the ultimate decision whether to admit an object to the cache. The admission control logic of the present invention preferably uses a popularity factor for the objects accessed. The popularity factor is implemented by the auxiliary stack 125, which contains the identities or URLs and time stamps of objects which have been recently accessed. The auxiliary stack 125 is preferably maintained in LRU order, and contains the identities of objects (and not the objects themselves), thus reducing memory requirements.

FIG. 2 depicts an example of the caching logic 160 in accordance with the present invention. A discrete counter i may be used as a surrogate for a time stamp. Those skilled in the art will appreciate that a system clock would also suffice. In step 500, when the cache 110 is empty, the counter i is initialized to one. For each iteration of the loop formed by steps 500-540, the counter is incremented by one in step 535. Thus, in step 505, the object demanded in iteration i is O(i). In step 510, it is determined whether or not O(i) is in the cache. In step 530, if O(i) is in the cache, it is retrieved from the cache and returned to the client 95. If O(i) is not in the cache, then it is possible that a cache replacement will take place. In step 515, one or more objects P1 . . . Pk are tentatively selected for purging from the cache. Examples of the selection logic of step 515 will be described in more detail with reference to FIGS. 3, 5, and 6. Returning to FIG. 2, in step 520 the admission control logic checks whether an exchange of the object O(i) for the tentative purges P1 . . . Pk is indeed favorable. An example of the admission control logic will be discussed with reference to FIG. 4. Returning to FIG. 2, in step 525, if the admission control logic indicates that it is indeed favorable to implement the replacement, then the caching logic stores the object O(i) in the cache and purges P1 . . . Pk. In step 535, the discrete timestamp counter i is incremented by one. In step 540, the auxiliary stack 125 is updated with the identity and the time stamp of O(i). The stack 125 is preferably maintained using LRU rules.
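The FIG. 2 loop for a single request can be sketched as follows (the callable parameters stand in for the object fetch, the selection logic of step 515, the admission-control test of step 520, and the auxiliary-stack update of step 540; these signatures are hypothetical, not from the patent text):

```python
def handle_request(obj_id, cache, fetch, select_purges, admissible, aux_record):
    """One pass through the FIG. 2 caching loop. `cache` maps object
    identity to object. Returns (object, hit_flag)."""
    aux_record(obj_id)                  # step 540: popularity bookkeeping
    if obj_id in cache:                 # steps 510/530: cache hit
        return cache[obj_id], True
    obj = fetch(obj_id)                 # miss: retrieve from the origin
    purges = select_purges(obj_id)      # step 515: tentative purges P1...Pk
    if admissible(obj_id, purges):      # step 520: is the exchange favorable?
        for p in purges:                # step 525: purge and bring in O(i)
            cache.pop(p, None)
        cache[obj_id] = obj
    return obj, False
```

A miss whose exchange the admission control rejects still updates the auxiliary stack, so a repeat request can qualify later.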
FIG. 3 depicts an example of a flowchart for the selection logic of step 515 in FIG. 2. As depicted, in step 550, an object in the cache is selected that has the largest product of size and the time elapsed since the last access. The time since last access may be adjusted by a factor according to time to obsolescence (TTO). The TTO adjustment recognizes that the potential benefit of caching decreases with the TTO. In some cases, the Web page specifies the expected time to obsolescence. Generally, the time of last modification of a Web page is included in its header. The time since last modified can be used as an indication of the time between modifications of a Web page. A fraction of this time interval may be used to estimate the TTO. For a given TTO, the effectiveness of caching is reduced by a factor of (1 - 1/beta), where beta is the ceiling function of the TTO divided by the time since last accessed. In step 560, it is determined whether the current set of selected objects occupies at least S units of memory space. If not, the process returns to step 550 to select the next object. Otherwise, the current set of selected objects forms the tentative purges P1 . . . Pk.
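The TTO adjustment just described can be expressed as a small priority function. How the (1 - 1/beta) effectiveness factor enters the space-time product is our reading of the text, not a formula quoted from the patent; here a less effective object (effectiveness near zero) is given an arbitrarily large eviction priority:

```python
import math

def eviction_priority(size, t_since_access, tto):
    """Space-time eviction priority with a TTO adjustment: caching
    effectiveness shrinks by (1 - 1/beta), where
    beta = ceil(tto / t_since_access)."""
    beta = math.ceil(tto / t_since_access)
    effectiveness = 1.0 - 1.0 / beta
    if effectiveness == 0.0:
        return float("inf")   # beta == 1: nearly obsolete, evict first
    return size * t_since_access / effectiveness
```

With a long TTO (beta = 20) the priority is close to the plain space-time product; once the object has gone unaccessed for longer than its TTO, beta collapses to 1 and the object becomes the first eviction candidate.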
FIG. 4 depicts an example of the admission control logic in accordance with the present invention and is a more detailed description of step 520 of FIG. 2. An object of the admission control logic is to limit entry to the cache to objects which meet a popularity criterion. The admission control logic takes as its input the time stamps of the object O(i) and the tentative purges P1 . . . Pk which were selected by the selection logic. As discussed, the selection logic employs the auxiliary stack 125, which contains only time stamps and URLs (identities) of the objects rather than the actual objects themselves. It is assumed that an initial check has determined that the object O(i) is in the stack 125. If not, then the object O(i) is not popular enough, and there is no point in performing the replacement. As depicted, in step 580, if the object O(i) is in the stack 125, then the dynamic frequencies of O(i) and P1 . . . Pk are calculated in steps 585 and 590, respectively. In step 595, the dynamic frequency of O(i) is compared to the sum of the dynamic frequencies of P1 . . . Pk. In step 605, if the dynamic frequency of O(i) is larger, then the replacement is performed. Otherwise, in step 600, the replacement is not performed. Those skilled in the art will appreciate that one could also factor the TTO into the dynamic frequency calculation.
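The dynamic-frequency comparison of FIG. 4 reduces to a short test, taking dynamic frequency as the reciprocal of the time since last access (per the FIG. 6 discussion below); the timestamp arguments are illustrative:

```python
def admit(now, last_access_obj, last_access_purges):
    """FIG. 4 sketch: admit the new object only if its dynamic frequency
    exceeds the summed dynamic frequencies of the tentative purges
    P1...Pk (steps 585-605)."""
    df_obj = 1.0 / (now - last_access_obj)
    df_purges = sum(1.0 / (now - t) for t in last_access_purges)
    return df_obj > df_purges
```

An object requested one time unit ago beats two victims last seen eight and nine units ago; an object last seen nine units ago cannot displace a victim seen one unit ago.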
FIG. 5 depicts a flowchart of another version of the selection logic, using multiple LRU stacks 170-185 for main-memory caching of Web documents. Recall that multiple LRU stacks are maintained, each of which includes a range of sizes which are a geometric multiple of one another. Note that although FIG. 1B uses four stacks with a multiple of 2, in general, there could be any number of stacks and any constant, A. In step 610, set the counter j to 1 and the currently created space Z to 0. In step 615, check whether the space occupied by the set of chosen objects (none so far) is at least equal to the required space S. If not, in step 620, the least recently used object is selected from each of the stacks 170-185, and the object with the largest value of the product of the size and the time elapsed since last requested is further selected. The use of multiple LRU stacks may thus exhibit improved performance due to the reduction in the number of space-time computations and comparisons (as compared to a single LRU stack). As discussed hereinbefore, the time since last access may be adjusted by the TTO. In step 625, the size of the selected object is added to the value Z, which is used to track the amount of freed space. In step 630, the object Pj is set to the current object, and in step 635, the counter j is incremented by 1. Next, the process returns to step 615, wherein the space Z is compared to S. If Z is greater than or equal to S, in step 640, the process ends with the set of chosen objects P1 . . . Pk being output. This version of the selection logic is particularly advantageous for caching main-memory 150 objects. However, when objects are cached on disk 105, the fact that a set of objects purged from the cache needs to be immediately adjacent should be accounted for.
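The FIG. 5 selection over multiple LRU stacks might be sketched as follows (the (name, size, last_access) tuple layout and list-of-lists stack representation are our own conventions; the TTO adjustment is omitted for brevity):

```python
def select_purges(stacks, required_space, clock):
    """FIG. 5 sketch. `stacks` is a list of lists, each kept in LRU order
    (least recently used first), holding (name, size, last_access) tuples.
    Only each stack's LRU head is compared; the head with the largest
    size x (time since last access) product is purged, repeating until
    `required_space` units are freed (steps 610-640)."""
    purges, freed = [], 0
    while freed < required_space:
        heads = [s[0] for s in stacks if s]    # one candidate per stack
        if not heads:
            break                              # nothing left to purge
        victim = max(heads, key=lambda h: h[1] * (clock - h[2]))
        for s in stacks:
            if s and s[0] == victim:
                s.pop(0)
                break
        purges.append(victim[0])
        freed += victim[1]
    return purges
```

With three stacks, each iteration compares only three space-time products rather than one per cached object.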
FIG. 6 depicts a flowchart of another version of the selection logic, for disk caching. As depicted, to ensure that only physically adjacent objects are purged, two pointers are maintained, Start and End. All the objects lying between these two pointers are candidates for purging, and the size of the objects which physically lie between the pointers Start and End is at least S. The dynamic frequency (DF) of a set of adjacent objects is the sum of the reciprocals of the times since last access of those objects, DF = 1/T1 + 1/T2 + . . . + 1/Tn. In step 650, the pointers are initialized, Start = End = 1. At this point, the total amount of space occupied by the objects lying between Start and End is size[1]. In step 655, a check is made whether the pointer Start has already scanned through each object in the cache. If not, then check whether the space between Start and End is sufficient to accommodate the incoming object, in step 660. If the space is not sufficient, then in step 665, increment the pointer End until the group of objects between the pointers Start and End represents enough space to accommodate the incoming object; the dynamic frequency of the object associated with the pointer End is also added to that of the set of current eligible objects. Once the set of adjacent objects has been obtained, in step 670, it is checked whether the dynamic frequency of this set is less than the current minimum. If so, in step 675, this group is saved as the current minimum. Then, in step 680, the pointer Start is incremented by 1, and the space is set to space - size[Start]. Next, the process repeats at step 655 in order to find the next eligible adjacent group for purging.

Finally, note that when a block gets referenced, a check is made whether it is stale. If it indeed is stale, then the time to obsolescence (TTO) is recalculated, and the admission control logic is then reapplied to determine whether to continue to cache the block after a refresh.
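The Start/End pointer scan of FIG. 6 is essentially a sliding-window minimization of dynamic frequency over physically adjacent objects; a sketch follows (the (name, size, last_access) list layout is our own convention):

```python
def select_contiguous(objects, required_space, now):
    """FIG. 6 sketch for disk caching. `objects` is the physically ordered
    disk layout as (name, size, last_access) tuples. Returns the names of
    the contiguous run of total size >= required_space whose dynamic
    frequency DF = sum(1 / (now - last_access)) is minimal."""
    best, best_df = None, float("inf")
    start, end, space = 0, 0, 0
    while start < len(objects):
        # Step 665: advance End until the window is large enough.
        while end < len(objects) and space < required_space:
            space += objects[end][1]
            end += 1
        if space >= required_space:
            window = objects[start:end]
            df = sum(1.0 / (now - last) for _, _, last in window)
            if df < best_df:                  # steps 670-675: keep minimum
                best, best_df = [name for name, _, _ in window], df
        space -= objects[start][1]            # step 680: advance Start
        start += 1
    return best
```

In the test below, the run ("b", "c") of two long-idle neighbors wins over any window containing the recently accessed "a".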
Now that the invention has been described by way of a detailed description, with alternatives, various modifications and improvements will occur to those skilled in the art. Thus, it should be understood that the detailed description is provided as an example and not as a limitation. The proper scope of the invention is defined by the appended claims. For example, the preferred embodiment has been described as a general cache replacement policy for Web servers. Those skilled in the art will appreciate, however, that the present invention is applicable to any kind of situation where the objects to be cached are of non-uniform size, and is not necessarily restricted to a Web application.

We claim:

1. A method for caching objects of non-uniform size, comprising the steps of:
receiving an object over a network;
determining whether to admit a network object to a cache based on one or more objects to be replaced in the cache; and
when the network object is determined to be cacheable, replacing the one or more objects in the cache with the network object as a function of a cached object size, a time since last requested, and a time to obsolescence (TTO).

2. The method of claim 1, further comprising the step of:
maintaining multiple LRU stacks, each stack corresponding to a range of object sizes, where a newly cached object is added to an associated stack range.
3. The method of claim 2, further comprising the step of partitioning stack ranges into a predetermined number of ranges according to a geometric progression.

4. The method of claim 1, further comprising the step of:
for each object request, storing object information in an auxiliary stack; wherein said step of determining whether to admit a requested object to a cache is based on the object information.

5. The method of claim 4, wherein the object information includes an object identifier and an object request time.

6. The method of claim 2, further comprising the step of examining the least recently requested object in each LRU stack to determine the replacement candidate.

7. The method of claim 1, further comprising the step of selecting an object replacement candidate with the largest value of the product of the object size and the time since last requested, multiplied by an adjustment factor according to the TTO.

8. The method of claim 7, wherein the adjustment factor is a function of a time since the object was last modified and the time since last requested.

9. The method of claim 1, wherein the cache includes multiple hierarchical caches.

10. The method of claim 9, wherein the hierarchical cache comprises one or more of a main memory cache and a disk cache.

11. The method of claim 10, wherein the cache is a disk cache, and wherein said step of replacing the one or more objects in the cache comprises the step of selecting a set of physically contiguous objects in the
