US009462074B2

Guo et al.

(10) Patent No.: US 9,462,074 B2
(45) Date of Patent: Oct. 4, 2016
`
(54) METHOD AND SYSTEM FOR CACHING STREAMING MULTIMEDIA ON THE INTERNET

(71) Applicant: Sound View Innovations, LLC, Parsippany, NJ (US)
`
(72) Inventors: Katherine H. Guo, Scotch Plains, NJ (US); Sanjoy Paul, Marlboro, NJ (US); Tze Sing Eugene Ng, Pittsburgh, PA (US); Hui Zhang, Pittsburgh, PA (US); Markus A. Hofmann, Fair Haven, NJ (US)

(73) Assignee: Sound View Innovations, LLC, Parsippany, NJ (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 14/877,298

(22) Filed: Oct. 7, 2015

(65) Prior Publication Data
     US 2016/0028849 A1    Jan. 28, 2016

Related U.S. Application Data

(62) Division of application No. 14/317,302, filed on Jun. 27, 2014, now Pat. No. 9,167,015, which is a continuation of application No. 11/949,825, filed on Dec. 4, 2007, now abandoned, which is a continuation of application No. 09/538,351, filed on Mar. 29, 2000, now Pat. No. 7,398,312.
`
(51) Int. Cl.
     H04L 29/08    (2006.01)
     H04L 29/06    (2006.01)
(52) U.S. Cl.
     CPC ....... H04L 67/2842 (2013.01); H04L 65/403 (2013.01); H04L 65/4084 (2013.01); H04L 67/1023 (2013.01); H04L 67/1097 (2013.01); H04L 67/28 (2013.01); H04L 67/288 (2013.01)
(58) Field of Classification Search
     CPC ....... H04L 67/1023; H04L 67/1097; H04L 67/28; H04L 67/2842; H04L 65/403; H04L 65/4084
     See application file for complete search history.
`
(56) References Cited

U.S. PATENT DOCUMENTS

5,418,713 A     5/1995  Allen
5,550,577 A     8/1996  Verbiest et al.
5,586,264 A    12/1996  Belknap et al.
5,805,821 A     9/1998  Saxena et al.
5,822,759 A *  10/1998  Treynor ............. G06F 12/127
                                                 711/134
5,940,391 A     8/1999  Malkin et al.
6,012,080 A     1/2000  Ozden et al.
6,029,195 A     2/2000  Herz
6,061,720 A     5/2000  Kamel et al.
(Continued)
`
OTHER PUBLICATIONS

U.S. Appl. No. 60/177,786, filed Jan. 2000, Eyal, Aviv.
(Continued)

Primary Examiner — Jerry Dennison
(74) Attorney, Agent, or Firm — Lerner, David, Littenberg, Krumholz & Mentlik, LLP
(57) ABSTRACT

An apparatus and method to enhance existing caches in a network to better support streaming media storage and distribution. Helper machines are used inside the network to implement several methods which support streaming media, including segmentation of streaming media objects into smaller units, cooperation of Helper machines, and novel placement and replacement policies for segments of media objects.

9 Claims, 5 Drawing Sheets
`
`
(56) References Cited

U.S. PATENT DOCUMENTS

6,138,162 A    10/2000  Pistriotto et al.
6,199,076 B1    3/2001  Logan et al.
6,209,787 B1    4/2001  Iida
6,248,946 B1    6/2001  Dwek
6,272,598 B1 *  8/2001  Arlitt ............. G06F 17/30902
                                                707/E17.12
6,317,778 B1 * 11/2001  Dias ................ H04L 67/1095
                                                   709/214
6,317,795 B1   11/2001  Malkin et al.
6,331,859 B1   12/2001  Crinon
6,338,117 B1 *  1/2002  Challenger .......... G06F 12/121
                                                   711/122
6,377,972 B1    4/2002  Guo et al.
6,381,314 B1    4/2002  Wallinski
6,415,326 B1    7/2002  Gupta et al.
6,425,057 B1 *  7/2002  Cherkasova .......... G06F 12/121
                                                   709/219
6,438,579 B1    8/2002  Hosken
6,449,647 B1    9/2002  Colby et al.
6,449,695 B1 *  9/2002  Bereznyi ............ G06F 12/123
                                                707/E17.12
6,453,404 B1 *  9/2002  Bereznyi ............ G06F 12/023
                                                   711/119
6,463,508 B1 * 10/2002  Wolf ............... G06F 12/0888
                                                   711/130
6,484,156 B1   11/2002  Gupta et al.
6,484,199 B2   11/2002  Eyal
6,536,043 B1 *  3/2003  Guedalia ............. H04N 7/147
                                                348/E7.071
6,553,376 B1 *  4/2003  Lewis ............. G06F 17/30902
6,587,127 B1    7/2003  Leeke et al.
6,647,417 B1   11/2003  Hunter et al.
6,708,213 B1    3/2004  Bommaiah et al.
6,728,760 B1 *  4/2004  Fairchild ............ G06Q 10/04
                                               707/999.202
6,735,634 B1 *  5/2004  Geagan, III ........... H04L 1/22
                                                375/E7.023
6,763,377 B1    7/2004  Belknap et al.
6,785,784 B1 *  8/2004  Jing ................. G06F 12/121
                                                   711/148
6,807,550 B1 * 10/2004  Li ................ G06F 17/30017
                                                   707/796
6,956,593 B1   10/2005  Gupta et al.
6,978,348 B2   12/2005  Belknap et al.
7,111,009 B1    9/2006  Gupta et al.
7,117,259 B1   10/2006  Rohwer
7,155,531 B1 * 12/2006  Lango ............. H04L 29/06027
                                                   709/216
7,162,690 B2    1/2007  Gupta et al.
7,506,262 B2    3/2009  Gupta et al.
7,631,015 B2   12/2009  Gupta et al.
9,167,015 B2 * 10/2015  Guo ................. H04L 65/4084
2005/0081159 A1 4/2005  Gupta et al.
2006/0143560 A1 6/2006  Gupta et al.

OTHER PUBLICATIONS

MP3.com, (Top 40) www.mp3.com, Aug. 6, 2003, MP3.com.
Yahoo! Inc., (Top 100) http://launch.yahoo.com/musicvideos/lists/top.asp, Aug. 6, 2003, Yahoo! Inc.

* cited by examiner
`
`
`
`
[Sheet 1 of 5 — FIG. 1: data streams flowing from a source through helper servers (HSs) to a receiver in a network system.]
`
[Sheet 2 of 5 — FIG. 2: a public network system with a content server, a network, helper servers, and client computers.]
`
`
[Sheet 3 of 5 — FIG. 3a (PRIOR ART): cache server storage 300 as contiguous disk blocks 302a-302i; FIG. 3b: cache server storage with disk blocks 302a-302i grouped into chunks.]
`
`
`
[Sheet 4 of 5 — FIG. 4: flowchart of the helper selection method. A client requests an SM object from the local helper server; if the request can be serviced locally, one or more chunks of the requested SM object are retrieved from the local helper server and the playback request time is updated to identify the next chunk; otherwise the lowest cost cache in the network is found and chunks are retrieved from it until the local helper server can service the next chunk; the process terminates at the end of the SM object.]
`
`
`
[Sheet 5 of 5 — FIG. 5a: a streaming media (SM) object divided into chunks 1-10 across time slots; FIG. 5b: cache storage table showing which chunks are held at the local server cache, remote caches, and the content server.]
`
`
`
`METHOD AND SYSTEM FOR CACHING
`STREAMING MULTIMEDIA ON THE
`INTERNET
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
`
The present application is a divisional of U.S. patent application Ser. No. 14/317,302, filed Jun. 27, 2014, which is scheduled to issue as U.S. Pat. No. 9,167,015 on Oct. 20, 2015, and which is a continuation of U.S. patent application Ser. No. 11/949,825, filed Dec. 4, 2007, now abandoned, which is a continuation of U.S. patent application Ser. No. 09/538,351, filed Mar. 29, 2000, issued as U.S. Pat. No. 7,398,312 on Jul. 8, 2008, the disclosures of which are incorporated herein by reference.
`
`BACKGROUND OF THE INVENTION
`
`
`1. Field of the Invention
`The present invention relates to network systems, and
`particularly to public network systems, such as the Internet.
`More particularly, the invention relates to methods which
`improve the caching of streaming multimedia data (e.g.,
`audio and video data) from a content provider over a
`network to a client’s computer.
`2. Description of the Related Art
Computer networks such as the Internet are increasingly being used to transmit multimedia data (e.g., audio and video data). In the network-based context, one simple model of producing the information involves the client requesting the downloading of the multimedia data. Once downloaded, the client may then consume, or present, the information. This model is relatively easy to implement; however, it is non-optimal in that the client is required to wait for the downloading to complete before the presentation can begin. This delay can be considerable.
A more sophisticated model of producing information involves a content server at one network site streaming the multimedia information over the network to a client at another site. The client begins to present the information as it arrives (i.e., just-in-time rendering), rather than waiting for the entire data set to arrive before beginning the presentation. At the client computer, the received data is buffered into a cache memory and continuously processed as soon as, or soon after, being received by the client. The advantage of streaming is that the client computer does not have to wait until all the data is downloaded from the server before some of the data is processed and the multimedia output is created. An example of multimedia data streaming is found in the Real audio player that is available over the Internet at Universal Resource Locator ("URL") http://www.real.com. The Real audio player continuously sends audio data over the Internet from a server computer to the client computers. The audio data is buffered and processed by the client computers while the data is being sent. The client computers process the data by creating an audio output from the audio data.
Applications, such as the Real audio player, have conditioned computer network users to expect instantaneous streaming data on demand. The Internet, however, is often unable to deliver streaming data. This inability is most pronounced for video data. The inability to deliver streaming data on demand is due in part to the fact that live and on-demand streaming multimedia (SM) objects are generally delivered over the Internet and other data networks via unicast connections. This architecture has many shortcomings, both from the content provider's point of view and the
`user or recipient’s point of view. From the content provider's
`point of view, he is faced with a server load that increases
`linearly with the number of clients. That is, each additional
`client requesting streaming multimedia data imposes an
`additional burden upon the content provider to meet the
increased demand. From the Internet Service Provider's (ISP's) point of view, streaming multimedia under a unicast architecture poses network congestion problems. From the client's point of view, there are often long delays between the time the video content is requested by a client and the time when the video content actually begins playing (i.e., high start-up latency). In addition to the high start-up latency, there also exists unpredictable playback quality due to network congestion.
`Web caching has been extensively implemented on the
`Internet to reduce network load (i.e., bandwidth consump
`tion), server load, and high start-up latency. The utilization
`of Web caching on the Internet has been extensively studied.
For a more detailed discussion of Web caching, see T. Berners-Lee, A. Luotonen, and H. F. Nielsen, CERN httpd, http://www.w3.org/Daemon/Status.html, 1996; and C. M. Bowman, et al., "Harvest: A scalable, customizable discovery and access system," Technical Report CU-CS-732-94, Dept. of Computer Science, University of Colorado, Boulder, USA, 1994; both references are incorporated by reference herein. See also D. Wessels, "ICP and the squid cache," National Laboratory for Applied Network Research, 1999, http://ircache.nlanr.net/Squid; this reference is also incorporated by reference herein. However, current caching systems, like those described above, are restricted to support static web objects such as HTML documents or images.
`Static web objects are typically small and as such are always
`cached in their entirety. Current caching methods, therefore,
`do not adequately support streaming multimedia data (i.e.,
`web objects) such as video and audio SM objects. Streaming
`multimedia data like video objects, for example, are usually
`too large to be cached in their entirety. A single, two hour
`long MPEG movie, for example, requires about 1.4 Gbytes
`of disk space. Given a fixed investment in disk space, only
`a few streams could be stored at a cache, thus, decreasing the
`hit probability and the efficiency of the caching system. A
`natural solution would be to break video objects into smaller
`pieces for the purpose of caching. This solution is deficient,
`however, in that existing caching systems will treat different
`chunks from the same video object independently, while it
`might be desirable to consider the logical relationship
`among the various pieces.
SM objects can be generally differentiated from static web objects in that SM objects consist of multimedia data whose transmission has temporal characteristics such that the transmission rate is explicitly regulated or else the data becomes useless. In addition, the size of SM objects is typically at least an order of magnitude or two larger than that of a static web object; SM objects therefore do not lend themselves to being cached in their entirety. Given that caches have finite disk space, it is not feasible to statically store more than a few complete SM objects. If there are several simultaneous requests for different SM objects, it is easy to show that the cache will be busy replacing one SM object with another, resulting in significant performance degradation.
Based on the foregoing, there is a need for a system and method for enhancing current caching systems to support streaming multimedia over a public network system, such as the Internet.
`
`BRIEF SUMMARY OF THE INVENTION
`
Illustrative embodiments of the present invention present a novel architecture and methods for supporting high quality
live and on-demand streaming multimedia on a public network system, such as the Internet. By using helper servers (HS), also referred to as helpers, which operate as caching and streaming agents inside the network, existing caching techniques are enhanced to better support streaming multimedia over the Internet. The HSs serve to implement several methods specifically designed to support streaming multimedia, including segmentation of streaming multimedia objects (SM objects) into smaller units (i.e., chunks), cooperation of the HSs, and novel cache placement and replacement policies of the constituent units (chunks) which make up the SM objects.
The HSs reduce a content provider's memory and processing requirements by reducing the server load. Further, the invention reduces congestion problems by not being constrained by the unicast architecture of the prior art. And the invention reduces the long delays between the time video content is requested by a client and the time when the video content actually begins playing (i.e., reduces high start-up latency).
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`
`The foregoing features of the present invention will
`become more readily apparent and may be understood by
`referring to the following detailed description of an illus
`trative embodiment of the present invention, taken in con
`junction with the accompanying drawings, where:
FIG. 1 is an illustration of a network system which includes HSs in accordance with the present invention;
`FIG. 2 is an illustration of a network system constructed
`according to one implementation of the invention;
`FIG. 3a is a block diagram of the construction of a cache
`according to the prior art;
`FIG. 3b is a block diagram of the construction of a cache
`according to an embodiment of the invention;
`FIG. 4 is a flowchart illustrating the method of helper
`selection according to an embodiment of the present inven
`tion;
`FIG. 5a is a block diagram of an SM object segmented
`according to an embodiment of the present invention; and
`FIG. 5b is a table illustrating cache storage of various
`elements in the network.
`
`DETAILED DESCRIPTION OF PREFERRED
`EMBODIMENTS
`
`
This application is related to U.S. Pat. No. 6,708,213, filed on Mar. 29, 2000 by Ethendranath Bommaiah, Katherine H. Guo, Markus Hofmann, and Sanjoy Paul, having a common assignee; the contents of which are incorporated herein by reference.
`To facilitate an understanding of illustrative embodiments
`of the present invention, it is advantageous to first consider
`the network operating environment of the present invention,
`as well as definitions relating to system architecture and
`operation.
`Illustrative embodiments of the present inventive archi
`tectures, systems, and methods described herein focus on
`data streaming in global, worldwide public networks, such
`as the Internet. Those skilled in the art will recognize,
`however, that the present architectures, systems, and meth
`ods are applicable to a wide variety of data networks and
`applications. The following terms and definitions are used in
`the present disclosure.
Cache: a region in a computer disk that holds a subset of a larger collection of data.
`
`Cache Replacement Policy: a policy which specifies
`which cache item(s) should be removed from the cache
`when the cache is full or nearly full.
`Streaming multimedia object (SM object): a type of data
`whose transmission has temporal characteristics such that
`the data may become useless unless the transmission rate is
`regulated in accordance with predetermined criteria (e.g.,
`audio and video files). Transmission can start at any point
`within the object and can be terminated by the receiver at
`any time.
Helper Server (HS): a HS, also referred to as a helper, is one of a plurality of servers in the network that provide certain value-added services. For example, a HS can provide caching services and/or prefetching services. HSs selectively cooperate and communicate SM objects (or segments of such objects) between and among each other and between content servers and clients. That is, the HS understands an SM object's transmission requirements and can behave, in some respects, like a content server.
Data stream: a data stream transmits segments of a streaming multimedia object between elements in the network. The source might be the sender (i.e., the content provider) or a HS. Receiving hosts could be HSs or receivers (i.e., clients). FIG. 1 illustrates several of the terms defined herein. Specifically, FIG. 1 shows an illustrative source 10 delivering a data stream directly to each of the HSs H1 (2) and H2 (4). H2 is further shown delivering a data stream to each of the HS H3 (6) and receiver R (8). In general, the data stream from H2 to H3 need not be the same as that arriving at receiver R (8), but in this example the data stream from H2 to H3 is illustratively part or all of the same SM object transmitted by the data stream arriving at receiver R (8).
Streaming architectures in accordance with illustrative embodiments of the present invention support techniques to enhance caches to support streaming media over a public network system, such as the Internet. While caching is the traditional approach for improving scalability, it fails to scale in terms of object size and number of supported streams for streaming multimedia objects. In particular, existing solutions for streaming multimedia on the Internet have several shortcomings because these solutions use a separate unicast stream for each request, thus requiring a stream to travel from the server to the client across the Internet for every request.
To overcome these drawbacks and to further enhance the ability of existing caching systems to properly scale in terms of object size and number of supported streams, illustrative embodiments of the present invention advantageously combine two orthogonal mechanisms which are implemented via a novel system architecture to enhance existing caching systems. Briefly stated, the methods are: (1) novel cache placement and replacement policies of SM objects, which address the issues of data representation (i.e., how SM object data is segmented for storage) and data distribution (i.e., how SM object data is distributed and communicated throughout the network), and (2) cooperation of HSs, which addresses how the HSs (i.e., caching and streaming agents inside the network) communicate via a novel scalable state distribution protocol which directs requests to the most appropriate HS or the content server.
`An exemplary arrangement of using the invention is
`shown in FIG. 2 which illustrates a public network system.
`FIG. 2 further includes a server computer, as represented by
`content server 12, which stores and serves content over a
`network 14. The illustrative network 14 is a high-speed,
`high-bandwidth interactive distribution network, and can be
`
representative of the Internet. The content server 12 serves content in the form of text, audio, video, graphic images, and other multimedia data. In the Internet context, the content servers might represent Web sites which serve or multicast content in the form of hypermedia documents (e.g., Web pages) to "client" computers, as represented by computers 26-40. The network further includes HSs 22-24. Each HS 22-24 is configured as a conventional database server having processing capabilities, including a CPU (not shown) and storage. HSs 22-24 cache Internet resources, such as those requested by client computers 26-40 that have been downloaded from the content server 12, to allow localized serving of those resources. The interaction and cooperation of the above entities are further described below.
`Cache placement and replacement policies form an inte
`gral part of a caching system for both static and streaming
`multimedia objects (SM objects). However, existing cache
`placement and replacement policies do not support SM
`objects well. The present invention defines several aspects
`which serve to support the cache placement and replacement
`policies associated with SM objects. They include a hotness
`rating associated with each SM object, data representation,
`and data distribution.
1. Helper Server Hotness Rating
A first aspect for describing cache placement and replacement policies of the present invention is the helper hotness rating. Caching systems for SM objects attempt to reduce end-to-end delay, server load and network load by distributing the SM objects among the HSs 22-24 located closest to the clients 26-40. It is not feasible, however, to replicate all SM objects in all the HSs 22-24 in the network 14 due to limited disk space on the HSs 22-24. From the HSs 22-24 point of view, given the limited disk space, a metric is required to identify those SM objects which are more frequently requested by clients 26-40 for cache storage at the HSs 22-24. A SM object is defined to be "hot" if a large number of client requests arrive at the HS 22-24 in a short period of time. Accordingly, each SM object is assigned a helper hotness rating at each HS 22-24 in the network at which the object is cached, defined by:
`
`
helper hotness rating = (total # of client requests for the SM object) / (time span over which the requests are received at the HS)    (1)
`This equation illustrates that high client demand for an
`SM object equates to a high helper hotness rating. The helper
`hotness rating is a local measure (i.e., local to the HS)
`representing how popular a SM object is among clients
`26-40 served by that HS.
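A minimal sketch of how a helper server might track requests and compute the local helper hotness rating of equation (1); the class and method names are illustrative assumptions rather than anything specified in the patent.

    import time

    class HelperHotness:
        """Tracks request arrival times per SM object at one helper server (HS)."""

        def __init__(self):
            self.requests = {}  # object_id -> list of request arrival timestamps

        def record_request(self, object_id, now=None):
            self.requests.setdefault(object_id, []).append(
                now if now is not None else time.time())

        def helper_hotness(self, object_id):
            """Equation (1): total requests for the SM object divided by the
            time span over which the requests were received at this HS."""
            times = self.requests.get(object_id, [])
            if len(times) < 2:
                return 0.0  # history too short; per the text, a rating would be assigned manually
            span = max(times) - min(times)
            return len(times) / span if span > 0 else float("inf")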
`An exception exists where the client request history is not
`long enough to calculate a reasonable server hotness rating
`for a particular SM object. In such a case, a rating will be
`manually assigned to that object. Some objects can be
`reasonably predicted to be very hot, for example, the top 10
videos of the week, top news stories, and a video showing
`a NASA spacecraft landing on Mars.
In general, the role of the HSs 22-24 is to reduce network
`congestion by preventing client requests from going to the
`content server 12 whenever possible. In carrying out that
`responsibility, many client requests never reach the content
`server 12. As such, hotness as defined by equation (1) is an
`inappropriate metric for the content server 12. Accordingly,
`a more practical hotness metric, applicable to only the
`content server 12, is defined as:
`
`
server hotness rating = Σ over all HSs (helper hotness ratings)    (2)
`
`This equation states that, for a particular SM object, the
`content server metric for hotness is the sum of all the
constituent helper hotness ratings for all HSs 22-24 in the
`network 14. That is, each HS 22-24 reports its local hotness
`rating to the content server 12 for inclusion in the general
`summation defined by Equation (2) to derive the server
`hotness rating.
`The applicability of the helper and server hotness ratings
`will become more apparent in the context of the description
`provided below.
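A companion sketch of the server-side aggregation in equation (2), assuming each HS reports the helper hotness rating it computed locally for a given SM object; the reporting mechanism and data shapes are assumptions made only for illustration.

    def server_hotness(reported_ratings):
        """Equation (2): the server hotness rating for one SM object is the sum
        of the helper hotness ratings reported by all HSs in the network."""
        return sum(reported_ratings.values())

    # Example: three HSs (e.g., 22, 23, 24) report their local ratings.
    print(server_hotness({"hs22": 3.2, "hs23": 0.8, "hs24": 1.5}))  # 5.5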
`2. Data Representation/Segmentation
`A second aspect for describing cache placement and
`replacement policies of the present invention is the segmen
`tation of the SM objects. A major distinction between SM
`objects and regular (i.e., static) objects on the web is their
size. The size of SM objects is typically an order of magnitude or two larger than the size of static objects. For example, a single, two hour long MPEG movie requires about 1.4 Gbytes of disk space. From the perspective of a system designer, two design options are presented. A first design option is to store SM objects in their entirety at the HSs 22-24. This solution is deficient, however, in that, given the limited disk space at the HSs 22-24, only a small percentage of SM objects could be stored at the HS 22-24 at one time. A second design option proposes breaking the SM objects into smaller segments, and treating these segments as special web objects observing their temporal relationship. An exception to this approach might be to store SM objects defined to be very hot in their entirety.
Notwithstanding the exception for storing very hot objects in their entirety, a SM object will generally be segmented at the HSs 22-24 to better utilize the HS's cache storage capacity. As an illustration, assume a representative HS 22-24 has a static cache storage whereby the minimal storage allocation unit of the static cache storage is a disk block of size S. In accordance with the object segmentation approach of the present invention, the SM object is divided into multiple chunks where each chunk is made up of a fixed number of segments. The chunks which make up the SM object can then be cached and replaced independently, thereby significantly increasing the utilization of the cache storage.
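The segmentation just described can be pictured with a short sketch; the segment size and the number of segments per chunk are illustrative assumptions, not values prescribed by the patent.

    def segment_sm_object(object_size_bytes, segment_size, segments_per_chunk):
        """Divide an SM object into fixed-size segments and group them into chunks.

        Returns a list of chunks, where each chunk is a list of
        (segment_start_byte, segment_end_byte) tuples."""
        segments = [(start, min(start + segment_size, object_size_bytes))
                    for start in range(0, object_size_bytes, segment_size)]
        return [segments[i:i + segments_per_chunk]
                for i in range(0, len(segments), segments_per_chunk)]

    # Example: a roughly 1.4 Gbyte movie, 256 Kbyte segments, 4 segments per chunk.
    chunks = segment_sm_object(1_400_000_000, 256 * 1024, 4)
    print(len(chunks), "chunks, each cacheable and replaceable independently")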
`As a further consideration in the segmentation of SM
`objects, each segment of a SM object has a starting playback
`time and an ending playback time inter-relating the various
`segments which make up a SM object. In addition, client
`requests for SM objects contain an explicit starting and
ending playback time. Thus, when a SM object request arrives at a HS, a simplistic hit/miss determination associated with classical web caching is inadequate. A SM object request will likely result in the client receiving some number of segments of the SM object from more than one HS 22-24. This not only increases signaling cost, but also increases the probability of losing synchronization. Therefore, it is preferable to cache successive segments of an SM object at a particular HS rather than caching a sequence of disjointed segments having multiple gaps.
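Because a request names a playback range rather than a whole object, the lookup becomes a range query over cached chunks instead of a simple hit/miss test. The sketch below assumes chunks of a fixed playback duration and a set of locally cached chunk indices; both are simplifications made for illustration.

    def chunks_for_range(start_time, end_time, chunk_duration):
        """Indices of the chunks that cover the requested playback interval."""
        first = int(start_time // chunk_duration)
        last = int(end_time // chunk_duration)
        return list(range(first, last + 1))

    def partial_hit(cached_chunks, start_time, end_time, chunk_duration):
        """Split the needed chunks into those held locally and those missing."""
        needed = chunks_for_range(start_time, end_time, chunk_duration)
        local = [c for c in needed if c in cached_chunks]
        missing = [c for c in needed if c not in cached_chunks]
        return local, missing

    # Example: chunks 2-9 cached locally, request covers seconds 100-400 with
    # 60-second chunks; chunk 1 would have to come from another HS or the server.
    print(partial_hit(set(range(2, 10)), 100, 400, 60))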
`FIG. 3a shows a block diagram of a cache storage 300 of
`a representative HS 22-24 in the network. The cache storage
`300 is shown to be made up of a plurality of contiguous disk
`block units 302a-302i, each unit being of a fixed size, S. The
`cache storage 300 may also be viewed as being logically
`divided into chunks in accordance with the data segmenta
`tion and storage principles of the present invention. For
`purposes of illustration, a chunk is selected to be four times
`(4x) the fundamental disk block unit S, as illustrated in FIG.
`
3b. As shown, chunk 0 is made up of disk blocks 302a-302d, and chunk 1 is made up of disk blocks 302e-302h.
When an SM object is received at a HS 22-24, disk blocks are allocated on demand. Given that SM objects are represented in chunks, caching begins at the first available chunk at the HS 22-24. For example, assume that disk block 302a represents the first available storage location at the HS 22-24. The first chunk of the received SM object, chunk 0, will be allocated to disk blocks 302a-302d. When the chunk boundary is reached, i.e., 302d, caching begins at the next disk block, 302e. That is, chunk 1 of the SM object will be allocated to disk blocks 302e-302h. It is noted that a chunk of an SM object may not always be cached to its chunk boundary because an SM object stream can terminate at any time.
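A minimal sketch of the on-demand, chunk-aligned allocation just described, assuming four disk blocks per chunk as in the illustration; the class and its interface are hypothetical.

    class ChunkedCache:
        """Allocates disk blocks to an incoming SM object stream one chunk at a time."""

        def __init__(self, num_blocks, blocks_per_chunk=4):
            self.blocks_per_chunk = blocks_per_chunk
            self.free_chunks = list(range(num_blocks // blocks_per_chunk))

        def allocate_chunk(self):
            """Return the disk-block indices of the next free chunk, or None if full."""
            if not self.free_chunks:
                return None  # blocking: no room to start caching a new SM object
            chunk = self.free_chunks.pop(0)
            start = chunk * self.blocks_per_chunk
            return list(range(start, start + self.blocks_per_chunk))

    # Example: blocks are claimed a chunk at a time as the stream arrives; a stream
    # that terminates early simply leaves its last chunk only partly filled.
    cache = ChunkedCache(num_blocks=12)
    print(cache.allocate_chunk())  # [0, 1, 2, 3]  (chunk 0)
    print(cache.allocate_chunk())  # [4, 5, 6, 7]  (chunk 1)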
In practice, the chunk size used when allocating cache storage influences the size and number of gaps in an object. Dividing the cache storage into larger chunk sizes increases the probability of blocking, where blocking is defined as the inability to store a new SM object at the HS when the disk space is full. Alternatively, dividing the SM object into smaller chunks increases the probability of creating gaps, i.e., non-contiguously stored chunks, at an HS. In either case, it may be necessary for a client to get all the constituent segments of a SM object from more than one HS 22-24. To do so, some form of intelligent pre-fetching is required which pre-fetches subsequent segments while a current segment is being streamed to the client.
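One way such intelligent pre-fetching could be arranged is sketched below; the patent does not spell out this loop, and the fetch and send callables are assumed to be supplied by the surrounding HS code.

    import threading

    def stream_with_prefetch(needed_chunks, fetch_chunk, send_to_client):
        """Stream chunks in playback order, pre-fetching the subsequent chunk
        while the current chunk is being streamed to the client."""
        if not needed_chunks:
            return
        box = {}

        def prefetch(chunk_id):
            box[chunk_id] = fetch_chunk(chunk_id)

        current = fetch_chunk(needed_chunks[0])
        for i, chunk_id in enumerate(needed_chunks):
            worker = None
            if i + 1 < len(needed_chunks):
                worker = threading.Thread(target=prefetch, args=(needed_chunks[i + 1],))
                worker.start()
            send_to_client(current)      # stream the current chunk
            if worker is not None:
                worker.join()            # the next chunk was fetched meanwhile
                current = box[needed_chunks[i + 1]]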
3. Data Distribution
A third aspect for describing cache placement and replacement policies of the present invention is data distribution. Data distribution encompasses two distinct sub-aspects. A first sub-aspect refers to how data is distributed between the various communicating elements (i.e., content server 12, HSs 22-24, and clients) in the network 14, as will be described with reference to the following placement policies: (1) server push, where the content server 12 pushes popular multimedia SM objects to a plurality of HSs 22-24; (2) helper pull, where each HS 22-24 individually pulls popular SM objects from the content server 12; and (3) demand driven, where clients 26-40 request SM objects from particular HS caches and the HS 22-24 in turn passes the request to the content server 12.
The second sub-aspect of data distribution pertains to how data is stored on disk, which is embodied in two approaches, a deterministic approach and a random approach, and which is discussed in detail below.
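The three placement policies named above could be dispatched along the following lines; the enum, the helper and server objects, and their methods are invented purely for illustration.

    from enum import Enum

    class Placement(Enum):
        SERVER_PUSH = 1    # content server pushes popular SM objects to the HSs
        HELPER_PULL = 2    # each HS individually pulls popular SM objects
        DEMAND_DRIVEN = 3  # a client request is passed from the HS to the server

    def place(policy, object_id, content_server, helpers, requesting_hs=None):
        if policy is Placement.SERVER_PUSH:
            for hs in helpers:
                hs.store(object_id, content_server.read(object_id))
        elif policy is Placement.HELPER_PULL:
            for hs in helpers:
                if hs.considers_popular(object_id):
                    hs.store(object_id, content_server.read(object_id))
        else:  # DEMAND_DRIVEN: only the HS that received the client request caches
            requesting_hs.store(object_id, content_server.read(object_id))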
3.a Server Push
Server push describes a first cache placement policy associated with the data distribution aspect. Studies have shown that access patterns for static objects follow a Zipf-like distribution. See, for example, L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, "Web caching and Zipf-like distributions: Evidence and implications," Proceedings of IEEE Infocom '99, March 1999, New York, N.Y. Zipf's law states that for a given set of static objects, the probability that a request is for an object with a rank r is inversely proportional to r, defined as:

P = K(1/r)    (3)
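A short numeric illustration of the Zipf-like popularity model in equation (3); the object count used in the example is an arbitrary assumption.

    def zipf_probabilities(num_objects):
        """P(r) = K * (1/r), with K chosen so that the probabilities sum to 1."""
        k = 1.0 / sum(1.0 / r for r in range(1, num_objects + 1))
        return {r: k / r for r in range(1, num_objects + 1)}

    # Example: with 1000 objects, the few highest-ranked objects draw a
    # disproportionate share of requests, which is what makes pushing the most
    # popular SM objects out to the HSs worthwhile.
    p = zipf_probabilities(1000)
    print(round(p[1], 4), round(sum(p[r] for r in range(1, 11)), 4))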
`