(19) United States

(12) Patent Application Publication     (10) Pub. No.: US 2009/0055471 A1
     Kozat et al.                        (43) Pub. Date: Feb. 26, 2009

US 20090055471A1

(54) MEDIA STREAMING WITH ONLINE CACHING AND PEER-TO-PEER FORWARDING

(76) Inventors: Ulas C. Kozat, Santa Clara, CA (US); Mehmet U. Demircin,
     Dallas, TX (US); Oztan Harmanci, Mountain View, CA (US); Sandeep
     Kanumuri, Sunnyvale, CA (US)

Correspondence Address:
BLAKELY SOKOLOFF TAYLOR & ZAFMAN LLP
1279 OAKMEAD PARKWAY
SUNNYVALE, CA 94085-4040 (US)

(21) Appl. No.: 12/194,157

(22) Filed: Aug. 19, 2008

Related U.S. Application Data

(60) Provisional application No. 60/957,009, filed on Aug. 21, 2007.

Publication Classification

(51) Int. Cl.
     G06F 15/16            (2006.01)
(52) U.S. Cl. ........................................ 709/203

(57) ABSTRACT

A system, method and apparatus are disclosed herein for media streaming. In one
embodiment, the system comprises one or more media servers to serve media
content and a plurality of peers communicably coupled to one or more other
peers of the plurality of peers and at least one of the one or more media
servers to receive segments of media content, where at least one of the peers
allocates a set of resources for serving the segments of media content,
including cache memory to store the segments and media files and uplink
bandwidth to send the segments of media content to the one or more peers to
which the one peer is communicably coupled. The system also includes a first
control server to track media content demand and the allocated resources of
the plurality of peers to determine which peer should cache which segment of
the media file and to return peer location information specifying the one or
more peer locations from which each peer is to receive each segment of each
media content requested. The control server is operable to send the location
information to each peer. In one embodiment, the one control server is also
operable to calculate a utility of each caching option and enforce it by
sending triggers to the peers to initiate the caching at those peers.

[Front-page drawing (FIG. 1): clients (peers) 103 send (1) REQ(media-name,
segment) to the control server, receive (2) REPLY(supply locations), issue
(3) REQ(media-name, segment) to the listed media servers and peers, and
receive (4) SEND(media-name, segment) in return.]

[Sheet 1 of 5, FIG. 1: block diagram of the media streaming system. Clients
(peers) 103 send (1) REQ(media-name, segment) to control server 102, receive
(2) REPLY(supply locations), issue (3) REQ(media-name, segment) to the
indicated media servers 101 and peer clients, and receive (4) SEND(media-name,
segment) from them.]

[FIG. 2: a client (peer) periodically sends REPORT(Cache, BW, CPU, local
content), covering its cache/memory, uplink/downlink bandwidth (bits per
second), and CPU (MHz), to the control server and receives
PREFETCH(media-name, segment, supply locations) and CACHE(media-name, segment)
messages in return.]

[Sheet 2 of 5, FIG. 3: example demand curve. Segment download times are
plotted against time (seconds), and the slope of each curve is the total
download rate. At time t, D(t, segment1) = 1, D(t, segment2) = 1,
D(t, segment3) = 1, and D(t, segment4) = 0.]

[Sheet 3 of 5, FIG. 4: trellis of cached-segment states used in the dynamic
programming based optimization of the caching sequence at each client node.]


`Patent Application Publication
`
`Feb. 26,2009 Sheet 4 of 5
`
`US 2009/0055471 Al
`
`PROCESSOR
`
`
`
`MAIN
`STATIC
`MASS
`
`
`MEMORY
`MEMORY
`STORAGE
`
`
`
`MEMORY
`
`512
`
` 504 506
`507
`
`
`
`
`524
`
`
`
`EXTERNAL
`NETWORK
`INTERFACE
`
`520
`
`DISPLAY
`
`KEYBOARD
`
`521
`
`522
`
`CURSOR
`CONTROL
`DEVICE
`
`523
`
`HARD
`COPY
`DEVICE
`
`Figure 5
`
`

`

[Sheet 5 of 5, FIG. 6: block diagram of a control server, comprising a
tracking module 601, a location information transmission module 602, a peer
interface 603 coupled to the peers, a media server interface 604 coupled to
the media servers, and control logic 610.]

MEDIA STREAMING WITH ONLINE CACHING AND PEER-TO-PEER FORWARDING

PRIORITY

[0001] The present patent application claims priority to and incorporates by
reference the corresponding provisional patent application Ser. No. 60/957,009,
titled, “A Method and Apparatus for Improved Media Streaming with Online
Caching and Peer-to-Peer Forwarding,” filed on Aug. 21, 2007.

FIELD OF THE INVENTION

[0002] The present invention relates to the field of video streaming, content
distribution, and communication networks; more particularly, the present
invention relates to media streaming with on-line caching and peer-to-peer
forwarding.

BACKGROUND OF THE INVENTION

[0003] Peer-to-peer content distribution and streaming is well known and there
are numerous system proposals and implementations in the literature and
industry. One such system includes peers, where each peer stores and streams
videos to the requesting client peers. Each video is encoded into multiple
descriptions and each description is placed on a different node. When a serving
peer disconnects, the system locates another peer who is storing the same
description and has sufficient uplink bandwidth for the requesting client. This
solution does not provide a cache or storage management policy.
[0004] A method for arranging nodes within a wide area network has been
disclosed in which users relay broadcast content among each other. The
conventionally-encoded media stream is segmented into small files and each file
is uploaded to users who re-upload them repeatedly in chain-letter style
multiplier networks. The clients at the same time play back the files
continuously through a conventional media player after some playback delay.
[0005] In another system, clients have a memory cache used for storing the
downloaded media file. The clients are clustered together, depending on their
arrival times, to join the same media stream from the server in a chained
fashion. They fetch the missing initial segments of the media file from the
cache of other clients in the chain. The specified system does not manage the
resources proactively, but applies static rules of caching and serving.
[0006] A data buffer management tool has been disclosed in which a decision is
made on what should remain in the mass storage and what should be retained in
the buffer memory when serving multiple video ports. The tool makes use of the
predictable nature of the video data stream in predicting future requirements
for a given one of the data blocks to decide whether to retain it in the buffer
or in the mass storage.
[0007] A cache lookup system to retrieve data in a client-server network has
been proposed, where clients use the caches of other clients to retrieve the
requested information. The system does not specify how the cache spaces should
be optimized to reduce the server load. In another system, referred to as BASS,
BitTorrent is augmented by adding a media server into the system and forcing
clients to download only the segments after their playback point. Clients can
download both from the media server and use the BitTorrent peer-to-peer (P2P)
connections simultaneously. The system combines the benefits of client-server
and P2P architectures, but it still follows a randomized caching strategy since
it is based on the BitTorrent system, where the rarest segments in the
neighborhood of a client are pushed forward to the client and tit-for-tat
sharing policies are utilized.
[0008] Caches of peers have been treated as seeds of new multicast sessions to
improve the server bandwidth utilization. Again, the caching strategy here is
static and not adaptive to the demand. It also requires chaining of nodes and
patching missing information. Hence, the client caches are not optimized with
respect to the demand.
[0009] An erasure coding method has been proposed to generate encoding blocks
from the original media and instead deliver unique encoding blocks to each of
the clients. Clients store as many encoding blocks as possible depending on
their buffer sizes and serve the cached content to other peers. Again, this
method does not allow optimizing the cache for the demand heterogeneity across
the video segments and its time-variability. Caching has also been discussed in
the context of deciding where newly arriving clients join the distribution
tree, and random pre-fetching of future data has been proposed, as well as
caching of the most recent data, where the control is over the topology rather
than the cached data. In another solution, the "supplier" of a segment is
counted, but the supply is not used in caching decisions. The supply count is
used to decide whom to ask for which segment (e.g., one policy is to ask for
the rarest segment in the system). That solution utilizes a gossiping-based
protocol to establish delivery.

SUMMARY OF THE INVENTION

[0010] A system, method and apparatus are disclosed herein for media streaming.
In one embodiment, the system comprises one or more media servers to serve
media content and a plurality of peers communicably coupled to one or more
other peers of the plurality of peers and at least one of the one or more media
servers to receive segments of media content, where at least one of the peers
allocates a set of resources for serving the segments of media content,
including cache memory to store the segments and media files and uplink
bandwidth to send the segments of media content to the one or more peers to
which the one peer is communicably coupled. The system also includes a first
control server to track media content demand and the allocated resources of the
plurality of peers to determine which peer should cache which segment of the
media file and to return peer location information specifying the one or more
peer locations from which each peer is to receive each segment of each media
content requested. The control server is operable to send the location
information to each peer. In one embodiment, the one control server is also
operable to calculate a utility of each caching option and enforce it by
sending triggers to the peers to initiate the caching at those peers.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The present invention will be understood more fully from the detailed
description given below and from the accompanying drawings of various
embodiments of the invention, which, however, should not be taken to limit the
invention to the specific embodiments, but are for explanation and
understanding only.
[0012] FIG. 1 is a block diagram of one embodiment of a system.
[0013] FIG. 2 illustrates one embodiment of a client in a system reporting its
resources periodically to a control server and the manner in which the control
server dictates the caching decisions.
[0014] FIG. 3 illustrates a demand curve generated using the arrival time and
download rates over time.
[0015] FIG. 4 illustrates a dynamic programming (or, equivalently, trellis)
based optimization to find the best sequence of caching strategies at each
client node.
[0016] FIG. 5 is a block diagram of one embodiment of a computer system.
[0017] FIG. 6 is a block diagram of one embodiment of a control server.

DETAILED DESCRIPTION OF THE PRESENT INVENTION

[0018] In one embodiment, a system includes one or more media servers to
provide media streaming services to many peers (e.g., clients). Each peer
dedicates (or relinquishes) some of its memory resources, computer processing
unit (CPU) resources, and bandwidth resources (downstream and/or upstream) to
the system, and these resources are used to serve media to other peers. The
servers facilitate the distribution of a given media content by serving the
requested portions, or segments, of the content either directly from themselves
to the requesting client(s) or from one or more distinct peers to the
requesting client(s). In one embodiment, clients issue their requests to the
servers and the servers in return provide a number of peer locations where the
requested portions of the media are located. Servers have control over the peer
resources relinquished (dedicated) to the system. Servers use the memory
resources of peers to store (e.g., cache) segments of the media and use the
uplink bandwidth of these peers to serve the cached segments. In another
embodiment, clients issue their requests to the servers and the servers direct
one or more peers or servers to serve the requesting client.
[0019] For a given media segment, the total uplink bandwidth summed across the
peers who currently cache this segment defines a supply for peer-to-peer
delivery of the segment. The total number of requests and the requested
download rates for the same segment, on the other hand, determine the demand
for the segment. Multiple techniques are disclosed that are utilized over the
system to match the supply and demand for each segment by making on-line
caching decisions.
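
As an illustration of these two quantities (not part of the original
disclosure), the short Python sketch below totals the uplink bandwidth of the
peers caching a segment and the requested download rates for it; the record
types and field names are hypothetical.

from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class PeerState:                      # hypothetical bookkeeping record per peer
    uplink_bps: float                 # uplink bandwidth dedicated to the system
    cached: Set[Tuple[str, int]]      # (media-name, segment) pairs held in cache

@dataclass
class Request:                        # one outstanding request for a segment
    media_name: str
    segment: int
    download_rate_bps: float          # rate at which the requester wants the segment

def supply_bps(peers: List[PeerState], media_name: str, segment: int) -> float:
    """Supply: total uplink bandwidth of peers currently caching the segment."""
    return sum(p.uplink_bps for p in peers if (media_name, segment) in p.cached)

def demand_bps(requests: List[Request], media_name: str, segment: int) -> float:
    """Demand: total requested download rate for the segment."""
    return sum(r.download_rate_bps for r in requests
               if (r.media_name, r.segment) == (media_name, segment))

Matching supply and demand then amounts to steering cached copies so that
supply_bps tracks demand_bps for every segment.
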
[0020] In one embodiment, some of the nodes (referred to herein as control
servers) keep track of the current supply, current demand, and predicted future
demand of all segments of media files. This may be all media files or some
subset of the media files (e.g., at least the popular media files). In one
embodiment, the future demand predictions take into account the deterministic
behavior under normal media streaming operation logic as well as the stochastic
behavior due to random peer arrivals, departures, failures, etc.
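
A toy sketch of such a prediction follows, purely for illustration: the
deterministic part advances existing viewers along the file, and the stochastic
part assumes new viewers arrive at an estimated average rate and start from the
beginning. The model and every parameter name are assumptions, not taken from
the disclosure.

def predict_demand(current_demand, horizon_s, segment_duration_s,
                   arrival_rate_per_s, per_client_rate_bps):
    """current_demand: {segment index: demand in bps} observed now.
    Returns a rough {segment index: demand in bps} forecast horizon_s later."""
    shift = int(horizon_s // segment_duration_s)
    # Deterministic part: clients streaming sequentially move `shift` segments ahead.
    predicted = {seg + shift: bps for seg, bps in current_demand.items()}
    # Stochastic part: newcomers arrive throughout the horizon, so at the forecast
    # time they are spread (roughly uniformly) over the first `shift` segments.
    expected_new_clients = arrival_rate_per_s * horizon_s
    spread = max(shift, 1)
    for seg in range(spread):
        predicted[seg] = predicted.get(seg, 0.0) + \
            (expected_new_clients / spread) * per_client_rate_bps
    return predicted
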
[0021] In one embodiment, the caching decisions at each node are performed by
the control servers with the aim of increasing, and potentially maximizing, the
utility of the available cache space within a time-horizon given the supply and
predicted demand in the given time-horizon for all the segments. The caching
decisions are executed in different ways. In one embodiment, caching decisions
can be in the form of pre-fetching some of the segments ahead of time to the
peers to balance the future demand and to reduce the future server load. This
requires allocating some server bandwidth to fetch currently under-represented
media segments. In another embodiment, one or more servers do not perform any
pre-fetching but a cache replacement strategy is used. Whenever a peer finishes
downloading a segment it requested (for playback, for example), the server
decides whether to keep the previously cached segments or to keep the currently
downloaded segment. In another embodiment, the peer makes the decision. In one
embodiment, the decision is made to improve the overall utility of the cache.
The peer updates its cache according to the decision. Pre-fetching and cache
replacement strategies can also be used together to further improve the
performance.
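
The cache-replacement variant can be sketched as follows. This is only an
illustrative scheme in which "utility" stands for any per-segment score the
control server (or peer) derives from the supply and predicted demand; the
function names are invented here.

def update_cache(cached_segments, new_segment, capacity, utility):
    """After a download completes, keep the newly downloaded segment only if it
    displaces a cached segment of lower utility; otherwise leave the cache as is."""
    cached = set(cached_segments)
    if new_segment in cached:
        return cached
    if len(cached) < capacity:
        cached.add(new_segment)
        return cached
    worst = min(cached, key=utility)          # least useful resident segment
    if utility(new_segment) > utility(worst):
        cached.remove(worst)
        cached.add(new_segment)
    return cached

Pre-fetching would instead act before the download, spending extra server or
peer bandwidth to pull an under-represented segment into some peer's cache
ahead of its predicted demand.
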
[0022] Thus, the technologies disclosed herein differ in the ways they optimize
the system resources and in the mechanisms used to match the demand and supply.
One embodiment of the invention takes into account the media streaming
requirements and network/server/peer bandwidth and memory constraints as well
as random events (e.g., nodes joining and leaving the system) to develop an
effective way to pair peers and match supply and demand. In one embodiment,
cache trigger signaling accomplishes the cache optimization decisions. Both
in-band (no extra bandwidth usage) and out-of-band (i.e., prefetching) caching
methods may be used. These overall features make the techniques disclosed
herein unique and different from other peer-to-peer streaming solutions.
[0023] In the following description, numerous details are set forth to provide
a more thorough explanation of the present invention. It will be apparent,
however, to one skilled in the art, that the present invention may be practiced
without these specific details. In other instances, well-known structures and
devices are shown in block diagram form, rather than in detail, in order to
avoid obscuring the present invention.
[0024] Some portions of the detailed descriptions which follow are presented in
terms of algorithms and symbolic representations of operations on data bits
within a computer memory. These algorithmic descriptions and representations
are the means used by those skilled in the data processing arts to most
effectively convey the substance of their work to others skilled in the art. An
algorithm is here, and generally, conceived to be a self-consistent sequence of
steps leading to a desired result. The steps are those requiring physical
manipulations of physical quantities. Usually, though not necessarily, these
quantities take the form of electrical or magnetic signals capable of being
stored, transferred, combined, compared, and otherwise manipulated. It has
proven convenient at times, principally for reasons of common usage, to refer
to these signals as bits, values, elements, symbols, characters, terms,
numbers, or the like.
[0025] It should be borne in mind, however, that all of these and similar terms
are to be associated with the appropriate physical quantities and are merely
convenient labels applied to these quantities. Unless specifically stated
otherwise as apparent from the following discussion, it is appreciated that
throughout the description, discussions utilizing terms such as “processing” or
“computing” or “calculating” or “determining” or “displaying” or the like,
refer to the action and processes of a computer system, or similar electronic
computing device, that manipulates and transforms data represented as physical
(electronic) quantities within the computer system’s registers and memories
into other data similarly represented as physical quantities within the
computer system memories or registers or other such information storage,
transmission or display devices.
[0026] The present invention also relates to apparatus for performing the
operations herein. This apparatus may be specially constructed for the required
purposes, or it may comprise a general purpose computer selectively activated
or reconfigured by a computer program stored in the computer. Such a computer
program may be stored in a computer readable storage medium, such as, but not
limited to, any type of disk including floppy disks, optical disks, CD-ROMs,
and magnetic-optical disks, read-only memories (ROMs), random access memories
(RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media
suitable for storing electronic instructions, and each coupled to a computer
system bus.
[0027] The algorithms and displays presented herein are not inherently related
to any particular computer or other apparatus. Various general purpose systems
may be used with programs in accordance with the teachings herein, or it may
prove convenient to construct more specialized apparatus to perform the
required method steps. The required structure for a variety of these systems
will appear from the description below. In addition, the present invention is
not described with reference to any particular programming language. It will be
appreciated that a variety of programming languages may be used to implement
the teachings of the invention as described herein.
[0028] A machine-readable medium includes any mechanism for storing or
transmitting information in a form readable by a machine (e.g., a computer).
For example, a machine-readable medium includes read only memory (“ROM”);
random access memory (“RAM”); magnetic disk storage media; optical storage
media; flash memory devices; electrical, optical, acoustical or other form of
propagated signals (e.g., carrier waves, infrared signals, digital signals,
etc.); etc.

Overview

[0029] A media streaming system comprises one or more media servers to serve
media content to peers (e.g., client computer systems), multiple peers to
receive segments of media content from and be communicably coupled to other
peers and one or more of the media servers, and at least one control server.
Each peer allocates a set of resources for serving the segments of media
content, including cache memory to store the segments and media files and
uplink bandwidth to send the segments of media content to other peers to which
they are communicably coupled. The control server(s) track media content demand
and the allocated resources of the peers to determine location information
specifying the one or more peer locations from which each peer is to receive
each segment of each item of media content requested and send the location
information to the peer.
[0030] In one embodiment, access to an item of media content (e.g., a video) is
controlled by one control server in the system during the time the video is
available for playback in the system. In one embodiment, each control server is
operable to determine how media files are segmented and to determine download
rates and playback delays of each segment.
[0031] In one embodiment, each peer specifies its locally available resources
to at least one control server. In one embodiment, each peer has a cache memory
limited in size to storing only one segment of media. In one embodiment, a
control server causes a segment of media to be streamed to a peer after the
peer joins the system. In response, the peer caches the segment into its cache
memory. In another embodiment, each peer has a cache memory that can store
multiple segments and the upload rate of the caching peer is shared by all the
cached segments in a well-specified manner, e.g., each segment takes an equal
rate allocation (e.g., where the upload rate is R and the peer can cache 3
segments, then each segment is served at rate R/3; if a peer can cache 4
segments, then each segment is served at rate R/4).
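
The equal-share rule in the R/3 and R/4 example above can be written out
directly; a minimal sketch (the names are illustrative):

def per_segment_upload_rate(upload_rate_bps, cached_segments):
    """Each cached segment is served at R/n when the peer caches n segments."""
    n = len(cached_segments)
    return {seg: upload_rate_bps / n for seg in cached_segments} if n else {}

# Example: per_segment_upload_rate(3_000_000, ["seg1", "seg2", "seg3"])
# -> each of the three segments is served at 1,000,000 bits per second.
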
[0032] In one embodiment, the control servers send triggers to peers to start
the caching process for one or more segments of media content. In one
embodiment, peers receive a trigger from a control server to cache one or more
segments of media content. In one embodiment, a peer pre-fetches these one or
more segments, in response to the trigger, from the one or more media servers
and one or more other peers. In one embodiment, the peer receives the one or
more segments to be played immediately or in the future.
[0033] In one embodiment, a control server or a peer determines whether the
peer continues to store the segment or overwrites the segment with a new
segment being downloaded for playback. In one embodiment, determining whether
to cache the new segment is based on a determined amount of reduction in load
of at least one of the one or more media servers achieved over a period of
time. In another embodiment, determining whether to cache the new segment is
based on a prediction of future demand of the new segment and the capability of
peers and media servers to supply the new segment to other peers.
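
One way to make the first criterion concrete, shown only as an assumed example,
is to estimate how much server traffic an extra cached copy would absorb over
some horizon: only demand not already met by the peer-to-peer supply can be
offloaded, and the peer can contribute at most its own uplink rate.

def saved_server_load_bytes(segment, demand_bps, supply_bps, peer_uplink_bps, horizon_s):
    """Rough server-load reduction (bytes over horizon_s) if this peer caches
    `segment`. demand_bps and supply_bps are dicts of predicted values per segment."""
    unmet_bps = max(0.0, demand_bps[segment] - supply_bps.get(segment, 0.0))
    return min(unmet_bps, peer_uplink_bps) * horizon_s / 8.0

# The peer (or control server) would then keep whichever of the previously
# cached segment and the newly downloaded segment yields the larger value.
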
[0034] In one embodiment, a control server tracks supply and demand of each
segment of media content by each peer. In one embodiment, a control server
determines which peers are possible candidates to serve a requested segment
using peer arrival rates and supply-demand analysis with respect to the peers
and supply and demand of segments of media content with respect to the peers.
In one embodiment, a control server determines this by attempting to maximize
uplink bandwidth utilization at each peer by making desired segments of media
content accessible at a particular time. In one embodiment, the media (e.g.,
video) is partitioned into contiguous segments and the control server computes
an estimated future demand for each of the segments.
[0035] In one embodiment, a control server is operable to estimate supply and
demand curves corresponding to each segment of media content at a future time
and use each estimate to determine the location information. In one embodiment,
the first control server generates the estimate using peer arrival and
departure time statistics. In another embodiment, the control server generates
the estimate using peer arrival and departure time statistics, information
indicative of when a particular media segment is requested, node inter-arrival
and inter-departure statistics (e.g., mean and standard deviation of
inter-arrival and inter-departure times), information about round-trip-time
communication (the round trip communication delay between the control server
and each peer as well as the round trip delay between paired peers) and
processing delays (the delay that occurs because each node has to parse the
packets and execute certain functions to facilitate the streaming operation).
In yet another embodiment, the control server generates the estimate using one
or more of a group consisting of media playback rate, media download rates, and
media segment sizes. These estimates are updated in regular or irregular
intervals.
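
By way of illustration only, the sketch below derives the inter-arrival
statistics mentioned above and uses them in a very simple demand estimate:
clients are assumed to arrive roughly every mean_gap_s seconds and to play the
file sequentially, so the number of clients expected to be on a given segment
at a future time follows from the arrival window that maps onto that segment.
None of these names or modelling choices come from the disclosure.

import statistics

def interarrival_stats(arrival_times_s):
    """Mean and standard deviation of peer inter-arrival times."""
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    return statistics.mean(gaps), statistics.pstdev(gaps)

def expected_clients_on_segment(seg, future_time_s, mean_gap_s, segment_duration_s):
    """Clients that arrived between future_time_s - (seg+1)*d and
    future_time_s - seg*d (d = segment duration) will be playing segment
    `seg` at future_time_s under sequential playback."""
    window_start = max(0.0, future_time_s - (seg + 1) * segment_duration_s)
    window_end = max(0.0, future_time_s - seg * segment_duration_s)
    return (window_end - window_start) / mean_gap_s
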
[0036] In one embodiment, a control server generates the estimate based on a
utility measure computed for each media file and each segment of a media file
from supply and demand curves corresponding to the supply and demand with
respect to each segment. In one embodiment, the control server determines which
segments to cache based on computed utility functions associated with each
segment.
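
A concrete, though purely hypothetical, choice of such a utility measure is the
unmet demand of a segment; the control server could then rank segments by it
when deciding what a peer should cache. The sketch below is an assumed example,
not the utility function of the disclosure.

def segment_utility(demand_bps, supply_bps):
    """Unmet demand: how much of the predicted demand is not yet covered by the
    peer-to-peer supply for this segment."""
    return max(0.0, demand_bps - supply_bps)

def segments_to_cache(demand, supply, cache_slots):
    """Greedy selection of the cache_slots highest-utility segments.
    demand and supply are dicts keyed by segment index."""
    ranked = sorted(demand,
                    key=lambda s: segment_utility(demand[s], supply.get(s, 0.0)),
                    reverse=True)
    return ranked[:cache_slots]
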
[0037] FIG. 1 is a block diagram of a media streaming system. Referring to
FIG. 1, the system comprises a number of media servers 101_1-101_3 that store
and serve original media content (e.g., files); a control server 102 that
monitors and maintains the system operations as well as performs resource
management, control, allocation and optimization; and clients 103_1-103_5 (also
referred to herein as peers) that have limited local resources (e.g., memory
space, bandwidth, CPU cycles, etc.). Although only one control server is shown
in FIG. 1, there can be any number of control servers in the system. Similarly,
while only three media servers are shown and only five clients are shown, there
can be any number of them in the system.
[0038] Control server 102, media servers 101_1-101_3 and/or clients 103_1-103_5
can be physically separate or collocated nodes, with each node in the system
typically interconnected via some communication link and/or computer network.
Clients 103_1-103_5 can themselves be the original generators (and hence media
servers) of media content and/or consumers of existing media content.
[0039] In one embodiment, clients dedicate some of their local resources to the
media distribution system to the extent they are used in the distribution of
media content among peers. The incentives provided to clients for such a
dedication are not disclosed or a subject of the current disclosure. In one
embodiment, clients directly send their requests (e.g., REQ(media-name,
segment)) for a particular segment of a media file to one of the control
servers, such as control server 102, which maintains a database (e.g., a list)
of the clients in the system and the segments of media content that they are
currently caching. Hence, the control servers maintain a global view of the
media content. In response, control server 102 searches its database for the
locations that can supply the requested segment of the media file at the
desired rate. Control server 102 replies (e.g., REPLY(supply locations)) back
to the requesting client 103 with the list of locations and their possible
attributes such as, for example, available resources, distance, etc. When the
requesting client (e.g., client 103_1) receives the reply message from control
server 102, client 103_1 contacts the locations and, if the locations satisfy
the conditions of the request, they start streaming the requested segment to
the client. In one embodiment, the list of locations includes one or more media
servers 101 and one or more other client nodes 103. In the example in FIG. 1,
client 103_1 requests segments from media server 101_1 and from another client
103. The list of locations may include solely client nodes or solely media
servers or a combination of both. Although not depicted in FIG. 1, in one
embodiment, control server 102 can directly send control messages to a set of
locations which then start pushing the requested video segment to the
requesting peer. In such a “push-based” operation mode, the requesting peer
expects video payloads in response to its request to control server 102.
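
The pull-based exchange just described can be pictured with the following
sketch; the class and field names are stand-ins invented here for
REQ(media-name, segment) and REPLY(supply locations), not a wire format defined
by the disclosure.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Req:                             # REQ(media-name, segment)
    media_name: str
    segment: int

@dataclass
class Reply:                           # REPLY(supply locations)
    supply_locations: List[str]        # addresses of media servers and peers

class ControlServer:
    def __init__(self) -> None:
        # (media-name, segment) -> locations currently able to supply it
        self.directory: Dict[Tuple[str, int], List[str]] = {}

    def handle_req(self, req: Req) -> Reply:
        """Look up which locations can serve the requested segment at present."""
        return Reply(self.directory.get((req.media_name, req.segment), []))

The requesting client then contacts the returned locations itself and receives
SEND(media-name, segment) transfers from them; in the push-based mode described
above, the control server would instead message those locations directly.
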
[0040] In one embodiment, the requesting client can support parallel streaming
from many points, where each stream carries unique information. In one
embodiment, the unique information feature can be satisfied by explicitly
requesting non-overlapping portions of a given segment from different nodes. In
another embodiment, the unique information feature is satisfied by unique
encoding blocks generated from the original message blocks stored at each
location. The encoding blocks can be generated using (but not limited to) fixed
rate or rateless erasure codes such as, for example, Reed-Solomon, Tornado
Codes, Raptor Codes, LT Codes, etc.
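
The first option, explicitly requesting non-overlapping portions of a segment
from different nodes, could for example be realized by splitting the segment
into byte ranges, as in the illustrative sketch below (the erasure-coded option
is not shown); the function is hypothetical.

def byte_ranges(segment_size_bytes, locations):
    """Assign each supplying node a distinct, non-overlapping byte range of one
    segment so that the parallel streams carry unique information."""
    n = len(locations)
    base, extra = divmod(segment_size_bytes, n)
    ranges, start = {}, 0
    for i, loc in enumerate(locations):
        length = base + (1 if i < extra else 0)
        ranges[loc] = (start, start + length - 1)   # inclusive byte offsets
        start += length
    return ranges

# byte_ranges(1000, ["peer_a", "peer_b", "server_1"])
# -> {'peer_a': (0, 333), 'peer_b': (334, 666), 'server_1': (667, 999)}
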
[0041] In one embodiment, the media streaming system uses an explicit control
signaling mechanism between control server 102 and clients 103_1-103_5. FIG. 2
illustrates a client 201, which is one of many clients in the system, reporting
its local resources periodically to a control server 202. Referring to FIG. 2,
client 201 sends control server 202 a report with the size of its cache memory
allocated to the system, its uplink/downlink bandwidth, its CPU cycles
dedicated to the system and an indication of the local content stored in its
cache. In one embodiment, clients also use report messages as “ALIVE” messages
to confirm that they are available and can execute as supply and/or demand
nodes. Control server 202 decides whether prefetching or different caching is
needed at each client and signals the decision to the clients. In one
embodiment, control server 202 signals client 201 to prefetch a media segment
by specifying the segment by PREFETCH(media-name, segment, supply locations)
and/or signals client 201 to cache a media segment by specifying the segment by
CACHE(media-name, segment). Thus, in one embodiment, the control servers
maintain a global state of clients and media servers in the system. In another
embodiment, control servers may have a more limited view of the system and make
local decisions in coordination with other control servers.
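
The signaling of FIG. 2 can be summarized with the following message shapes;
only the message names (REPORT, PREFETCH, CACHE, and the ALIVE use of REPORT)
come from the text, while the field names are assumptions made here for
illustration.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Report:                    # REPORT(Cache, BW, CPU, local content); also serves as ALIVE
    cache_bytes: int             # cache memory allocated to the system
    uplink_bps: float
    downlink_bps: float
    cpu_mhz: float
    local_content: List[Tuple[str, int]]   # (media-name, segment) pairs currently cached

@dataclass
class Prefetch:                  # PREFETCH(media-name, segment, supply locations)
    media_name: str
    segment: int
    supply_locations: List[str]

@dataclass
class CacheMsg:                  # CACHE(media-name, segment)
    media_name: str
    segment: int

A periodic Report doubles as the ALIVE message: receiving it tells the control
server that the peer is still available as a supply and/or demand node.
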
[0042] In one embodiment, control servers explicitly trigger caching decisions
at the client nodes by issuing explicit control messages. In one embodiment,
one control message requests clients to download (e.g., pre-fetch) some
segments of a media file before the client actually requests them. In one
embodiment, the segment to be pre-fetched might not ever be requested or
demanded by a client. In another embodiment, another control message requests
clients to cache a future segment that is not yet demanded by the client but
predicted to be demanded by the client in the near future. When clients issue
their orders sequentially from the first segment to the last (which is the case
for video streaming applications), it is possible for a control server to
anticipate when a client will issue a request to download a future segment.
Hence, if the demand for any of the future segments is higher than the supply,
the control server triggers the caching by sending an explicit control message.
When a client receives the trigger, it continues its sequential download. When
the segment indicated by the trigger is scheduled to be received according to
the normal video streaming operation, the client starts caching that segment.
In one embodiment, the clients contribute to the supply of a segment as soon as
they start caching the segment. In another embodiment, clients contribute to
the supply of a segment only after they fully cache the segment.
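
As an illustrative and deliberately simplified sketch of this in-band
triggering, the routine below scans the predicted future segments and, for each
one whose demand exceeds its supply, picks a client that has not yet reached
that segment in its sequential download and schedules a CACHE trigger for it.
The data structures and the choice of the first candidate are assumptions, not
the disclosure's algorithm.

def plan_cache_triggers(future_demand, future_supply, client_positions, media_name):
    """future_demand / future_supply: {segment index: predicted bps}.
    client_positions: {client id: index of the segment currently being downloaded}.
    Returns (client id, ("CACHE", media-name, segment)) trigger tuples."""
    triggers = []
    for seg in sorted(future_demand):
        if future_demand[seg] <= future_supply.get(seg, 0.0):
            continue                              # supply already covers demand
        # Clients still upstream of `seg` will download it anyway as part of their
        # sequential streaming, so asking them to keep it costs no extra bandwidth.
        candidates = [c for c, pos in client_positions.items() if pos < seg]
        if candidates:
            triggers.append((candidates[0], ("CACHE", media_name, seg)))
    return triggers
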
[0043] In one embodiment, control servers track client arrivals into and
departures from the streaming system, segments requested and the associated
request times, supply and demand statistics for various segments, segment
suppliers, supply/demand rates of clients, etc. In one embodiment, this
information is used to predict the current and future supply and demand curves
for each segment. In another embodiment, the predicted supply and demand curves
are used to define utility functions for each media segment and, depending on
the value of the utility functions, control servers determine the segments to
be cached at different client locations at different times. FIG. 3 illustrates
an example of one estimation method. Other well known estimation models may be
used.
