Catching and Selective Catching: Efficient Latency Reduction
Techniques for Delivering Continuous Multimedia Streams

Lixin Gao    Zhi-Li Zhang    Don Towsley

Abstract
We present a novel video streaming technique called catching for on-demand delivery of "hot" (i.e., frequently accessed) video objects to a large number of clients. This technique not only significantly reduces the server and network resource requirements but is also capable of providing near-instantaneous service to a large number of clients. We prove that the performance of catching is close to the best achievable by any broadcasting scheme that supplies near-instantaneous service. By combining this technique for the delivery of "hot" video objects with controlled multicast [8] for the delivery of "cold" video objects, we design an efficient video delivery scheme referred to as selective catching. Extending this scheme to a proxy-assisted video delivery environment, we also develop a proxy-assisted selective catching scheme. Through empirical studies, we demonstrate the efficacy of the proposed video delivery schemes.
1 Introduction
The past few years have seen the dramatic growth of multimedia applications that involve video streaming over the Internet. Server and network resources (in particular, server I/O bandwidth and network bandwidth) have proved to be a major limiting factor in the widespread use of video streaming over the Internet. In order to support a large population of clients, techniques that can efficiently utilize server and network resources are essential. In designing such techniques, another important factor that must be taken into consideration is the service latency, i.e., the time a client has to wait until the object he/she
----------------
Department of Computer Science, Smith College, Northampton, MA 01060, USA. gao@cs.smith.edu
Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA. zhzhang@cs.umn.edu
Department of Computer Science, University of Massachusetts, Amherst, MA 01030, USA. towsley@cs.umass.edu
The first author was supported in part by NSF grant NCR-9729084 and NSF CAREER Award ANI-9875513; the second author was supported in part by NSF CAREER Award NCR-9734428; the third author was supported in part by NSF grant ANI-9805185. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Petitioners' Exhibit 1026
Page 0001
has requested starts to play back. The effectiveness of a video delivery technique must be evaluated in terms of both the server and network resources required to deliver a video object and the expected service latency experienced by clients. Clearly, the "popularity" or access pattern of video objects (i.e., how frequently a video object is accessed in a given time period) plays an important role in determining the effectiveness of a video delivery technique.
In this paper we propose and develop a novel video delivery technique called catching, which can efficiently utilize server and network resources while providing near-instantaneous service to clients. This technique is particularly suitable for "hot" (i.e., frequently accessed) video objects. The effectiveness of the technique is achieved through an intelligent integration of the "server-push" and "client-pull" video delivery paradigms. Using this technique, a video server periodically "broadcasts" a video object via a number of dedicated multicast channels. A client who wishes to watch the video immediately joins an appropriate multicast channel without waiting for the beginning of the next broadcast period. At the same time, the client sends a request to the server to retrieve the missing initial video data (referred to as the prefix of the video object). The prefix is delivered by the server using a unicast channel and played back immediately by the client, while the video data received from the multicast channels are temporarily buffered at the client until they are played back. Hence, catching provides minimal service latency by allowing a client to join the on-going multicast channels to receive video data "pushed" by the server while "pulling" the missing video data from the server via a unicast channel. Using a smart broadcast scheme such as the Greedy Disk-conserving Broadcast (GDB) scheme [7], we can minimize the expected number of server and network channels needed to deliver a video object, given its access pattern; this is verified through simulations. Furthermore, we prove that the number of channels required by catching is close to the minimum achievable by any broadcasting scheme that supplies near-instantaneous service.
In order to account for the diverse access patterns of a collection of video objects in a video server, we design an efficient video delivery scheme called selective catching, which combines catching with another video delivery technique, controlled multicast. Controlled multicast is a "client-pull" technique which is most effective in delivering "cold" video objects. Based on video access patterns, we introduce a simple policy for classifying "hot" and "cold" video objects and apply catching and controlled multicast accordingly to deliver the video objects to clients. Through empirical studies, we demonstrate that in terms of both server/network resource requirements and service latency, selective catching outperforms either catching or controlled multicast applied alone.
The proposed catching and selective catching techniques can also be applied in a proxy-assisted video delivery environment [12, 13], where initial partial video data (prefixes) of some video objects can be staged in proxy servers in a pre-determined manner. Under this proxy-assisted video delivery architecture, we can take advantage of the resources (processing and disk storage) available at proxy servers to significantly reduce the server and (backbone wide-area) network resource requirements while at the same time providing instantaneous or near-instantaneous service to clients. We present a brief description of the algorithms used by a central video server, proxy servers, and clients to coordinate video delivery using proxy servers. Simulations are carried out to illustrate the significant reduction in server and network resource requirements achieved using this proxy-assisted video delivery architecture.
The remainder of this paper is organized as follows. Related work is briefly surveyed in Section 1.1. In Section 2 we describe the problem setting and the background material necessary for the catching technique, which we present in Section 3. The selective catching video delivery scheme is introduced in Section 4. In Section 5 we apply the selective catching technique to the proxy-assisted video delivery architecture. The paper is concluded in Section 6.
1.1 Related Work

In recent years a variety of "client-pull" and "server-push" techniques for video delivery have been proposed (see, e.g., [6, 1, 3, 4, 7]). The simplest "client-pull" technique is to deliver a separate video stream upon each client request. This technique, while providing minimal service latency to a client, is obviously not efficient in terms of server and network resource utilization. Cleverer "client-pull" techniques such as batching [1, 6] and patching [5, 8] have been proposed that take advantage of the underlying network multicasting capabilities to reduce server and network resource requirements. In the case of batching, this reduction is achieved at the cost of increased service latency: early requests for a video object are delayed until a certain number of requests for the same object have arrived, at which point delivery of the video object is scheduled. Hence, batching is less effective for "cold" video objects. On the other hand, patching, which allows multiple clients to share a multicast channel whenever possible, is most effective in reducing the server and network resource requirements for "cold" video objects.
"Server-push" techniques are typically designed for "hot" video objects. They employ a fixed number of multicast channels to periodically broadcast video objects to a group of subscribers. The difference between various "server-push" techniques lies in the broadcast schemes used. These broadcast schemes determine the server and network resources required for broadcasting a video object. "Server-push" techniques have the advantage that they utilize server and network resources more efficiently, but this efficiency is achieved at the cost of increased service latency, as a client can only start receiving a video object at the beginning of the next broadcast period.

The catching technique we propose reduces service latency while retaining the efficiency of periodic broadcast schemes in utilizing server and network resources. It thus eliminates the shortcoming associated with periodic-broadcast-based "server-push" techniques. Catching is similar in spirit to the split-and-merge (SAM) protocol [14] proposed for interactive VOD systems, where a unicast stream is scheduled on demand by a client's request. The selective catching technique further improves the overall performance by combining catching and controlled multicast to account for diverse user access patterns.
The problem of delivering continuous media streams using proxy servers has been studied in a number of contexts. In [12], we develop video staging techniques that store a pre-determined amount of video data in strategically placed proxy servers to reduce the backbone network bandwidth required for delivering video streams across a wide-area network. In [13], a prefix caching scheme is proposed to reduce the latency of delivering smoothed variable-bit-rate (VBR) continuous streams between the proxy and clients. Proxy-assisted video delivery is also proposed in the context of the dynamic skyscraper delivery scheme in [10].

Our proxy-assisted catching scheme can improve service latency as well as reduce server and network resource requirements. Unlike [10], our scheme can handle variable object sizes, and it is based on a formal analysis of multicast scheduling policies, from which the design parameters can be derived in a straightforward manner. As a result, our solution can be optimized accordingly.
2 Problem Overview and Preliminaries

2.1 Problem Overview
[Figure 1 depicts a central video server, comprising a scheduler, a data server, and video storage, connected across a network to multiple clients; a control channel and data channels (Channels 1-3) link the server to the clients, and each client has a disk and a display.]

Figure 1: An overview of VOD system architecture.
Consider a typical stored video-on-demand (VOD) delivery system as shown in Figure 1. A central video server delivers video streams from a video object store to a large number of clients across a high-speed network. The central server organizes the server and network resources required to deliver a video stream into a (logical) channel. A channel can be either a unicast channel or a multicast channel. The server uses a unicast channel to deliver a video stream to a single client, whereas it uses a multicast channel to deliver a video stream simultaneously to a group of clients (referred to as a multicast group). In addition to the logical channels used for delivering video streams (i.e., video delivery channels), we also assume that there are control channels to deliver signaling messages to a client or a group of clients and vice versa for control purposes (e.g., which video object is requested by a client, which data channels a client should tune in to, when to start video playback, etc.). The video server has a scheduler which receives client requests for video objects via the control channels, processes the requests, and determines when and on which video delivery channels to deliver the requested video objects to clients.

Each client contains a set-top box, a disk, and a display monitor. A client is connected to the network via the set-top box, which selects one or more network channels on which to receive a requested video object according to instructions from the server. The received video data are either sent to the display monitor for immediate playback, or temporarily stored on the disk to be retrieved and played back on the display monitor later. The client storage space is the maximum disk space required throughout the client playback period. For ease of exposition, in this paper we assume that the client disk space is sufficiently
In this paper we use the term video stream to denote a continuous flow or "stream" of video data (belonging to a certain video object) delivered from the server to a client or a group of clients. As will be clear later, a single video object can be partitioned into segments and delivered using multiple video streams via several delivery channels.
large to store at least half a video. The client network bandwidth is the maximum network bandwidth required to receive video data from the network throughout the client playback period. In this paper, we assume that a client has the capability of receiving video data from two channels at the same time. We assume that there is a total of C logical channels available, and that there are N video objects in the server object store. The length of the i-th object, i = 1, ..., N, is L_i minutes. We assume that requests for the i-th video object arrive according to a Poisson process with an expected inter-arrival time of 1/λ_i, where λ_i is the request rate of video i.

Given a client request for a video object, the service latency experienced by the client is the amount of time that the client has to wait until he/she can start the playback of the requested video object. A key issue in the design of video delivery techniques is how to efficiently utilize server and network resources (i.e., use the least number of video delivery channels necessary for delivering a video object) while keeping the (expected) service latency experienced by clients as small as possible.
2.2 Preliminaries

We briefly describe two video delivery techniques that we have developed earlier, to provide the necessary background for the catching technique we will introduce in Section 3.
2.2.1 Controlled Multicast: An Optimal Patching

Controlled multicast is a patching technique that allows multiple clients to share a multicast channel without delaying any client request. As a result, it is capable of supporting a large number of clients while providing near-instantaneous service. Controlled multicast differs from the patching technique proposed in [5] in that it employs a patching threshold to minimize the expected number of channels needed to deliver a given video object.

As a "client-pull" technique, controlled multicast allocates channels at the request of clients. The following example illustrates how controlled multicast works. Consider two requests spaced 5 minutes apart for a 90-minute-long video. A multicast channel is allocated to transmit the entire video in order to satisfy the first request. The second request is satisfied by allocating a separate unicast channel to "patch up", i.e., to transmit the first five minutes of the video to the client, while at the same time requiring the client to prefetch the rest of the video from the first channel. Because the second client is five minutes behind, it continually buffers five minutes of video data.

Controlled multicast only allows clients to share a multicast channel to receive a video stream when later client requests for the same video object arrive within a certain time of the first client request. Otherwise, a complete video transmission of the video object is scheduled using a new multicast channel. In other words, for each video object i, a threshold T_i is defined to control the frequency at which a
This assumption is not essential, since our proposed schemes can be easily extended to the general case where clients have any given amount of disk storage space, as we will point out in Section 3. In all of our empirical studies, the amount of client disk storage space used is actually at most one third of a video object.
With the advent of high-speed access technologies such as ADSL and cable modems, this is not an unreasonable assumption.
complete stream of video i is delivered [8]. For simplicity of exposition, we describe controlled multicast assuming that there is an infinite number of server channels. When the first request for video i arrives at time t, the scheduler immediately schedules a complete video stream of video i on a channel, say MG_i. Any subsequent request for video i piggybacks on channel MG_i so long as the request arrives within T_i minutes of the starting time of the previous complete transmission of video i (which is time t in this case). Otherwise, the request is served by initiating a new complete transmission of video i. This process repeats forever. For the details of the interaction between the server and clients, readers are referred to [8]. In [8], the optimal value of the threshold T_i for video i is derived and is given below:

    T_i = (√(2 L_i λ_i + 1) − 1) / λ_i.   (1)

Using this optimal threshold, the expected server bandwidth required to deliver video i to clients is

    √(2 L_i λ_i + 1) − 1.   (2)

2.2.2 GDB: An Efficient Periodic Broadcast

Greedy Disk-conserving Broadcast (GDB) [7] is a "server-push" video delivery technique based on the innovative periodic broadcast schemes developed recently [1, 3, 4]. Under these schemes, a video object is partitioned into segments, and each segment is periodically broadcast via a dedicated channel. The sizes of these segments are carefully chosen so that clients who wish to receive the video object can join the appropriate channels at scheduled times to receive the various segments and ensure continuous playback of the video object. Figure 2 illustrates how these schemes work through a simple example. A video is partitioned into four segments: A, B, C, D. Each segment is broadcast periodically via a dedicated channel. Clients prefetch video data according to a schedule that ensures continuous playback. Furthermore, clients are guaranteed a maximum service latency of d minutes with only 4 dedicated channels, whereas periodically broadcasting the complete video stream every d minutes would require 11 dedicated channels to guarantee the same maximum service latency. In effect, a complete stream is multicast every d minutes but using only 4 channels.

The key issue in periodic broadcast schemes is how to partition video objects into segments so as to enable continuous playback at clients. A method for partitioning video objects is referred to as a partition function, which determines the performance of a periodic broadcast scheme. Given K_i dedicated multicast channels, a partition function f(n) divides a video object into K_i segments T_1, T_2, ..., T_{K_i} as follows. For n = 1, ..., K_i, segment T_n contains f(n) L_i / Σ_{m=1}^{K_i} f(m) minutes of video data; its data starts at the (Σ_{m=1}^{n−1} f(m)) L_i / Σ_{m=1}^{K_i} f(m)-th minute of the video and ends at the (Σ_{m=1}^{n} f(m)) L_i / Σ_{m=1}^{K_i} f(m)-th minute. In [7], a set of constraints on partition functions is derived based on resource availability at the client side, and GDB is shown to be the most efficient among all existing schemes given the same client resource constraints. For example, if the network bandwidth at the client side is only sufficient to support two channels (i.e., receiving from two channels simultaneously), the optimal partition function f(n) used in
[Figure 2 shows a video partitioned into four segments A, B, C, D of lengths d, 2d, 2d, and 5d, each broadcast periodically on a dedicated channel, together with the arrival, receive, display, and disk-buffering timelines of two clients.]

Figure 2: An example of a periodic broadcast scheme.
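The segment arithmetic defined by a partition function can be sketched directly. This is an illustrative fragment, not the authors' code; the series [1, 2, 2, 5] is the opening of the GDB series pictured in Figure 2, and d is taken to be 9 minutes so that L = 90.

```python
def partition(series, L):
    """Split a video of length L minutes into segments whose lengths are
    proportional to the partition-function values f(1..K).  Returns the
    (start, end) minute of each segment T_1..T_K."""
    total = sum(series)              # sum over m of f(m)
    bounds, start = [], 0.0
    for f_n in series:
        length = f_n * L / total     # f(n) * L / sum_m f(m)
        bounds.append((start, start + length))
        start += length
    return bounds

gdb_series = [1, 2, 2, 5]            # segment lengths d, 2d, 2d, 5d
print(partition(gdb_series, 90))
# → [(0.0, 9.0), (9.0, 27.0), (27.0, 45.0), (45.0, 90.0)]
```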
GDB has the following form:

    f_GDB(1) = 1,   f_GDB(2) = f_GDB(3) = 2,   f_GDB(4) = f_GDB(5) = 5,   ...   (3)

That is, f_GDB generates the series 1, 2, 2, 5, 5, ..., with each subsequent value defined recursively from the preceding ones [7]. Given this partition function, it can be shown that the disk storage requirement at the client is equal to (f_GDB(K_i) − 1) times the first-segment size of a video object.

3 Catching

Periodic broadcast schemes can significantly reduce the number of channels required for delivering "hot" (i.e., frequently accessed) videos to a large number of clients. However, they suffer from the shortcoming that clients experience increased service latency. In this section we present a novel video delivery technique, catching, which achieves the resource efficiency of periodic broadcast schemes while providing near-instantaneous service to clients. This technique synergistically combines the "server-push" and "client-pull" video delivery paradigms: like periodic broadcast, catching dedicates a certain number of channels to periodic broadcasting. But unlike periodic broadcast, it reduces the service latency by allowing clients to join an on-going broadcast cycle at any time instead of waiting for the next broadcast cycle. Clients catch up with the current broadcast cycle by retrieving the missing initial frames (or a prefix) of the video object from the server via a separate unicast channel. Clients play back the prefix as it is received from the server, while at the same time temporarily storing the video data received from the broadcast channels on disk.

We illustrate how catching works using a simple example. As shown in Figure 3, a video object is partitioned into four segments, A, B, C, and D, where A, B, and C are of equal length, whereas D is twice as long. The server therefore dedicates four channels to broadcast the four segments separately. Consider client 1, who arrives t seconds after the beginning of the current broadcast cycle of segment A. It joins the on-going broadcast cycle of segment A to receive the remaining data of segment A. At the same time it initiates a unicast "catch-up channel" via which the server delivers the first t seconds of video data of segment A to client 1. This "catch-up" stream is played back immediately by client 1, while the broadcast data of segment A is temporarily stored. At the end of the current broadcast cycle of segment A, client 1 starts receiving segment B by joining the next broadcast cycle of segment B, while continuing the playback of segment A. At the end of the broadcast cycle of segment B, client 1 joins the next broadcast cycles of both segments C and D. By temporarily storing video data that need not be played back immediately, client 1 ensures the continuous playback of the video object. The behavior of client 2 is similar; the only difference is that at the end of the broadcast cycle of segment B, client 2 only needs to join the next broadcast cycle of segment C. As a result, client 2 needs to buffer at most s seconds of video data at any time, whereas client 1 needs to buffer up to t + d seconds of video data. In both cases, clients 1 and 2 receive video data from at most two channels at any given time.

From the above example, we see that the partition function used in catching for periodic broadcasting can be derived directly from that used in a pure periodic broadcast scheme such as GDB. For example,
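The client behavior in this example can be checked with a small simulation. The sketch below is illustrative, not the authors' code; it assumes the Figure 3 setup (segments A, B, C of length d and D of length 2d, each rebroadcast back-to-back on its own channel) and confirms that a client arriving t seconds into a cycle of segment A never listens to more than two channels at once.

```python
def catching_schedule(t, d=1.0):
    """Wall-clock intervals during which the client listens to each channel,
    for a client arriving t seconds (0 < t < d) into the current broadcast
    cycle of segment A.  Segment lengths follow Figure 3: A, B, C of length
    d and D of length 2d."""
    return {
        "catch-up unicast (first t sec of A)": (t, 2 * t),
        "segment A broadcast": (t, d),
        "segment B broadcast": (d, 2 * d),
        "segment C broadcast": (2 * d, 3 * d),
        "segment D broadcast": (2 * d, 4 * d),
    }

def max_concurrent_channels(intervals, step=0.01):
    """Scan time in small steps and report the peak number of channels the
    client listens to simultaneously (intervals are half-open)."""
    spans = list(intervals.values())
    horizon = max(end for _, end in spans)
    peak, x = 0, 0.0
    while x < horizon:
        peak = max(peak, sum(1 for start, end in spans if start <= x < end))
        x += step
    return peak

print(max_concurrent_channels(catching_schedule(t=0.4)))  # → 2
```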
[Figure 3 shows a video partitioned into segments A, B, C of length d and D of length 2d, each broadcast periodically on a dedicated channel, together with the catch-up streams, receive, display, and disk-buffering timelines of two clients arriving t and s seconds into a broadcast cycle of segment A.]

Figure 3: Illustration of catching.
given that the network bandwidth on the client side is only sufficient to support two channels, the partition function for catching is given by

    f(n) = f_GDB(1) for n = 1, 2,   and   f(n) = f_GDB(n − 2) for n ≥ 3.   (4)

Comparing (3) and (4), we see that f(n) is derived from f_GDB by adding two initial segments whose length is the same as that of the first segment of f_GDB; in other words, f(1) = f(2) = f_GDB(1) and f(n) = f_GDB(n − 2) for n ≥ 3. The first two identical segments are added to ensure that a client needs to join only one broadcast channel while receiving the "catch-up" stream from the server. Note that no matter when a client arrives during a broadcast cycle of the first segment, by the end of the broadcast cycle of the second segment the "catch-up" stream of the first segment must have been completely received and played back. Hence, after the end of the broadcast cycle of the second segment, a client only needs to join the broadcast channels to receive the appropriate video segments. The client schedule for determining which channels to join, and when to join them, is exactly the same as that used in GDB. The only difference is that in catching a client always needs to buffer at least t seconds of video data at any time if it arrives t seconds after the beginning of an on-going broadcast cycle of the first segment.

Clearly, the ability of catching to achieve near-instantaneous service lies in the fact that, besides the dedicated broadcast channels, catching allocates additional unicast channels on demand to deliver "catch-up" prefix streams. Hence the performance of catching is determined by the number of dedicated channels used for periodic broadcast as well as by the number of "catch-up" channels needed. The rest of this section is devoted to the analysis of the optimal performance of catching for a video object with a known user access pattern. For simplicity of exposition, we analyze the performance of catching based on the partition function f(n) defined in (4); namely, we assume that the network bandwidth on the client side is only sufficient to support two channels at the same time, and that clients have sufficient disk storage space to buffer at least half of the length of the video object in question. At the end of this section, we briefly discuss how the analysis can be extended to more general cases.

Consider video object i whose length is L_i. Given the partition function f(n), suppose K_i channels are dedicated to broadcasting video object i, and let F_i denote the length of the first broadcast segment of video object i. Then, from the definition of the partition function, we have

    L_i = F_i Σ_{n=1}^{K_i} f(n).   (5)

Under the assumption that client requests for video object i arrive according to a Poisson process with average rate λ_i, the expected number of unicast channels needed to deliver "catch-up" streams to clients is

    λ_i F_i / 2,   (6)

because the expected length of a "catch-up" stream is F_i / 2. Hence, the total expected number of channels required for delivering video object i to clients is

    K_i + F_i λ_i / 2.   (7)
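The construction in equation (4), prepending two copies of the first GDB segment, is mechanical; a one-line illustrative sketch (the series prefix 1, 2, 2, 5 is the opening of the GDB series from Figure 2):

```python
def catching_from_gdb(gdb_series):
    """Equation (4): f(1) = f(2) = f_GDB(1), and f(n) = f_GDB(n - 2) for n >= 3."""
    return [gdb_series[0], gdb_series[0]] + list(gdb_series)

print(catching_from_gdb([1, 2, 2, 5]))  # → [1, 1, 1, 2, 2, 5]
```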
From (5), we see that the smaller the number of dedicated broadcast channels K_i, the larger the first broadcast segment F_i becomes. On the other hand, from (6), it is clear that a larger F_i results in more unicast channels needed to deliver "catch-up" streams to clients. Therefore, there is a trade-off between the number of dedicated broadcast channels and the expected number of unicast channels required for catching up. In order to optimize the resource efficiency of catching, we minimize the total expected number of channels required to deliver each video object. This leads to the following optimization problem:

    Minimize K_i + F_i λ_i / 2
    subject to L_i = F_i Σ_{n=1}^{K_i} f(n).

Let K*_i be the solution to the above optimization problem. Namely, K*_i is the optimal number of dedicated broadcast channels such that the total expected number of channels required for delivery of video object i is minimized, i.e.,

    K*_i = argmin_{K_i} (K_i + F_i λ_i / 2).   (8)

Therefore, the expected number of channels required for catching is

    K*_i + L_i λ_i / (2 Σ_{j=1}^{K*_i} f(j)).   (9)

For the partition function f(n) given in (4), K*_i can be derived analytically; the details are relegated to Appendix A. Here we provide an order estimate for K*_i. From (4), it is not too hard to verify that Σ_{j=1}^{K_i} f(j) = O(2^{K_i/2}). Substituting this into (9) and taking the first-order derivative, we have

    K*_i = O(log(λ_i L_i)).   (10)

Furthermore, the total expected number of channels required for video i is

    O(log(λ_i L_i)),   (11)

since the expected number of unicast "catch-up" channels required (see Equation (9)), L_i λ_i / (2 Σ_{j=1}^{K*_i} f(j)), is O(1) with the optimal K*_i. Comparing with (2), it is clear that for λ_i reasonably large, the total expected number of channels required by catching to deliver a video object is considerably less than that required by controlled multicast. In other words, for "hot" video objects, catching is much more efficient than controlled multicast. To verify this observation, let us consider a numerical example. In Figure 4 we plot
[Figure 4 plots the expected number of server channels (y-axis) against the client request arrival rate (x-axis) for controlled multicast and for catching.]

Figure 4: Number of server channels vs. arrival rate.
the expected number of channels required to deliver a 90-minute-long video object under catching and under controlled multicast, as the arrival rate of client requests for the video object varies from 0.1 to 0.9. We can see that catching requires fewer channels than controlled multicast when the request rate is greater than 0.4; when the request rate drops below 0.4, controlled multicast performs better. This is because, for a "cold" video, client requests do not arrive frequently enough to take advantage of periodic broadcast.

Although the analysis in this section is carried out under the assumptions that clients only have sufficient network bandwidth to support two channels and sufficient disk storage space to store half of a video object, these assumptions are not essential. We can easily extend our analysis to cases where clients can support more than two channels by choosing appropriate partition functions [7]. Furthermore, since the amount of disk storage space required at clients equals the size of the largest broadcast segment [7], we can restrict the partition function so that the largest broadcast segment is always smaller than the available client disk storage space (see [7] for how this can be done).
3.1 Optimality of Catching

In this section, we prove that the number of channels required by catching is close to the minimum achievable by any broadcast scheme that supplies near-instantaneous service. Formally, for a video of length L whose request arrivals form a Poisson process with rate λ, any broadcast scheme needs at least log(λL) channels to supply instantaneous service.

Consider the i-th frame of the video. First, we prove that we need at least 1/(i + 1/λ) channels for
broadcasting frame i.

Let the random variable X_i denote the time between two consecutive multicasts of the i-th frame. Here we use one frame time as our time unit. We claim that E[X_i] ≤ i + 1/λ. Suppose frame i is broadcast at time t. Any client that arrives between time t − i and t can retrieve that frame from the multicast at time t. However, the first client that arrives after time t cannot. Let s denote the time at which the first client arrives after time t. Frame i has to be broadcast once between time t and time s + i to ensure the continuous playback of the first client arriving after time t. Therefore, X_i ≤ s + i − t. Since the arrival process is Poisson with rate λ, we know that E[s − t] = 1/λ. Therefore, E[X_i] ≤ i + 1/λ.

Since we need to use at least 1/E[X_i] channels for the i-th frame, we need at least 1/(i + 1/λ) channels for broadcasting frame i. Summing over all frames i = 1, ..., L, any such scheme needs at least Σ_{i=1}^{L} 1/(i + 1/λ) channels, which is on the order of log(λL).
