(19) United States
(12) Patent Application Publication (10) Pub. No.: US 2003/0067872 A1
     Harrell et al. (43) Pub. Date: Apr. 10, 2003
`
(54) FLOW CONTROL METHOD FOR QUALITY
     STREAMING OF AUDIO/VIDEO/MEDIA
     OVER PACKET NETWORKS

(75) Inventors: Chandlee Harrell, Cupertino, CA (US);
     Edward R. Ratner, Sunnyvale, CA (US);
     Thomas D. Miller, Alamo, CA (US);
     Adityo Prakash, Redwood Shores, CA (US);
     Hon Hing So, San Jose, CA (US)

Correspondence Address:
OKAMOTO & BENEDICTO, LLP
P.O. BOX 641330
SAN JOSE, CA 95164 (US)

(73) Assignee: Pulsent Corporation, Milpitas, CA (US)

(21) Appl. No.: 10/243,628

(22) Filed: Sep. 13, 2002

Related U.S. Application Data

(60) Provisional application No. 60/323,500, filed on Sep.
     17, 2001.

Publication Classification

(51) Int. Cl.7 ..................................... G01R 31/08
(52) U.S. Cl. ............................. 370/229; 370/477

(57) ABSTRACT
`
A method and apparatus for client-side detection of network
congestion in a best-effort packet network comprising
streaming media traffic is disclosed. Said method and
apparatus provide for quality streaming media services in a
congested network with constrained bandwidth over the
last-mile link. A client media buffer detects at least one
level of congestion and signals a server to enact at least
one error mechanism. Preferred error mechanisms include
packet retransmissions, stream prioritization, stream
acceleration, changes in media compression rate, and changes
in media resolution. Said method and apparatus allow
distributed management of network congestion for networks
comprising multiple clients and carrying significant
streaming media traffic.
`
[Representative figure: video server 202 connected by link 204 to an
access network; a last-mile link carries the stream to the client
media buffer and client 212.]
`
[Drawing Sheet 1 of 9, FIG. 1: TCP/IP protocol stack rate
transmission; application data 100 passes through the Application,
Transport, Network, and Link/packet layers on the transmit side and
is unwrapped in inverse order on the receive side.]

[Drawing Sheet 2 of 9, FIG. 2: packet network with video server 202,
link 204, access network 206, last-mile link 208, client media
buffer 210, and client 212.]

[Drawing Sheet 3 of 9, FIG. 3: client media buffer 210 operating in
a first mode (Mode I), showing buffer depth with the Overflow Alert,
Active, Prioritized Recovery, and Stream Switch zones, the
Start/Resume mark 320, and the watermarks 310 through 318.]

[Drawing Sheet 4 of 9, FIG. 4: client media buffer 210 operating in
a second mode (Mode II, fast start/resume), with buffer depth,
zones, and watermarks corresponding to FIG. 3.]

[Drawing Sheet 5 of 9, FIG. 5: stream switch example; metering rate
(400 kbps) for a low-bit-rate stream over time, with a pause at the
High Water mark.]

[Drawing Sheet 6 of 9, FIG. 6: start mode examples; metering rate
over time for a 1.1 Mbps stream with a 200 kbps minimum metering
rate, contrasting high-bit-rate serving (slow down at a high NACK
rate, begin playback, then switch) with low-bit-rate serving (fast
start/resume until the mark 320 depth, then continue playback).]

[Drawing Sheet 7 of 9, FIG. 7: client media buffer state diagram
(preferred embodiment); states and transitions among pause (no
serving), Mode I and Mode II serving, prioritized recovery, stream
switch, and exception handling, keyed to the Overflow, High Water,
Start/Resume, Low Water, Ultra Low Water, and Empty Buffer marks and
to acceptable versus unacceptable NACK rates relative to last-mile
link capacity.]

[Drawing Sheet 8 of 9, FIG. 8a: representative video-over-DSL
network configuration, showing a video server with storage media
812, the general Internet 808, an edge ATM switch 830, and home
clients 212 with client media buffers 210 reached over last-mile
links.]

[Drawing Sheet 9 of 9, FIG. 8b: representative protocol stack
assignments by network region; session, transport, network,
protocol, and physical/data link layers mapped to TCP/UDP, IP, ATM,
SONET, DSL, Ethernet, and HomePNA across the regions of the access
network.]

FLOW CONTROL METHOD FOR QUALITY
STREAMING OF AUDIO/VIDEO/MEDIA OVER
PACKET NETWORKS

CROSS-REFERENCES TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S.
Provisional Patent Application No. 60/323,500, filed on
Sep. 17, 2001, which is hereby incorporated by reference.
`
BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The present invention relates generally to the field
of computer networks, more particularly to the flow of
audio/video/media data through packet networks, and still
more particularly to congestion detection and flow control
over packet networks.

[0004] 2. Description of the Related Art
`
[0005] Media data, such as audio or video data, has
traditionally been broadcast over dedicated bandwidth RF
(radio frequency) channels to ensure quality delivery to the
receiver. With the explosion of the Internet and the
accompanying deployment of broadband packet networks, the
opportunity has arisen to include entertainment or media
services along with other data services on the broadband
networks. However, delivering streaming media over broadband
packet networks at a marketable quality with current
technology is difficult.
`
[0006] Existing best-effort packet networks were not
designed for high-bandwidth real-time data, such as
streaming video. These networks were designed to accommodate
the economic average of data traffic, so they often confront
congestion in various nodes of the network when peaks in
data traffic occur. Such congestion results in the loss or
corruption of packets and thus interferes with the quality
level of real-time data. In particular, such congestion can
cause interruptions or delays in streaming media, resulting
in a quality of service that is inferior to broadcast
standards and thus not marketable to a broad customer base.
`
[0007] Much of the general Internet operates as a packet
network with the TCP/IP stack of protocols for packet
transmission. TCP (the "transmission control protocol") is
responsible for breaking data into manageable pieces,
reassembling and ordering the pieces at the other end, and
retransmitting any pieces that are lost. IP (the "Internet
protocol") determines a route for the data pieces between
the transmitter and the receiver. Application-level data,
such as streaming media data, is broken into pieces and each
piece is given a transfer header by TCP. An IP header is
then affixed to each piece, followed in some cases by a data
link header providing information relevant to the actual
data link over which the data will be transferred. At the
receiving end, these headers are removed in inverse order
until the original data is recovered.
`
[0008] An alternative to TCP called UDP (User Datagram
Protocol) may be used at the transport layer. Unlike TCP,
UDP does not guarantee delivery, but UDP does deliver
packets at a specified rate. It is thus often used for
real-time application data. UDP is responsible for
packetization, multiplexing, and performing checksum
operations to verify the correct packet size. A second
protocol, RTP (Real-time Transport Protocol), may be used in
concert with UDP to handle data identification, sequence
numbering, timestamping, and delivery monitoring.
`
[0009] FIG. 1 illustrates the organization of packets in the
TCP/IP configuration. A packet of application data 100 first
has a TCP header 102 attached. In the case of data for a
real-time application, the TCP header 102 may be replaced
with UDP/RTP header information as discussed above. Then a
network layer or IP header 104 is attached, and finally a
link layer header 106 is appended. The resulting packet is
transmitted 108. As the packet is received 110, the headers
106, 104, and 102 are removed in inverse order so that the
original application data packet 100 is available to the
receiver.
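The wrap-and-unwrap sequence of FIG. 1 can be summarized in a few
lines of illustrative code; the header contents here are simplified
placeholders, not the actual TCP, IP, or link-layer formats:

    # Illustrative only: headers 102, 104, 106 are modeled as simple
    # byte prefixes rather than real protocol headers.
    def transmit(app_data: bytes) -> bytes:
        packet = b"TCP|" + app_data   # transport header 102 (or UDP/RTP)
        packet = b"IP|" + packet      # network layer header 104
        packet = b"LINK|" + packet    # data link header 106
        return packet                 # transmitted packet 108

    def receive(packet: bytes) -> bytes:
        # Headers are removed in inverse order of attachment.
        for header in (b"LINK|", b"IP|", b"TCP|"):
            assert packet.startswith(header)
            packet = packet[len(header):]
        return packet                 # original application data 100

    assert receive(transmit(b"media payload")) == b"media payload"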
`
[0010] Regional broadband networks, such as networks
providing digital subscriber line (DSL) service into
residences over existing telecommunications copper wire
infrastructure, are growing in popularity. Such networks
provide opportunities to introduce higher quality streaming
media because they provide greater ability for network
engineering and flow control than, for instance, the general
Internet. These networks typically communicate through the
use of the asynchronous transfer mode (ATM) protocol, with
IP over ATM over SONET (Synchronous Optical Network) for the
backbone link and with IP over ATM over DSL for the
last-mile link. ATM provides potential quality of service
control unlike the TCP/IP protocol family used for the
general Internet, but due to the increase in costs
associated with commissioning such service, existing
telecommunications access networks using ATM are not
currently enabled for QoS (quality of service). Packet
networks with QoS can reserve the bandwidth necessary for
quality media streaming, but the significant expense
associated with implementing this specification has thwarted
its widespread introduction.
`
[0011] A variety of DSL specifications exist for providing
data service over existing copper-wire telephone lines.
Asymmetric DSL, or ADSL, is the most popular specification
for residential customers, and it is reaching increasing
numbers of households. ADSL is capable of providing
downstream bandwidth in the 6 Mbps range over shorter
distances, but more typically it can provide on the order of
1.5 Mbps of downstream bandwidth and 384 kbps of upstream
bandwidth to a broad customer base. The potential for video
content delivery over the general Internet, and specifically
over DSL networks, is great, but its realization has been
constrained not only by network congestion issues but also
by the excessive bandwidth required for most quality video
data. However, recent video compression advances by the
assignee of this invention and potential future research
allow broadcast quality video to be provided at compression
ratios that are consistent with the typical 1.5 Mbps
bandwidth constraint of ADSL.
`
[0012] As improving compression ratios make streaming video
over ADSL or other constrained bandwidth networks feasible,
several problems with implementation arise. In the presence
of network congestion and a constrained last-mile link with
limited headroom for error recovery, means must be found for
avoiding error due to congestion-induced packet loss so that
a service provider can maintain delivery of a high quality
media stream to the client subscriber.
`
Furthermore, as streaming media, especially video,
proliferates and consumes significant bandwidth
network-wide, flow control techniques for managing
network-wide congestion increase in importance.
`
[0013] Existing flow control strategies for streaming media
are minimal. Such strategies typically rely on server-side
detection of congestion. Servers can monitor NACK (negative
acknowledgement) signals that indicate when a client has not
received a complete packet, and they can also monitor RTT
(round trip time) to find how long packet transmission has
taken. In the case of streaming over TCP/IP networks, TCP
can guarantee loss-less delivery of packets but not timely
delivery. Servers can initiate flow control measures such as
stream switching when they detect network congestion. Such
measures typically result in pausing of the stream and
rebuffering with relative frequency. This interruption of
service is unacceptable for a streaming media provider who
wishes to market competitive high quality entertainment
services to customers.
`
SUMMARY

[0014] The present invention provides means for ensuring the
delivery of quality streaming media to clients over packet
networks that are subject to congestion situations. More
specifically, this invention provides a novel solution to
the problem of avoiding error in a media stream across a
congested network with a constrained last-mile link. This
invention also addresses the problem of managing network
congestion when streaming media data consumes a significant
share of network bandwidth, regardless of last-mile
bandwidth availability.

[0015] One embodiment of the invention comprises a method
and apparatus for client-side detection of network
congestion in a packet network featuring broadcast quality
streaming media from a server to at least one client.
Another embodiment of the invention provides a method and
apparatus for client-initiated error avoidance and flow
control to ensure that network congestion does not prevent
the media stream from reaching the client. Another
embodiment of the invention provides a method and apparatus
for system-wide congestion control via distributed
client-side congestion detection and distributed
client-initiated error avoidance and flow control.
`
[0016] In one specific embodiment, a client receives a media
stream into a media buffer, and the media buffer detects a
plurality of levels of network congestion by monitoring the
buffer level. The client is able to request a plurality of
service adjustments from the media server in response to the
plurality of congestion levels to avoid errors in the
playback of the media stream. Such adjustments may include
packet retransmissions, stream prioritization, stream
acceleration, changes in media compression rate, changes in
the enhancement layer or layers in the case of multi-layered
streams, dropping B frames in the case of video streaming,
changes in media resolution, and maintaining audio while
dropping video in exceptional cases. These adjustments allow
the client to continue its continuous media stream with full
quality whenever possible, and with gracefully decreased
quality in the statistically rare instances when network
congestion prevents the continuous transmission of the
full-quality stream.
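By way of illustration, the escalation described in this embodiment
might be tabulated as follows; the threshold values (in seconds of
playback remaining) and the function name requested_adjustment are
invented for the sketch and are not part of the disclosure:

    # Hypothetical thresholds, ordered from mild to severe congestion.
    ADJUSTMENTS = [
        (8.0, "request priority retransmission of lost packets"),
        (5.0, "request stream prioritization / acceleration"),
        (3.0, "request lower compression rate, or drop enhancement"
              " layers / B frames"),
        (1.0, "request lower resolution; keep audio, drop video if needed"),
    ]

    def requested_adjustment(buffer_seconds):
        """Return the most aggressive adjustment whose threshold was crossed."""
        chosen = None
        for threshold, action in ADJUSTMENTS:  # thresholds descend
            if buffer_seconds < threshold:
                chosen = action
        return chosen

    print(requested_adjustment(2.5))  # -> lower compression / drop layers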
`
BRIEF DESCRIPTION OF THE DRAWINGS

[0017] A further understanding of the nature and the
advantages of the invention disclosed herein may be realized
by reference to the remaining portions of the specification
and the attached drawings.

[0018] FIG. 1 is an illustration of packet organization in
the TCP/IP protocol stack.

[0019] FIG. 2 is a diagram of a packet network to which the
present invention applies.

[0020] FIG. 3 is a diagram of a client media buffer
operating in a first mode.

[0021] FIG. 4 is a diagram of a client media buffer
operating in a second mode.

[0022] FIG. 5 is a graph showing metering rate for a
low-bit-rate stream over time.

[0023] FIG. 6 is a graph showing two examples of the
metering rate over time during a start mode.

[0024] FIG. 7 is a state diagram describing the operation of
a client media buffer in a preferred embodiment.

[0025] FIG. 8a is a block diagram of a telecommunications
access network to which the invention may be applied,
featuring a video server and a DSL link to the client.

[0026] FIG. 8b is a chart listing the network protocols
specified for the various regions of the access network in
FIG. 8a.

[0027] To aid in understanding, identical reference numerals
have been used wherever possible to designate identical
elements in the figures.
`
DETAILED DESCRIPTION OF THE SPECIFIC
EMBODIMENTS

[0028] 1 Introduction
`
[0029] One embodiment provides a solution to the problem of
providing uninterrupted streaming media over IP networks,
such as telecommunications access networks, that do not
otherwise guarantee Quality of Service (QoS). In particular,
it provides for error avoidance despite limited recovery
headroom in the last-mile link. For example, the invention
might be applied to provide quality streaming of 1.1 Mbps of
audio/video along with data and overhead over a 1.5 Mbps
ADSL link. In another example, the invention might be
applied to deliver two 1.25 Mbps audio/video streams along
with overhead and data over a single 3 Mbps link to a
client.
`
[0030] The invention is especially useful when streaming
media traffic, such as for instance streaming video,
consumes a significant proportion of the bandwidth of the
access network. In this case, adjustments in the bandwidth
required for individual media streams can significantly
impact the congestion level of the overall network, which
can then improve the quality of the media streams
themselves. In fact, the invention can reduce the overall
congestion level even in the general case when client
last-mile links are not constrained. The invention is also
applicable to other packet networks, such as for instance
wireless data networks. It can also provide improvement in
media streams over general IP networks, although the
preferred embodiment is tuned more specifically to providing
a marketable broadcast quality of streaming media over
telecommunications access networks. The teachings of the
present invention can be extended in other embodiments to
cover many other packet network configurations. The
remainder of this specification will focus on a specific
embodiment of the invention and several alternatives, but
this specific embodiment and its stated alternatives should
not be construed as limiting the invention itself or its
applicability in other contexts.
`
[0031] 2 Problems Addressed
`
[0032] As mentioned above in the Description of the Related
Art, two key issues emerge as streaming media is offered
over packet networks with constrained last-mile bandwidth.
The first issue arises in the context of a single client
occupying a negligible percentage of overall network
bandwidth and having limited headroom in the last-mile link.
In this scenario, changes in the individual client stream
cannot affect the congestion situation of the network as a
whole, so measures must be taken to avoid error within the
existing congestion situation. If there were no last-mile
bandwidth constraint, in contrast, simply increasing the
bandwidth devoted to the client's media stream would allow
for packet replacement and error avoidance. A typical
last-mile link over an ADSL connection might allow 1.5 Mbps
of total data, 0.4 Mbps of which might be devoted to
overhead information and a data channel, leaving 1.1 Mbps
for raw video and audio content. If the media stream
consumes all or most of this 1.1 Mbps allotment to provide
quality content, then after subtracting out protocol
overhead there is very little room to push additional bits
through the last-mile link to make up for the loss of
packets. Congestion loss may exceed the headroom available
for traditional error recovery. Other strategies are needed
to ensure that these packets are replaced before playback of
the stream is corrupted.

[0033] The second issue relates to the effects of streaming
media traffic itself on overall network congestion. This
second issue does not depend on a constrained last-mile
link, but applies also to the general situation of
unconstrained links to each client. As streaming media, and
streaming video in particular, achieves broader and higher
quality deployment, it is likely to consume a significant
proportion of available bandwidth over entire packet
networks. In this scenario, adjustments of the media streams
themselves can impact the overall network congestion level.
Thus, practical measures to adjust media streams to maintain
quality for individual clients throughout the network can
have the additional impact of improving the congestion
situation of the network as a whole. For example, suppose
that streaming video accounts for 50% of all traffic over a
network, and suppose that 20% of all packets are lost due to
network congestion. In real settings, the rate of packet
loss will typically vary across some statistical
distribution, but for simplicity of illustration suppose
that the rate is a uniform 20%. If each client individually
drops its bit demand for its video stream by 40%, then the
network will experience a 20% drop in overall traffic (from
the 40% drop in the content consuming a 50% share of overall
traffic). This drop in overall network demand will alleviate
the congestion situation and will eliminate the 20% packet
loss rate altogether. Such a drop in client demand will work
equally well to alleviate statistically varying congestion
situations that are more typical of real networks.
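The arithmetic in the 50%/40% example above is simply the product of
the traffic share and the per-stream reduction, as a one-line check
confirms:

    video_share = 0.50   # streaming video's share of all network traffic
    demand_drop = 0.40   # per-client reduction in video bit demand
    print(video_share * demand_drop)  # 0.2, i.e. a 20% drop in overall traffic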
`
[0034] The remainder of the specification will provide
details of a method for client detection of network
congestion and client-initiated measures to avoid error and
to improve the congestion situation of the network. In its
applications, this method addresses both of the above issues
associated with streaming media over packet networks and
thus provides a significant advance in the art.
`
[0035] 3 Detailed Description of the Drawings

[0036] 3.1 Overview
`
[0037] FIG. 2 illustrates at a high level a typical packet
network configuration to which the disclosed method applies.
The network 200 is comprised of a server 202 that is
connected to an access network 206 by a link 204, and a
client 212 that is attached to a client media buffer 210
that is in turn attached to the access network 206 by a
last-mile link 208. In many cases, the last-mile link 208
will have constrained bandwidth, implying that there is
little headroom over the required media rate for error
packet recovery in the case of lost or bad packets. Most
current network links to consumers have constrained
bandwidth over the last mile, limiting the rate at which
data packets can be transferred. Examples of such links
include digital subscriber line (DSL) connections, cable
modem connections, traditional dial-up modem connections,
wireless data network links, etc.
`
[0038] The client media buffer 210 plays an important role
in providing an error-free media stream to the client. A
buffering portion and a signaling device operatively coupled
to the buffering portion that can send signals to the server
202 together comprise the client media buffer 210. The
buffer is large enough to allow recovery from infrequent
packet loss through at least one congestion detection
mechanism. In response to at least one detected congestion
level, the buffer may implement at least one error avoidance
mechanism. For instance, the buffer duration is long enough
to allow packet retransmission before the lost packet
obstructs the client's media streaming experience. The
buffer may also be able to detect heavier congestion
situations with enough lead time to allow a switch to a
lower bit rate video stream. This switch prevents any
hesitation or interruption in the frame sequence but may
cause an acceptable degradation in video quality during a
lower-bit-rate streaming period. Preferably, the buffer can
detect multiple levels of network congestion and can
initiate multiple levels of error handling for graceful
degradation through statistically less frequent congestion
error situations.
`
[0039] In the preferred embodiment, the client media buffer
operates as a well-known FIFO (first in, first out) buffer
under good network conditions. However, the buffer
additionally contains a plurality of zones, corresponding to
time increments of media data remaining in the buffer, which
indicate a plurality of network congestion levels and
consequently a plurality of levels of danger for stream
interruption. During normal (i.e. low congestion) conditions
the server provides a stream at a rate equaling the playback
rate of the media. The buffer fills to an equilibrium level
before playback begins, and then in the absence of
congestion the input/serving rate equals the output/playback
rate so that the buffer level remains at this equilibrium.
If network congestion causes the loss or delay of some
packets, then the buffer level will begin to drop. When it
drops below a critical level (where buffer levels are
measured as playback time remaining), the client media
buffer detects congestion and begins
signaling the server to avoid a playback error. If the
buffer level continues to drop, it will cross another
critical level at which the client signals the server to
take more aggressive action to avoid letting missing packets
traverse the entire buffer length. The critical buffer
levels for the preferred embodiment and the actions taken as
each is crossed will be explained further with respect to
accompanying figures.
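The zone logic just described amounts to a periodic comparison of
playback time remaining against ordered critical levels. A minimal
sketch follows; the numeric levels and the signal_server callback
are assumptions for illustration, not values from the specification:

    # Buffer levels are measured as playback time remaining (seconds).
    LOW_WATER = 6.0        # first critical level: begin congestion signaling
    ULTRA_LOW_WATER = 3.0  # deeper level: request more aggressive action

    def check_buffer(seconds_remaining, signal_server):
        if seconds_remaining < ULTRA_LOW_WATER:
            signal_server("switch to lower-bit-rate stream")
        elif seconds_remaining < LOW_WATER:
            signal_server("enter prioritized recovery")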
`
[0040] 3.2 Client Media Buffer—Normal Mode
`
[0041] FIG. 3 illustrates the client media buffer 210 of the
preferred embodiment when operating in a normal mode, with
media streaming at best quality. The buffer receives packets
from the network at an In/Write 322 side, and transmits the
sequenced data to the client at an Out/Read 324 side. The
buffer transmits the data to the client at the desired
playback rate, so after an initial build-up phase the buffer
empties its contents at a constant rate. The server 202
preferably delivers media data to the buffer at this same
playback rate so that an equilibrium buffer level is
maintained. In FIG. 3, the Start/Resume mark 320 indicates
this equilibrium level under congestion-free conditions.
This level lies at the middle of an Active Zone 304 of the
video buffer.
`
[0042] If the server provides data more quickly, it is
possible to overflow the buffer and thus to lose any packets
sent while the buffer is fully occupied. For instance, drift
in buffer level can occur as a result of differences between
the server clock and client clock. A High Water mark 312
allows the buffer to detect when it is nearly full and thus
to initiate action to slow the stream down and prevent
overflow. The region above the High Water mark 312 is an
Overflow Alert Zone 302. When the High Water mark 312 is
reached, the client begins sending signals to the server
telling it to pause serving. The client continues to send
these signals until the buffer level returns below the High
Water mark 312. When the buffer level drops back to the
Start/Resume mark 320, the client signals the server to
resume serving. In case the client's pause signals do not
reach the server because of network congestion, the buffer
may completely fill and then begin to overflow. An Overflow
mark 310 indicates that overflow has occurred and data may
be lost. Preferably the Overflow Alert Zone 302 is of
sufficient size to prevent this error situation from
occurring. In the unlikely event that the Overflow mark 310
is reached, the client continues sending pause signals to
the server.
`
[0043] In light congestion situations, occasional packets
may be lost or corrupted. The client media buffer recognizes
these errors via packet sequencing and a checksum operation.
The buffer periodically requests retransmission of lost or
corrupted packets by the server as needed. The server sends
retransmitted packets with top priority to replace these
packets before they cause a client error during playback. As
congestion worsens, however, these retransmissions may not
always be completed in ample time to avoid error because of
limited recovery headroom in the link, so further steps are
initiated by the client media buffer.
`
[0044] A Low Water mark 314 indicates that the buffer is
being depleted, i.e. the rate at which data is being
received is lower than the playback rate. The buffer may
deplete to this level for instance if the server clock is
slightly slower than the client clock. Also, a missing
packet may fall below the Low Water mark 314 if it has not
been recovered in time by normal retransmission requests. In
either situation, the client buffer detects a first level of
network congestion and enters a Prioritized Recovery Zone
306. In this zone, the available headroom in the last-mile
link is used aggressively to attempt to recover the one or
more lost packets as quickly as possible and to refill the
buffer. Upon entering the Prioritized Recovery Zone 306, the
client signals the server to increase attention devoted to
the target stream. This signal causes the server to initiate
measures for priority serving, including raising the kernel
process priority for the target stream, increasing the
stream metering rate slightly (e.g. to 110% of the normal
rate), and using a higher DiffServ level, if available.
DiffServ, for Differentiated Services, is a protocol for
specifying and prioritizing network traffic by class that is
specified by a six-bit field in the header for the IP
protocol. DiffServ is a new protocol proposed by the
Internet Engineering Task Force (IETF) that may be available
over some IP networks.
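On networks that honor DiffServ, the six-bit code point travels in
the DS field of the IP header (the former TOS byte). A server might
mark the target stream's socket along these lines; the sketch
assumes a POSIX system and uses DSCP 46 (Expedited Forwarding)
purely as an example value:

    import socket

    # The TOS byte carries the six-bit DSCP shifted left by two bits.
    DSCP_EF = 46
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)  # 0xB8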
`
[0045] An incremented metering rate is included to speed up
serving in the case when the server's clock is slightly
slower than the client's clock, or to recover a lost packet
as quickly as possible by using all of the available
last-mile headroom. This metering adjustment is particularly
tuned to the case of a single client that has negligible
influence on congestion in a large network. Note that in the
case of many clients detecting congestion on a network with
heavy video traffic, increasing the streams' metering rates
may impact congestion since it requires faster data
transmission. To mitigate this concern, the relative
increase in the metering rate may be engineered in light of
the expected network traffic loads and headroom constraints
over the last-mile links, and the amount of increase may be
reduced during more serious congestion situations involving
heavy video traffic.
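For instance, the engineered increase might be computed as the
nominal rate times a small boost factor, clamped to the usable
last-mile headroom; the 10% boost echoes the example in [0044],
while the capacity and overhead figures are assumptions:

    def boosted_metering_rate(nominal_bps, link_capacity_bps, overhead_bps,
                              boost=1.10):
        # Never meter beyond what the last-mile link carries after overhead.
        return min(nominal_bps * boost, link_capacity_bps - overhead_bps)

    # Example: a 1.1 Mbps stream on a 1.5 Mbps link with 0.3 Mbps overhead.
    print(boosted_metering_rate(1_100_000, 1_500_000, 300_000))  # 1200000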
`
[0046] As the buffer refills, no action is taken to change
the priority level back to normal until the Start/Resume
mark 320 is crossed. At this point, the client signals the
server to turn off the measures for priority serving and to
resume normal streaming.
`
[0047] However, if the congestion continues or worsens, an
Ultra Low Water mark 316 may be reached by either the last
received packet or more typically by a lost or corrupted
packet. At this point, the buffer enters a Stream Switch
Zone 308 and detects a serious network congestion problem.
The client signals the server to compensate by switching to
a lower-bit-rate encoded stream. As an important feature,
the Stream Switch Zone 308 is situated so that the server
has time to switch streams before playback is interrupted by
data loss in the original stream. This drop in encoding bit
rate allows a significant increase in headroom bandwidth
over the last-mile link, which is used to help the buffer
recover to a safer level. In the case of a video stream, the
stream switch preferably occurs at a GOP (group of pictures)
boundary since subsequent frames in a GOP depend on a key
frame for accurate reconstruction of the sequence. In this
case, when requesting a stream switch, the client will also
indicate the boundary of the last complete GOP in the
buffer. Depending upon the proportion of the unfinished GOP
in the buffer, the server will decide either to replace it
with a new lower-bit-rate GOP or to finish that GOP at the
higher bit rate before switching to the lower-bit-rate
stream. For instance, if only a few frames of a GOP remain
unsent, the server can determine that it saves more bits to
send those few frames at a high bit rate per frame rather
than to replace almost an entire GOP of frames at a lower
bit rate per frame (such a tradeoff depends on the specific
bit rates at which the two streams are encoded).
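The finish-or-replace decision at the end of this paragraph reduces
to comparing the bits each option would send over the last-mile
link. A sketch of that comparison, with illustrative per-frame bit
counts:

    def finish_gop_at_high_rate(frames_unsent, gop_frames,
                                high_bits_per_frame, low_bits_per_frame):
        """True if finishing the GOP at the high bit rate costs fewer
        bits than replacing the whole GOP at the lower bit rate."""
        return (frames_unsent * high_bits_per_frame
                < gop_frames * low_bits_per_frame)

    # Example: 3 of 30 frames unsent, 50 kbit/frame high, 20 kbit/frame low.
    print(finish_gop_at_high_rate(3, 30, 50_000, 20_000))  # True: finish GOP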
