US 2003/0067872 A1

(19) United States
(12) Patent Application Publication (10) Pub. No.: US 2003/0067872 A1
Harrell et al. (43) Pub. Date: Apr. 10, 2003

(54) FLOW CONTROL METHOD FOR QUALITY STREAMING OF AUDIO/VIDEO/MEDIA OVER PACKET NETWORKS

(75) Inventors: Chandlee Harrell, Cupertino, CA (US); Edward R. Ratner, Sunnyvale, CA (US); Thomas D. Miller, Alamo, CA (US); Adityo Prakash, Redwood Shores, CA (US); Hon Hing So, San Jose, CA (US)

Correspondence Address: OKAMOTO & BENEDICTO, LLP, P.O. BOX 641330, SAN JOSE, CA 95164 (US)

(73) Assignee: Pulsent Corporation, Milpitas, CA (US)

(21) Appl. No.: 10/243,628

(22) Filed: Sep. 13, 2002

Related U.S. Application Data

(60) Provisional application No. 60/323,500, filed on Sep. 17, 2001.

Publication Classification

(51) Int. Cl.7: G01R 31/08
(52) U.S. Cl.: 370/229; 370/477

(57) ABSTRACT
A method and apparatus for client-side detection of network congestion in a best-effort packet network comprising streaming media traffic is disclosed. Said method and apparatus provide for quality streaming media services in a congested network with constrained bandwidth over the last-mile link. A client media buffer detects at least one level of congestion and signals a server to enact at least one error mechanism. Preferred error mechanisms include packet retransmissions, stream prioritization, stream acceleration, changes in media compression rate, and changes in media resolution. Said method and apparatus allow distributed management of network congestion for networks comprising multiple clients and carrying significant streaming media traffic.
[Representative cover drawing: video server 202 and link 204; see FIG. 2. Drawing not reproducible as text.]
[Sheet 1 of 9: FIG. 1, packet organization in the TCP/IP protocol stack; drawing not reproducible as text.]
[Sheet 2 of 9: FIG. 2, packet network with video server 202, access network, client media buffer, and client; drawing not reproducible as text.]
[Sheet 3 of 9: FIG. 3, client media buffer operating in a first (normal) mode; drawing not reproducible as text.]
[Sheet 4 of 9: FIG. 4, client media buffer operating in a second mode; drawing not reproducible as text.]
[Sheet 5 of 9: FIG. 5, metering rate for a low-bit-rate stream over time; drawing not reproducible as text.]
[Sheet 6 of 9: FIG. 6, two examples of the metering rate over time during a start mode; drawing not reproducible as text.]
[Sheet 7 of 9: FIG. 7, state diagram of client media buffer operation in a preferred embodiment; drawing not reproducible as text.]
[Sheet 8 of 9: FIG. 8a, telecommunications access network with a video server and a DSL link to the client; drawing not reproducible as text.]
[Sheet 9 of 9: FIG. 8b, chart of network protocols for the regions of the access network of FIG. 8a; drawing not reproducible as text.]
FLOW CONTROL METHOD FOR QUALITY STREAMING OF AUDIO/VIDEO/MEDIA OVER PACKET NETWORKS

CROSS-REFERENCES TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 60/323,500, filed on Sep. 17, 2001, which is hereby incorporated by reference.

BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention

[0003] The present invention relates generally to the field of computer networks, more particularly to the flow of audio/video/media data through packet networks, and still more particularly to congestion detection and flow control over packet networks.
[0004] 2. Description of the Related Art
[0005] Media data, such as audio or video data, has traditionally been broadcast over dedicated bandwidth RF (radio frequency) channels to ensure quality delivery to the receiver. With the explosion of the Internet and the accompanying deployment of broadband packet networks, the opportunity has arisen to include entertainment or media services along with other data services on the broadband networks. However, delivering streaming media over broadband packet networks at a marketable quality with current technology is difficult.
[0006] Existing best-effort packet networks were not designed for high-bandwidth real-time data, such as streaming video. These networks were designed to accommodate the economic average of data traffic, so they often confront congestion in various nodes of the network when peaks in data traffic occur. Such congestion results in the loss or corruption of packets and thus interferes with the quality level of real-time data. In particular, such congestion can cause interruptions or delays in streaming media, resulting in a quality of service that is inferior to broadcast standards and thus not marketable to a broad customer base.
[0007] Much of the general Internet operates as a packet network with the TCP/IP stack of protocols for packet transmission. TCP (the "transmission control protocol") is responsible for breaking data into manageable pieces, reassembling and ordering the pieces at the other end, and retransmitting any pieces that are lost. IP (the "Internet protocol") determines a route for the data pieces between the transmitter and the receiver. Application-level data, such as streaming media data, is broken into pieces and each piece is given a transfer header by TCP. An IP header is then affixed to each piece, followed in some cases by a data link header providing information relevant to the actual data link over which the data will be transferred. At the receiving end, these headers are removed in inverse order until the original data is recovered.
[0008] An alternative to TCP called UDP (User Datagram Protocol) may be used at the transport layer. Unlike TCP, UDP does not guarantee delivery, but UDP does deliver packets at a specified rate. It is thus often used for real-time application data. UDP is responsible for packetization, multiplexing, and performing checksum operations to verify the correct packet size. A second protocol, RTP (Real-time Transport Protocol), may be used in concert with UDP to handle data identification, sequence numbering, timestamping, and delivery monitoring.
[0009] FIG. 1 illustrates the organization of packets in the TCP/IP configuration. A packet of application data 100 first has a TCP header 102 attached. In the case of data for a real-time application, the TCP header 102 may be replaced with UDP/RTP header information as discussed above. Then a network layer or IP header 104 is attached, and finally a link layer header 106 is appended. The resulting packet is transmitted 108. As the packet is received 110, the headers 106, 104, and 102 are removed in inverse order so that the original application data packet 100 is available to the receiver.
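The encapsulation and inverse-order removal described in [0007] and [0009] can be sketched in a few lines. This is an illustrative model only: the binary TCP/IP headers are simplified here to tagged byte strings.

```python
# Illustrative sketch of the layered encapsulation in FIG. 1 (not from the
# patent itself): each layer prepends its header on send, and the receiver
# strips headers in inverse order. Real TCP/IP headers are binary structures;
# simple tagged byte strings are used for clarity.

def encapsulate(app_data: bytes) -> bytes:
    tcp_segment = b"TCP|" + app_data   # transport header 102
    ip_packet = b"IP|" + tcp_segment   # network header 104
    frame = b"LINK|" + ip_packet       # link layer header 106
    return frame                       # transmitted (108)

def decapsulate(frame: bytes) -> bytes:
    for header in (b"LINK|", b"IP|", b"TCP|"):  # removed in inverse order
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame                       # original application data 100

assert decapsulate(encapsulate(b"media payload")) == b"media payload"
```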
[0010] Regional broadband networks, such as networks providing digital subscriber line (DSL) service into residences over existing telecommunications copper wire infrastructure, are growing in popularity. Such networks provide opportunities to introduce higher quality streaming media because they provide greater ability for network engineering and flow control than, for instance, the general Internet. These networks typically communicate through the use of the asynchronous transfer mode (ATM) protocol, with IP over ATM over SONET (Synchronous Optical Network) for the backbone link and with IP over ATM over DSL for the last-mile link. ATM provides potential quality of service control unlike the TCP/IP protocol family used for the general Internet, but due to the increase in costs associated with commissioning such service, existing telecommunications access networks using ATM are not currently enabled for QoS (quality of service). Packet networks with QoS can reserve the bandwidth necessary for quality media streaming, but the significant expense associated with implementing this specification has thwarted its widespread introduction.
[0011] A variety of DSL specifications exist for providing data service over existing copper-wire telephone lines. Asymmetric DSL, or ADSL, is the most popular specification for residential customers, and it is reaching increasing numbers of households. ADSL is capable of providing downstream bandwidth in the 6 Mbps range over shorter distances, but more typically it can provide on the order of 1.5 Mbps of downstream bandwidth and 384 kbps of upstream bandwidth to a broad customer base. The potential for video content delivery over the general Internet, and specifically over DSL networks, is great, but its realization has been constrained not only by network congestion issues but also by the excessive bandwidth required for most quality video data. However, recent video compression advances by the assignee of this invention and potential future research allow broadcast quality video to be provided at compression ratios that are consistent with the typical 1.5 Mbps bandwidth constraint of ADSL.
[0012] As improving compression ratios make streaming video over ADSL or other constrained bandwidth networks feasible, several problems with implementation arise. In the presence of network congestion and a constrained last-mile link with limited headroom for error recovery, means must be found for avoiding error due to congestion-induced packet loss so that a service provider can maintain delivery of a high quality media stream to the client subscriber.
Furthermore, as streaming media, especially video, proliferates and consumes significant bandwidth network-wide, flow control techniques for managing network-wide congestion increase in importance.
[0013] Existing flow control strategies for streaming media are minimal. Such strategies typically rely on server-side detection of congestion. Servers can monitor NACK (negative acknowledgement) signals that indicate when a client has not received a complete packet, and they can also monitor RTT (round trip time) to find how long packet transmission has taken. In the case of streaming over TCP/IP networks, TCP can guarantee lossless delivery of packets but not timely delivery. Servers can initiate flow control measures such as stream switching when they detect network congestion. Such measures typically result in pausing of the stream and rebuffering with relative frequency. This interruption of service is unacceptable for a streaming media provider who wishes to market competitive high quality entertainment services to customers.
SUMMARY

[0014] The present invention provides means for ensuring the delivery of quality streaming media to clients over packet networks that are subject to congestion situations. More specifically, this invention provides a novel solution to the problem of avoiding error in a media stream across a congested network with a constrained last-mile link. This invention also addresses the problem of managing network congestion when streaming media data consumes a significant share of network bandwidth, regardless of last-mile bandwidth availability.
[0015] One embodiment of the invention comprises a method and apparatus for client-side detection of network congestion in a packet network featuring broadcast quality streaming media from a server to at least one client. Another embodiment of the invention provides a method and apparatus for client-initiated error avoidance and flow control to ensure that network congestion does not prevent the media stream from reaching the client. Another embodiment of the invention provides a method and apparatus for system-wide congestion control via distributed client-side congestion detection and distributed client-initiated error avoidance and flow control.
[0016] In one specific embodiment, a client receives a media stream into a media buffer, and the media buffer detects a plurality of levels of network congestion by monitoring the buffer level. The client is able to request a plurality of service adjustments from the media server in response to the plurality of congestion levels to avoid errors in the playback of the media stream. Such adjustments may include packet retransmissions, stream prioritization, stream acceleration, changes in media compression rate, changes in the enhancement layer or layers in the case of multi-layered streams, dropping B frames in the case of video streaming, changes in media resolution, and maintaining audio while dropping video in exceptional cases. These adjustments allow the client to continue its continuous media stream with full quality whenever possible, and with gracefully decreased quality in the statistically rare instances when network congestion prevents the continuous transmission of the full-quality stream.
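The adjustment vocabulary above can be summarized as a simple request enumeration. Which request fires at which congestion level is an assumed policy for this sketch; the patent leaves the escalation policy to the embodiment.

```python
# Illustrative enumeration of the client's service-adjustment requests in
# [0016]. The level-to-request ladder is an assumed example policy, not a
# mapping fixed by the patent.
from enum import Enum, auto

class Adjustment(Enum):
    RETRANSMIT_PACKETS = auto()
    PRIORITIZE_STREAM = auto()
    ACCELERATE_STREAM = auto()
    LOWER_COMPRESSION_RATE = auto()
    DROP_ENHANCEMENT_LAYER = auto()  # multi-layered streams
    DROP_B_FRAMES = auto()           # video streams
    LOWER_RESOLUTION = auto()
    AUDIO_ONLY = auto()              # exceptional cases

def requests_for(congestion_level: int) -> list[Adjustment]:
    """Escalate: level 0 = no congestion .. level 3 = severe (assumed)."""
    ladder = [
        [],
        [Adjustment.RETRANSMIT_PACKETS],
        [Adjustment.PRIORITIZE_STREAM, Adjustment.ACCELERATE_STREAM],
        [Adjustment.LOWER_COMPRESSION_RATE, Adjustment.DROP_B_FRAMES],
    ]
    return ladder[min(congestion_level, len(ladder) - 1)]

print(requests_for(2))
```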
BRIEF DESCRIPTION OF THE DRAWINGS

[0017] A further understanding of the nature and the advantages of the invention disclosed herein may be realized by reference to the remaining portions of the specification and the attached drawings.

[0018] FIG. 1 is an illustration of packet organization in the TCP/IP protocol stack.

[0019] FIG. 2 is a diagram of a packet network to which the present invention applies.

[0020] FIG. 3 is a diagram of a client media buffer operating in a first mode.

[0021] FIG. 4 is a diagram of a client media buffer operating in a second mode.

[0022] FIG. 5 is a graph showing metering rate for a low-bit-rate stream over time.

[0023] FIG. 6 is a graph showing two examples of the metering rate over time during a start mode.

[0024] FIG. 7 is a state diagram describing the operation of a client media buffer in a preferred embodiment.

[0025] FIG. 8a is a block diagram of a telecommunications access network to which the invention may be applied, featuring a video server and a DSL link to the client.

[0026] FIG. 8b is a chart listing the network protocols specified for the various regions of the access network in FIG. 8a.

[0027] To aid in understanding, identical reference numerals have been used wherever possible to designate identical elements in the figures.
DETAILED DESCRIPTION OF THE SPECIFIC EMBODIMENTS

[0028] 1 Introduction

[0029] One embodiment provides a solution to the problem of providing uninterrupted streaming media over IP networks, such as telecommunications access networks, that do not otherwise guarantee Quality of Service (QoS). In particular, it provides for error avoidance despite limited recovery headroom in the last-mile link. For example, the invention might be applied to provide quality streaming of 1.1 Mbps of audio/video along with data and overhead over a 1.5 Mbps ADSL link. In another example, the invention might be applied to deliver two 1.25 Mbps audio/video streams along with overhead and data over a single 3 Mbps link to a client.
[0030] The invention is especially useful when streaming media traffic, such as for instance streaming video, consumes a significant proportion of the bandwidth of the access network. In this case, adjustments in the bandwidth required for individual media streams can significantly impact the congestion level of the overall network, which can then improve the quality of the media streams themselves. In fact, the invention can reduce the overall congestion level even in the general case when client last-mile links are not constrained. The invention is also applicable to other packet networks, such as for instance wireless data networks. It can also provide improvement in media streams over general IP networks, although the preferred embodiment is tuned more specifically to providing a marketable broadcast quality of streaming media over telecommunications access networks. The teachings of the present invention can be extended in other embodiments to cover many other packet network configurations. The remainder of this specification will focus on a specific embodiment of the invention and several alternatives, but this specific embodiment and its stated alternatives should not be construed as limiting the invention itself or its applicability in other contexts.
[0031] 2 Problems Addressed
[0032] As mentioned above in the Description of the Related Art, two key issues emerge as streaming media is offered over packet networks with constrained last-mile bandwidth. The first issue arises in the context of a single client occupying a negligible percentage of overall network bandwidth and having limited headroom in the last-mile link. In this scenario, changes in the individual client stream cannot affect the congestion situation of the network as a whole, so measures must be taken to avoid error within the existing congestion situation. If there were no last-mile bandwidth constraint, in contrast, simply increasing the bandwidth devoted to the client's media stream would allow for packet replacement and error avoidance. A typical last-mile link over an ADSL connection might allow 1.5 Mbps of total data, 0.4 Mbps of which might be devoted to overhead information and a data channel, leaving 1.1 Mbps for raw video and audio content. If the media stream consumes all or most of this 1.1 Mbps allotment to provide quality content, then after subtracting out protocol overhead there is very little room to push additional bits through the last-mile link to make up for the loss of packets. Congestion loss may exceed the headroom available for traditional error recovery. Other strategies are needed to ensure that these packets are replaced before playback of the stream is corrupted.
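The headroom arithmetic of this scenario can be made concrete. The link figures below come from the paragraph above; the 2% loss rate is an assumed example value, not a number from the patent.

```python
# Minimal headroom check for the constrained last-mile scenario in [0032].

LINK_RATE_MBPS = 1.5      # total ADSL downstream capacity
OVERHEAD_MBPS = 0.4       # data channel plus overhead information
STREAM_RATE_MBPS = 1.1    # raw audio/video content rate

headroom = LINK_RATE_MBPS - OVERHEAD_MBPS - STREAM_RATE_MBPS
assumed_loss_rate = 0.02  # assumption: 2% of stream bits lost to congestion
retransmit_demand = STREAM_RATE_MBPS * assumed_loss_rate

print(f"headroom: {headroom:.3f} Mbps, "
      f"retransmission demand: {retransmit_demand:.3f} Mbps")
# headroom: 0.000 Mbps, retransmission demand: 0.022 Mbps. Traditional
# retransmission alone cannot keep up; other error mechanisms are needed.
```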
[0033] The second issue relates to the effects of streaming media traffic itself on overall network congestion. This second issue does not depend on a constrained last-mile link, but applies also to the general situation of unconstrained links to each client. As streaming media, and streaming video in particular, achieves broader and higher quality deployment, it is likely to consume a significant proportion of available bandwidth over entire packet networks. In this scenario, adjustments of the media streams themselves can impact the overall network congestion level. Thus, practical measures to adjust media streams to maintain quality for individual clients throughout the network can have the additional impact of improving the congestion situation of the network as a whole. For example, suppose that streaming video accounts for 50% of all traffic over a network, and suppose that 20% of all packets are lost due to network congestion. In real settings, the rate of packet loss will typically vary across some statistical distribution, but for simplicity of illustration suppose that the rate is a uniform 20%. If each client individually drops its bit demand for its video stream by 40%, then the network will experience a 20% drop in overall traffic (from the 40% drop in the content consuming a 50% share of overall traffic). This drop in overall network demand will alleviate the congestion situation and will eliminate the 20% packet loss rate altogether. Such a drop in client demand will work equally well to alleviate statistically varying congestion situations that are more typical of real networks.
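A short calculation confirms the arithmetic of this example (all figures are taken from the text above):

```python
# The network-wide arithmetic from the example in [0033], spelled out.
video_share = 0.50     # fraction of all traffic that is streaming video
per_stream_cut = 0.40  # each client lowers its video bit demand by 40%

overall_traffic_drop = video_share * per_stream_cut
print(f"overall traffic drop: {overall_traffic_drop:.0%}")  # 20%
# A 20% drop in total offered load offsets the assumed uniform 20% packet
# loss, relieving the congestion that caused the loss in the first place.
```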
[0034] The remainder of the specification will provide details of a method for client detection of network congestion and client-initiated measures to avoid error and to improve the congestion situation of the network. In its applications, this method addresses both of the above issues associated with streaming media over packet networks and thus provides a significant advance in the art.
[0035] 3 Detailed Description of the Drawings

[0036] 3.1 Overview
[0037] FIG. 2 illustrates at a high level a typical packet network configuration to which the disclosed method applies. The network 200 is comprised of a server 202 that is connected to an access network 206 by a link 204, and a client 212 that is attached to a client media buffer 210 that is in turn attached to the access network 206 by a last-mile link 208. In many cases, the last-mile link 208 will have constrained bandwidth, implying that there is little headroom over the required media rate for error packet recovery in the case of lost or bad packets. Most current network links to consumers have constrained bandwidth over the last mile, limiting the rate at which data packets can be transferred. Examples of such links include digital subscriber line (DSL) connections, cable modem connections, traditional dial-up modem connections, wireless data network links, etc.
[0038] The client media buffer 210 plays an important role in providing an error-free media stream to the client. The client media buffer 210 comprises a buffering portion and a signaling device, operatively coupled to the buffering portion, that can send signals to the server 202. The buffer is large enough to allow recovery from infrequent packet loss through at least one congestion detection mechanism. In response to at least one detected congestion level, the buffer may implement at least one error avoidance mechanism. For instance, the buffer duration is long enough to allow packet retransmission before the lost packet obstructs the client's media streaming experience. The buffer may also be able to detect heavier congestion situations with enough lead time to allow a switch to a lower bit rate video stream. This switch prevents any hesitation or interruption in the frame sequence but may cause an acceptable degradation in video quality during a lower-bit-rate streaming period. Preferably, the buffer can detect multiple levels of network congestion and can initiate multiple levels of error handling for graceful degradation through statistically less frequent congestion error situations.
[0039] In the preferred embodiment, the client media buffer operates as a well-known FIFO (first in, first out) buffer under good network conditions. However, the buffer additionally contains a plurality of zones, corresponding to time increments of media data remaining in the buffer, which indicate a plurality of network congestion levels and consequently a plurality of levels of danger for stream interruption. During normal (i.e., low congestion) conditions the server provides a stream at a rate equaling the playback rate of the media. The buffer fills to an equilibrium level before playback begins, and then in the absence of congestion the input/serving rate equals the output/playback rate so that the buffer level remains at this equilibrium. If network congestion causes the loss or delay of some packets, then the buffer level will begin to drop. When it drops below a critical level (where buffer levels are measured as playback time remaining), the client media buffer detects congestion and begins signaling the server to avoid a playback error. If the buffer level continues to drop, it will cross another critical level at which the client signals the server to take more aggressive action to avoid letting missing packets traverse the entire buffer length. The critical buffer levels for the preferred embodiment and the actions taken as each is crossed will be explained further with respect to accompanying figures.
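As a concrete illustration of the zoned buffer described here and in FIGS. 3 and 4, the following sketch classifies a buffer level (measured in seconds of playback remaining) into the named zones. The numeric thresholds are assumed example values, not figures from the patent.

```python
# Illustrative model of the zoned client media buffer (FIGS. 3 and 4).
# Levels are seconds of playback remaining; thresholds are assumed examples.

OVERFLOW = 10.0        # mark 310: buffer completely full, data may be lost
HIGH_WATER = 9.0       # mark 312: nearly full, ask server to pause
START_RESUME = 5.0     # mark 320: equilibrium; serving resumes here
LOW_WATER = 3.0        # mark 314: first congestion level, priority serving
ULTRA_LOW_WATER = 1.0  # mark 316: serious congestion, switch streams

def classify(level_seconds: float) -> str:
    """Map a buffer level to the zone names used in the specification."""
    if level_seconds >= OVERFLOW:
        return "Overflow mark (310) reached: data may be lost"
    if level_seconds >= HIGH_WATER:
        return "Overflow Alert Zone (302): signal server to pause serving"
    if level_seconds > LOW_WATER:
        return "Active Zone (304): normal streaming"
    if level_seconds > ULTRA_LOW_WATER:
        return "Prioritized Recovery Zone (306): request priority serving"
    return "Stream Switch Zone (308): request lower-bit-rate stream"

for level in (9.5, 5.0, 2.0, 0.5):
    print(f"{level:4.1f}s -> {classify(level)}")
```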
[0040] 3.2 Client Media Buffer—Normal Mode
[0041] FIG. 3 illustrates the client media buffer 210 of the preferred embodiment when operating in a normal mode, with media streaming at best quality. The buffer receives packets from the network at an In/Write 322 side, and transmits the sequenced data to the client at an Out/Read 324 side. The buffer transmits the data to the client at the desired playback rate, so after an initial build-up phase the buffer empties its contents at a constant rate. The server 202 preferably delivers media data to the buffer at this same playback rate so that an equilibrium buffer level is maintained. In FIG. 3, the Start/Resume mark 320 indicates this equilibrium level under congestion-free conditions. This level lies at the middle of an Active Zone 304 of the video buffer.
[0042] If the server provides data more quickly, it is possible to overflow the buffer and thus to lose any packets sent while the buffer is fully occupied. For instance, drift in buffer level can occur as a result of differences between the server clock and client clock. A High Water mark 312 allows the buffer to detect when it is nearly full and thus to initiate action to slow the stream down and prevent overflow. The region above the High Water mark 312 is an Overflow Alert Zone 302. When the High Water mark 312 is reached, the client begins sending signals to the server telling it to pause serving. The client continues to send these signals until the buffer level returns below the High Water mark 312. When the buffer level drops back to the Start/Resume mark 320, the client signals the server to resume serving. In case the client's pause signals do not reach the server because of network congestion, the buffer may completely fill and then begin to overflow. An Overflow mark 310 indicates that overflow has occurred and data may be lost. Preferably the Overflow Alert Zone 302 is of sufficient size to prevent this error situation from occurring. In the unlikely event that the Overflow mark 310 is reached, the client continues sending pause signals to the server.
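The pause/resume behavior around the High Water mark 312 and the Start/Resume mark 320 forms a hysteresis loop, sketched below. The thresholds and the send_signal callable are illustrative assumptions; the patent does not specify the signaling channel.

```python
# Hysteresis sketch of the overflow control in [0042]: pause signals start
# at the High Water mark (312) and repeat while the level stays above it
# (so a lost pause signal is retried); serving resumes only once the level
# drops back to the Start/Resume mark (320).

HIGH_WATER = 9.0    # seconds of media buffered (mark 312), assumed value
START_RESUME = 5.0  # equilibrium level (mark 320), assumed value

def overflow_control(level: float, paused: bool, send_signal) -> bool:
    """Return the new paused state given the current buffer level."""
    if level >= HIGH_WATER:
        send_signal("PAUSE")
        return True
    if paused and level <= START_RESUME:
        send_signal("RESUME")
        return False
    return paused  # between the marks: keep the current state

paused = False
for level in (8.0, 9.2, 9.5, 8.5, 6.0, 5.0, 4.8):
    paused = overflow_control(level, paused,
                              lambda sig: print(f"{level}s -> {sig}"))
```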
[0043] In light congestion situations, occasional packets may be lost or corrupted. The client media buffer recognizes these errors via packet sequencing and a checksum operation. The buffer periodically requests retransmission of lost or corrupted packets by the server as needed. The server sends retransmitted packets with top priority to replace these packets before they cause a client error during playback. As congestion worsens, however, these retransmissions may not always be completed in ample time to avoid error because of limited recovery headroom in the link, so further steps are initiated by the client media buffer.
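A minimal sketch of this detection step, assuming a simple packet record with a sequence number and a CRC-32 checksum; the patent does not specify the packet layout or the retransmission request format.

```python
# Loss detection as in [0043]: sequence numbers reveal missing packets,
# and a checksum reveals corrupted ones; both are queued for retransmission.
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    payload: bytes
    checksum: int

def needs_retransmission(packets: list[Packet],
                         first_seq: int, last_seq: int) -> list[int]:
    """Return sequence numbers that are missing or fail the checksum."""
    good = {p.seq for p in packets if zlib.crc32(p.payload) == p.checksum}
    return [s for s in range(first_seq, last_seq + 1) if s not in good]

pkts = [Packet(1, b"a", zlib.crc32(b"a")),
        Packet(3, b"c", zlib.crc32(b"c")),
        Packet(4, b"x", 0)]                 # corrupted: bad checksum
print(needs_retransmission(pkts, 1, 4))    # [2, 4] -> request these packets
```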
[0044] A Low Water mark 314 indicates that the buffer is being depleted, i.e., the rate at which data is being received is lower than the playback rate. The buffer may deplete to this level for instance if the server clock is slightly slower than the client clock. Also, a missing packet may fall below the Low Water mark 314 if it has not been recovered in time by normal retransmission requests. In either situation, the client buffer detects a first level of network congestion and enters a Prioritized Recovery Zone 306. In this zone, the available headroom in the last-mile link is used aggressively to attempt to recover the one or more lost packets as quickly as possible and to refill the buffer. Upon entering the Prioritized Recovery Zone 306, the client signals the server to increase attention devoted to the target stream. This signal causes the server to initiate measures for priority serving, including raising the kernel process priority for the target stream, increasing the stream metering rate slightly (e.g., to 110% of the normal rate), and using a higher DiffServ level, if available. DiffServ, for Differentiated Services, is a protocol for specifying and prioritizing network traffic by class that is specified by a six-bit field in the header for the IP protocol. DiffServ is a new protocol proposed by the Internet Engineering Task Force (IETF) that may be available over some IP networks.
[0045] An incremented metering rate is included to speed up serving in the case when the server's clock is slightly slower than the client's clock, or to recover a lost packet as quickly as possible by using all of the available last-mile headroom. This metering adjustment is particularly tuned to the case of a single client that has negligible influence on congestion in a large network. Note that in the case of many clients detecting congestion on a network with heavy video traffic, increasing the streams' metering rates may impact congestion since it requires faster data transmission. To mitigate this concern, the relative increase in the metering rate may be engineered in light of the expected network traffic loads and headroom constraints over the last-mile links, and the amount of increase may be reduced during more serious congestion situations involving heavy video traffic.
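The metering-rate adjustment can be sketched as follows. The 110% figure comes from the text; the congestion-damping rule is an assumed illustration of the engineering trade-off just described, not a formula from the patent.

```python
# Sketch of the metering-rate bump in [0044]-[0045]: serve slightly faster
# than playback to refill the buffer, but scale the bump down when many
# clients report congestion (heavy network-wide video traffic).

NORMAL_FACTOR = 1.00
PRIORITY_FACTOR = 1.10  # "e.g. to 110% of the normal rate"

def metering_rate(playback_rate_mbps: float, in_recovery_zone: bool,
                  congestion_severity: float) -> float:
    """congestion_severity: 0.0 (isolated client) .. 1.0 (network-wide)."""
    if not in_recovery_zone:
        return playback_rate_mbps * NORMAL_FACTOR
    # Damp the increment during serious network-wide congestion so the
    # recovery traffic itself does not worsen the congestion (assumed rule).
    bump = (PRIORITY_FACTOR - NORMAL_FACTOR) * (1.0 - congestion_severity)
    return playback_rate_mbps * (NORMAL_FACTOR + bump)

print(metering_rate(1.1, True, 0.0))  # 1.21 Mbps: full 110% metering
print(metering_rate(1.1, True, 0.8))  # 1.122 Mbps: damped during congestion
```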
[0046] As the buffer refills, no action is taken to change the priority level back to normal until the Start/Resume mark 320 is crossed. At this point, the client signals the server to turn off the measures for priority serving and to resume normal streaming.
[0047] However, if the congestion continues or worsens, an Ultra Low Water mark 316 may be reached by either the last received packet or more typically by a lost or corrupted packet. At this point, the buffer enters a Stream Switch Zone 308 and detects a serious network congestion problem. The client signals the server to compensate by switching to a lower-bit-rate encoded stream. As an important feature, the Stream Switch Zone 308 is situated so that the server has time to switch streams before playback is interrupted by data loss in the original stream. This drop in encoding bit rate allows a significant increase in headroom bandwidth over the last-mile link, which is used to help the buffer recover to a safer level. In the case of a video stream, the stream switch preferably occurs at a GOP (group of pictures) boundary since subsequent frames in a GOP depend on a key frame for accurate reconstruction of the sequence. In this case, when requesting a stream switch, the client will also indicate the boundary of the last complete GOP in the buffer. Depending upon the proportion of the unfinished GOP in the buffer, the server will decide either to replace it with a new lower-bit-rate GOP or to finish that GOP at the higher bit rate before switching to the lower-bit-rate stream. For instance, if only a few frames of a GOP remain unsent, the server can determine that it saves more bits to send those few frames at a high bit rate per frame rather than to replace almost an entire GOP of frames at a lower bit rate per frame (such a tradeoff depends on the specific bit rates at which the two streams are encoded). In an alternative embodiment, the stream switch can occur at any frame boundary using known techniques. More specifics on the stream switching process will be provided with reference to FIG. 4 below.
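The server's decision at the GOP boundary reduces to a bit-count comparison, sketched below. The per-frame bit costs are assumed example values; the patent describes only the trade-off, not specific numbers.

```python
# Sketch of the server's GOP-boundary decision in [0047]: finish the
# unfinished GOP at the high bit rate, or resend it entirely at the low
# bit rate, whichever costs fewer bits.

def bits_to_finish_high(frames_unsent: int, high_bits_per_frame: int) -> int:
    return frames_unsent * high_bits_per_frame

def bits_to_replace_low(gop_size: int, low_bits_per_frame: int) -> int:
    return gop_size * low_bits_per_frame

def replace_unfinished_gop(frames_unsent: int, gop_size: int,
                           high_bpf: int, low_bpf: int) -> bool:
    """True: replace the unfinished GOP with a low-bit-rate GOP now."""
    return (bits_to_replace_low(gop_size, low_bpf)
            < bits_to_finish_high(frames_unsent, high_bpf))

# 3 frames left of a 15-frame GOP; assumed 60 kbit/frame high, 25 kbit/frame low
print(replace_unfinished_gop(3, 15, 60_000, 25_000))   # False: finish high
# 12 frames left: replacing the whole GOP at the low rate is now cheaper
print(replace_unfinished_gop(12, 15, 60_000, 25_000))  # True: switch now
```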
[0048] Note that delivering a lower-bit-rate stream will degrade the quality slightly, but the transition will be smooth and playback interruption will be avoided. Such an error mechanism is far more acceptable than the visual artifacts or the lengthy rebuffering interruptions caused by existing
