PDXScholar

Computer Science Faculty Publications and Presentations

Computer Science

12-1997

Flow and Congestion Control for Internet Streaming Applications

Shanwei Cen

Calton Pu

Jonathan Walpole
Portland State University

Follow this and additional works at: https://pdxscholar.library.pdx.edu/compsci_fac

Part of the Computer Sciences Commons, and the Digital Communications and Networking Commons

Citation Details
Shanwei Cen, Jonathan Walpole and Calton Pu, "Flow and congestion control for Internet media streaming applications", Proc. SPIE 3310, Multimedia Computing and Networking 1998, 250 (December 29, 1997); doi:10.1117/12.298426; http://dx.doi.org/10.1117/12.298426.

This Article is brought to you for free and open access. It has been accepted for inclusion in Computer Science Faculty Publications and Presentations by an authorized administrator of PDXScholar. Please contact us if we can make this document more accessible: pdxscholar@pdx.edu.

Petitioners' Exhibit 1011
Page 0001
`
`
`
Flow and Congestion Control for Internet Media Streaming Applications

Shanwei Cen (a), Jonathan Walpole (b) and Calton Pu
(a) Tektronix, Inc., Video and Networking Division
P.O. Box 500, M/S 50-490, Beaverton, Oregon 97077 USA
(b) Department of Computer Science, Oregon Graduate Institute
P.O. Box 91000, Portland, Oregon 97291 USA
`
`ABSTRACT
`The emergence of streaming multimedia players provides users with low latency audio and video content over
`the Internet. Providing high-quality, best-effort, real-time multimedia content requires adaptive delivery
`schemes that fairly share the available network bandwidth with reliable data protocols such as TCP. This
paper proposes a new flow and congestion control scheme, SCP (Streaming Control Protocol), for real-time
`streaming of continuous multimedia data across the Internet. The design of SCP arose from several years
`of experience in building and using adaptive real-time streaming video players. SCP addresses two issues
`associated with real-time streaming. First, it uses a congestion control policy that allows it to share network
`bandwidth fairly with both TCP and other SCP streams. Second, it improves smoothness in streaming
`and ensures low, predictable latency. This distinguishes it from TCP's jittery congestion avoidance policy
`that is based on linear growth and one-half reduction of its congestion window. In this paper, we present a
`description of SCP, and an evaluation of it using Internet-based experiments.
`
`Keywords: Internet, flow control, congestion control, adaptive systems, streaming multimedia systems
`
`1. INTRODUCTION
The real-time distribution of continuous audio and video data via streaming multimedia applications accounts
for a significant, and expanding, portion of Internet traffic. Many research prototype media players have
been produced, including Rowe's MPEG player,1 the authors' distributed video player,2 and McCanne's
vic.3 Over the past two years, many commercial streaming media players have also been released, such
as the RealAudio and RealVideo players, Microsoft NetShow, and VXtreme.
`Common characteristics of media streaming applications include their need for high bandwidth, smooth
`data flow, and low and predictable end-to-end latency and latency variance. In contrast, the Internet is a
`best-effort network that offers no quality of service guarantees, and is characterized by a great diversity in
`network bandwidth and host processing speed, wide-spread resource sharing, and a highly dynamic workload.
`Hence, in practice, Internet-based applications experience large variations in available bandwidth, latency,
`and latency variance.
`To address these problems the new generation of Internet-based multimedia applications use techniques
`such as buffering and feedback-based adaptation of presentation quality. Buffering is used at the sender,
`receiver, or both, to mask short-term variations in the available bandwidth or latency. Through adaptation,
`applications scale media quality in one or more quality dimensions to better utilize the currently available
`bandwidth. For example, real-time play-out can be preserved in the presence of degraded bandwidth by
`adaptively sacrificing other presentation quality dimensions such as video frame rate or spatial resolution.
`Dynamic adaptation is a powerful approach, but it requires new flow and congestion control mechanisms
`for accurate discovery and appropriate utilization of the available network bandwidth. Congestion control
`mechanisms must determine, dynamically, the share of the network bandwidth that can be fairly used by
`the adaptive application in the presence of competing traffic. If the mechanisms are not sensitive enough
`
Please send correspondence to Shanwei Cen at shanwei.cen@tek.com
`
250

SPIE Vol. 3310 • 0277-786X/97/$10.00

Downloaded From: http://proceedings.spiedigitallibrary.org/ on 01/17/2014 Terms of Use: http://spiedl.org/terms
`
`
`
to competing traffic, the potentially high multimedia data rates could cause serious network congestion. On
`the other hand, if they are too sensitive they will under-utilize the bandwidth and presentation quality will
`be forced to degrade unnecessarily. In practice, such congestion control mechanisms must share bandwidth
`fairly among competing multimedia streams and must be compatible with TCP congestion control,4 since
`TCP is the base protocol for currently-dominant HTTP and FTP traffic.
`Flow control mechanisms should attempt to minimize end-to-end latency due to buffering, and maximize
`the smoothness of the data stream. Reliability through indefinite data retransmission is usually not desirable,
`since streaming applications can often tolerate some degree of quality degradation due to data loss, but are
`usually less tolerant of the delay introduced by the retransmission of lost data.
`This paper presents a flow and congestion control scheme, called SCP (Streaming Control Protocol), for
`unicast media streaming applications. Like TCP, SCP employs sender-initiated congestion-detection through
`positive acknowledgment, and uses a similar window-based policy for congestion avoidance. The similarities
`in congestion avoidance between SCP and TCP make SCP robust and a good network citizen, and enable
`SCP and TCP to share the Internet fairly.
`Unlike TCP, when the network is not congested, SCP invokes a hybrid rate- and window-based flow
`control policy that maintains smooth streaming with maximum throughput and low latency. Conversely,
TCP repeatedly increases its congestion window size until packets are lost, then halves its window size (a
behavior of TCP-Reno). One consequence of this behavior is that TCP sessions exhibit burstiness and
`develop long end-to-end latency due to the build up of packets in network router buffers. This behavior is
`particularly problematic over PPP links since some PPP servers have buffers that hold 15 seconds of data,
`or more. Also unlike TCP, SCP does not retransmit data lost in the network. Thus it avoids the associated
`unpredictability in latency and wasted bandwidth.
Further salient features of SCP include its support for rapid adaptation in the presence of drastic bandwidth
fluctuations, such as those that occur when mobile computers migrate among different network types,5
`and its support for streaming-specific operations such as pausing.
`This paper is organized as follows. Section 2 presents the design of SCP. Section 3 follows with an
`analysis of SCP's steady-state bandwidth sharing. Section 4 describes the implementation of SCP. Section
`5 explains the experimental results. Section 6 then briefly discusses other congestion control schemes, and
`finally, Section 7 concludes the paper and discusses future work.
`
`2. THE DESIGN OF SCP
A unicast streaming scenario with SCP involves a sender, which streams media packets over a network
connection in real-time to a receiver. SCP policies are implemented at the sender side. Each packet contains
a sequence number, and for each packet, SCP records the time at which it is sent, and initiates a separate
timer.* The receiver acknowledges each packet it receives by returning an ACK containing the sequence
number of the packet to the sender. Based on the reception of ACKs and expiration of timers, SCP monitors
the available bandwidth, detects packet loss, and adjusts the size of its congestion window to control the
flow and avoid network congestion. To achieve this task SCP maintains the following internal state variables
and parameter estimators.

• state — The current state. SCP has several states, each corresponding to a specific network and session
condition and associated flow and congestion control policy.
• Wc — The size of the congestion window (in number of packets).
• W — The number of outstanding packets sent but not acknowledged. When W < Wc, the congestion
window is open, and more packets can be sent; otherwise it is closed and no packets can be sent.
• Ws — The threshold of Wc for switching from the slow-start policy to the steady-state policy.
• Tbrtt — An estimator of the base RTT (round trip time): the RTT of a packet sent when the network
is otherwise quiet.
• Trtt — An estimator of the recent average RTT.
• Drtt — An estimator of the standard deviation of the recent RTT.

*For clarity, we explain the design of SCP based on a timer per packet, but this aspect is optimized in the implementation.
`
`
`
`
State     | Network and session condition            | Congestion window adjustment policy
slowStart | Available bandwidth not discovered yet   | SCP opens the congestion window exponentially by increasing the window size by one upon the receipt of each ACK.
steady    | Available bandwidth being fully utilized | SCP maintains an appropriate amount of buffering inside the network to gain sufficient throughput, avoid excessive buffering or buffer overflow, and trace the changes in available bandwidth.
congested | The network is congested.                | SCP backs off multiplicatively by halving the window size. Persistent congestion results in exponential back-off.
paused    | No outstanding packet in the network     | When a new packet is sent, SCP shrinks the window size and invokes the slow-start policy.

Table 1. SCP states, network and session conditions, and flow and congestion control policies
`
• rto — An estimator of the timer duration.
• r̂ — An estimator of the receiving packet rate.

As long as the congestion window is open, the sender streams packets at a rate no more than Wc/Tbrtt,
instead of sending them out in bursts, so as to improve the smoothness of the stream. When an ACK is
received, or a timer expires, the congestion window size Wc is adjusted using a policy associated with the
current state.

SCP adopts window-based policies similar to those of TCP for slow-start, and exponential back-off upon
network congestion. It also estimates RTT in a way similar to TCP. Therefore, many parameters and state
variables have counterparts in TCP. However, SCP differs from TCP in that it has a base RTT estimator
Tbrtt and a packet rate estimator r̂. The use of these estimators to ensure smooth streaming and low latency
is discussed in the following sections.
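As a concrete illustration, the per-session sender state described above could be collected as follows. This is an illustrative sketch, not the paper's implementation: the field names, defaults, and the initial limit value for Ws are our assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class State(Enum):
    SLOW_START = "slowStart"
    STEADY = "steady"
    CONGESTED = "congested"
    PAUSED = "paused"

@dataclass
class ScpSenderState:
    state: State = State.PAUSED    # current flow/congestion-control state
    w: int = 0                     # W: packets sent but not yet acknowledged
    wc: float = 1.0                # Wc: congestion window size, in packets
    ws: float = 64.0               # Ws: slow-start threshold (initialized to a limit L; value assumed)
    t_brtt: float = float("inf")   # Tbrtt: base-RTT estimator
    t_rtt: float = float("inf")    # Trtt: recent-average-RTT estimator
    d_rtt: float = float("inf")    # Drtt: RTT standard-deviation estimator
    rto: float = 3.0               # timer-duration estimator (initial default assumed)
    rate: Optional[float] = None   # r^: receiving-packet-rate estimator; None = disabled

    def window_open(self) -> bool:
        # More packets may be sent only while W < Wc.
        return self.w < self.wc
```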
`2.1. Overall architecture
`SCP is based on the observation that excessive packets in the round-trip network connection accumulate
`in buffers at network routers and switches and lead to an increased RTT. As the accumulation of packets
`increases, these buffers overflow and packets are dropped. The goal of SCP is to ensure smooth streaming at
`a suitably high throughput, but without causing excessive buffering or congestion in the network. To achieve
this goal, SCP monitors fluctuations in the available bandwidth and pushes an appropriate number of
`additional packets into the network connection. If network congestion is detected, SCP reacts immediately
`with exponential back-off. Depending on the condition of the network and the streaming session under
`control, SCP is in one of the following four states: slowStart, steady, congested or paused. Each state is
`associated with a specific condition and congestion window size adjustment policy as listed in Table 1.
`SCP handles events that indicate changes in network and session conditions. Such events include indica-
`tions that: the SCP session becomes paused or active; the available network bandwidth is fully utilized; the
`network is congested; the network interface has been switched. Upon these events, SCP updates its internal
`state and possibly switches to a new policy. Figure 1 shows the events that SCP handles and its associated
`state transitions.
`2.2. Congestion window adjustment policies and state transitions
Initialization: Upon initialization, SCP sets W = 0, Wc = 1 and Ws = L, where L is a limit on the
congestion window size. Tbrtt, Trtt and Drtt are set to infinity, and rto is set to an initial default value. After
initialization, SCP enters the paused state. Transmission of the first packet brings SCP to the slowStart
state.
`
Slow-start: The slow-start policy is invoked after initialization or when SCP resumes from a pause. Its goal
is to quickly grow the congestion window and discover the available network bandwidth. Wc is incremented
by 1 when an ACK is received in order, thus doubling the congestion window after each RTT amount of
time. SCP leaves the slowStart state upon detecting events indicating network congestion, full utilization
of available bandwidth, a pause in streaming, or a network interface switch.
`
`
`
`
Figure 1. SCP state transition diagram (edge labels include, e.g., "gap in ACK / do back-off")
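The transitions described in the text and Figure 1 can be tabulated. The sketch below is our reading of the prose; the event names are invented for illustration and are not from the paper.

```python
# (event, current state) -> next state, following the text's description of Figure 1.
TRANSITIONS = {
    ("first_packet_sent", "paused"): "slowStart",
    ("bandwidth_fully_utilized", "slowStart"): "steady",
    ("congestion_detected", "slowStart"): "congested",   # gap in ACK or timer expiry
    ("session_paused", "slowStart"): "paused",
    ("interface_switched", "slowStart"): "paused",       # reset; slow-start again on resume
    ("congestion_detected", "steady"): "congested",
    ("session_paused", "steady"): "paused",
    ("interface_switched", "steady"): "paused",
    ("first_probe_acked", "congested"): "steady",        # first packet of congested state ACKed
    ("congestion_detected", "congested"): "congested",   # another round of back-off
}

def next_state(event: str, state: str) -> str:
    """Return the successor state; unhandled (event, state) pairs leave the state unchanged."""
    return TRANSITIONS.get((event, state), state)
```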
`
Steady-state smooth streaming: By pushing an appropriate number of extra packets, the steady-state
policy enables sufficient utilization of the available bandwidth while avoiding over-buffering. It also traces the
fluctuations in available bandwidth. In the steady state, estimation of the packet rate is enabled. Whenever a
new value of r̂ is available, the congestion window size Wc and threshold Ws are adjusted according to
Equation 1 below, where WΔ is a constant referred to as the window-size incremental coefficient.

    Wc = Ws = r̂ · Tbrtt + WΔ                                                  (1)

The idea behind the flow control defined by Equation 1 is that SCP assumes that r̂ is an approximation
of the network bandwidth (in terms of packet rate) available to the session, calculates the bandwidth-delay
product of the network connection with minimum buffering, r̂ · Tbrtt, and adjusts its congestion window
size Wc accordingly. The product r̂ · Tbrtt is the amount of data SCP should keep outstanding inside the
network in order to maintain sufficient throughput with minimum buffering. Since the network is shared
and highly dynamic, the bandwidth available to the session under control fluctuates. If SCP just set its
congestion window size to r̂ · Tbrtt, then when the available bandwidth decreased, r̂ would decrease, and SCP
would reduce its congestion window size and thus the amount of outstanding data. However, if the available
bandwidth increased, there would be no way for SCP to detect it. This is because the receiving packet rate r̂
cannot increase unless SCP increases the sending packet rate first, but SCP would not increase its
sending packet rate by increasing the congestion window size unless it observed an increased r̂. To solve this
"chicken-and-egg" problem, SCP pushes WΔ extra outstanding packets into the network, which are held in
router and switch buffers. When the available network bandwidth increases, releasing the extra data buffered
in the network results in an increase in the packet rate r̂, which in turn results in a larger amount of
outstanding data as defined by the increased Wc. Note that the increase in Wc is no more than additive, which
is what TCP does in its steady state. Due to the smoothing effect of lowpass filtering in the estimation of r̂,
Wc changes gradually, causing smooth changes in throughput. Also, with the flow control policy stated
in Equation 1, though r̂ may not initially be a reliable estimate of the actual available bandwidth,
if the network is stable enough, SCP will eventually bring r̂ to the actual bandwidth through iteration.
Overall, the steady-state policy is able to converge to a stable throughput, and to trace fluctuations in
available bandwidth smoothly.

Upon detecting network congestion, SCP backs off and enters the congested state. SCP enters the paused
state when the session is paused, or when a switch in network interface is detected.
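A minimal sketch of the steady-state update of Equation 1 follows; the EWMA below is a stand-in for the paper's unspecified lowpass filter, and the gain value is our assumption.

```python
def lowpass(prev, sample, gain=0.125):
    """Smooth raw receiving-rate samples (EWMA stand-in for the paper's lowpass filter)."""
    return sample if prev is None else prev + gain * (sample - prev)

def steady_state_window(rate_hat, t_brtt, w_delta=3):
    """Equation 1: Wc = Ws = r^ * Tbrtt + Wdelta (windows in packets, rate in pkt/s, RTT in s)."""
    return rate_hat * t_brtt + w_delta

# Example: r^ = 100 pkt/s, base RTT = 50 ms, Wdelta = 3 extra packets
wc = steady_state_window(100.0, 0.05)   # bandwidth-delay product (5 pkt) plus 3 probe packets
```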
`
`
`
`
The steady-state policy defined by Equation 1 is what makes SCP different from TCP. If network conditions
remain the same for a sufficiently long period of time, SCP converges its window size and data rate
to stable values. SCP is also able to limit the buildup of end-to-end latency resulting from buffering in
routers, regardless of the actual sizes of router buffers. These properties are desirable for media streaming
applications. In contrast, TCP repeatedly increases its congestion window size linearly until packets are lost,
and then backs off by halving the window size. The result is that TCP traffic is inherently bursty. TCP also
tends to fill up router buffers no matter how big they are. A consequence is that across network connections
with deep buffers, TCP yields long latencies: the deeper the buffers, the longer the latencies.
`
Exponential back-off upon network congestion: Upon network congestion, SCP invokes an exponential
back-off policy similar to that of TCP. Whenever events such as gaps in the ACK sequence or timer expirations
are detected, SCP backs off multiplicatively by reducing its congestion window size by half. The data rate
estimator r̂ is no longer accurate and thus is reset and disabled. If the back-off is triggered by a timeout, rto
may have been too short and is doubled. When the network is so congested that no packet can get through,
with the multiplicative decrease in congestion window size and multiplicative increase in timer duration,
SCP backs off exponentially until it virtually stops sending any further packets. This gives the network
a chance to quickly recover from the congestion.

At the time of a back-off, there may already be some outstanding packets inside the network. Loss of these
packets does not correctly reflect the network condition after the back-off, and thus should be ignored. The
congested state is designed for this purpose. After each back-off, SCP enters the congested state. Further
back-off is disabled until the first packet sent in the current congested state is acknowledged, found lost, or
has its timer expire. If the first packet is acknowledged, SCP enters the steady state; otherwise another round
of back-off is initiated.
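One round of back-off might be sketched as below; the one-packet floor on the window and the cap on rto are our assumptions, not stated in the paper.

```python
def back_off(wc, rto, timeout_triggered, rto_max=64.0):
    """One round of multiplicative back-off: halve Wc (floor of one packet assumed);
    double rto, up to an assumed cap, only when the round was triggered by a timer expiry."""
    wc = max(1.0, wc / 2.0)
    if timeout_triggered:
        rto = min(rto * 2.0, rto_max)
    return wc, rto
```

Repeated rounds give the exponential behavior described above: the window shrinks geometrically while the timer grows geometrically.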
`
Pause when there is no packet to send: In a real-time streaming session, a user may want to pause
in the middle. If the sender has no data to send for a while, W eventually decreases to 0. At this moment,
the streaming session becomes idle, and SCP enters the paused state.

When an SCP session is paused, the bandwidth it previously used will gradually be discovered and taken
by other sessions. When the session resumes at a later time, SCP invokes the slow-start policy with a reduced
initial congestion window size. Currently, we adopt an ad-hoc policy of halving the congestion window size
for every Tbrtt of time elapsed in the paused state.
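The ad-hoc halving policy could be sketched as follows; the one-packet floor on the resulting window is our assumption.

```python
def resume_window(wc, paused_time, t_brtt):
    """Ad-hoc pause policy: halve the congestion window once for every Tbrtt
    elapsed in the paused state (floor of one packet assumed)."""
    halvings = int(paused_time // t_brtt)
    return max(1.0, wc / (2.0 ** halvings))
```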
`
`Reset upon network interface switch: When either end of an SCP streaming session has its network
`interface switched (e.g., from Ethernet to cellular modem, or vice versa), the route from the sender to the
`receiver may be changed, and the new connection may go through links with totally different capacities.
`SCP handles this dynamic reconfiguration issue by providing a reset interface. Upon each network interface
`switch event, SCP discards the out-dated estimators, ignores all outstanding ACKs, and resumes with the
`slow-start policy to quickly discover the capacity of the new connection.
`
`3. ANALYSIS OF BANDWIDTH SHARING BETWEEN SCP SESSIONS
With the steady-state rate- and window-based policy defined by Equation 1, it is possible for multiple SCP
sessions to share network links in a fair and stable manner. In this section, we analyze a simple case, in
which two sessions with the same packet size share a single network link. Both sessions send packets at the
maximum rate while the congestion window is open.

Suppose that, in a steady state, session A has estimators r̂a, Tbrtta, Trtta, and Wca = r̂a · Tbrtta + WΔ; and
session B has r̂b, Tbrttb, Trttb, and Wcb = r̂b · Tbrttb + WΔ. Since A and B share the same network link, we
have the following observations:

(1) The aggregate packet rate of the network link, r̂l, is the sum of the two sessions: r̂l = r̂a + r̂b.
`
`
`
`
(2) Each base RTT estimate is the actual link base RTT Tbrttl plus some estimation error: Tbrtta = Tbrttl + ea
and Tbrttb = Tbrttl + eb. When Tbrtt is estimated as the minimum of the past RTT measurements of
a session, the main component of the estimation error is the residual buffering, which is caused by
packets of other sessions preventing the network buffers from becoming empty. Another, less important,
component is sampling noise.

(3) Sessions A and B have the same RTT estimator (when the sampling noise can be ignored): Trtta =
Trttb = Trttl. Furthermore, the number of packets sent by a session in one RTT equals its congestion
window size. Thus the packet rate ratio of A and B equals the ratio of their window sizes:

    r̂a / r̂b = Wca / Wcb = (r̂a(Tbrttl + ea) + WΔ) / (r̂b(Tbrttl + eb) + WΔ)        (2)

Combining observations (1) and (3), it is clear that there is a single solution for r̂a and r̂b. The session
with a larger residual buffering tends to have a larger estimation error, and thus gets a larger portion of the
link bandwidth. In the special case where the two sessions have the same RTT estimation error, ea = eb, we
have r̂a = r̂b, indicating that the two sessions split the link bandwidth evenly.
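Observations (1) and (3) can be checked numerically by iterating the coupled equations to their fixed point. The function below is an illustrative check, not part of the paper; all numeric values are assumptions.

```python
def split_bandwidth(rate_total, t_brttl, ea, eb, w_delta=3.0, iters=500):
    """Solve r_a + r_b = r_l together with r_a / r_b = Wc_a / Wc_b,
    where Wc_x = r_x * (Tbrttl + e_x) + Wdelta, by fixed-point iteration."""
    ra = rb = rate_total / 2.0
    for _ in range(iters):
        wa = ra * (t_brttl + ea) + w_delta
        wb = rb * (t_brttl + eb) + w_delta
        ra = rate_total * wa / (wa + wb)   # rates split in proportion to window sizes
        rb = rate_total - ra
    return ra, rb
```

With ea = eb the split is exactly even; giving session A a larger residual-buffering error (ea > eb) shifts bandwidth toward A, matching the analysis above.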
`
`4. IMPLEMENTATION OF SCP
SCP has been implemented as a layer on top of UDP, based on a toolkit for developing adaptive systems.6
In this section, we briefly discuss the parameter estimators and various implementation issues. More details
may be found in Cen's thesis.6

Several parameters are estimated by SCP. The base RTT Tbrtt, average RTT Trtt, RTT standard deviation
Drtt, and timer duration rto are estimated based on the history of the raw RTT measurements. Tbrtt is
estimated as the minimum of all the past RTT measurements. The estimators Trtt, Drtt and rto are updated
in a way similar to those in TCP.4 The receiving packet rate r̂ is estimated as the rate at which ACKs are
received. This is reasonable since the receiver acknowledges every packet it receives.
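The TCP-style estimator updates might look like the following Jacobson-style filter; the gains g and h and the rto formula are the conventional TCP values, assumed here rather than taken from the paper.

```python
def update_rtt(t_rtt, d_rtt, sample, g=0.125, h=0.25):
    """Update smoothed RTT and mean deviation from one raw measurement;
    rto = Trtt + 4 * Drtt, as in TCP."""
    if t_rtt is None:                      # first measurement initializes the filters
        t_rtt, d_rtt = sample, sample / 2.0
    else:
        err = sample - t_rtt
        t_rtt = t_rtt + g * err
        d_rtt = d_rtt + h * (abs(err) - d_rtt)
    return t_rtt, d_rtt, t_rtt + 4.0 * d_rtt

def update_base_rtt(t_brtt, sample):
    """Tbrtt is simply the minimum of all past RTT measurements."""
    return min(t_brtt, sample)
```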
Several performance issues have been addressed in the implementation. The first is that, in the design
of SCP discussed in Section 2, each outstanding packet has a separate timer. The overhead of maintaining
these timers would make SCP non-scalable. In the implementation, only an ordered table is maintained
to record the sending times of all outstanding packets. Upon receipt of an ACK, SCP removes the entry
of the acknowledged packet from the table. Only when there is a new packet to send and the congestion
window is closed does SCP check the older packets in the table to see if their timers should have expired,
and perform back-off if necessary. Other issues, such as out-of-order packet delivery and congestion-window
adjustment in low-data-rate sessions, are also addressed.
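The ordered-table optimization could be sketched as follows; the class shape and names are ours, not the paper's.

```python
from collections import OrderedDict

class SendTable:
    """Ordered record of outstanding packets, replacing per-packet timers;
    expiry is checked lazily, only when the window is closed and data is waiting."""
    def __init__(self, rto):
        self.rto = rto
        self.sent = OrderedDict()          # seq -> send timestamp, oldest first

    def record(self, seq, now):
        self.sent[seq] = now               # a packet was sent

    def ack(self, seq):
        self.sent.pop(seq, None)           # its entry is removed on ACK

    def oldest_expired(self, now):
        """True if the oldest outstanding packet's timer should have expired."""
        if not self.sent:
            return False
        oldest = next(iter(self.sent.values()))
        return now - oldest > self.rto
```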
`
`5. EXPERIMENTS AND RESULTS
`To evaluate the performance of SCP, a test program has been built to stream dummy packets between
`Internet hosts using SCP or TCP. The test program reads parameters and experimental scripts from files,
`and collects statistics for analysis.
`The network configuration for the experiments is shown in Figure 2. We have two LANs, one at Oregon
`Graduate Institute (OGI) and the other at Georgia Institute of Technology (Georgia Tech) connected through
`the long-haul Internet with 29 hops. Among the hosts, anquetil is a Linux PC notebook on subnet 1 of the
OGI LAN. This notebook can optionally be moved to subnet 2 or a 28.8 kb/s PPP link. This configuration
covers typical types of Internet connections such as phone lines, LANs, and WANs. Network interface switching
in mobile environments is simulated by having anquetil switch between its Ethernet and PPP interfaces
dynamically.
Experiments have been performed to evaluate the performance of SCP in sessions over different network
connections. For all the experiments, unless stated explicitly, the following parameter values are used:
steady-state congestion window size incremental coefficient WΔ = 3, and data packet size 1472 B.† The size

†IP packet size = 20 + 8 + 1472 = 1500 B, the Ethernet MTU. 1472 B is the UDP/TCP payload size, including the SCP
header (in the case of SCP), the various fields carried in the packet, followed by a block of randomized data.
`
`
`
`
`Figure 2. Network configuration for experiments
`
[Two panels plotting an SCP session and a TCP session versus session time (ms): (a) packet rate (pkt/s), (b) buffering delay (ms).]

Figure 3. Performance of single SCP and TCP sessions from lemond to anquetil over PPP
`
of the TCP and SCP socket sending and receiving buffers is up to 64 KB, which is the maximum supported in
many Berkeley socket implementations. The value of WΔ is selected in an ad-hoc manner, but experiments
have shown that it yields satisfactory performance.

In every SCP or TCP session, the sender uses non-blocking mode to send packets. It repeatedly waits
until the socket is ready to send (in the case of TCP) or the congestion window is open (in the case of SCP),
timestamps a dummy packet, and sends it out immediately. Each packet carries a sender timestamp. The
receiver records the time each packet is received. With the receiving timestamps, intervals between packets
and the packet rate can be measured. Assuming that the clocks of the sender and the receiver run at the
same rate, the sender and receiver timestamps make it possible to measure the application-level end-to-end
buffering delay: for each packet, the buffering delay is measured as the difference between the sender and
receiver timestamps, minus the minimum of all such measurements in the history, which approximates the
transmission and processing delay. This end-to-end buffering delay includes the times spent in the buffers
in the sender, the receiver, and routers and switches. For all TCP and SCP sessions, the buffering delay and
receiving packet rate are measured. For SCP sessions, the congestion window size and the gaps in ACKs
are also measured.
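The buffering-delay computation could be sketched as below. For simplicity this batch version takes the minimum over the whole trace, whereas the paper describes a running minimum over the history so far; function and variable names are ours.

```python
def buffering_delays(send_ts, recv_ts):
    """Per-packet buffering delay: (receiver timestamp - sender timestamp),
    minus the minimum such difference, which approximates the fixed
    transmission and processing delay. Assumes equal sender/receiver clock rates."""
    diffs = [r - s for s, r in zip(send_ts, recv_ts)]
    base = min(diffs)                      # estimate of transmission + processing delay
    return [d - base for d in diffs]
```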
5.1. Experiments over PPP link

In the first set of experiments, packets are streamed from lemond to anquetil over the 28.8 kb/s PPP link
with an MTU of 1500 B.
`
`
`
`
[Two plots of smoothed packet rate (pkt/s) versus session time (ms): one shows a long session and two short sessions; the other shows two sessions, including anquetil→yellow.]

Figure 4. Smoothed packet rate of two SCP sessions from bartali to anquetil over PPP link

Figure 5. Smoothed packet rate of two SCP sessions through a single router
`
First, single TCP and SCP sessions are played. For the SCP sessions, the receiving data rate is about
26.9 kb/s (about 2.24 pkt/s), which is close to the 28.8 kb/s PPP link speed. Figure 3(a) shows the packet
rate of an SCP session and a TCP session over time. It shows the raw packet rate (inverse of the packet
interval) versus the session time (the time relative to the beginning of the session). Figure 3(b) shows the
buffering delay of these two sessions. After a startup phase, the SCP session yields a smooth packet rate, and
utilizes the bandwidth sufficiently. The TCP session has a similar data rate, except for the periodic downward
spikes which are caused by packet loss (by the PPP server). However, SCP and TCP sessions maintain
different levels of buffering inside the network. The SCP session has a steady-state buffering delay of about
1.2 seconds. In contrast, the TCP session pushes the buffering delay up to 16 seconds and keeps it there. As
a matter of fact, different TCP implementations have very different behaviors over PPP. SunOS TCP has a
steady-state behavior similar to that of HP-UX TCP, but a much worse start-up phase. Linux TCP pushes
the delay to 7 seconds and keeps it there. Long network buffering delay for TCP over PPP is extremely
detrimental to interactive streaming traffic. In an interactive streaming video presentation, each time the
user changes the playback speed, it is simply not tolerable to see no effect until after more than 16 seconds.
Careful readers may have noticed that 16 seconds is roughly the time for transferring 64 KB (the sender
buffer size) of data over the 28.8 kb/s PPP link. But this long delay is actually caused by a combination of
the PPP link's low bandwidth and the relatively large per-link buffer in the PPP server. In our experiments,
when UDP packets are sent from lemond or other hosts to anquetil over the PPP link, more than 15 seconds
of data will be buffered in the PPP server before packets are dropped. A TCP connection over the PPP link
will have its congestion window increased until either the sender or receiver socket buffer is full, or packets
are dropped when the buffer in the PPP server overflows. To see that the long buffering delay is not caused
by the big sender buffer itself, we also measured several TCP sessions in the reverse direction, from anquetil
to lemond, and got buffering delays of about 2.5 seconds. It seems that Linux has incorporated some
techniques to reduce the latency for TCP over PPP to 7 seconds. Of course, buffering delay can be reduced
by reducing the TCP sender socket buffer size. But a small sender socket buffer size is not always desirable:
it limits the size of the congestion window, and will not be able to exploit the bandwidth of long and fat
network connections. There is no single socket buffer size which fits all types of connections.
Figure 4 shows the smoothed packet rate‡ of three SCP sessions, when two of them are played simultaneously
from bartali to anquetil over the PPP link. It can be seen that the two competing sessions eventually
reach a stable share of the PPP bandwidth. This experiment also shows the impact of residual buffering error
in the base RTT estimation on bandwidth sharing. The base RTT estimate of the first half of the long
session has a residual buffering error caused by the first short session, thus it gets a larger share of the
bandwidth. This result supports the observation in the bandwidth sharing analysis (Section 3). Later, after
the first short session ends and the second short session starts, the latter gets a residual buffering error caused
by the long session, and also gets a bigger portion of the PPP bandwidth. Further experiments show that if
the start times of two competing SCP sessions are close enough, they will split the bandwidth evenly.

‡Smoothing is achieved by applying a lowpass filter to the raw packet rate measurements.
`
`
`
`
Experiment configuration    | Packet rate (pkt/s) | Buffering delay (ms)
Single SCP: anquetil→yellow | 380                 | 0~30, around 5
Single TCP: anquetil→yellow | 320                 | 5~80, around 40
SCP: anquetil→yellow