The Loss-Delay Based Adjustment Algorithm: A TCP-Friendly Adaptation Scheme

Dorgham Sisalem
GMD-Fokus, Berlin
sisalem@fokus.gmd.de

Henning Schulzrinne
Columbia University, New York
schulzrinne@cs.columbia.edu

Abstract
Many distributed multimedia applications have the ability to adapt to fluctuations in the network conditions. By adjusting temporal and spatial quality to the available bandwidth, or manipulating the playout time of continuous media in response to variations in delay, multimedia flows can keep an acceptable QoS level at the end systems. In this paper, we present a new scheme called the loss-delay based adjustment algorithm (LDA) for adapting the transmission rate of multimedia applications to the congestion level of the network. The LDA algorithm was designed to reduce losses and improve utilization in a TCP-friendly way that avoids starving competing TCP connections. It relies on the end-to-end Real Time Transport Protocol (RTP) for feedback information. In addition, we enhanced RTP with functionalities for determining the bottleneck bandwidth of a connection. The bottleneck bandwidth is then used for dynamically determining the adaptation parameters. Simulations and measurements of the LDA algorithm suggest that the scheme is efficient in utilizing network resources, decreases loss ratios and is fair towards competing TCP traffic. Our investigations also suggest that the report intervals and feedback information of RTP require some enhancements to accommodate the needs of adaptive QoS control schemes.
1 Introduction
A large part of the multimedia conferencing applications currently used in the Internet, such as VIC [17] and VAT [14], are based on the UDP transport protocol. However, UDP offers no quality of service control mechanisms and can therefore not guarantee any level of QoS. Fluctuations of network conditions combined with the inability of those protocols to support QoS control often render multimedia applications useless.
This work was funded in part by the BMBF (German Ministry of Education and Research) and the DFN (German Research Network).
In addition, deploying non-congestion-controlled UDP traffic results in extreme unfairness towards competing TCP traffic. Sending best-effort traffic without any consideration of the network congestion state can easily result in packet losses. In response to that, TCP connections sharing the same bottleneck would reduce their transmission rates. However, without any rate reduction on behalf of the non-congestion-controlled traffic, the TCP connections, which constitute around 95% of the Internet traffic today [25], would starve and receive much less than their fair bandwidth shares. Therefore, UDP sources need to be enhanced with quality of service control mechanisms that not only aim at reducing loss ratios and improving bandwidth utilization but are also fair towards competing TCP connections, i.e., are TCP-friendly.

Currently, different approaches are being discussed for solving the QoS and congestion control problems, such as resource reservation [4], priority mechanisms [1] and application control [7], i.e., instructing the applications at the end systems to adapt the bandwidth share they are utilizing to the network congestion state.
Reserving enough resources in advance for supporting a certain QoS level guarantees this quality level and would be the most straightforward approach for handling the problem. However, as it is usually impossible to know the exact characteristics of a stream in advance, one would tend to over-allocate resources to guarantee the requested QoS level, leading to network underutilization. In addition to the complexity and scalability problems of reservation-based QoS control schemes, these schemes do not easily allow the use of extra bandwidth for improved quality during network underload states, for example.
With priority mechanisms, different data packets or streams are labeled with different priorities and thereby treated differently at the network routers. Such an approach is simpler than the reservation approach as it requires no signaling and less complicated control mechanisms at the routers. However, the exact mechanisms for setting the priority levels, the router mechanisms for controlling these levels and the actual gain achieved with such an approach are
still under discussion [1].
In this paper, we propose to use QoS control mechanisms based on adapting the sender transmission rate in accordance with the network congestion state. That is, based on feedback information from the receivers, the sender would increase its transmission rate during underload situations and reduce it otherwise. This is especially beneficial when the bandwidth availability may change during a session, particularly during the long-lived sessions typical for multimedia communications. Note that with such an approach no QoS guarantees can be made, but the user-perceived quality is improved through loss reduction.

Ergonomic studies and the experience gained from the Internet demonstrate that people can use audio and video data as long as the information content is above a minimum level [28]. This level depends on the media content and the task at hand. For instance, a foreign language is more difficult to understand than a native language when audio quality is reduced. So, at the expense of a slight degradation in user satisfaction, it is possible to adjust the transmission rates of end systems in accordance with the network congestion state. Hence, deploying application control results in a better overall bandwidth utilization, as it avoids network congestion. A major advantage of application control schemes is that they require no support from intermediate routers and are, thus, easy to deploy.
Application control works best for live transmission to unicast and small multicast groups; in large multicast groups in a heterogeneous environment, a "race to the bottom" can occur, so that one poorly connected receiver determines the quality for the much larger number of well-connected receivers. To avoid these problems, various proposals have been made for hierarchical data distribution [27, 18, 29]. Those proposals are based on partitioning a data stream into a base layer, comprising the information needed to achieve the lowest quality representation, and a number of enhancement layers. The different layers are then sent to different multicast sessions and the receivers determine how many sessions to join, thereby adjusting their QoS with respect to their own requirements and capacities.

While layered data transmission solves the heterogeneity problems, it might cause additional delays at the receivers. As the different layers might use different paths and hence have different round trip times, the receivers need to resynchronize the arriving data. Additionally, data partitioning might lead to a drift problem caused by motion-compensated coding if only the lower bit rate layer is received [11]. Finally, it increases the complexity of the end systems considerably.
In this paper, we propose a new end-to-end rate adaptation scheme called the loss-delay based adjustment algorithm (LDA) for adjusting the transmission rate of multimedia applications to the congestion level of the network. The LDA algorithm resembles other adaptation approaches proposed in the literature [3, 5] that increase the transmission rate during network underload periods by an additive increase rate (AIR) and reduce the rate multiplicatively during congestion periods. However, these schemes show a rather aggressive adaptation behavior, often leading to the starvation of competing TCP traffic [23]. The LDA algorithm is, on the contrary, designed in a TCP-like fashion that prevents the starvation of TCP connections but still allows for a stable transmission behavior.

Another issue that none of the adaptation schemes we encountered addresses is how to choose AIR. The appropriate value of AIR depends on the bandwidth share of a connection. As this share might change dynamically during the lifetime of a connection, using a constant AIR value is rather inappropriate. The LDA algorithm adjusts its adaptation parameters dynamically in response to the losses, delays and capacity observed on the path traversed by the adaptive connection. The capacity of the network is estimated by enhancing RTP with the packet pair approach presented in [13].
In Sec. 2 of this paper we present some related work on the issue of TCP-friendly adaptation. The loss-delay based adjustment algorithm (LDA) is then presented in Sec. 3. Finally, the performance of the algorithm in terms of bandwidth utilization, scalability and fairness towards competing TCP connections is investigated using simulations and measurements in Sec. 4.
2 TCP-Friendly Adaptation Algorithms
As deploying non-congestion-controlled UDP traffic in the Internet might result in starving competing TCP connections [10], different approaches have been suggested that aim at adjusting the transmission behavior of UDP senders in a TCP-like fashion. In the following we present a brief summary of some of those approaches:
• Jacobs [12] presents a scheme that uses the congestion control mechanisms of TCP, however, without retransmitting lost packets. In his scheme, the sender maintains a transmission window that is advanced based on the acknowledgments of the receiver. Based on the size of the window the sender can estimate the appropriate transmission rate. Thereby, the adaptive connection is guaranteed to acquire the same bandwidth a TCP connection would get under the same conditions of losses and delays. However, the need to acknowledge each packet limits the benefits of such an approach to unicast communication. Also, the acknowledgment packets increase the overall network traffic significantly, diminishing the benefit of adaptation, particularly for short voice packets.
• Floyd et al. [10] and Ott et al. [19] propose a model that estimates the throughput of a TCP connection (r_TCP) under known delay and loss conditions as

  r_TCP = 1.22 · M / (RTT · √l)    (1)

with M as the maximum packet length, RTT as the round trip delay of the connection and l as the average loss measured during the lifetime of the connection.

Based on this estimation, Mahdavi et al. [15] and Turletti et al. [26] propose end-to-end, TCP-friendly congestion control schemes in which the end systems measure losses and delays in the network and restrict their transmission rates to the value estimated by Eq. 1; a small sketch of this rate computation is given at the end of this section. Floyd et al. [10] describe a mechanism in which routers identify connections that use more bandwidth than allowed by Eq. 1 and then throttle these connections.
`
`The TCP-throughput model is based on a TCP sender
`interpreting packet drops as an indication of conges-
`tion and responding by reducing the congestion win-
`dow and thereby the effective sending rate by half. Fur-
`ther, this sender should only increase its transmission
`window by at most one packet each round trip time.
`
`While this model is rather simple, it is only partially
`correct. It does not consider timeout cases or delayed
`acknowledgments. However, even under those restric-
`tions different studies [10, 16] have shown this model
`to be accurate enough for losses of less than 16%.
`
Note, however, that the TCP-throughput model is based on the average loss and delay observed during the lifetime of a connection, whereas adaptation decisions need to be taken based on the current loss and delay values. As the observed losses and round trip delays change dynamically, using the TCP-throughput model as the sole basis for adaptation results in a rather oscillatory adaptation behavior, which might lead to an annoying perceived QoS due to the rapid changes in the quality of the received data stream. In addition, the TCP model does not state how the rate should be increased during periods without losses. When we tested a simple adaptation mechanism that sets the bandwidth to the rate estimated by the TCP model, using the loss and delay values reported in the RTCP packets, and increases the rate additively during no-loss periods, the scheme showed a very oscillatory behavior and, on average, a rather low throughput [23]. Also, the authors in [19] state that if the round trip time is caused by queuing delays, or if the bottleneck router is shared among competing connections, this model is only of limited value.
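As an illustration, the following minimal sketch computes the rate bound of Eq. 1; the function name and unit conventions are our own, and it is meant as an example rather than as the mechanism of any of the schemes above:

```python
from math import sqrt

def tcp_friendly_rate(packet_size_bytes, rtt_s, loss_fraction):
    """Upper bound on the sending rate (bits/s) suggested by Eq. 1.

    packet_size_bytes: maximum packet length M in bytes
    rtt_s:             round trip time in seconds
    loss_fraction:     average loss l, e.g. 0.01 for 1%
    """
    if loss_fraction <= 0:
        return float("inf")  # Eq. 1 is only defined for non-zero losses
    m_bits = packet_size_bytes * 8
    return 1.22 * m_bits / (rtt_s * sqrt(loss_fraction))

# Example: 1 kbyte packets, 100 ms RTT, 1% loss -> roughly 1 Mb/s
print(tcp_friendly_rate(1000, 0.1, 0.01))
```

A sender restricting itself to this value reacts to every change in the reported loss and delay, which is exactly the source of the oscillatory behavior discussed above.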
3 The Loss-Delay Based Adjustment Algorithm (LDA)
The LDA algorithm is a sender-based adaptation scheme. It relies on the Real Time Transport Protocol (RTP) [20] for feedback information about the losses at the receivers and the round trip time. We additionally enhanced RTP to estimate the bottleneck bandwidth of a connection. Based on this information, the sender calculates, for each received feedback message from a receiver (i), the appropriate bandwidth to use for the reporting receiver (r_i). During periods without losses the sender increases r_i by an additive increase rate (AIR_i), which is estimated using the loss, delay and bottleneck bandwidth values included in the feedback reports. The actual sending rate is then determined at equally spaced adaptation points as the minimum of the r_i values.

We divide the algorithm description into three parts, handling the feedback information upon which the adaptation decisions are taken, the mechanisms for setting the adaptation parameters, and the actual adaptation mechanisms.
3.1 Feedback Information
The loss-delay based adjustment algorithm is based upon the Real Time Transport Protocol (RTP) [20] designed within the Internet Engineering Task Force (IETF). RTP enhances other transport protocols such as UDP with features needed by continuous media applications, such as the capability of distinguishing between different media streams and keeping track of various statistics describing the quality of the transmission as seen by other members of the session.

RTP sessions consist of two lower-layer data streams, namely a data stream for, say, audio or video and a stream of control packets (using the sub-protocol called RTCP). Each session member periodically sends RTCP control reports to all other session members. Senders transmit reports describing the amount of data they have sent and a timestamp indicating the time the report was generated. For each incoming stream the receivers send a report indicating the fraction of lost data, the timestamp of the last received sender report (t_LSR) for that stream and the time elapsed between receiving the last sender report and sending the receiver report (t_DLSR). Knowing the arrival time t of the RTCP packet, the sender can calculate the round trip time (RTT) as follows:
  RTT = t − t_DLSR − t_LSR    (2)

This calculation requires no synchronization between the clocks of the sender and receiver and is therefore rather accurate. With the LDA algorithm, we estimate the round trip propagation delay (τ) as the smallest measured RTT.
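For concreteness, a small sketch of the RTT computation of Eq. 2 and of the τ estimate; the handling of real RTCP timestamp formats is omitted and the field names are ours:

```python
def rtt_from_receiver_report(t_arrival, t_lsr, t_dlsr):
    """Round trip time per Eq. 2.

    t_arrival: local time at which the receiver report reached the sender
    t_lsr:     timestamp of the last sender report, echoed back by the receiver
    t_dlsr:    delay between receiving that sender report and sending the RR
    All three values are expressed on the sender's own time scale.
    """
    return t_arrival - t_dlsr - t_lsr

# The round trip propagation delay tau is tracked as the minimum RTT seen so far.
tau = float("inf")
for rtt in (0.12, 0.10, 0.11):      # example measurements
    tau = min(tau, rtt)
```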
In addition, we enhanced RTP with the ability to estimate the bottleneck bandwidth of a connection based on the packet pair approach described by Bolot [2]. The essential idea behind this approach is: if two packets can be caused to travel together such that they are queued as a pair at the bottleneck, with no packets intervening between them, then the inter-packet spacing will be proportional to the time required for the bottleneck router to process the second packet of the pair.

We added to the RTCP packets an application-defined part including the source sequence number, the sequence number (SEQ) of a data packet that will start a stream of probe packets and the number (n) of probe packets that will be sent. Then, n data packets starting with the packet numbered SEQ are sent at the access speed of the end system. At the receiver side, the bottleneck bandwidth b is calculated as:

  b = probe packet size / gap between two probe packets

Note that we are using data packets as probe packets and therefore the additional bandwidth required for the bottleneck probing is restricted to the increased size of the RTCP packets. Sending data packets as probe packets is appropriate when transmitting video streams, as one can send a few packets belonging to the same video frame together without considerably altering the traffic characteristics. Also, video packets tend to be large, which increases their probability of being buffered at the routers. For the case of audio streams with small packets and stringent timing requirements another solution might need to be considered.

To estimate the average bottleneck bandwidth we need to further filter out incorrect estimates. We rely on an approach similar to that used in the BPROBE tool [6], namely clustering similar estimates into intervals and choosing the average of the interval with the highest number of estimates. The estimated value is then sent back to the sender with the next receiver report.
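A rough sketch of this receiver-side estimation, combining the packet-pair measurement with a simple clustering filter in the spirit of BPROBE; the bin width and the logarithmic binning are our own assumptions:

```python
import math
from collections import defaultdict

def packet_pair_estimates(arrival_times, packet_size_bits):
    """One bottleneck-bandwidth estimate (bits/s) per gap between consecutive probes."""
    return [packet_size_bits / (t2 - t1)
            for t1, t2 in zip(arrival_times, arrival_times[1:]) if t2 > t1]

def filter_estimates(estimates, rel_bin_width=0.1):
    """Cluster similar estimates into intervals and return the average of the
    most populated interval (a BPROBE-like majority filter)."""
    bins = defaultdict(list)
    for e in estimates:
        # logarithmic binning: estimates within ~rel_bin_width of each other share a bin
        key = round(math.log(e) / math.log1p(rel_bin_width))
        bins[key].append(e)
    best = max(bins.values(), key=len)
    return sum(best) / len(best)

# Example: 1000-byte probes arriving mostly 8 ms apart suggest a ~1 Mb/s bottleneck
print(filter_estimates(packet_pair_estimates([0.0, 0.008, 0.016, 0.030], 8000)))
```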
Obviously, this approach has several drawbacks. We do not include the time between the transmission of two probe packets; but, as we send the packets at the sender's network access speed, which in our case was 10 Mb/s, we can usually ignore this time. Also, we do not consider packet drops or competing traffic. However, testing this approach on different Internet connections we achieved results comparable to those estimated by a more complicated tool such as the PATHCHAR tool by LBL, a tool for estimating the loss, delay and bandwidth characteristics of an Internet path, freely available from ftp://ftp.ee.lbl.gov/pathchar/. Still, we need to further refine the estimate filtering method and do more measurements.

3.2 Dynamical Determination of the Additive Increase Rate (AIR)

The appropriate value to use for AIR depends largely on the bandwidth share a connection could utilize. For example, consider the case of a 1 Mb/s link shared between 2 connections. An ideal sharing would be if each connection received 500 kb/s. For this case, measurements [5] have shown that an appropriate AIR value that allows short convergence periods and only small oscillations would be 50 kb/s. This value would, however, be inappropriate if the link was being shared among 100 connections, for example.

With the LDA algorithm the additive increase rate (AIR) is set dynamically based on the congestion state of the network. That is, AIR is set initially to a small value and is then increased during periods without losses. If losses are reported by the receivers, the sender sets AIR back to the initial value. We have chosen an initial value of 10 kb/s, which is small enough to be used even on narrowband ISDN connections.

If the received RTCP messages from receiver (i) indicate no losses, the sender calculates an AIR_i for this receiver as follows:

  AIR_i = AIR + AIR · B_f    with B_f = 1 − r/b

with AIR as the additive increase value calculated for the entire session, r the current transmission rate, b the bottleneck bandwidth and B_f a bandwidth factor that allows connections with a low bandwidth share to use larger AIR values and thereby converge faster to their fair bandwidth share. The difference between AIR and AIR_i will be explained in Sec. 3.3.

Further, AIR_i is limited to at most the average rate increase (r_incr) a TCP connection would achieve during periods without losses in a time interval equivalent to the interval between two adaptation points (T). For a round trip propagation delay of τ and a packet size of M, a TCP connection would increase its transmission window by one packet size each τ. Translated into a transmission rate increase, r_incr would be

  r_incr = (M · T) / τ²
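A sketch of this per-receiver increase computation; the variable names are ours, and the closed forms used for B_f and the TCP-like cap follow our reading of the definitions above, so treat them as an illustration rather than a definitive implementation:

```python
def additive_increase(air, r, b, packet_bits, tau, T):
    """Per-receiver additive increase rate AIR_i (bits/s).

    air:         session-wide additive increase value AIR
    r:           current transmission rate (bits/s)
    b:           estimated bottleneck bandwidth (bits/s)
    packet_bits: packet size M in bits
    tau:         round trip propagation delay in seconds
    T:           interval between two adaptation points in seconds
    """
    bf = 1.0 - r / b                        # bandwidth factor: small share -> larger increase
    air_i = air + air * bf                  # AIR_i = AIR * (1 + B_f)
    r_incr = packet_bits * T / (tau ** 2)   # assumed cap: one packet per RTT, accumulated over T
    return min(air_i, r_incr)
```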
3.3 Congestion Control Mechanism

With each received RTCP packet from a member i, the sender calculates a transmission rate r_i it would use if member i were the only member of the multicast session. r_i is then saved in a database. If no losses were indicated, the sender recalculates AIR_i as described in Sec. 3.2 and r_i is then set to

  r_i = r + AIR_i

with r as the transmission rate the sender is using for the entire session.
Otherwise, the sender reduces r_i in a rather TCP-like fashion, i.e., proportionally to the indicated losses (l):

  r_i = r · (1 − l · R_f)

with R_f as the reduction factor. This factor determines how strongly the sender reacts to losses. Higher values result in a faster reduction of the transmission rate but a more oscillatory behavior. Lower values, on the other hand, lead to more stable rate values but result in longer convergence periods. The different simulations we ran suggest that we get the best tradeoff between convergence time and stability for a reduction factor between 2 and 5. Throughout this study we use a factor of 3.

Additionally, to favor connections with a low bandwidth share, we define a lower loss threshold (l_th) below which a connection with a lower transmission rate than an equivalent TCP connection can still increase its transmission rate. An equivalent TCP connection indicates here a TCP connection with the same packet size, the same round trip delay as calculated from the timing information of the RTCP packets, and the same losses. In all of our simulations we set l_th to 1%.
Instead of reacting to each RTCP packet, as some adaptation schemes suggest [5], the LDA algorithm is based on so-called adaptation points. At each adaptation point, the sender searches the transmission rate database for the minimum value (r_min) and sets r to r_min. An r_min smaller than the current transmission rate r indicates that at least one member has reported losses. In this case, AIR is reinitialized to the initial value of 10 kb/s. If r_min is higher than the current transmission rate (r), then AIR is set to the AIR_i calculated for the member for which r_min was determined.
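The following is a compact sketch of the per-report and per-adaptation-point logic described in this subsection; the constant names, the helper structure and the way the TCP-equivalent rate is passed in are our own choices, not part of the protocol:

```python
AIR_INIT = 10_000   # initial additive increase rate (bits/s), i.e. 10 kb/s
R_F = 3             # reduction factor used throughout this study
L_TH = 0.01         # lower loss threshold (1%)

def rate_for_receiver(r, loss, air_i, tcp_equivalent_rate):
    """Rate r_i the sender would use if this receiver were the only session member.

    r:    current session transmission rate
    loss: loss fraction reported in the RTCP receiver report
    air_i: additive increase rate computed for this receiver
    tcp_equivalent_rate: Eq. 1 estimate for an equivalent TCP connection
    """
    if loss == 0:
        return r + air_i
    if loss < L_TH and r < tcp_equivalent_rate:
        return r + air_i                 # favor connections below their fair share
    return r * (1 - loss * R_F)          # TCP-like multiplicative reduction

def adaptation_point(r, rates, air_per_receiver):
    """Executed every 5 seconds: adopt the minimum stored r_i and update AIR."""
    member, r_min = min(rates.items(), key=lambda kv: kv[1])
    if r_min < r:
        air = AIR_INIT                   # at least one member reported losses
    else:
        air = air_per_receiver[member]   # AIR_i of the member that produced r_min
    return r_min, air
```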
We have used a period of 5 seconds between two adaptation points, which is also the average interval between generating two RTCP packets at the same source. As the time between two RTCP packets might actually be larger than 5 seconds, choosing this fixed value has the disadvantage that the adaptation decision might be taken based on the reports of only a subset of the session members and not all of them. The alternative would be to wait for the reports from all members before adapting. While this might work for small groups, for larger groups the interval between two sent RTCP packets increases, and thereby the adaptation decision would be taken less frequently and, thus, be less effective.
4 Performance of the Loss-Delay Based Adjustment Algorithm
When designing an adaptive control scheme, the following goals need to be considered:

• the scheme should result in a low packet loss ratio,
• achieve a high overall bandwidth utilization,
• fairly distribute bandwidth between competing connections,
• and scale to large multicast groups.

In this section, we investigate the performance of the LDA algorithm with regard to those different requirements using both simulations and measurements on a real network.
4.1 Fairness and Convergence of the LDA Algorithm
[Figure 1. LDA performance testing topology: senders 1–4 connected through two routers and a bottleneck link of bandwidth b to receivers 1–4.]
As a first performance test of the LDA algorithm, we simulated the topology depicted in Fig. 1. The model consists of 4 connections sharing a bottleneck router. All connections deploy the LDA algorithm and are persistent sources, that is, they always have data to send at the maximum allowed rate. They all have the same round trip propagation delay (τ) and are similar in their requirements and characteristics. The router is a random early detection (RED) gateway as proposed by Floyd and Jacobson [9]. A RED gateway detects incipient congestion by computing the average queue size. When the average queue size exceeds a preset minimum threshold, the router drops each incoming packet with some probability. Exceeding a second, maximum threshold leads to dropping all arriving packets. This approach not only keeps the average queue length low but also ensures fairness and avoids synchronization effects. Based on results achieved in [10], the minimum drop threshold was set to 0.5 of the router's buffer and the maximum one to 0.95 of the router's buffer. In our simulations we set the maximum queuing delay to 0.1 seconds and the bandwidth (b) of the bottleneck router to 1 and 10 Mb/s. The packet size is set to 1 kbyte, which is a typical video packet size. In the first set of simulations, we looked at the utilization, losses and fairness of the LDA algorithm with all connections having the same round trip time and different starting times.
[Figure 2. Rate of competing LDA connections with round trip propagation delay (τ) and link bandwidth (b); panels (a) τ = 1 ms and (b) τ = 100 ms.]

[Figure 3. Rate of competing LDA connections with round trip propagation delay (τ) and link bandwidth (b); panels (a) τ = 1 ms and (b) τ = 100 ms.]
The bandwidth distribution and loss figures shown in Figs. 2 and 4 reveal that after a convergence period the four connections received equal bandwidth shares with average losses between 2% and 10%. The utilization was around 95% in all measurements. Note that the convergence period for the simulations with a bottleneck rate of 10 Mb/s, see Figs. 2(a) and 2(b), might extend over more than 300 seconds. This, however, depends on the value of the chosen reduction factor (R_f). We have used an R_f of 3. We could have achieved shorter convergence periods using higher values of R_f. This would, however, have led to a more oscillatory adaptation behavior, which is inappropriate for adaptive video as it would result in a lower perceived quality due to the rapidly changing video quality.
4.2 Interaction of TCP and LDA Traffic
The simulations run in Sec. 4.1 showed that the LDA algorithm achieves a rather fair bandwidth distribution among similar connections. In this section, we investigate the fairness of the LDA algorithm towards other adaptive traffic such as TCP. As a simulation topology, we use the topology depicted in Fig. 1 but with one of the senders replaced by a TCP sender. We investigate the bandwidth distribution achieved for the case of a bottleneck rate of 10 Mb/s and different round trip times.

The bandwidth distribution results depicted in Fig. 6 show that for the case of (τ = 1 ms) the TCP connection receives the same bandwidth share as the LDA connections. For (τ = 100 ms) the TCP connection receives nearly 1.4 Mb/s, which is around half of its fair share, which we would expect in this case to be 2.5 Mb/s. Based on Eq. 1, a TCP connection with a packet size of 1 kbyte and a round trip time of 0.1 seconds would need to maintain an average loss rate of less than 0.16% to reach a bandwidth share of 2.5 Mb/s. Actually, as we should also consider the queuing delay, the required loss value is even smaller. However, the RTCP packets only include a field of 8 bits for the loss value, so the minimum non-zero loss value that can be reported is around 0.4%. With such a loss value a TCP connection would, for the delay and packet size at hand, utilize a bandwidth share of around 1.6 Mb/s, which is only slightly more than the value depicted in Fig. 6.
[Figure 4. Loss of competing LDA connections with round trip propagation delay (τ) and link bandwidth b; panels (a) τ = 1 ms and (b) τ = 100 ms.]

[Figure 5. Loss of competing LDA connections with round trip propagation delay (τ) and link bandwidth b; panels (a) τ = 1 ms and (b) τ = 100 ms.]
To achieve an optimal fair bandwidth distribution, with the TCP connection getting exactly the same share as the LDA connections, we would need to enhance RTCP to carry finer-grained loss values. Another approach would be to use the cumulative number of lost packets (l_cum) included in the RTCP packets instead of the loss fraction. This value indicates the number of packets lost since the beginning of reception. While this leads to more accurate loss indications, it also increases the complexity of the scheme, as the sender needs to maintain more state information to actually benefit from l_cum.
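These numbers can be checked directly against Eq. 1; the short computation below assumes the 1.22 constant of Eq. 1 and 1 kbyte = 8000 bits:

```python
from math import sqrt

M_BITS = 8000          # 1 kbyte packets
RTT = 0.1              # 100 ms round trip time

def eq1_rate(loss):
    """TCP-friendly rate (bits/s) from Eq. 1 for the delay and packet size at hand."""
    return 1.22 * M_BITS / (RTT * sqrt(loss))

print(eq1_rate(0.0016))   # ~2.44 Mb/s: losses must stay below ~0.16% for a 2.5 Mb/s share
print(eq1_rate(1 / 256))  # ~1.56 Mb/s: smallest non-zero loss reportable in the 8-bit field
```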
4.3 Scalability of the LDA Algorithm
To investigate the behavior of the LDA algorithm when used in multicast groups of different sizes, we tested a simple case of a sender transmitting data to n receivers, with n set to 1, 4, 64 and 320. As Fig. 7(a) shows, the bottleneck router is shared between the sender and an on-off connection introducing bursty background traffic, see Fig. 7(b).
From Fig. 7(c) we can notice that with a session of 64 or 320 members the reactions to the sudden loss peaks are larger than those for smaller groups. The reason for this is related to the RTCP scaling algorithm. To avoid an increase in control traffic with increasing group size, the RTCP traffic is restricted to 5% of the bandwidth utilized by the session. With 320 members in the session, the interval between two successive RTCP messages is larger than the adaptation interval of 5 seconds that we are using. As the losses reported in an RTCP message are calculated over the entire time between the sending of two RTCP packets, the effects of a loss peak will be noticed over several adaptation points even though the congestion has actually cleared. As an example, consider the case depicted in Fig. 8. Between the first two adaptation points we measure a loss rate of 10% over a time period of, say, 2 seconds. Assume that the RTCP intervals of two members are 10 seconds long and overlap, with one RTCP message arriving before the second adaptation point and the RTCP message of the other member arriving only before the third adaptation point. As the loss is averaged over the entire 10 seconds, the first member would report a loss of 2%.
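Following this example, a small arithmetic sketch (using the same hypothetical numbers) of how a short loss burst is diluted by the RTCP reporting interval and therefore lingers across several adaptation points:

```python
BURST_LOSS = 0.10      # 10% loss measured during the burst
BURST_LEN = 2          # seconds of actual congestion
RTCP_INTERVAL = 10     # seconds between two RTCP reports of a member
ADAPT_INTERVAL = 5     # seconds between two adaptation points

# Loss fraction a member reports: the burst averaged over its whole RTCP interval.
reported_loss = BURST_LOSS * BURST_LEN / RTCP_INTERVAL
print(reported_loss)   # 0.02 -> a 2% loss report

# Because the members' RTCP intervals overlap, such diluted reports can arrive before
# the second and the third adaptation point, so the (long gone) burst still triggers
# rate reductions at several consecutive adaptation points.
```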
[Figure 7. Testing the scalability of the LDA algorithm: (a) multicast test topology, with an adaptive sender and a bursty sender sharing a 1 Mb/s, 10 ms bottleneck link towards receivers 1 to n; (b) background traffic; (c) sender transmission rate for session sizes of 1, 4, 64 and 320.]
As TCP's throughput is inversely proportional to the round trip time, we would expect TCP to be unfair under these conditions; that is, two similar connections sharing the same bottleneck but having different round trip delays would not receive the same bandwidth share. We use the same topology depicted in Fig. 1, but with all senders starting to send data at the same time and with the connections from senders 3 and 4 to receivers 3 and 4 having a round trip time 10 times as large as that of the other two connections. The bandwidth distribution results depicted in Fig. 9 suggest that
[Figure 6. Bandwidth of LDA connections competing with a TCP connection, with round trip propagation delay (τ) and link bandwidth b; panels (a) τ = 1 ms, b = 10 Mb/s and (b) τ = 100 ms, b = 10 Mb/s.]