`
`Biswaroop Mukherjee and Tim Brecht
`Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1.
`bmukherj@shoshin.uwaterloo.ca,brecht@cs.uwaterloo.ca
`
`Abstract
`
`This paper introduces Time-lined TCP (TLTCP). TLTCP
`is a protocol designed to provide TCP-friendly delivery of
`time-sensitive data to applications that are loss-tolerant,
`such as streaming media players. Previous work on unicast
`delivery of streaming media over the Internet proposes us-
`ing UDP and performs congestion control at the user level
`by regulating the application’s sending rate (attempting to
`mimic the behavior of TCP in order to be TCP-friendly).
`TLTCP, on the other hand, is intended to be implemented at
`the transport level, and is based on TCP with modifications
`to support time-lines. Instead of treating all data as a byte
`stream TLTCP allows the application to associate data with
`deadlines. TLTCP sends data in a similar fashion to TCP
`until the deadline for a section of data has elapsed; at which
`point the now obsolete data is discarded in favor of new
`data. As a result, TLTCP supports TCP-friendly delivery
`of streaming media by retaining much of TCP’s congestion
`control functionality. We describe an API for TLTCP that
`involves augmenting the recvmsg and sendmsg socket
`calls. We also describe how streaming media applications
`that use various encoding schemes like MPEG-1 can as-
`sociate data with deadlines and use TLTCP’s API. We use
`simulations to examine the behavior of TLTCP under a
`wide range of networks and workloads. We find that it
`indeed performs time-lined data delivery and under most
`circumstances bandwidth is shared equally among compet-
`ing TLTCP and TCP flows. Moreover, those scenarios un-
`der which TLTCP appears to be unfriendly are those under
`which TCP flows competing only with other TCP flows do
`not share bandwidth equitably.
`
`1. Introduction
`
`It is widely believed [1] [23] [8] that congestion con-
`trol mechanisms are critical to the stable functioning of the
`Internet. Presently, the vast majority (90-95%) of Internet
`traffic uses the TCP protocol [3] which incorporates conges-
`tion control [11] [27]. However, due to the growing popu-
`larity of streaming media applications and because TCP is
`
`not suitable for the delivery of time-sensitive data, a grow-
`ing number of applications are being implemented using
`UDP [15].
`Since UDP does not implement congestion control, pro-
`tocols or applications that are implemented using UDP
`should detect and react to congestion in the network. Ide-
`ally, they should do so in a fashion that ensures fairness
`when competing with existing Internet traffic (i.e., they
`should be TCP-friendly). Otherwise such applications may
`obtain larger portions of the available bandwidth than TCP-
based applications. Moreover, the widespread use of protocols
that do not implement congestion control or avoidance
mechanisms could result in a congestive collapse of the Internet
[8], similar to the collapse that occurred in October
1986 [11].
`The work described here is motivated by these concerns.
`From the perspective of the application there is a need
`for a protocol that is designed for transporting data with
`deadlines over a network that provides no quality of ser-
`vice (QoS) guarantees. From the perspective of the net-
`work there is a need for a protocol that generates streams
`that compete fairly with the existing traffic and performs
`congestion control using robust mechanisms. To this end
we have created a new protocol, called time-lined TCP
(TLTCP), designed to support the TCP-friendly delivery of
time-sensitive data over the Internet.
`Contributions
`
• We have created a new transport protocol, called time-lined TCP (TLTCP), for delivering time-sensitive data
over the Internet. We have devised a way for TLTCP
to use the robust window-based congestion control of
TCP without requiring that the data be delivered reliably.
As a result, TLTCP competes fairly with TCP
flows (and is TCP-friendly) over a wide range of network
conditions. TLTCP associates each section of
data with a deadline and uses a novel time-lined data
delivery mechanism that uses these deadlines to keep
track of the sections of data that are obsolete and
ensures that no obsolete data is sent.

• TLTCP provides an interface that is more suited to
continuous media applications than a simple end-to-end
byte stream. We propose augmenting the present
socket API to allow a sending application to specify
a deadline when handing a section of data to TLTCP.
The API also allows TLTCP to inform the receiving
application of the gaps in the data being delivered. The
proposed changes do not alter but extend the semantics
of the present socket API.
`
• We have performed extensive simulation experiments
`
`to evaluate TLTCP. The experiments show that TLTCP
`indeed performs data delivery in a time-lined fash-
`ion. Furthermore, using data from our simulations we
`have quantified the effect of TLTCP flows on com-
`peting TCP flows. Our simulation results show that
`TLTCP is indeed TCP-friendly over a wide range of
`network conditions.
`In addition, the circumstances
`where TLTCP seems to be TCP-unfriendly are those
`under which TCP is unable to share bandwidth equi-
`tably.
`
`The remainder of this paper is organized as follows. In Sec-
`tion 2 we describe related work and in Section 3 we ex-
`plain our approach to the problem. Section 4 describes how
`our protocol would be used in conjunction with streaming
`media applications. How the TLTCP protocol operates is
`described in Section 5. We report the results from our sim-
`ulation experiments using the ns-2 network simulator [29]
`in Section 6. This is followed by conclusions in Section 7.
`
`2. Related work
`
`Previous work [4] [25] [18] [23] has examined rate-
`based algorithms for implementing TCP-friendly conges-
`tion control.
`In each case the sender throttles the rate at
`which it injects packets into the network in order to perform
`congestion control. To compete fairly with TCP, the send-
`ing rate is regulated, thus attempting to achieve the same
`throughput as a TCP-stream would if operating under the
`same conditions. These approaches are based on models
`that attempt to characterize TCP congestion control mecha-
`nisms [12] [14] [17]. As data is sent, the application mea-
sures or estimates values for the parameters of the model,
`such as packet loss rates, round-trip times, and timeout val-
`ues. Using these parameters and the model the application
`periodically recomputes the appropriate sending rate. The
`proposed schemes differ primarily in the complexity and ac-
`curacy of the model used.
`RAP [23] employs a relatively simple additive-increase,
`multiplicative-decrease (AIMD) model of TCP’s conges-
`tion control mechanisms and is able to obtain relatively
`TCP-friendly behavior when competing for bandwidth with
`TCP Sack flows [6]. While it is targeted towards future In-
`ternet scenarios in which TCP Sack and RED [9] are widely
`deployed, it is not able to share bandwidth fairly with com-
`mon implementations of TCP [6], TCP Tahoe or TCP Reno
`[22]. A significant advantage of TLTCP is that it is based
`on and is TCP-friendly with TCP Reno implementations,
`which is the most widely used TCP implementation in the
`Internet today [19] [20].
`Sisalem et al. [25] propose a rate based equation that is
`designed to be used with RTP/RTCP. Their scheme dynam-
`ically computes an additive increase rate and also performs
`backoff. Experiments conducted with RED gateways are
`reported and show that their scheme does not share band-
`width equally under situations with low loss rates. We ex-
`pect TLTCP to be more stable and share bandwidth equally
`under conditions with low loss rates.
`Padhye et al. [18] describe and evaluate a rate control
`protocol based on a more detailed model of TCP through-
put [17]. Although they are able to show that their protocol is
`TCP-friendly under a variety of network configurations and
`conditions, the recomputation interval (the time between
`rate adjustments) must be chosen carefully. As can be seen
`from their simulation results the best recomputation inter-
`val may vary across different network conditions, making
`it difficult to use one recomputation strategy under a vari-
ety of circumstances. They also point out that they do not
share bandwidth fairly with TCP streams when bottleneck
link delays are small or large, because these conditions make it
difficult to obtain accurate estimates of loss rates.
`Ramesh et al. [21] describe a number of potential draw-
`backs of model based approaches. In particular, they point
`out that several factors can result in inaccurate packet
`loss estimates in the model developed by Padhye et al.
[17]. These inaccurate estimates can lead to under- or over-allocation
of bandwidth to non-TCP flows. TLTCP is not
model based but ACK-clocked, and thus is not impacted by
these drawbacks.
`Cen et al. describe a streaming control protocol (SCP)
`[4], that uses a congestion window based policy for conges-
tion avoidance. While their approach is similar to TCP,
they do not perform retransmission and are not faithful to
TCP, in order to improve smoothness in streaming. The ex-
`perimental results reported using an implementation of SCP
`on top of UDP show that the packet rates of competing SCP
`and TCP sessions differ significantly under a variety of net-
`work configurations.
`Another scheme reported by Jacobs et al. [10] attempts
`to mimic TCP’s congestion window in user space. The win-
`dow size is used to estimate bandwidth which is then used to
`drive a media pump at the sender that uses UDP to send data
`to the receiver. Attempting to mimic the congestion win-
dow of TCP at the user level is likely to be inaccurate. This
is because the fact that a message has been written to the UDP socket
does not mean that the packet has been released into the network.
A mechanism in user space would have no means
`of knowing if the message or its acknowledgement is wait-
`ing in the kernel buffers or traversing a link. TLTCP does
`not use a media pump to regulate its data sends but instead it
`uses a sliding window protocol like TCP. TLTCP also does
`not use UDP and is meant to be implemented in the kernel
by making changes to the TCP stack. Furthermore, unlike
the schemes proposed in the past, TLTCP uses the time-lined
`nature of continuous media to drive its data sends. Details
`of the scheme proposed by Jacobs et al. [10] are not pro-
`vided and it is unclear how TCP-friendly such an approach
`would be.
`
`3. Proposed approach
`
`Unlike previous work, our approach is not based on mod-
`els of TCP. Instead we propose a new protocol, called time-
`lined TCP (TLTCP), that is intended to be implemented
`at the transport level, and is based on TCP with modifi-
`cations to support time-lines. Time-lines are used for the
`delivery of time-sensitive data to loss-tolerant applications
`such as streaming media players. Such applications are
`time-sensitive because data that arrives after the deadline
`by which it was meant to be played is not useful and will
`simply be ignored. Although using TCP will ensure that an
`application is TCP-friendly, TCP is unsuitable for such data
`transfer because it will potentially send obsolete data that
`would no longer be useful to the receiving application.
`When using TLTCP, in addition to specifying the data
`and its size, an application includes the deadline after which
`the transport protocol should stop trying to send that data.
`TLTCP attempts to send the data until the deadline has ex-
`pired, at which point it is presumed that the data would
`be obsolete by the time it would reach the receiver. Once
`a deadline has expired TLTCP abandons the obsolete data
`in favor of new data that is associated with later deadlines.
`Note that deadlines are defined to be relative to the sender.
For best-effort service, the present scheme could be easily
extended to make the deadlines relative to the receiver
by factoring RTT estimates into the deadlines. TLTCP
`is intended to be implemented in the transport level of the
`kernel. Since TLTCP is ACK-clocked, it is able to mimic
`the behavior of TCP over a wide range of conditions. As
`TCP continues to evolve [11] [27] [2] [6] we believe that
`it would be relatively easy to implement a time-lined ver-
`sion of the protocol. However, we expect that it will be
`relatively difficult to produce accurate models and develop
`TCP-friendly protocols for each future variation of or mod-
`ification to TCP.
`
`4. Applications
`
`The continuous media application that uses TLTCP is
`expected to handle the encoding scheme specific functions,
`while relying on TLTCP to perform congestion control and
`best effort data delivery. The sending application would
`typically calculate a schedule for the transmission of its
`data. Each section of data being sent (e.g., sequence of
`video frames, layers of video, or audio samples) would
`be assigned a deadline that is determined by the schedule
`(which would account for buffering and delay characteris-
`tics of the encoding and decoding schemes). The receiver
`application would begin playback after first receiving and
`buffering some portion of the data. During playback por-
`tions of data are decoded and presented to the user. If the
sender is not able to send all or even portions of a section
`before the deadline associated with the section expires, the
`receiver may be able to continue with a lower quality play-
`back, depending on the application’s ability to tolerate lost
`data.
For example, consider MPEG-1 video [24], which has frames with
varying degrees of importance to the playback application:
I, P and B frames, respectively. Roughly speaking, the I frames can
be displayed independently, while the P frames can only be
displayed if the previous I or P frame has arrived. The
B frames are bidirectionally encoded and cannot be displayed
unless the previous non-bidirectionally encoded (I
or P) frame as well as the next non-bidirectionally encoded
(I or P) frame are delivered. Because of the bidirectional
dependencies, the display order of frames differs from the
order in which they are stored in a file or transported. For
instance, the display order of an MPEG-1 video may be
{I11, P11, B11, P12, I21, P21, B21, I31, ...}, while the order
in which this sequence is stored in an MPEG file will be
{I11, P11, P12, I21, B11, P21, I31, B21, ...}.

TLTCP sections are created from an MPEG-1 file in the
same order as they are stored, but the deadlines are assigned
according to the order of display. The same deadline is assigned
to an I frame, the P frames directly dependent on the
I frame, the P frames that are dependent on the P frames that
depend on the I frame, and so on. The B frames are assigned
the same deadlines as the earlier frames they depend upon,
but they are sent after the frames they depend upon. Thus,
in the example above, the deadline assignments would be as
follows: {{I11, P11, P12 : d1}, {I21 : d2}, {B11 : d1}, {P21 : d2}, {I31 : d3}, {B21 : d2}, ...}.
The sending application
would start by writing the encoded frames to the socket as
described above and TLTCP would try to deliver the sections
in the order they were written. However, if the available
bandwidth is insufficient to deliver all of the section
{I11, P11, P12}, TLTCP may discard P12 at the expiry of d1
and start sending the more important frame, I21, since it is
associated with the later deadline d2. In other words, if the
bandwidth is insufficient TLTCP will discard the less important
data and instead attempt to deliver the more important
data that still has a chance of reaching the receiver in
time for playback. Note that if the available bandwidth decreases
further (due to congestion), the sending application,
upon receiving feedback from the playback application, may
decide to change its transmission schedule and send just the
I and P frames, or even just the I frames, so that the important
frames have more time to get delivered. Reusing the
example, the data sections handed to TLTCP in these two
reduced-bandwidth cases would look like
{{I11, P11, P12 : d1}, {I21 : d2}, {P21 : d2}, {I31 : d3}, ...}
and {{I11 : d1}, {I21 : d2}, {I31 : d3}, ...}, respectively. The
MPEG receiver, on the other hand, will be able to continue
playback, but the quality of playback would worsen as more
frames are skipped.
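As a rough illustration of this deadline-assignment rule, the C sketch below walks frames in storage (transport) order and tags each one with the deadline of its display-order group. The frame and section types, the group numbering, and the group_deadline() schedule are hypothetical and are not part of TLTCP or the paper; for B frames, the group field is assumed to already record the earlier reference frame's group, so they inherit the earlier deadline as described above.

    /* Hypothetical sketch: assign deadlines to MPEG-1 frames handed to TLTCP. */
    #include <stddef.h>

    struct frame {
        int group;          /* display-order deadline group: 1 -> d1, 2 -> d2, ... */
        const char *data;   /* encoded frame bytes                                 */
        size_t len;
    };

    struct section {
        const char *data;
        size_t len;
        double deadline;    /* absolute deadline handed to TLTCP                   */
    };

    /* Hypothetical schedule: group k must be sent by start + k * group_period. */
    static double group_deadline(double start, double group_period, int group)
    {
        return start + group * group_period;
    }

    static void assign_deadlines(const struct frame *frames, size_t n,
                                 double start, double group_period,
                                 struct section *out)
    {
        for (size_t i = 0; i < n; i++) {
            out[i].data     = frames[i].data;
            out[i].len      = frames[i].len;
            out[i].deadline = group_deadline(start, group_period, frames[i].group);
        }
    }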
`
`The API
`
`The API for TLTCP has two main functions. First, the
`sending application needs to be able to specify to TLTCP
`segments of data along with their associated deadlines. Sec-
`ond, the receiving end needs to be able to deliver to the
`client application the received data along with information
`about where gaps are located. We propose augmenting the
`UNIX socket calls of recvmsg and sendmsg [28] for this
`purpose.
`
To see how the API would be used, consider the following
example. The server process first creates a SOCK_STREAM
socket and connects it to the receiver to establish the data
connection. Then the various fields of the msg_header
structure are filled in before calling sendmsg with a
MSG_TL flag used to indicate time-lined data. Pointers for
each of the data sections to be sent by TLTCP are stored
in an array of msg_iov structures. These are made up
of a pointer to the data, iov_base, and the size of the
data, iov_len. The size of the msg_iov array is equal
to the number of sections being written and is stored in the
msg_iovlen field of the msg_header. Deadlines corresponding
to the data sections are provided using an ancillary
data message. The values of the deadlines are stored
in the msg_control field of the msg_header, with the message
type (cmsg_type) specified as TL_DEADLINE. The
length, cmsg_len, is again equal to the number of data sections.
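A rough sender-side sketch of this usage, written in C against the standard msghdr/cmsg interface, is shown below. MSG_TL and TL_DEADLINE are the proposed additions and carry placeholder values here; the choice of IPPROTO_TCP as the cmsg_level, the encoding of deadlines as one 32-bit value per section, and the send_two_sections() wrapper are assumptions of this sketch rather than part of TLTCP.

    /* Sketch of sender-side use of the proposed TLTCP sendmsg() extension. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>

    #define MSG_TL      0x80000  /* placeholder value for the proposed flag      */
    #define TL_DEADLINE 1        /* placeholder value for the proposed cmsg type */

    static int send_two_sections(int sock,
                                 void *sec1, size_t len1, uint32_t d1,
                                 void *sec2, size_t len2, uint32_t d2)
    {
        struct iovec iov[2] = {
            { .iov_base = sec1, .iov_len = len1 },   /* section with deadline d1 */
            { .iov_base = sec2, .iov_len = len2 },   /* section with deadline d2 */
        };
        char cbuf[CMSG_SPACE(2 * sizeof(uint32_t))];
        struct msghdr msg;
        struct cmsghdr *cmsg;
        uint32_t *deadlines;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov        = iov;
        msg.msg_iovlen     = 2;             /* number of sections being written    */
        msg.msg_control    = cbuf;          /* ancillary data carries the deadlines */
        msg.msg_controllen = sizeof(cbuf);

        cmsg             = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = IPPROTO_TCP;     /* assumed level for TLTCP options     */
        cmsg->cmsg_type  = TL_DEADLINE;
        cmsg->cmsg_len   = CMSG_LEN(2 * sizeof(uint32_t));
        deadlines        = (uint32_t *)CMSG_DATA(cmsg);
        deadlines[0]     = d1;
        deadlines[1]     = d2;

        return (int)sendmsg(sock, &msg, MSG_TL);  /* MSG_TL marks time-lined data */
    }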
`
At the receiver end, when recvmsg is called, the
MSG_TL flag indicates that the data received is time-lined.
The receiver can then read the ancillary data pointed to
by msg_control in order to distinguish between the
data and gaps. If a field in the ancillary data contains
TL_DATA, then the corresponding field of the msg_iov
structure points to valid data and the application can store
the pointer in order to retrieve the data later. On the other
hand, if the ancillary data contains TL_GAP, then the application
needs to make a note of the size and location of the gap
and take this into account during playback.
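A matching receiver-side sketch follows; TL_DATA, TL_GAP and MSG_TL again carry placeholder values, the per-entry 32-bit tag layout mirrors the assumption made in the sender sketch, and the caller is assumed to have prepared msg with an iovec array and a control buffer.

    /* Sketch of receiver-side use of the proposed recvmsg() extension. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #define MSG_TL  0x80000   /* placeholder value for the proposed flag */
    #define TL_DATA 0         /* placeholder: entry holds valid data     */
    #define TL_GAP  1         /* placeholder: entry describes a gap      */

    static void read_time_lined(int sock, struct msghdr *msg)
    {
        ssize_t n = recvmsg(sock, msg, MSG_TL);
        struct cmsghdr *cm;
        uint32_t *tags;

        if (n < 0 || (cm = CMSG_FIRSTHDR(msg)) == NULL)
            return;
        tags = (uint32_t *)CMSG_DATA(cm);      /* one tag per msg_iov entry */

        for (size_t i = 0; i < (size_t)msg->msg_iovlen; i++) {
            if (tags[i] == TL_DATA)
                printf("section %zu: %zu bytes of data\n",
                       i, msg->msg_iov[i].iov_len);
            else if (tags[i] == TL_GAP)
                printf("section %zu: gap of %zu bytes\n",
                       i, msg->msg_iov[i].iov_len);
        }
    }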
`
`5. Functioning of TLTCP
`
`As discussed previously, except for the additional mech-
`anisms to support time-lines, the functionality and thus the
`data sending characteristics of TLTCP are similar to TCP.
`The following description of TLTCP is based on TCP-Reno.
`We assume that the reader is familiar with TCP-Reno and
`we use TCP to refer to TCP-Reno.
`
`5.1. The Sender
`
`The TLTCP sender accepts time-sensitive data from the
`application via the TLTCP API. Each section of data is as-
`sociated with a deadline by which it should be sent. The
sender maintains a linked list, called the time-line list, that
stores the deadlines for the time-lined data. Each node in this
list stores the deadline and starting sequence number
for the associated section of data. Note that the data itself
is stored in the kernel buffers as in TCP, and the
lowest_seqno field of the list node points to the first data
byte of a section in the buffer.
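To make the data structure concrete, a node of the time-line list might be represented as the following C struct; the field and type names are illustrative and follow the description above rather than an actual implementation.

    /* Illustrative node of the sender's time-line list: one entry per section
     * of time-lined data that is still eligible for sending. */
    #include <stdint.h>

    struct tl_node {
        uint32_t lowest_seqno;   /* first sequence number of the section's data
                                    in the kernel send buffer                   */
        double   deadline;       /* sender-relative deadline for the section    */
        struct tl_node *next;    /* next (later-deadline) section, or NULL      */
    };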
`The sender performs data sends as a normal TCP sender
`would until the expiry of the lifetime timer which indicates
`that the deadline for the current section of data has expired.
`It then selects the next section of data to be sent from the list
`and sets the lifetime timer to the deadline for this section.
`All of the data up to the lowest sequence number of the new
`section of data is discarded.
`
`5.2. Lifetime Timer
`
`In addition to the TCP timers, TLTCP has a timer called
`the lifetime timer. This new timer keeps track of the dead-
`lines associated with the oldest data in the sending window
`(the minimum of the receiver’s advertised window and the
`congestion window). The lifetime timer counts down in the
`same fashion as the TCP timers. When a lifetime timer
`expires any data associated with that deadline that has not
`already been sent is considered obsolete and is discarded
`from the sending window. In other words, in response to
`a deadline expiry the sending window is moved forward to
`sequence numbers that are not obsolete. TLTCP then at-
`tempts to send the data associated with the next deadline
`and the lifetime timer is set to that deadline. Furthermore,
`upon expiry of the lifetime timer the time-line list is updated
`to contain only entries for the data sections that are not ob-
`solete. Figure 1 shows the sequence of actions that are taken
`after expiry of the lifetime timer. Due to expiry of the dead-
`lines some data sections may not be delivered completely
`leaving gaps in the sequence of bytes that is delivered to the
`receiver.
`Let us consider an example that illustrates how a TLTCP
sender transports continuous media data to a receiver.
`
`
`
if ( Lifetime_tmr has EXPIRED ) {
    /* Drop obsolete sections from the time-line list and the send buffer. */
    rem_expired_data(timeline_list, &buf);
    if (!timeline_list_empty()) {
        /* The next non-obsolete section becomes the current section. */
        cur_node = get_cur_node(timeline_list);
        /* Remember unacknowledged obsolete sequence numbers (Section 5.4). */
        store_unacked_seq();
        /* Advance the send window past the obsolete data. */
        move_window(cur_node.lowest_seq);
        /* Arm the lifetime timer with the new section's deadline. */
        set_lifetime_tmr(cur_node.deadline);
    }
}
`
Figure 1. Pseudo code of the actions taken on
the expiry of the lifetime timer.
`
Suppose that the sender has a send window size of 10 bytes.
For simplicity assume a single-byte payload for all packets.
The sender can then send 10 consecutive packets. Further
assume that an application has specified the deadlines for
sequence numbers 10 to 19 and 20 to 29 as d1 and d2 respectively,
where d2 > d1 (i.e., the deadline for packets
10 to 19 will expire before the deadline for packets 20 to
29). TLTCP sets the lifetime timer to the deadline d1 and
commences sending. Now suppose that when deadline d1
expires only packets 10 to 14 have been sent. At this point
TLTCP will abandon the sending of all the sequences from
10 to 19, and 20 will be the next packet to send. It will also
set the lifetime timer to d2 and continue to keep track of
the unacknowledged packets from the obsolete data. This
is done in order to preserve the semantics of the congestion
window mechanism (for a detailed explanation see Section
5.4).
`
`5.3. The Receiver
`
`Upon expiration of the lifetime timer the sender discards
`all data associated with the current deadline that has not yet
`been sent. However, if the receiver is not informed of this it
`would consider the discarded data to be lost and reject pack-
`ets from the new section because they are beyond its receive
`window. The receiver would continue to acknowledge the
`last received sequence number, which is now obsolete. On
`the other hand, since the sender has already discarded the
`obsolete data it would continue to send the current data and
`a deadlock would result. In order to prevent this deadlock,
`when data is discarded the TLTCP sender explicitly noti-
`fies the receiver of the change in its next expected sequence
`number. The expected sequence number update notifica-
`tions also allow the receiver to keep track of the gaps in
`the stream. Information about where the gaps are located
`(along with the data) will eventually be passed to the appli-
`cation when it attempts to read the data.
`Expected sequence number notifications are included
with every packet by using 32 bits of the available TCP
options. We call this 32-bit field seq_update. The receiver
knows that it needs to skip sequence numbers whenever
it receives a packet containing a seq_update value
that is greater than its next expected sequence number, and it
adjusts its next expected sequence number to the sequence
number contained in the seq_update field.
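In essence, the receiver's handling of the option reduces to a comparison against its next expected sequence number. A minimal sketch of that check is shown below; rcv_nxt and the parsing of the option itself are assumed, and sequence-number wrap-around (which a real implementation must handle) is ignored for clarity.

    /* Sketch: receiver-side handling of the 32-bit seq_update TCP option. */
    #include <stdint.h>

    static void handle_seq_update(uint32_t *rcv_nxt, uint32_t seq_update)
    {
        if (seq_update > *rcv_nxt) {
            /* The sender has discarded obsolete data: skip ahead and remember
             * the gap [*rcv_nxt, seq_update) for the application.            */
            *rcv_nxt = seq_update;
        }
    }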
`
`5.4. ACKs for Obsolete Data
`
The sender needs to keep track of acknowledgments for
obsolete data, in order to ensure that the send window is
correctly sized and is permitted to advance as ACKs arrive
for the obsolete data.

Reconsider the example described in Section 5.2: when
the deadline d1 expires, packets 10 to 14 have already been
sent. At this point TLTCP keeps track of the fact that it
might receive ACKs for packets 10 to 14 and removes packets
10 to 19 from its buffer. The sender then continues by
sending data associated with the next deadline d2. Packets
20, 21, 22, 23 and 24 are sent and the send window is
full. Once the window is full, no more data can be sent until
outstanding ACKs arrive. One way to logically view the
current situation is to imagine the obsolete data occupying
slots in the current send window. Thus the send window
could be thought of as {10, 11, 12, 13, 14, 20, 21, 22, 23,
24}. When ACKs for obsolete data arrive, the sender's window
is moved by the amount of data that is ACKed, thus
allowing new sends. For example, if an ACK is received
for sequence number 12 the window will move ahead by
3 sequence numbers (since ACKs are cumulative) and the
sender may send three new packets: 25, 26 and 27. Thus keeping
track of ACKs for obsolete data is necessary because
these ACKs allow the window to move forward. In the example
above, the logical window moves forward upon the
receipt of the ACK for sequence number 12.
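The cumulative-ACK arithmetic in this example can be spelled out in a few lines; the sketch below is not TLTCP code, just the accounting under the example's assumptions (window {10..14, 20..24}, one byte per packet, and the paper's convention that an ACK for 12 acknowledges 10, 11 and 12).

    /* Worked arithmetic for the window-advance example. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int snd_una = 10;   /* lowest unacknowledged sequence number */
        unsigned int ack     = 12;   /* cumulative ACK received               */
        unsigned int advance = ack - snd_una + 1;
        printf("window advances by %u packets\n", advance);  /* prints 3 */
        return 0;
    }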
`In order to recognize ACKs for obsolete data, TLTCP
`uses a vector to store the highest sequence sent and the last
`ACK received for each obsolete section that has unacknowl-
`edged data. The size of the vector is bounded by the win-
`dow size. As the ACKs for obsolete data arrive the entries
`in the vector are freed and as more unacknowledged data
`becomes obsolete, new entries are added. Note that even
`though TLTCP keeps track of the sequence numbers of the
`unacknowledged data that is obsolete, it sends data from
`new sections instead of retransmitting obsolete data.
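One way to picture this bookkeeping is as a small, window-bounded table of per-section records; the sketch below is illustrative only, with made-up field names and an arbitrary bound.

    /* Illustrative record for an obsolete section with unacknowledged data.
     * An entry is freed once ACKs cover highest_seq_sent, and a new entry is
     * added whenever more unacknowledged data becomes obsolete. */
    #include <stdint.h>

    #define MAX_OBSOLETE 64          /* bounded by the window size */

    struct obsolete_entry {
        uint32_t highest_seq_sent;   /* last sequence number sent for the section */
        uint32_t last_ack_received;  /* highest cumulative ACK seen so far        */
        int      in_use;
    };

    static struct obsolete_entry obsolete[MAX_OBSOLETE];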
`
`5.5. Handling Lost Packets
`
`If a lost packet is detected prior to the deadline expiry
`for that data TLTCP will retransmit the lost packet. Thus,
`TLTCP attempts to reliably deliver data prior to the expiry
`of the deadline associated with the data. On the other hand,
`if the lost packet is obsolete, TLTCP sends the lowest unac-
`knowledged packet that is current. This is similar to the ac-
`
`
`
`tions that would be taken by TCP, except that TLTCP would
`transmit current data rather than retransmit (possibly) obso-
`lete data as in the case of TCP.
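The decision taken on a detected loss can be summarized in a few lines; the sketch below is a simplification with hypothetical helpers (is_obsolete(), retransmit(), send_packet(), lowest_unacked_current()), not the actual sender code.

    #include <stdint.h>

    /* Hypothetical helpers assumed to exist in the sender. */
    int      is_obsolete(uint32_t seq);
    void     retransmit(uint32_t seq);
    void     send_packet(uint32_t seq);
    uint32_t lowest_unacked_current(void);

    /* Sketch of TLTCP's reaction to a detected loss (three duplicate ACKs or
     * a retransmit timeout): retransmit if the data is still current, otherwise
     * send the lowest unacknowledged packet that is not obsolete. */
    static void on_loss_detected(uint32_t lost_seq)
    {
        if (!is_obsolete(lost_seq))
            retransmit(lost_seq);                    /* reliable before the deadline */
        else
            send_packet(lowest_unacked_current());   /* send current data instead    */
    }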
`To clarify how this works reconsider the above example
`but now suppose that the window size is 5. Assume that
`packets 10 to 14 have been sent and then due to a dead-
`line expiry packets 10 to 19 are deemed obsolete. Now
`imagine that packet 10 is lost and this is detected by the
`sender either because of three duplicate ACKs or a retrans-
`mit timeout. The TLTCP sender would then send the next
`unacknowledged packet, in this case 20. This may result in
`behavior that is close to but not identical to TCP. In order to
`further illustrate this scenario we now compare the actions
`that TLTCP would take with those of TCP under the same
`conditions. The scenario is depicted in Figure 2. If this is
`the first time that packet 20 is sent then TLTCP behaves the
`same as TCP. When we say that TLTCP behaves the same
`as TCP, we mean that it sends a packet when TCP does.
`However, the sequence number of the data being sent may
be different in each case. If, in the case of TLTCP, the packet
sent for sequence number 20 and its ACK are not lost, and
if, in the case of TCP, the packet that TCP resends and its
ACK are not lost, then TLTCP's ACK for 20 would arrive at
the same time as TCP's ACK for 10. These ACKs would
clock the subsequent sends at the same time for both TCP
and TLTCP.
`
[Timeline diagram: after packets 10 to 14 are sent, the deadline expires and the loss of packet 10 is detected; TCP retransmits the obsolete packet 10 while TLTCP transmits the current packet 20.]

Figure 2. Example of a loss in obsolete data.
`
`However, as shown in Figure 3, if packet 20 has already
`been sent (because of a window size greater than 5) and the
`ACK for it has not been received, TLTCP sends it again.
`We refer to this as a pseudo-retransmission since TLTCP
`is retransmitting data that may not require retransmission
`in order to ensure that a packet is sent when TCP would
`send a packet. If the ACK for the original send of packet 20
`arrives prior to an ACK for the pseudo-retransmission then
`that ACK will clock TLTCP’s subsequent send sooner than
`it would be clocked with TCP.
Deviation from the behavior of TCP may also occur because
of pseudo-retransmissions and a seq_update message.
The loss of an obsolete packet, besides triggering
a pseudo-retransmission, could cause subsequent losses
of obsolete packets to be ignored, as shown in Figure 4.

[Timeline diagram: when the loss of packet 10 is detected after packet 20 has already been sent, TLTCP pseudo-retransmits packet 20; the ACK for the original send of packet 20 clocks TLTCP's next send earlier than TCP's next send.]

Figure 3. Example of a pseudo-retransmission.

[Timeline diagram: with packets 10 and 14 both lost, TCP detects and resends each lost packet, while TLTCP's pseudo-retransmission and seq_update cause the receiver to skip past the obsolete data, so the loss of packet 14 is never detected and normal sends continue.]

Figure 4. Example of TLTCP missing a packet loss in the obsolete data.

Suppose that, in the original example of Section 5.2, packet 14
`is lost in addition to packet 10. Under this scenario TCP
`would retransmit the lost packet and reduce its rate of send-
`ing by halving ssthresh [27] as a result of three dupli-
`cate ACKs or by reducing its congestion window due to a
`timeout. However, TLTCP’s pseudo-retransmission would
include a seq_update that would cause the receiver to
`move its receive window beyond packets 10 to 19 and re-
`quest packets 20 and beyond, therefore missing the fact that
packet 14 is lost. In general, if a new seq_update is received
at the receiver before a packet loss is detected, the
receiver will ignore the missing data and request data from
seq_update onwards. As a consequence, as shown in the
`example, TLTCP would be unable to detect the loss of pack-
`ets subsequent to a pseudo-retransmission and would not
`experience the second slowdown.
`
`6. Simulations
`
`In this section we evaluate the behavior of TLTCP us-
`ing simulations. There are several reasons why simula-
`tion experiments are more suitable than live Internet experi-
`ments for our purposes. In order to quantify TLTCP’s TCP-
`friendliness we need to measure the effect of TLTCP traffic
`on TCP streams, discounting the impact of all other factors
`
`
`
`such as background traffic. In a live Internet scenario these
`factors are beyond our control and in most cases would add
`significant noise to the experimental results, while with sim-
ulations the impact due to other factors can be eliminated
`or factored into the results. Furthermore, for the measure-
`ments obtained in the baseline case (control experiment) to
`be meaningful the experiments must be run under the same
`conditions as the original experiment. Because the condi-
`tions of a simulation are reproducible, the baseline experi-
`ments can be run and valid measurements for comparison
`can be easily obtained. TLTCP is a new protocol and in or-
`der to test it thoroughly we need to vary several network pa-
`rameters in a controlled fashion. Using simulations we are
`able to study the effect of varying several parameters over
`a wide range, one at a time, in order to quantify the effect
`of each one of them. In a live Internet experiment most of
`the network parameters, such as the number of flows com-
`peting at the bottleneck, are beyond our control while oth-
`ers like link delays and bottleneck bandwidth are difficult
`to vary. We have implemented TLTCP in the ns-2 simula-
`tor [29] and have conducted several experiments to study
`TLTCP’s time-lined data transport behavior and to quantify
`its TCP-friendliness.
`
`6.1. Time-lined Data Transfer
`
`Using a simulated network as shown in Figure 6, we be-
`gin two simultaneous data transfer sessions between a TCP
`sender and receiver and a TLTCP sender and receiver. We
`keep track of packet arrivals of both the streams in order
`to compare their data sending characteristics. For the sake
`of clarity in Figure 5, we use constant sized data sections
`of 700,000 bytes each associated with constant deadlines of
`1 second to ensure that the whole section cannot be deliv-
`ered within the given deadline. The other parameters used
`in this simulation are shown in Table 1 and justification for
`the values is provided in the next section.
`
[Plot: sequence number (x 10^3) versus time (seconds) for the competing TLTCP and TCP flows.]

Figure 5. Data sending characteristics of
TLTCP as compared to TCP.
`
Shown in Figure 5 is a plot of sequence number versus
`time where each sequence nu