A Reliable Dissemination Protocol for
Interactive Collaborative Applications

Rajendra Yavatkar, James Griffioen, and Madhu Sudan
Department of Computer Science
University of Kentucky
Lexington, KY 40506
{raj,griff,madhu}@dcs.uky.edu
(606) 257-3961
ABSTRACT

The widespread availability of networked multimedia workstations and PCs has caused a significant interest in the use of collaborative multimedia applications. Examples of such applications include distributed shared whiteboards, group editors, and distributed games or simulations. Such applications often involve many participants and typically require a specific form of multicast communication called dissemination in which a single sender must reliably transmit data to multiple receivers in a timely fashion. This paper describes the design and implementation of a reliable multicast transport protocol called TMTP (Tree-based Multicast Transport Protocol). TMTP exploits the efficient best-effort delivery mechanism of IP multicast for packet routing and delivery. However, for the purpose of scalable flow and error control, it dynamically organizes the participants into a hierarchical control tree. The control tree hierarchy employs restricted nacks with suppression and an expanding ring search to distribute the functions of state management and error recovery among many members, thereby allowing scalability to large numbers of receivers. An Mbone-based implementation of TMTP spanning the United States and Europe has been tested and experimental results are presented.
KEYWORDS

Reliable Multicast, Transport Protocols, Mbone, Interactive Multipoint Services, Collaboration
INTRODUCTION

Widespread availability of IP multicast [6, 2] has substantially increased the geographic span and portability of collaborative multimedia applications. Example applications include distributed shared whiteboards [15], group editors [7, 14], and distributed games or simulations. Such applications often involve a large number of participants and are interactive in nature, with participants dynamically joining and leaving the applications. For example, a large-scale conferencing application (e.g., an IETF presentation) may involve hundreds of people who listen for a short time and then leave the conference.

These applications typically require a specific form of multicast delivery called dissemination. Dissemination involves 1xN communication in which a single sender must reliably multicast a significant amount of data to multiple receivers. IP multicast provides scalable and efficient routing and delivery of IP packets to multiple receivers. However, it does not provide the reliability needed by these types of collaborative applications.

Our goal is to exploit the highly efficient best-effort delivery mechanisms of IP multicast to construct a scalable and efficient protocol for reliable dissemination. Reliable dissemination on the scale of tens or hundreds of participants scattered across the Internet requires carefully designed flow and error control algorithms that avoid the many potential bottlenecks. Potential bottlenecks include host processing capacity [18] and network resources. Host processing capacity becomes a bottleneck when the sender must maintain state information and process incoming acknowledgements and retransmission requests from a large number of receivers. Network resources become a bottleneck unless the frequency and scope of retransmissions are limited. For instance, loss of packets due to congestion in a small portion of the IP multicast tree should not lead to retransmission of packets to all the receivers. Frequent multicast retransmissions of packets also waste valuable network bandwidth.
This paper describes the design and implementation of a reliable dissemination protocol called TMTP (Tree-based Multicast Transport Protocol) that includes the following features:

1. TMTP takes advantage of IP multicast for efficient packet routing and delivery.

IPR2016-00726 -ACTIVISION, EA, TAKE-TWO, 2K, ROCKSTAR,
Ex. 1102, p. 1305 of 1442
2. TMTP uses an expanding ring search to dynamically organize the dissemination group members into a hierarchical control tree as members join and leave a group.

3. TMTP achieves scalable reliable dissemination via the hierarchical control tree used for flow and error control. The control tree takes the flow and error control duties normally placed at the sender and distributes them across several nodes. This distribution of control also allows error recovery to proceed independently and concurrently in different portions of the network.

4. Error recovery is primarily driven by receivers, who use a combination of restricted negative acknowledgements with nack suppression and periodic positive acknowledgements. In addition, the tree structure is exploited to restrict the scope of retransmissions to the region where packet loss occurs, thereby insulating the rest of the network from additional traffic.
We have completed a user-level implementation of TMTP based on IP/UDP multicast and have used it for a systematic performance evaluation of reliable dissemination across the current Internet Mbone. Our experiments involved as many as thirty group members located at several sites in the US and Europe. The results are impressive; TMTP meets our objective of scalability by significantly reducing the sender's processing load, the total number of retransmissions that occur, and the end-to-end latency as the number of receivers is increased.
Background

A considerable amount of research has been reported in the area of group communication. Several systems such as the ISIS system [1], the V kernel [4], Amoeba, the Psync protocol [17], and various others have proposed group communication primitives for constructing distributed applications. However, all of these systems support a general group communication model (NxN communication) designed either to provide reliable delivery with support for atomicity and/or causality or simply to support an unreliable, unordered multicast delivery. Similarly, transport protocols specifically designed to support group communication have also been designed before [13, 5, 3, 19, 9]. These protocols mainly concentrated on providing reliable broadcast over local area networks or broadcast links. Flow and error control mechanisms employed in networks with physical-layer multicast capability are simple and do not necessarily scale well to a wide area network with unreliable packet delivery.
Earlier multicast protocols used conventional flow and error control mechanisms based on a sender-initiated approach in which the sender disseminates packets and uses either a Go-Back-N or a selective repeat mechanism for error recovery. If used for reliable dissemination of information to a large number of receivers, this approach has several limitations. First, the sender must maintain and process a large amount of state information associated with each receiver. Second, the approach can lead to a packet implosion problem in which a large number of ACKs or NACKs must be received and processed by the sender over a short interval. Overall, this can lead to severe bottlenecks at a sender, resulting in an overall decrease in throughput [18].
An alternate approach based on receiver-initiated methods [19, 15] shifts the burden of reliable delivery to the receivers. Each receiver maintains state information and explicitly requests retransmission of lost packets by sending negative acknowledgements (NACKs). Under this approach, the receiver uses two kinds of timers. The first timer is used to detect lost packets when no new data is received for some time. The second timer is used to delay transmission of NACKs in the hope that some other receiver might generate a NACK (called nack suppression).
It has been shown that the receiver-initiated approach reduces the bottleneck at the sender and provides substantially better performance [18]. However, the receiver-initiated approach has some major drawbacks. First, the sender does not receive positive confirmation of reception of data from all the receivers and, therefore, must continue to buffer data for long periods of time. The second and most important drawback is that the end-to-end delay in delivery can be arbitrarily large, because error recovery depends solely on timeouts at the receiver unless the sender periodically polls the receivers to detect errors [19]. If the sender sends a train of packets and the last few packets in the train are lost, receivers take a long time to recover, causing unnecessary increases in end-to-end delay. Periodic polling of all receivers is not an efficient or practical solution in a wide area network. Third, the approach requires that a NACK be multicast to all the receivers to allow suppression of NACKs at other receivers and, similarly, that all retransmissions be multicast to all the receivers. However, this can result in unnecessary propagation of multicast traffic over a large geographic area even if the packet losses and recovery problems are restricted to a distant but small geographic area¹. Thus, the approach may unnecessarily waste valuable bandwidth.
In this paper we present an alternative approach that achieves scalable reliable dissemination by reducing the processing bottlenecks of sender-initiated approaches and avoiding the long recovery times of receiver-initiated approaches.

¹Assume that only a distant portion of the Internet is congested, resulting in packet loss in that area. One or more receivers in this region may multicast repeated NACKs that must be processed by all the receivers, and the resulting retransmissions must also be forwarded to and processed by all the receivers.

IPR2016-00726 -ACTIVISION, EA, TAKE-TWO, 2K, ROCKSTAR,
Ex. 1102, p. 1306 of 1442
OVERVIEW OF OUR APPROACH

Under the TMTP dissemination model, a single sender multicasts a stream of information to a dissemination group. A dissemination group consists of processes scattered throughout the Internet, all interested in receiving the same data feed. A session directory service (similar to the session directory sd from LBL [12]) advertises all active dissemination groups.

Before a transmitting process can begin to send its stream of information, the process must create a dissemination group. Once the dissemination group has been formed, interested processes can dynamically join the group to receive the data feed. The dissemination protocol does not provide any mechanism to ensure that all receivers are present and listening before transmission begins. Although such a mechanism may be applicable in certain situations, we envision a highly dynamic dissemination system in which receiver processes usually join a data feed already in progress and/or leave a data feed prior to its termination. Consequently, the protocol makes no effort to coordinate the sender and receivers, and an application must rely on an external synchronization method when such coordination is necessary.
For the purposes of flow and error control, TMTP organizes the group participants into a hierarchy of subnets or domains. Typically, all the group members in the same subnet belong to a domain, and a single domain manager acts as a representative on behalf of the domain for that particular group. The domain manager is responsible for recovering from errors and handling local retransmissions if one or more of the group members within its domain do not receive some packets.

In addition to handling error recovery for the local domain, each domain manager may also provide error recovery for other domain managers in its vicinity. For this purpose, the domain managers are organized into a control tree as shown in Figure 1. The sender in a dissemination group serves as the root of the tree and has at most K domain managers as children. Similarly, each domain manager will accept at most K other domain managers as children, resulting in a tree with maximum degree K. The value of K is chosen at the time of group creation and registration and does not include local group members in a domain (or subnet). The degree of the tree (K) limits the processing load on the sender and the internal nodes of the control tree. Consequently, the protocol overhead grows slowly, proportional to log_K(Number_Of_Receivers).
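The scaling argument above can be sketched numerically: with fan-out bounded by K, every node processes ACKs from at most K children no matter how large the group grows, and the tree depth (and thus the per-packet control path) grows only logarithmically. The function name and sample values below are illustrative, not from the paper.

```python
import math

def control_tree_depth(num_receivers: int, k: int) -> int:
    """Minimum depth of a control tree whose nodes accept at most K children.

    Because fan-out is capped at K, N domain managers fit in a tree of depth
    ceil(log_K N), while each node still handles ACKs from at most K children.
    """
    if num_receivers <= 1:
        return 0
    return math.ceil(math.log(num_receivers, k))

# With K = 5, even 600 domain managers need only a depth-4 tree,
# and the sender still processes ACKs from just 5 children.
print(control_tree_depth(600, 5))
```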
Packet transmission in TMTP proceeds as follows. When a sender wishes to send data, TMTP uses IP multicast to transmit packets to the entire group. The transmission rate is controlled using a sliding window based protocol described later. The control tree ensures reliable delivery to each member.

Figure 1: An example control tree with the maximum degree of each node restricted to K. Local group members within a domain are indicated by GM. There is no restriction on the number of local group members within a domain.

Each node of the control tree (including the root) is only responsible for handling the errors that arise in its immediate K children. Likewise, children only send periodic, positive acknowledgments to their immediate parent. When a child detects a missing packet, the child multicasts a NACK in combination with nack suppression. On receipt of the NACK, its parent in the control tree multicasts the missing packet. To limit the scope of the multicast NACK and the ensuing multicast retransmission, TMTP uses the Time-To-Live (TTL) field to restrict the transmission radius of the message. As a result, error recovery is completely localized. Thus, a dissemination application such as a world-wide IETF conference would organize each geographic domain (e.g., the receivers in California vs. all the receivers in Australia) into separate subtrees so that error recovery in a region can proceed independently without causing additional traffic in other regions. TMTP's hierarchical structure also reduces the end-to-end delay because retransmission requests need not propagate all the way back to the original sender. In addition, locally retransmitted packets will be received quickly by the affected receivers.
The control tree is self-organizing and does not rely on any centralized coordinator; it is built dynamically as members join and leave the group. A new domain manager attaches to the control tree after discovering the closest node in the tree using an expanding ring search. Note that the control tree is built solely at the transport layer and thus does not require any explicit support from, or modification to, the IP multicast infrastructure inside the routers.

The following sections describe the details of the TMTP protocol.
GROUP MANAGEMENT

The session directory provides the following group management primitives:

CreateGroup(GName, CommType): A sender creates a new group (with identifier GName) using the CreateGroup routine. CommType specifies the type of communication pattern desired and may be either dissemination or concast². If successful, CreateGroup returns an IP multicast address and a port number to use when transmitting the data.

IPR2016-00726 -ACTIVISION, EA, TAKE-TWO, 2K, ROCKSTAR,
Ex. 1102, p. 1307 of 1442

JoinGroup(GName): Processes that want to receive the data feed represented by GName call JoinGroup to become a member of the group. JoinGroup returns the transport-level address (IP multicast address and port number) for the group, which the new process uses to listen to the data feed.

LeaveGroup(GName): Removes the caller from the dissemination group GName.

DeleteGroup(GName): When the transmission is complete, the sending process issues a DeleteGroup request to remove the group GName from the system. DeleteGroup also informs all participants and domain managers that the group is no longer active.
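The four primitives above can be sketched as a minimal in-memory session directory. Everything below (the class name, the address-allocation scheme, the member bookkeeping) is illustrative scaffolding, not the paper's implementation; only the primitive names and their semantics come from the text.

```python
import itertools

class SessionDirectory:
    """Minimal sketch of the session directory primitives (illustrative)."""
    _ports = itertools.count(5000)        # placeholder port allocation

    def __init__(self):
        self.groups = {}                  # GName -> group record

    def create_group(self, gname, comm_type):
        assert comm_type in ("dissemination", "concast")
        addr = ("224.2.0.1", next(self._ports))   # placeholder mcast address
        self.groups[gname] = {"addr": addr, "type": comm_type,
                              "members": set()}
        return addr                       # (IP multicast address, port)

    def join_group(self, gname, pid):
        group = self.groups[gname]
        group["members"].add(pid)
        return group["addr"]              # receiver listens on this address

    def leave_group(self, gname, pid):
        self.groups[gname]["members"].discard(pid)

    def delete_group(self, gname):
        del self.groups[gname]            # participants would be notified

sd = SessionDirectory()
addr = sd.create_group("ietf-feed", "dissemination")
assert sd.join_group("ietf-feed", pid=1) == addr
```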
While (NotDone) {
    Multicast a SEARCH_FOR_PARENT msg
    Collect responses
    If (no responses)
        Increment TTL        /* try again */
    Else
        Select closest respondent as parent
        Send JOIN_REQUEST to parent
        Wait for JOIN_CONFIRM reply
        If (JOIN_CONFIRM received)
            NotDone = False
        Else
            /* try again */
}

(A) New Domain Manager Algorithm

Receive request message
If (request is SEARCH_FOR_PARENT)
    If (MAX_CHILDREN not exceeded)
        Send WILLING_TO_BE_PARENT msg
    Else
        /* Do not respond */
Else If (request is JOIN_REQUEST)
    Add child to the tree
    Send JOIN_CONFIRM msg

(B) Existing Domain Manager Algorithm

Figure 2: The protocol used by domain managers to join the control tree. A new domain manager performs algorithm (A) while all other existing managers execute algorithm (B).
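The expanding ring search of Figure 2 can be sketched as a runnable simulation. Real TMTP multicasts the search over UDP with a bounded IP TTL; here the network is modeled as a dictionary mapping candidate parents to hop distances, since hop distance is what the TTL bound effectively filters on. The constants and names are illustrative.

```python
MAX_CHILDREN = 5          # K, the maximum fan-out (illustrative value)

def expanding_ring_search(candidates, max_ttl=32):
    """candidates: {manager_name: (hop_distance, num_children)}.

    Returns (parent, ttl_used), or None if the search exhausts max_ttl.
    """
    ttl = 1
    while ttl <= max_ttl:
        # Managers within the TTL radius that can still accept a child
        # reply WILLING_TO_BE_PARENT; saturated managers stay silent.
        responses = [(dist, name)
                     for name, (dist, kids) in candidates.items()
                     if dist <= ttl and kids < MAX_CHILDREN]
        if responses:
            dist, name = min(responses)   # closest respondent becomes parent
            return name, ttl
        ttl += 1                          # widen the ring and retry
    return None

net = {"mgr_a": (3, 5),   # already has K children: will not respond
       "mgr_b": (4, 2),
       "mgr_c": (9, 0)}
print(expanding_ring_search(net))   # -> ('mgr_b', 4)
```

The search stops at the smallest TTL that yields any willing parent, which is how TMTP biases the tree toward nearby connection points.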
CONTROL TREE MANAGEMENT

Each dissemination group has an associated control tree consisting of domain managers. Over the lifetime of the dissemination group, the control tree grows and shrinks dynamically in response to additions and deletions to and from the dissemination group membership. Specifically, the tree grows whenever the first process in a domain joins the group (i.e., a domain manager is created) and shrinks whenever the last process left in a domain leaves the group (i.e., a domain manager terminates).

There are only two operations associated with control tree management: JoinTree and LeaveTree. When a new domain manager is created, it executes the JoinTree protocol to become a member of the control tree. Likewise, domain managers that no longer have any local processes to support may choose to execute the LeaveTree protocol.

Figures 2 and 3 outline the protocols for joining and leaving the control tree. The join algorithm employs an expanding ring search to locate potential connection points into the control tree. A new domain manager begins an expanding ring search by multicasting a SEARCH_FOR_PARENT request message with a small time-to-live (TTL) value. The small TTL value restricts the scope of the search to nearby control nodes by limiting the propagation of the multicast message. If the manager does not receive a response within some fixed timeout period, the manager resends the SEARCH_FOR_PARENT message using a larger TTL value. This process repeats until the manager receives a WILLING_TO_BE_PARENT message from one or more domain managers in the control tree. All existing domain managers that receive the SEARCH_FOR_PARENT message will respond with a

²Although this paper focuses on dissemination, TMTP also supports efficient concast style communication [10].
IPR2016-00726 -ACTIVISION, EA, TAKE-TWO, 2K, ROCKSTAR,
Ex. 1102, p. 1308 of 1442
If (I_am_a_leaf_manager)
    Send LEAVE_TREE request to parent
    Receive LEAVE_CONFIRM
    Terminate
Else  /* I am an internal manager */
    Fulfill all pending obligations
    Send FIND_NEW_PARENT message to children
    Receive FIND_NEW_PARENT reply from all children
    Send LEAVE_TREE request to parent
    Receive LEAVE_CONFIRM
    Terminate

Figure 3: The algorithm used to leave the control tree after the last local group member terminates.
WILLING_TO_BE_PARENT message unless they already support the maximum number of children. The new domain manager then selects the closest domain manager (based on the TTL values) and directly contacts the selected manager to become its child. For each domain, its manager maintains a multicast radius for the domain, which is the TTL distance to the farthest child within the domain. The domain manager keeps the children informed of the current multicast radius. As described later in the description of the error control part of TMTP, both a parent and its children in a domain use the current multicast radius to restrict the scope of their multicast transmissions.
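The multicast-radius rule above reduces to a one-line computation: the radius a manager advertises is the TTL distance to its farthest child, so a limited scope multicast sent at that TTL reaches every child and little beyond. The function name is illustrative.

```python
def multicast_radius(child_distances):
    """TTL radius for a domain: the hop distance to the farthest child.

    child_distances: iterable of TTL distances to each child in the domain.
    A manager with no children yet has radius 0 (illustrative convention).
    """
    return max(child_distances, default=0)

assert multicast_radius([2, 5, 3]) == 5   # farthest child sets the radius
assert multicast_radius([]) == 0          # no children yet
```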
Before describing the LeaveTree protocol, note that a domain manager typically has two types of children. First, a domain manager supports the group members that reside within its local domain. Second, a domain manager may also act as a parent to one or more child domain managers. We say a manager is an internal manager of the tree if it has other domain managers as children. We say a manager is a leaf manager if it only supports group members from its local domain.

A domain manager may only leave the tree after its last local member leaves the group. At this point, the domain manager begins executing the LeaveTree protocol shown in Figure 3. The algorithm for leaf managers is straightforward. However, the algorithm for internal managers is complicated by the fact that internal managers are a crucial link in the control tree, continuously servicing flow and error control messages from other managers, even when there are no local domain members left. In short, a departing internal node must discontinue service at some point and possibly coordinate with the rest of the tree to allow seamless reintegration of its children into the tree. Several alternative algorithms can be devised to determine when and how service will be cut off and children reintegrated. The level of service provided by these algorithms could range from "unrecoverable interrupted service" to "temporarily interrupted service" to "uninterrupted service". Our current implementation provides "probably uninterrupted service", which means children of the departing manager continue to receive the feed while they reintegrate themselves into the tree. However, errors that arise during the brief reintegration time might not be correctable. We are still investigating alternatives to this approach.
After a departing manager has fulfilled all obligations to its children and parent, the departing manager instructs its children to find a new parent. The children then begin the process of joining the tree all over again. Although we investigated several other possible algorithms, we chose the above algorithm for its simplicity. Other, more static algorithms, such as requiring orphaned children to attach themselves to their grandparents, often result in poorly constructed control trees. Forcing the children to restart the join procedure ensures that children will select the closest possible connection point. Other more complex dynamic methods can be used to speed up the selection of the closest connection point but, in our experience, the performance of our simple algorithm has been acceptable.
DELIVERY MANAGEMENT

TMTP couples its packet transmission strategy with a unique tree-based error and flow control protocol to provide efficient and reliable dissemination. Conventional flow and error control algorithms employ a sender- or receiver-initiated approach. However, using the control tree, TMTP is able to combine the advantages of each approach while avoiding their disadvantages. Logically, TMTP's delivery management protocol can be partitioned into three components: data transmission, error handling, and flow control. The following sections address each of these aspects.
The Transmission Protocol

The basic transmission protocol is quite simple and is best described via a simple example. Assume a sender process S has established a dissemination group X and wants to multicast data to group X. S begins by multicasting data to the (IP-multicast-addr, port-no) pair representing group X. The multicast packets travel directly to all group members via standard IP multicast. In addition, all the domain managers in the control tree listen and receive the packets directly.

As in the sender-initiated approach, the root S expects to receive positive acknowledgments in order to reclaim buffer space and implement flow control. However, to avoid the ack implosion problem of the sender-initiated approach, the sender does not receive acknowledgments directly from all the group members and, instead, receives ACKs only from its K immediate children. Once a domain manager receives a multicast packet from the sender, it can send an acknowledgment for the packet to its parent because the branch of the tree the manager represents has successfully received the packet (even though the individual members may not have received the packet). That is, a domain manager
IPR2016-00726 -ACTIVISION, EA, TAKE-TWO, 2K, ROCKSTAR,
Ex. 1102, p. 1309 of 1442
does not need to wait for ACKs from its children in order to send an ACK to the parent. In addition, each domain manager only periodically sends such ACKs to its parent. This feature substantially reduces ACK processing at the sender (and each domain manager).
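The periodic-ACK rule can be sketched as a small aggregator: a manager records every sequence number it receives but emits at most one cumulative ACK per ACK period, so the parent processes O(1) ACKs per period per child rather than one per packet. The class name, timestamp handling, and period value are illustrative, not the paper's implementation.

```python
class AckAggregator:
    """Emit one cumulative ACK per T_ack period instead of one per packet."""

    def __init__(self, t_ack: float):
        self.t_ack = t_ack          # ACK period (illustrative units: seconds)
        self.last_ack_time = 0.0
        self.highest_seq = -1       # highest sequence number seen so far

    def on_packet(self, seq: int, now: float):
        """Record a received packet; return an ACK only once per period."""
        self.highest_seq = max(self.highest_seq, seq)
        if now - self.last_ack_time >= self.t_ack:
            self.last_ack_time = now
            return ("ACK", self.highest_seq)   # one ACK covers many packets
        return None

agg = AckAggregator(t_ack=1.0)
assert agg.on_packet(0, now=1.0) == ("ACK", 0)
assert agg.on_packet(1, now=1.2) is None        # within the period: silent
assert agg.on_packet(2, now=2.1) == ("ACK", 2)  # next periodic cumulative ACK
```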
Error Control

Before describing the details of TMTP's error control mechanism, we must define an important concept called limited scope multicast messages. A limited scope multicast restricts the scope of a multicast message by setting the TTL value in the IP header to some small value which we call the multicast radius. The appropriate multicast radius to use is obtained from the expanding ring search that domain managers use to join the tree. Limited scope multicast messages prevent messages targeted to a particular region of the tree from propagating throughout the entire Internet.
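In socket terms, a limited scope multicast is just a UDP socket whose `IP_MULTICAST_TTL` option is set to the multicast radius: routers decrement the TTL per hop and drop the datagram when it reaches zero. This is a minimal sketch using the standard socket API; the radius value is illustrative.

```python
import socket

def make_limited_scope_socket(radius: int) -> socket.socket:
    """UDP socket whose multicast datagrams die after `radius` hops."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The IP TTL acts as the multicast radius: each router decrements it
    # and discards the packet once it hits zero, bounding the region reached.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, radius)
    return sock

sock = make_limited_scope_socket(4)
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL) == 4
# A retransmission scoped to the local domain would then be sent with:
#   sock.sendto(packet, (group_addr, port))
sock.close()
```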
TMTP employs error control techniques from both sender- and receiver-initiated approaches. Like the sender-initiated approach, a TMTP traffic source (sender) requires periodic (unicast) positive acknowledgements and uses timeouts and (limited scope multicast) retransmissions to ensure reliable delivery to all its immediate children (domain managers). However, in addition to the sender, the domain managers in the control tree are also responsible for error control after they receive packets from the sender. Although the sender initially multicasts packets to the entire group, it is the domain manager's responsibility to ensure reliable delivery. Each domain manager also relies on periodic positive ACKs (from its immediate children), timeouts, and retransmissions to ensure reliable delivery to its children. When a retransmission timeout occurs, the sender (or domain manager) assumes the packet was lost and retransmits it using IP multicast (with a small TTL equal to the multicast radius for the local domain so that it only goes to its children).
In addition to the sender-initiated techniques, TMTP uses restricted NACKs with NACK suppression to respond quickly to packet losses. When a receiver notices a missing packet, the receiver generates a negative acknowledgment that is multicast to the parent and siblings using a restricted (small) TTL value. To avoid multiple receivers generating a NACK for the same packet, each receiver delays a random amount of time before transmitting its NACK. If the receiver hears a NACK from another sibling during the delay period, it suppresses its own NACK. This technique substantially reduces the load imposed by NACKs. When a domain manager receives a NACK, it immediately responds by multicasting the missing packet to the local domain using a limited scope multicast message.
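The suppression mechanism can be sketched as a one-round simulation: each receiver draws a random delay, the receiver whose timer fires first multicasts the NACK, and every sibling that hears it before its own timer expires cancels its own. The delay bound, function name, and simplifying assumption that the NACK reaches all siblings instantly are illustrative.

```python
import random

def simulate_nack_round(receivers, max_delay=0.5, seed=None):
    """One lost packet: returns (who_sent_the_nack, how_many_suppressed)."""
    rng = random.Random(seed)
    # Each receiver arms a suppression timer with a random delay.
    delays = {r: rng.uniform(0, max_delay) for r in receivers}
    first = min(delays, key=delays.get)   # its timer fires first: it NACKs
    # The NACK is multicast with a restricted TTL; siblings that hear it
    # before their own timers expire suppress their NACKs.
    suppressed = [r for r in receivers if r != first]
    return first, len(suppressed)

sender, quiet = simulate_nack_round(["r1", "r2", "r3", "r4"], seed=7)
# Exactly one NACK reaches the domain manager; the other three are silent.
assert quiet == 3
```

In the real protocol propagation delay means occasional duplicate NACKs slip through, but the expected NACK load per loss stays near one instead of one per receiver.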
Flow Control

TMTP achieves flow control by using a combination of rate-based and window-based techniques. The rate-based component of the protocol prohibits senders from transmitting data faster than some predefined maximum transmission rate. The maximum rate is set when the group is created and never changes. Despite its static nature, a fixed rate helps avoid congestion arising from bursty traffic and packet loss at rate-dependent receivers while still providing the necessary quality-of-service without excessive overhead.

TMTP's primary means of flow control consists of a window-based approach used for both dissemination from the sender and retransmission from domain managers. Within a window, senders transmit at a fixed rate.

TMTP's window-based flow control differs slightly from conventional point-to-point window-based flow control. Note that retransmissions are very expensive because they are multicast. In addition, transient traffic conditions or congestion in one part of the network can put backpressure on the sender, causing it to slow the data flow. To oversimplify, TMTP avoids both of these problems by partitioning the window and delaying retransmissions as long as possible. This increases the chance of a positive acknowledgement being received, and it also allows domain managers to rectify transient behavior before it begins to cause backpressure.
TMTP uses two different timers to control the window size and the rate at which the window advances. T_retrans defines a timeout period that begins when the first packet in a window is sent. Since the transfer rate is fixed, T_retrans also defines the window size. A second timer, T_ack, defines the periodic interval at which each receiver is expected to unicast a positive ACK to its parent.

The sender specifies the value of T_ack based on the RTT to its farthest child. T_retrans is chosen such that T_retrans = n x T_ack, where n is an integer, n >= 2. Both T_retrans and T_ack are fixed at the beginning of transmission and do not change. A sender must allocate enough buffer space to hold packets that are transmitted over the T_retrans period.
Figure 4 illustrates the windowing algorithm graphically. The sender starts a timer and begins transmitting data (at a fixed rate). Consider the packets transmitted during the first T_ack interval. Although the sender should see a positive ACK at time T_ack, the sender does not require one until time T_retrans. Instead, the sender continues to send packets during the second and third intervals. After T_retrans amount of time, the timer expires. At this point, the sender retransmits all unACK'd packets that were sent during the first T_ack interval. Retransmissions continue until all packets in the T_ack interval are acknowledged, at which point the window is advanced by T_ack. On the receiving end, packets continue to arrive without being acknowledged until T_ack amount of time has expired³.

³However, a receiver may generate a restricted NACK as soon as it detects a missing packet.
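The windowing rule can be captured in a few lines: packets from a T_ack interval become due for retransmission only when T_retrans = n x T_ack has elapsed since that interval began and no ACK has covered them; an earlier ACK simply lets the window advance. The time units and n = 3 mirror the text; the predicate itself is an illustrative sketch.

```python
T_ACK = 1.0                  # ACK period (illustrative units)
N = 3                        # multiplying factor from the text
T_RETRANS = N * T_ACK        # retransmission timeout = n x T_ack

def due_for_retransmission(now: float, interval_start: float,
                           acked: bool) -> bool:
    """Packets from the T_ack interval starting at interval_start are
    retransmitted only once T_retrans has elapsed without an ACK."""
    return (not acked) and (now - interval_start) >= T_RETRANS

# The sender keeps transmitting through intervals 2 and 3 even though the
# interval-1 ACK is still outstanding; retransmission triggers only at t = 3.
assert not due_for_retransmission(now=2.0, interval_start=0.0, acked=False)
assert due_for_retransmission(now=3.0, interval_start=0.0, acked=False)
# An ACK received before T_retrans simply advances the window instead.
assert not due_for_retransmission(now=3.0, interval_start=0.0, acked=True)
```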
IPR2016-00726 -ACTIVISION, EA, TAKE-TWO, 2K, ROCKSTAR,
Ex. 1102, p. 1310 of 1442
[Figure 4 shows a timeline of three retransmission intervals, where T_retrans = 3 x T_ack. At the end of the first interval, packets sent during the first T_ack period are retransmitted. At the end of the second interval, packets sent during the second T_ack period are retransmitted. At the end of the third interval, packets sent during the third T_ack period are retransmitted.]

Figure 4: Different Stages in Sending Data
A domain manager must continue to hold packets in its buffer until all of its children have acknowledged them. If the children fail to acknowledge packets, the domain manager's window will not advance and its buffers will eventually fill up. As a result, the domain manager will drop, and not acknowledge, any new data from the sender, thereby causing backpressure to propagate up the tree, which ultimately slows the flow of data.
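The backpressure mechanism above can be sketched as a bounded buffer: unacknowledged packets pin buffer slots, a full buffer makes the manager drop (and withhold ACKs for) new data, and a child's ACK frees a slot so the flow resumes. The class name and capacity are illustrative, not from the paper.

```python
class DomainManager:
    """Sketch of buffer-driven backpressure at a domain manager."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.buffer = {}            # seq -> packet, held until children ACK

    def receive(self, seq: int, packet: bytes) -> bool:
        """Buffer a packet and ACK upward; a full buffer drops it instead,
        so the sender's window stalls (backpressure)."""
        if len(self.buffer) >= self.capacity:
            return False            # dropped and not ACKed
        self.buffer[seq] = packet
        return True                 # ACK would be sent to the parent

    def children_acked(self, seq: int) -> None:
        """All children acknowledged seq: free the slot, window advances."""
        self.buffer.pop(seq, None)

dm = DomainManager(capacity=2)
assert dm.receive(1, b"a") and dm.receive(2, b"b")
assert not dm.receive(3, b"c")     # buffer full: dropped, sender feels it
dm.children_acked(1)
assert dm.receive(3, b"c")         # slot freed, flow resumes
```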
There are three reasons for using multiple T_ack intervals during a retransmission timeout interval (T_retrans). First, by requiring more than one positive ACK during the retransmission interval, TMTP protects itself from spurious retransmissions arising from lost ACKs. Second, a larger retransmission interval gives receivers sufficient time to recover missing packets using receiver-initiated recovery when only one (or a few) packets in a window are lost. This avoids unnecessary multicast retransmissions of a window full of data. Third, multiple T_ack intervals during the retransmission interval provide sufficient opportunity for a domain manager to recover from transient network load in its part of the subtree without unnecessarily applying backpressure to the sender.

We have chosen the value of the multiplying factor n to be 3 based on empirical evidence; the appropriate value depends on several factors including expected error rates, variance in RTT, and expected length of the intervals with transient, localized congestion. Further st
