Multimedia Systems (1994) 2:172-180
© Springer-Verlag 1994

Media scaling in a multimedia communication system

Luca Delgrossi, Christian Halstrick, Dietmar Hehmann, Ralf Guido Herrtwich, Oliver Krone, Jochen Sandvoss, Carsten Vogt

IBM European Networking Center, Distributed Multimedia Solutions, Vangerowstrasse 18, D-69115 Heidelberg, Germany
Abstract. HeiTS, the Heidelberg Transport System, is a multimedia communication system for real-time delivery of digital audio and video. HeiTS operates on top of guaranteed-performance networks that apply resource reservation techniques. To make HeiTS also work with networks for which no reservation scheme can be realized (for example, Ethernet or existing internetworks), we implement an extension to HeiTS which performs media scaling at the transport level: the media encoding is modified according to the bandwidth available in the underlying networks. Both transparent and nontransparent scaling methods are examined. HeiTS lends itself to implementing transparent temporal and spatial scaling of media streams. At the HeiTS interface, functions are provided which report information on the available resource bandwidth to the application so that nontransparent scaling methods may be used, too. Both a continuous and a discrete scaling solution for HeiTS are presented. The continuous solution uses feedback messages to adjust the data flow. The discrete solution also exploits the multipoint network connection mechanism of HeiTS. Whereas the first method is more flexible, the second technique is better suited for multicast scenarios. The combination of resource reservation and media scaling seems to be particularly well suited to meet the varying demands of distributed multimedia applications.

Key words: Media scaling - Multimedia networks - Transport systems
1 Introduction

The dispute of guaranteed vs nonguaranteed communication is an unresolved argument in the multimedia community (as shown, for example, by recurring discussions at the first three International Workshops on Network and Operating System Support for Digital Audio and Video from 1990 to 1992). It is a repetition of the classic end-to-end argument: one group says that all mechanisms to cope with network bottlenecks should be included in the application; the other group says that only the underlying system is able to prevent network overload. In this paper, we propose a solution between the two extremes that offers both possibilities in an actual system. We favor this approach because different multimedia applications have different requirements on the network: there is virtually no way to recover from audio transmission errors so that the end user will not notice them. For everyday (consumer-quality) video, on the other hand, it is fairly easy to live with network flaws and even with slight delay variations.

Correspondence to: C. Halstrick
HeiTS, the Heidelberg Transport System [6, 7], facilitates the transmission of digital audio and video from a single origin to multiple targets. The transport and network layer protocols of HeiTS, HeiTP [3] and ST-II [15], allow the client to negotiate quality-of-service (QOS) parameters such as throughput and end-to-end delay for multimedia connections. In its original form, HeiTS depends on some type of bandwidth allocation mechanism in the underlying network to provide a transport connection with a guaranteed QOS. Some networks such as FDDI (with its synchronous mode) and ISDN implement this reservation. Other networks such as Token Ring can be augmented with bandwidth allocation schemes [11]. However, not all kinds of networks support the reservation of bandwidth: as an example, Ethernet provides no guaranteed service at all due to the potential collisions of packets.¹ Hence, using audio and motion video in such an "unfriendly" environment calls for additional techniques. When reservation is not available, audio and motion video should be transported on a best-effort basis. From the start, HeiTS has supported some kind of best-effort QOS [19] which, however, is only a less strict version of guaranteed QOS. In this best-effort approach, resource capacities are reserved, but at the same time statistically multiplexed, that is, the sum of the portions of bandwidth allocated to the individual sessions is allowed to exceed the total resource capacity. Best-effort service with no reservation requires a different approach, which can work, for example, in a dynamic feedback fashion. Here, the system monitors how well it currently accomplishes the audiovisual data transport from one end to the other, then correspondingly determines the amount of audiovisual data it forwards. We refer to this technique as "media scaling."

¹ For this reason, some "multimedia" solutions for the Ethernet use a 10BaseT hub and dedicate single Ethernet links to pairs of communication partners. This approach requires changes in the network infrastructure and still leaves unsolved the problem of conflicting uses of the dedicated links by multiple concurrent multimedia applications on the same machine.

Akamai Ex. 1036
Akamai Techs. v. Equil IP Holdings
IPR2023-00332
Page 00001

Media scaling in different forms has been suggested and used in previous systems. Fluent, for example, bases its multimedia networking technology on a proprietary scaling scheme [16]. Tokuda et al. have developed a dynamic QOS management service, which is intended to be used in conjunction with scaling techniques [14]. Clark et al. with their "predicted service" approach also assume in their networking architecture that some form of media scaling exists [2]. Our approach is special and different in that it shows how to combine resource reservation and media scaling methods.

This paper discusses several implementation alternatives for media scaling in HeiTS. Section 2 surveys scaling methods, concentrating on digital video. Section 3 introduces two different scaling methods for HeiTS. Section 4 specifies the changes to protocols and interfaces in HeiTS required to accommodate scaling.
2 Scaling methods

Before describing the details of the HeiTS approach, we give a brief survey of scaling techniques. We assume that the reader is familiar with typical encoding schemes for digital media. "Scaling" means to subsample a data stream and only present some fraction of its original content. In general, scaling can be done at either the source or the sink of a stream. Frame rate reduction, for example, is usually performed at the source, whereas hierarchical decoding is a typical scaling method applied by the sink. Since in the context of this paper scaling is intended to reflect bandwidth constraints in the underlying resources, it is useful to scale a data stream before it enters a system bottleneck; otherwise it is likely to contribute to the overload of the bottleneck resource. Scaling at the source is usually the best solution here: there is no need for transmitting data in the first place if it will be thrown away somewhere in the system. Scaling methods used in a multimedia transport system can be classified as follows:

- Transparent scaling methods can be applied independently from the upper protocol and application layers, that is, the transport system scales the media on its own. Transparent scaling is usually achieved by dropping some portions of the data stream. These portions - single frames or substreams - need to be identifiable by the transport system.

- Nontransparent scaling methods require an interaction of the transport system with the upper layers. In particular, this kind of scaling implies a modification of the media stream before it is presented to the transport layer. For the distribution of media captured in real time, nontransparent scaling typically requires modification of some parameters of the coding algorithm. Stored media can be scaled by recoding a stream that was previously encoded in a different format.
In a multimedia system, scaling can be applied to a couple of different media types. Examples are video, audio, pointer device control streams, and sensory information (e.g., data gloves). For pointer device control streams or sensory information, scaling can in general be achieved by simply reducing the sampling rate. Bandwidth requirements of these streams are usually low compared to audio and video streams; therefore, performance gains achieved by applying scaling mechanisms are rather small.

For audio, scaling is usually difficult because presenting only a fraction of the original data is easily noticed by the human listener. Dropping a channel of a stereo stream is an example.

For video streams, users are typically much less sensitive to quality reductions. Therefore, and because of their high bandwidth requirements, video streams are predestined for scaling. The applicability of a specific scaling method depends strongly on the underlying compression technique, as will be explained in Sect. 2.2. There are several domains of a video signal to which scaling can be applied:

- Temporal scaling reduces the resolution of the video stream in the time domain by decreasing the number of video frames transmitted within a time interval. Temporal scaling is best suited for video streams in which individual frames are self-contained and can be accessed independently, such as intrapictures or DC-coded pictures for MPEG-coded video streams [9]. Interframe compression techniques are more difficult to handle because not all frames can be easily dropped.

- Spatial scaling reduces the number of pixels of each image in a video stream. For spatial scaling, hierarchical arrangement is ideal because it has the advantage that the compressed video is immediately available in various resolutions. Therefore, the video can be transferred over the network using different resolutions without applying a "decode → scale down → encode" operation on each picture before finally transmitting it over the network.

- Frequency scaling reduces the number of DCT coefficients applied to the compression of an image. In a typical picture, the number of coefficients can be reduced significantly before a reduction of image quality becomes visible.

- Amplitudinal scaling reduces the color depth for each image pixel. This can be achieved by introducing a coarser quantization of the DCT coefficients, hence requiring a control of the scaling algorithm over the compression procedure.

- Color space scaling reduces the number of entries in the color space. One way to realize color space scaling is to switch from color to gray-scale presentation.

Obviously, combinations of these scaling methods are possible.

Whether nontransparent scaling is possible depends strongly on the kind of data to be transmitted. For live video streams, it is easy to set all the coding parameters when an image is sampled at the source. For stored video, scaling may make a recoding of the stream necessary, especially if no hierarchical coding scheme is used.
The efficiency of a scaling algorithm strongly depends on the underlying compression technique. The format of the data stream produced by the coding algorithm determines which of the domains is appropriate for scaling. The following enumeration gives a short overview of the applicability of scaling to some state-of-the-art compression techniques.

- Motion JPEG. The distinguishing feature of motion JPEG encoding (that is, the encoding of video as a sequence of JPEG frames [20]) is its robustness to transmission errors because of the independence of individual frames: a single error is not carried over from one frame to another. Obviously, temporal scaling is suited best for this compression technique, as any frame can be left out without affecting its neighbors. Applying a hierarchical DCT-based compression method on every picture [16] enables spatial scaling methods. However, few existing JPEG implementations realize this hierarchical mode.

- MPEG. Since MPEG [9] is a context-sensitive compression method, temporal scaling is subject to certain constraints. Every compressed video stream consists of a sequence of intra-coded, predicted-coded, and bidirectionally-coded pictures. Temporal scaling of an MPEG-coded video stream can be realized by dropping predicted and bidirectionally coded pictures. Assuming an intra-picture is inserted every 9th frame, this leads to a scaled frame rate of approximately 3 frames per second [13]. The main improvement of MPEG-2 over the original MPEG scheme is the support for scalable media streams [3, 5, 14]. It facilitates spatial and frequency scaling as well as temporal scaling methods. MPEG-2 uses a hierarchical compression technique which enables the compressed data stream to be demultiplexed into three substreams with different quality. Scaling can be achieved by transmitting only some but not all of the substreams. This method is particularly useful for the discrete scaling approach described in Sect. 3.3.

- DVI. Just like MPEG, DVI [10] uses a combination of intra- and intercoded frames. Thus, temporal scaling is restricted in the same way as described for MPEG-coded streams.

- H.261 (px64). The H.261 standard includes an amplitudinal scaling method on the sender side [17]. The coarseness of the quantization of the DCT coefficients determines the color depth of each image pixel. In addition to this, the intra-frame coding scheme, which is similar to the intra-coded pictures of MPEG, permits the easy use of temporal scaling.
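The constraint that interframe-coded formats impose on temporal scaling can be illustrated with a small sketch (the helper and the frame representation are hypothetical, not part of HeiTS or of any MPEG library): only self-contained intra-coded pictures can be dropped to, since predicted and bidirectional pictures depend on their neighbors.

```python
def temporally_scale(frames, keep_types=("I",)):
    """Temporal scaling for an interframe-coded stream: keep only frames
    whose type is listed in keep_types. With MPEG frame types, only
    intra-coded 'I' pictures are self-contained, so maximal scaling
    drops all predicted ('P') and bidirectionally coded ('B') pictures."""
    return [frame for frame in frames if frame[0] in keep_types]

# A stream with an intra-picture every 9th frame (one typical MPEG
# group-of-pictures pattern), repeated three times:
gop = ["I", "B", "B", "P", "B", "B", "P", "B", "B"]
stream = [(ftype, b"payload") for ftype in gop * 3]

scaled = temporally_scale(stream)
# 3 of 27 frames remain; at a 25 fps source this is 25/9 frames per
# second, roughly the "approximately 3 frames per second" quoted above.
```

Dropping a 'P' picture, by contrast, would also invalidate the 'B' pictures that reference it, which is why the text calls interframe compression techniques "more difficult to handle."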
3 Scaling in HeiTS

HeiTS is a rate-controlled transport system. For every data stream passing through a HeiTS connection, the system is informed about its message rate by means of an associated QOS parameter set. At the transport level interface, this rate is given in terms of logical data units (for example, video frames) per time period. HeiTS can use this information for monitoring the arrival of the packets. The late arrival of a packet is an indication of some bottleneck in the system, in which case the target can inform the origin about the overload and cause it to scale down the stream. Once the overload situation has passed, the stream may be scaled up again.

A scalable stream can be seen as composed of various substreams. For a spatially scaled stream, this representation can, for example, consist of one substream with all odd/odd pixels, one substream with even/even pixels, etc. As an alternative, one could use one substream for intra-coded frames and one or even several other streams for the remaining frames, which implies that there are streams of different degrees of importance. A splitting of MPEG video streams based on DCT coefficients has been described [12].
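The odd/odd and even/even decomposition just mentioned can be sketched as follows (a toy frame represented as a list of pixel rows; the function name is illustrative, not a HeiTS interface):

```python
def spatial_substreams(frame):
    """Split a frame into four substreams by pixel parity: the key
    (row parity, column parity) selects the even/even, even/odd,
    odd/even, or odd/odd pixels. Each substream is a quarter-resolution
    image, so sending only some substreams yields a spatially scaled
    stream."""
    return {
        (rp, cp): [row[cp::2] for row in frame[rp::2]]
        for rp in (0, 1)
        for cp in (0, 1)
    }

# A 4x4 frame with pixel values 0..15:
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
subs = spatial_substreams(frame)
# subs[(0, 0)] is the even/even quarter: [[0, 2], [8, 10]]
```

A receiver that obtains all four substreams can interleave them back into the full-resolution frame; with fewer substreams it has to interpolate the missing pixels.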
In the scaling implementation of HeiTS, individual substreams are mapped onto different connections, each with its own set of QOS parameters. The transmission quality can then be adjusted either with fine granularity within a connection (substream) or with coarse granularity by adding and removing connections (substreams). We refer to these approaches as continuous and discrete scaling. Both approaches, together with the monitoring functions they require, are described in the following subsections.
3.1 Monitoring

The prerequisite for any scaling mechanism is a function that allows the system to detect network congestion. For HeiTS, this can be achieved by monitoring two QOS parameters: end-to-end delay and TSDU loss rate.

3.1.1 End-to-end delay

In HeiTS, each logical data unit is represented by a transport service data unit (TSDU). Each TSDU has an expected arrival time, and a TSDU arriving later than expected indicates congestion.

There are several possibilities to define the expected arrival time of a TSDU. One could, for example, define its value simply as the actual arrival time of the previous TSDU plus the period of the message stream (that is, the reciprocal value of its rate). Alternatively, the arrival time of the first TSDU of the stream (or any other earlier packet) rather than the arrival time of the previous packet could be used as an "anchor" for the calculation. This helps to avoid false indications of congestion in cases where the previous TSDU happened to arrive early and the current TSDU has a "normal" delay.

HeiTS calculates the expected arrival time as the "logical arrival time" of the previous packet plus the stream period. The logical arrival time is the arrival time observed when bursts are smoothed out, that is, when early packets are artificially delayed (for example, in a leaky-bucket fashion) such that the specified stream rate is not exceeded (see [19] for details).
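The logical-arrival-time rule can be stated in a few lines (a sketch of the smoothing idea, not the HeiTS implementation; times are in seconds):

```python
def logical_arrival_times(arrivals, period):
    """Leaky-bucket smoothing of observed arrival times: the logical
    arrival time of a packet is its actual arrival time or the previous
    logical arrival time plus the stream period, whichever is later.
    Early packets are thus artificially delayed so that the specified
    stream rate is never exceeded."""
    logical = []
    for t in arrivals:
        expected = logical[-1] + period if logical else t
        logical.append(max(t, expected))
    return logical

def is_late(arrival, prev_logical, period):
    """Expected arrival time = logical arrival time of the previous
    packet plus the stream period; a later arrival indicates congestion."""
    return arrival > prev_logical + period

# A 25 Hz stream (40 ms period): the third packet arrives early in a
# burst, the fourth packet is clearly late.
lts = logical_arrival_times([0.000, 0.040, 0.050, 0.200], 0.040)
# logical times: 0.000, 0.040, 0.080, 0.200 - the early packet is delayed
```

Anchoring on the smoothed logical time rather than the actual arrival time avoids the false congestion indication that would result if the early third packet's actual arrival were used as the anchor for the fourth.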
3.1.2 TSDU loss rate

A problem arising when only monitoring end-to-end delay is the definition of a threshold value above which congestion is assumed. A better indicator for detecting congestion is the TSDU loss rate. In HeiTS, both parameters are monitored in parallel. To adjust the threshold value for the delay, HeiTS continuously compares the mean end-to-end delay to the corresponding value of the mean loss rate.

The lateness or loss of a single TSDU should not immediately trigger the scaling down of a stream, because the congestion may only be short. However, if a sequence of packets is late (or some packets are missing because they were dropped due to buffer overflow), it can be assumed that the network is congested. In this case, the receiver initiates a scale-down operation.

In HeiTS, the mean values of loss rate and end-to-end delay are calculated over a predefined measurement interval. The determination of the interval length depends on the network characteristics and, therefore, has to be based on heuristics.
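A receiver-side monitor along these lines might look as follows (class and parameter names are illustrative; the interval length and threshold are exactly the heuristics the text refers to):

```python
class CongestionMonitor:
    """Count late/lost TSDUs over a fixed measurement interval and
    signal a scale-down only when the mean loss rate over the whole
    interval exceeds a threshold, so a single late or lost TSDU never
    triggers scaling by itself."""

    def __init__(self, interval=32, loss_threshold=0.10):
        self.interval = interval            # TSDUs per measurement interval
        self.loss_threshold = loss_threshold
        self.seen = 0
        self.bad = 0                        # late or lost in this interval

    def record(self, late_or_lost):
        """Account for one TSDU; return True if a scale-down message
        should be sent to the source."""
        self.seen += 1
        if late_or_lost:
            self.bad += 1
        if self.seen < self.interval:
            return False
        mean_loss = self.bad / self.seen
        self.seen = self.bad = 0            # start the next interval
        return mean_loss > self.loss_threshold

mon = CongestionMonitor()
signals = [mon.record(i % 8 == 0) for i in range(32)]
# 4 of 32 TSDUs were late (12.5% > 10%), so the interval ends with a
# scale-down signal; no signal is raised mid-interval.
```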
3.2 Continuous scaling

A major issue with the scaling procedure is the responsiveness, that is, how rapidly the traffic adapts to the available bandwidth. We propose a scale-down scheme which consists of three stages.

- The first reaction to a congestion is to throw away excess or late packets. This usually happens within the network during a buffer overflow or at the receiver station that detects the lateness of a packet. An appropriate mechanism for lateness detection is included in HeiTP. Scaling by dropping packets is immediate and local, that is, it does not affect the sender, which continues to send at its full rate. Hence, scaling up can also be done very quickly by simply ceasing to discard packets. As stated before, it makes sense not to trigger the sender immediately to scale down the stream, since the congestion may only be brief.

- When the number of late or lost packets exceeds a certain threshold, which can be defined heuristically, it is assumed that the congestion will last longer. In this case, the sender is triggered to throttle its traffic. As a first step, the sender reduces its sending rate - possibly down to zero. (Reducing the rate to zero makes no sense if all data are sent over only one connection. If continuous scaling is applied to one of several substreams, this substream may temporarily carry no data at all and the receiver will still receive information.) The connection, however, remains intact, along with its resource reservations.² This means that the resources can be temporarily used by other traffic, but the sender can scale the stream up immediately once the congestion is over.

- If the rate on a stream has been reduced to zero and the congestion is of a longer duration, that is, if several attempts of the sender to scale up the stream fail, the corresponding connection is terminated and all resources reserved for it are released. Since congestion typically occurs only at one bottleneck on the end-to-end connection (for example, on some subnetwork), the resources previously reserved on other subnetworks or nodes are made available for other connections. Scaling the stream up, however, requires the reestablishment of the connection, which takes some time.

² Note that not only guaranteed connections but also best-effort connections in HeiTS may have resources reserved for them. However, the reservation for best-effort connections does not account for the worst possible case. Thus, in some situations the amount of reserved resources may not suffice, which will lead to congestion. On the other hand, best-effort connections may temporarily use resources reserved for other connections as long as these connections do not need them.

This last step leads us directly to the discrete scaling approach, which will be discussed in the next subsection.
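The three-stage scale-down scheme above can be summarized as a simple escalation policy (the thresholds are purely illustrative; in HeiTS they would be chosen heuristically per network):

```python
def congestion_reaction(consecutive_bad):
    """Map the number of consecutively late or lost packets to the
    three reaction stages described above:
      stage 1 - discard late packets locally; the sender is unaffected;
      stage 2 - trigger the sender to throttle its rate, possibly to
                zero, while the connection and its reservations remain;
      stage 3 - terminate the connection and release its resources."""
    if consecutive_bad < 8:
        return "drop locally"
    if consecutive_bad < 64:
        return "throttle sender"
    return "terminate connection"
```

The cost of recovery mirrors the escalation: stage 1 is undone by simply delivering packets again, stage 2 by raising the rate on the intact connection, while stage 3 requires re-establishing the connection, which takes some time.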
The monitoring of a stream provides the receiving station with hints at congestion situations. This monitoring cannot, however, yield any information about the termination of a congestion. Assuming that the underlying network also does not give any explicit indication of congestion, the decision whether to scale up a stream must be based on heuristics. The only practical heuristic known is to scale up the stream when a certain time span after the previous scale-down has elapsed.

A scale-up decision based on time spans can come either too early or too late. A scaling which is too late is not considered harmful if it happens within the range of a few seconds. If the transmission quality is temporarily reduced, a human user does not care much whether this lasts for 3 or 5 s. The effects of a scaling which is too early can be more severe. Scaling a stream up while the congestion situation is still present causes the receiver to trigger a new scale-down and, in the extreme case, an oscillation of the system. This implies an increased overhead for both end-systems and network and, additionally, can extend the phase of reduced quality longer than necessary.

To avoid oscillation, the scaling procedure of HeiTS scales up stepwise, as is done by other dynamic congestion control algorithms [1, 8]. After scaling down the stream, the sender transmits for a certain time span or a certain amount of data (for example, n packets for a fixed value of n) at the reduced rate. If after this period no scale-down message is obtained from the receiver, which means that HeiTS could transfer the packets without any severe congestion, the sender increases its rate by some amount.³ This procedure is continued until the maximum throughput for this stream is reached or until the receiver requests to scale down the stream again.

A simple example of the protocol machine for continuous scaling is shown in Figs. 1 and 2. The source entity consists of three states: in the OK state, the source transmits the media stream in best quality. If a scale-down message is received from the sink, there is a transition to the DOWN state. The machine remains in the DOWN state until no further scale-down message is received for a certain time span t_up. In this case, the protocol machine goes to the UP state and tries to scale up the stream. If the maximum quality is reached, the protocol machine returns to the OK state.

³ Note the difference between our scheme and the slow-start algorithm in TCP [8]. TCP's slow-start algorithm uses acknowledgements returned by the receiver to increase the traffic rate, whereas our scheme increases the rate in the absence of scale-down messages.
Fig. 1. State diagram of the source (states OK, DOWN, and UP; transitions driven by feedback(loss_rate) messages, scale_down/scale_up actions on the quality, and the timer t_up)

Fig. 2. State diagram of the sink (states OK and FEEDBACK; transitions driven by comparing the observed loss rate against the threshold)
The protocol machine on the sink side consists of only two states: the OK state is left if the TSDU loss rate/delay exceeds the threshold value. During the transition to the FEEDBACK state, a scale-down message is sent to the source entity. The FEEDBACK state ensures that the sink entity waits at least until the media stream has been adjusted according to the last scale-down message before a new scale-down message can be sent.
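The source machine of Fig. 1 can be sketched in a few lines (a simplified model: quality is an integer step and the timer t_up is modeled as a tick countdown; state names follow the figure, but the code is illustrative, not the HeiTS implementation):

```python
class ScalingSource:
    """Source protocol machine: OK (full quality), DOWN (after a
    scale-down message), UP (stepwise quality increase while no
    further scale-down messages arrive)."""

    def __init__(self, max_quality=4, t_up=3):
        self.max_quality = max_quality
        self.t_up = t_up          # quiet ticks required before scaling up
        self.quality = max_quality
        self.state = "OK"
        self.timer = 0

    def on_scale_down(self):
        """feedback(loss_rate) received: reduce quality, restart timer."""
        if self.quality > 0:
            self.quality -= 1
        self.state = "DOWN"
        self.timer = self.t_up

    def on_tick(self):
        """One timer tick during which no scale-down message arrived."""
        if self.state == "OK":
            return
        self.timer -= 1
        if self.timer > 0:
            return
        # t_up expired without feedback: scale up one quality step.
        self.state = "UP"
        self.quality += 1
        if self.quality >= self.max_quality:
            self.quality = self.max_quality
            self.state = "OK"     # maximum quality reached
        else:
            self.timer = self.t_up

src = ScalingSource(max_quality=4, t_up=2)
src.on_scale_down()               # quality 3, state DOWN
src.on_scale_down()               # quality 2, still DOWN
for _ in range(4):                # four quiet ticks
    src.on_tick()
# quality climbs back stepwise to 4 and the machine returns to OK
```

The sink side corresponds to the two-state machine of Fig. 2: it sends feedback(loss_rate) when the monitored loss rate exceeds the threshold and then waits in FEEDBACK until the stream has been adjusted.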
3.3 Discrete scaling

The advantages of the continuous scaling technique are that scaling can be done at fine granularity and that in principle only one connection is required per stream. There are, however, some problems with this approach because it does not take into account two special features of HeiTS.

- HeiTS supports multicast. This implies that continuous scaling may lead to the following problem: if a receiver triggers the sender to scale down the rate, all receivers from that point on get data at the lower rate, that is, a multimedia stream of worse quality. This approach is "all-worst" (or socialistic, to use a historical term), since the worst path in the multicast tree determines the quality for every receiver.

- HeiTS supports different connection types. HeiTS has guaranteed connections for which all required resources are reserved in advance, and hence the requested throughput can be guaranteed. Additionally, HeiTS supports best-effort connections, in which no resources or only part of the resources required are reserved in advance; thus congestion is possible.

The discrete scaling technique discussed in the following is based on splitting a multimedia stream into a set of substreams, as described in the beginning of Sect. 3. This technique can be used in a multicast environment and supports different rates for different receivers. It works in an "individual best" (capitalistic) fashion.
For each of the different substreams, a separate network layer connection is established. ST-II, the network protocol of HeiTS, in principle treats each of these substreams independently. However, the "stream group identifier" of ST-II can be used to indicate that several network connections belong to a single transport connection. The system can then try to achieve roughly the same delay for each of these network connections, which facilitates reassembly of the substreams as packets reach the target with approximately the same transit time.

For establishing a set of substreams, an application specifies the percentage of data that has to be transmitted to the receiver under any circumstances. If fewer data are transferred, a receiver cannot decode any useful information. These data are transferred over a guaranteed connection, if possible. If no guaranteed connection can be supported (for example, because there is an Ethernet in between), a best-effort connection is used for this portion of the stream as well.

The rest of the stream is transferred over one or more best-effort connections. How many connections of this kind are required depends on the granularity of the data stream: each part that provides a useful increase in quality is transferred over a separate best-effort connection.
Example: A video data stream is sent with 24 frames per second (fps). The sender decides that 6 fps have to be transferred under any circumstances to the receivers. These data are sent over the basic connection. The remaining 18 fps might be sent over two best-effort connections: 6 fps over the first and 12 fps over the second. In this example, the two best-effort connections have different throughput requirements. The video frames are then sent in the following order over the different connections:

frame:      1    2    3    4    5    6    7    8    9   ...
connection: bas  be2  be1  be2  bas  be2  be1  be2  bas ...

bas: sent over basic connection (guaranteed or best-effort)
be1: first best-effort connection
be2: second best-effort connection
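The frame-to-connection mapping of this example can be reproduced with a short helper (the function is illustrative; frame numbers start at 1):

```python
def assign_connection(frame_no):
    """24 fps split into 6 fps basic, 6 fps be1 and 12 fps be2:
    every 4th frame starting at frame 1 goes to the basic connection,
    every 4th frame starting at frame 3 goes to be1, and every
    even-numbered frame goes to be2."""
    if frame_no % 4 == 1:
        return "bas"
    if frame_no % 4 == 3:
        return "be1"
    return "be2"

order = [assign_connection(n) for n in range(1, 10)]
# reproduces the table: bas be2 be1 be2 bas be2 be1 be2 bas
```

Closing be2 therefore removes exactly 12 of the 24 fps while the remaining 12 fps, carried by bas and be1, stay evenly spaced in time.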
If a receiver detects some congestion on any of these connections, it closes the least important connection (that is, be2 in the example). If we have a multicast connection, this disconnect does not necessarily imply a termination of the whole connection, but only of its last hop to the receiver. This means that the other receivers can still receive the stream in full quality.
Example: In Fig. 3, Receiver 2 cannot keep up with the speed of the data stream. Thus, it has issued a disconnect request for the second best-effort connection. If after some time the receiver assumes that the congestion is over (see Sect. 3.2), it reconnects to the sender.

Fig. 3. Discrete scaling with substreams (a sender distributing a basic connection, a first best-effort connection, and a second best-effort connection to several receivers; Receiver 2 has dropped the second best-effort connection)

The discrete scaling approach has some advantages:

- It is applicable to multicast connections.
- The receivers are handled "individual best."
- Network routers require no knowledge of the traffic type.

However, the scheme implies that scaling can only be done with a coarse granularity.

4.1 Extensions to HeiTP functions

To support scaling, three major extensions were introduced to HeiTP. These additional features, which will be described in the next subsections, reflect the three stages of the scaling procedure described in Sect. 3.2.

The first step of congestion handling is to throw away packets based on importance parameters that HeiTP associates with the packets. In the second step, the receiver, having detected the congestion, triggers the sender to reduce the transmission rate. Reducing the rate is based on an extended HeiTP rate control mechanism. The third step in media scaling is the dynamic termination and reestablishment of network connections. This is provided by a call management mechanism that additionally helps the user to manage the transmission of a media stream over a group of substreams.

4.1.1 Importance

As a user should have some influence on the order in which individual packets are thrown away, HeiTP now includes importance parameters for packets. In case of a congestion, packets are discarded in the order of their importance.

4.1.2 Rate control

In HeiTP, rate control mechanisms affect, on the one hand, the transmission of data between peer transport entities and, on the other hand, transmission across the user interfaces between the transport service users and transport entities.

Rate control for transmission between transport entities is done in two different ways, depending on the type of the connection.

- For connections with guaranteed bandwidth, rate control mechanisms are realized locally within the transport entity on the sender side. The rate is static and determined at connection establishment time according to the application's requirements.

- In the case of best-effort connections, a distributed mechanism with dynamic rate adjustment is required. Congestion detection is done at the transport entity on the target side as described in Sect. 3. When the number of late or lost packets exceeds a certain threshold, a scale-down message is sent to the transport entity on the sender side, which lowers its transmission rate in response. Scaling up is done as discussed in Sect. 3.

Rate control for data transmission across the transport service interface is strongly coupled with transparent and nontransparent scaling. The application can decide which scaling method should be used by activating or deactivating this rate control.

- Transparent scaling is realized by switching off the rate control at the service interface. If the data rate of the application exceeds the transmission rate supported by the connection, packets will be thrown away by the transport entity on the sender side without any indication to the application.
