Design and Analysis of Frame-based Fair Queueing:
A New Traffic Scheduling Algorithm for Packet-Switched Networks

Dimitrios Stiliadis and Anujan Varma
Computer Engineering Department
University of California
Santa Cruz, CA 95064

This research is supported by the NSF Young Investigator Award No. MIP-9257103.
SIGMETRICS '96, Philadelphia, PA, USA. © 1996 ACM.

Abstract
In this paper we introduce and analyze frame-based fair queueing, a novel traffic scheduling algorithm for packet-switched networks. The algorithm provides end-to-end delay bounds identical to those of PGPS (packet-level generalized processor sharing), without the complexity of simulating the fluid-model system in the background as required in PGPS. The algorithm is therefore ideally suited for implementation in packet switches supporting a large number of sessions. We present a simple implementation of the algorithm for a general packet switch. In addition, we prove that the algorithm is fair in the sense that sessions are not penalized for excess bandwidth they received while other sessions were idle. Frame-based fair queueing belongs to a general class of scheduling algorithms, which we call Rate-Proportional Servers. This class of algorithms provides the same end-to-end delay and burstiness bounds as PGPS, but allows more flexibility in the design and implementation of the algorithm. We provide a systematic analysis of this class of schedulers and obtain bounds on their fairness.
I. INTRODUCTION
Providing QoS guarantees in a packet network requires the use of traffic scheduling algorithms in the switches (or routers). The function of a scheduling algorithm is to select, for each outgoing link of the switch, the packet to be transmitted in the next cycle from the available packets belonging to the flows sharing the output link. Implementation of the algorithm may be in hardware or software. In ATM networks, where information is transmitted in terms of small fixed-size cells, the scheduling algorithm is usually implemented in hardware. In a packet network with larger packet sizes, the algorithm may be implemented in software.
Several scheduling algorithms are known in the literature for bandwidth allocation and transmission scheduling in output-buffered switches. These include the packet-by-packet version of Generalized Processor Sharing
(PGPS) [1] (also known as Weighted Fair Queueing [2]), VirtualClock [3], Self-Clocked Fair Queueing (SCFQ) [4], Delay-Earliest-Due-Date (Delay-EDD) [5], Weighted Round Robin [6], and Deficit Round Robin [7]. Many of these algorithms are also capable of providing deterministic upper bounds on the end-to-end delay seen by a session when the burstiness of the session traffic is bounded (for example, shaped by a leaky bucket).
Based on their internal structure, traffic schedulers can be classified into two main types: sorted-priority and frame-based [8]. In a sorted-priority scheduler, there is a global variable, usually referred to as the virtual time, associated with each outgoing link of the switch. Each time a packet arrives or gets serviced, this variable is updated. A timestamp, computed as a function of this variable, is associated with each packet in the system. Packets are sorted based on their timestamps, and are transmitted in that order. VirtualClock [3], Weighted Fair Queueing [2], and Delay-EDD [5] follow this architecture. A frame-based scheduler, on the other hand, does not require a priority queue of packets to be maintained. Instead, bandwidth guarantees are provided by splitting time into frames of fixed or variable length, and limiting the amount of traffic a session is allowed to transmit during a frame period. Examples of frame-based schedulers include Hierarchical Round Robin [9], Stop-and-Go Queueing [10], Weighted Round Robin [6], and Deficit Round Robin [7].
A traffic scheduling algorithm must possess several desirable features to be useful in practice:
1. Isolation of flows: The algorithm must isolate an end-to-end session from the undesirable effects of other (possibly misbehaving) sessions. Note that isolation is necessary even when policing mechanisms are used to shape the flows at the entry point of the network, as the flows may accumulate burstiness within the network.
2. Low end-to-end delays: Real-time applications require low end-to-end delay guarantees from the network.
3. Utilization: The algorithm must utilize the link bandwidth efficiently.
4. Fairness: The available link bandwidth must be divided among the connections sharing the link in a fair manner. An unfair scheduling algorithm may offer widely different service rates to two connections with the same reserved rate over short intervals.
5. Simplicity of implementation: The scheduling algorithm must have a simple implementation. In an ATM network, the available time for completing a scheduling decision is very short, and the algorithm must be implemented in hardware. In packet networks with larger
packet sizes and/or lower speeds, a software implementation may be adequate, but scheduling decisions must still be made at a rate close to the arrival rate of packets.
6. Scalability: The algorithm must perform well in switches with a large number of connections, as well as over a wide range of link speeds.
Weighted Fair Queueing (WFQ), also known as Packet-level Generalized Processor Sharing (PGPS), is an ideal scheduling algorithm in terms of its fairness and delay bounds, but is complex to implement because of the need to simulate an equivalent fluid-model system in the background. Timestamp computations in PGPS have a time-complexity of O(V), where V is the number of sessions sharing the outgoing link. Self-Clocked Fair Queueing (SCFQ) [4] enables timestamp computations to be performed in O(1) time and has fairness comparable to that of PGPS, but results in increased end-to-end delay bounds [11], [12]. The VirtualClock scheduling algorithm [3] provides the same end-to-end delay bound as that of PGPS with a simple timestamp computation algorithm, but the price paid is in terms of fairness. A backlogged session in the VirtualClock server can be starved for an arbitrary period of time as a result of excess bandwidth it received from the server when other sessions were idle [1].
Frame-based fair queueing (FFQ) is a sorted-priority algorithm, and therefore uses timestamps to order packet transmissions. However, it requires only O(1) time for the timestamp calculation, independent of the number of sessions sharing the server. At the same time, the end-to-end delay guarantees of FFQ are identical to those obtained from a corresponding PGPS server. In addition, the server is fair in the sense that connections are always served proportionally to their reservations when they are backlogged, and are not penalized for an arbitrary amount of time for bandwidth they received while the system was empty. The algorithm uses a framing approach similar to that used in frame-based schedulers to update the state of the system; the transmission of packets, however, is still based on timestamps.
II. PRELIMINARIES
We assume a packet switch where a set of V connections share a common output link. The terms connection, flow, and session will be used synonymously. We denote by $\rho_i$ the rate allocated to connection i.
We assume that the servers are non-cut-through devices. Let $A_i(\tau, t)$ denote the arrivals from session i during the interval $(\tau, t]$ and $W_i(\tau, t)$ the amount of service received by session i during the same interval. In a system based on the fluid model, both $A_i(\tau, t)$ and $W_i(\tau, t)$ are continuous functions of t. However, in the packet-by-packet model, we assume that $A_i(\tau, t)$ increases only when the last bit of a packet is received by the server; likewise, $W_i(\tau, t)$ is increased only when the last bit of the packet in service leaves the server. Thus, the fluid model may be viewed as a special case of the packet-by-packet model with infinitesimally small packets.
Definition 1: A system busy period is a maximal interval of time during which the server is never idle.
[Figure 1 plots the arrival function $A_i$ against time, with marks at $t_1$, $t_2$, $t_3$, and $t_4$.]
Fig. 1. Intervals $(t_1, t_2]$ and $(t_3, t_4]$ are two different busy periods.
During a system busy period the server is always transmitting packets.
Definition 2: A backlogged period for session i is any period of time during which packets belonging to that session are continuously queued in the system.
Let $Q_i(t)$ represent the amount of session i traffic queued in the server at time t, that is,
$$Q_i(t) = A_i(0, t) - W_i(0, t).$$
A connection is backlogged at time t if $Q_i(t) > 0$.
Definition 3: A session i busy period is a maximal interval of time $(\tau_1, \tau_2]$ such that for any time $t \in (\tau_1, \tau_2]$, packets of connection i arrive with rate greater than or equal to $\rho_i$, or,
$$A_i(\tau_1, t) \ge \rho_i (t - \tau_1).$$
A session busy period is the maximal interval of time during which, if the session were serviced at exactly the guaranteed rate, it would remain continuously backlogged (Figure 1). Multiple session-i busy periods may appear during a system busy period.
The session busy period is defined only in terms of the arrival function and the allocated rate. It is important to realize the basic distinction between a session backlogged period and a session busy period. The latter is defined with respect to a hypothetical system where a backlogged connection i is serviced at a constant rate $\rho_i$, while the former is based on the actual system, where the instantaneous service rate varies according to the number of active connections and their service rates. Thus, a busy period may contain intervals during which the actual backlog of session i traffic in the system is zero; this occurs when the session receives an instantaneous service rate of more than $\rho_i$ during the busy period.
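As an illustration of Definition 3 (not part of the paper's development), the following sketch identifies session-i busy periods from a list of arrival impulses; the event format and function name are ours, and arrivals at the starting instant of a period are counted toward it.

```python
def session_busy_periods(arrival_events, rho_i):
    """Return the maximal intervals (tau1, tau2] over which the cumulative
    session-i arrivals keep pace with the allocated rate rho_i, i.e.
    A_i(tau1, t) >= rho_i * (t - tau1) for all t in the interval.

    arrival_events: list of (time, size) impulses sorted by time.
    """
    periods = []
    start, total = None, 0.0      # start of the current busy period and A_i(start, .)
    for time, size in arrival_events:
        if start is not None and total < rho_i * (time - start):
            # The rate envelope caught up before this arrival: the busy
            # period ended where rho_i * (t - start) met the arrivals.
            periods.append((start, start + total / rho_i))
            start = None
        if start is None:
            start, total = time, size  # a busy period can only begin with an arrival
        else:
            total += size
    if start is not None:
        periods.append((start, start + total / rho_i))
    return periods

# Hypothetical trace with rho_i = 1: a burst of 2 at t=0, then 1.5 at t=3 and 0.5 at t=3.5.
print(session_busy_periods([(0.0, 2.0), (3.0, 1.5), (3.5, 0.5)], rho_i=1.0))
# -> [(0.0, 2.0), (3.0, 5.0)]
```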
In [11], we introduced a general model for traffic scheduling algorithms, called Latency-Rate ($\mathcal{LR}$) servers. Any server in this class is characterized by two parameters: latency $\Theta_i$ and minimum allocated rate $\rho_i$. Let us assume that the j-th busy period of connection i starts at time $\tau$. We denote by $W_{i,j}^S(\tau, t)$ the total service provided by server S to the packets of the connection that arrived after time $\tau$ and until time t. Notice that the total service offered to connection i in this interval, $W_i^S(\tau, t)$, may actually be more than $W_{i,j}^S(\tau, t)$, since some packets from a previous busy period that are still queued in the system may be serviced as well.
Definition 4: A server S belongs to the class $\mathcal{LR}$ if and only if, for all times t after the time $\tau$ at which the j-th busy period started and until the packets that arrived during this period are serviced,
$$W_{i,j}^S(\tau, t) \ge \max\bigl(0,\ \rho_i (t - \tau - \Theta_i^S)\bigr).$$
$\Theta_i^S$ is the minimum non-negative number that can satisfy the above inequality.
The right-hand side of the above equation defines an envelope to bound the minimum service offered to session i during a busy period. It is easy to observe that the latency $\Theta_i^S$ represents the worst-case delay seen by a session-i packet arriving into an empty queue. The maximum delay through a network of $\mathcal{LR}$-servers can be computed from the knowledge of the latencies of the individual servers and the traffic model. Thus, the theory of $\mathcal{LR}$-servers allows us to determine tight upper bounds on end-to-end delays in a network of servers where the servers on a path may not all use the same scheduling algorithm.
The function $W_{i,j}^S(\tau, t)$ may be a step function in a packet-by-packet scheduler. As in the case of $W_i(\tau, t)$, we update $W_{i,j}^S(\tau, t)$ only when the last bit of a packet has been serviced. Only in the case of a fluid server can packets be arbitrarily small, and thus $W_{i,j}^S(\tau, t)$ may be continuous.
To determine end-to-end delay bounds, we assume that traffic from session i at the source is leaky-bucket shaped [13]. That is,
$$A_i(\tau, t) \le \sigma_i + \rho_i (t - \tau)$$
during any time interval $(\tau, t]$. Also, we assume that session i is allocated a minimum rate of $\rho_i$ in the network. We state without proof the following key result from [11].
Theorem 1: The maximum delay $D_i^K$ and the maximum backlog $Q_i^K$ of session i after the K-th node in an arbitrary network of $\mathcal{LR}$-servers are bounded as
$$D_i^K \le \frac{\sigma_i}{\rho_i} + \sum_{j=1}^{K} \Theta_i^{S_j},$$
$$Q_i^K \le \sigma_i + \rho_i \sum_{j=1}^{K} \Theta_i^{S_j},$$
where $\Theta_i^{S_j}$ is the latency of the j-th server on the path of the session.
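As a quick illustration of Theorem 1 (with hypothetical numbers, not values from the paper), the two bounds can be evaluated directly once the leaky-bucket parameters and the per-hop latencies are known:

```python
def end_to_end_bounds(sigma_i, rho_i, latencies):
    """Evaluate the Theorem 1 bounds for a (sigma_i, rho_i) leaky-bucket session
    traversing LR-servers whose latencies (in seconds) are listed in `latencies`."""
    delay_bound = sigma_i / rho_i + sum(latencies)      # bound on D_i^K
    backlog_bound = sigma_i + rho_i * sum(latencies)    # bound on Q_i^K
    return delay_bound, backlog_bound

# Hypothetical session: 15 kbit burst, 1 Mbit/s reserved rate, three hops with
# a latency of 2 ms each (e.g., L_i/rho_i + L_max/r for a PGPS-like server).
D, Q = end_to_end_bounds(sigma_i=15_000, rho_i=1_000_000, latencies=[0.002] * 3)
print(f"delay bound = {D * 1e3:.1f} ms, backlog bound = {Q:.0f} bits")  # 21.0 ms, 21000 bits
```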
In Table I we summarize the latencies of many well-known work-conserving schedulers, along with bounds on their fairness and implementation complexity. The fairness parameter in the table is the maximum difference in normalized service offered by the scheduler to two connections over any interval during which both connections are continuously backlogged. The implementation complexity is at least $O(\log_2 V)$ for all sorted-priority schedulers.
The packet-by-packet approximation of GPS (PGPS) has the lowest latency among all the packet-by-packet servers; thus, from Theorem 1, PGPS has the lowest bounds on end-to-end delay and buffer requirements. However, PGPS also has the highest implementation complexity. VirtualClock has the same latency as PGPS, but is not a fair algorithm [3], [1]. Notice, however, that none of the other algorithms suffers from such a high level of unfairness. In SCFQ, as well as in the round-robin schedulers, the latency is a function of the number of connections that share the output link. In a broadband network, the resulting end-to-end delay bounds may be prohibitively large.
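The effect of the (V - 1) term in the SCFQ latency can be seen with a small calculation based on the latency expressions of Table I; the link and session parameters below are hypothetical.

```python
def pgps_latency(L_i, L_max, rho_i, r):
    # Table I: Theta_PGPS = L_i / rho_i + L_max / r
    return L_i / rho_i + L_max / r

def scfq_latency(L_i, L_max, rho_i, r, V):
    # Table I: Theta_SCFQ = L_i / rho_i + (L_max / r) * (V - 1)
    return L_i / rho_i + (L_max / r) * (V - 1)

# ATM-like setting: 53-byte cells, 155 Mbit/s link, 1 Mbit/s reservation.
L, r, rho = 53 * 8, 155e6, 1e6
for V in (10, 100, 1000):
    print(V, pgps_latency(L, L, rho, r), scfq_latency(L, L, rho, r, V))
# The PGPS latency stays near 0.43 ms; the SCFQ latency grows past 3 ms at V = 1000.
```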
The GPS scheduler provides ideal fairness by offering the same normalized service to all backlogged connections at every instant of time. Thus, if we represent the total amount of service received by each session by a function, then these functions can be seen to grow at the same rate for each backlogged session. Golestani [4] introduced such a function and called it virtual time. The virtual time of a backlogged session is a function whose rate of growth at each instant is exactly the rate of normalized service provided to it by the scheduler at that instant. Similarly, we can define a global virtual-time function that increases at the rate of the total service performed by the scheduler at each instant during a server-busy period. In a GPS scheduler, the virtual times of all backlogged connections are identical at every instant, and are equal to the global virtual time. This is achieved by setting the virtual time of a connection to the global virtual time when it becomes backlogged and then increasing the former at the rate of the instantaneous normalized service received by the connection during the backlogged period. This allows an idle connection to receive service immediately once it becomes backlogged, resulting in zero latency.
We introduce such a function to represent the state of each connection in a scheduler and call it potential. The potential of a connection is a non-decreasing function of time during a system-busy period. When connection i is backlogged, its potential increases exactly by the normalized service it received. That is, if $P_i(t)$ denotes the potential of connection i at time t, then, during any interval $(\tau, t]$ within a backlogged period for session i,
$$P_i(t) - P_i(\tau) = \frac{W_i(\tau, t)}{\rho_i}.$$
Note that the potentials of all connections can be initialized to zero at the beginning of a system-busy period, since all state information can be reset when the system becomes idle.
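In code-sketch form (our illustration; the class and method names are not from the paper), the per-connection bookkeeping implied by this definition is simply:

```python
class ConnectionPotential:
    """Tracks the potential of one connection: it rises by the normalized
    service W_i / rho_i while the connection is backlogged and stays constant
    while the connection is idle."""

    def __init__(self, rho_i):
        self.rho_i = rho_i
        self.value = 0.0        # reset to zero at the start of a system busy period

    def add_service(self, amount):
        # `amount` units of service received translate into amount / rho_i potential.
        self.value += amount / self.rho_i
```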
From the above definition of potentials, it is clear that a fair algorithm must attempt to increase the potentials of all backlogged connections at the same rate, the rate of increase of the system potential. Thus, the basic objective is to equalize the potential of each connection. Sorted-priority schedulers such as GPS, PGPS, SCFQ, and VirtualClock all attempt to achieve this objective. However, in our definition of potential, we did not specify how the potential of a connection is updated when it is idle, except that the potential is non-decreasing. Scheduling algorithms differ in the way they update the potentials of idle connections. Ideally, during every time interval that a connection i is not backlogged, its potential must increase by the normalized service that the connection could receive if it were backlogged. We will call this service the missed service of connection i. If the potential of an idle connection is increased by the service it missed, it is easy to see that, when the connection becomes
busy again, its potential will be identical to that of other backlogged connections in the system, allowing it to receive service immediately.

Server | Latency | Fairness | Complexity
GPS | $0$ | $0$ | -
PGPS | $L_i/\rho_i + L_{max}/r$ | $\max(C_i + L_{max}/\rho_i + L_j/\rho_j,\ C_j + L_{max}/\rho_j + L_i/\rho_i)$, where $C_i = \min((V-1)L_{max}/r,\ \max_{1 \le n \le V}(L_n/\rho_n))$ | $O(V)$
SCFQ | $L_i/\rho_i + (L_{max}/r)(V-1)$ | $L_i/\rho_i + L_j/\rho_j$ | $O(\log V)$
VirtualClock | $L_i/\rho_i + L_{max}/r$ | $\infty$ | $O(\log V)$
Deficit Round Robin | $(3F - 2\phi_i)/r$ | $3F/r$ | $O(1)$
Weighted Round Robin | $(F - \phi_i + L_c)/r$ | $F/r$ | $O(1)$

TABLE I. Latency, fairness, and implementation complexity of several work-conserving servers. $L_i$ is the maximum packet size of session i and $L_{max}$ the maximum packet size among all the sessions. $C_i$ is the maximum normalized service that a session may receive in a PGPS server in excess of that in the GPS server. In weighted round-robin and deficit round-robin, $F$ is the frame size and $\phi_i$ is the amount of traffic in the frame allocated to session i. $L_c$ is the size of the fixed packet (cell) in weighted round-robin.
One way to update the potential of a connection when it becomes backlogged is to define a system potential that keeps track of the progress of the total work done by the scheduler. The system potential P(t) is a non-decreasing function of time. When an idle session i becomes backlogged at time t, its potential $P_i(t)$ can be set to P(t) to account for the service it missed. Schedulers use different functions to maintain the system potential, giving rise to widely different delay and fairness behaviors. In general, the system potential at time t can be defined as a non-decreasing function of the potentials of the individual connections before time t, and the real time t. Let $t^-$ denote the instant just before time t. Then,
$$P(t) = \mathcal{F}\bigl(P_1(t^-), P_2(t^-), \ldots, P_V(t^-), t\bigr).$$
For example, the GPS server initializes the potential of a newly backlogged connection to that of a connection currently backlogged in the system. That is,
$$P(t) = P_i(t), \quad \text{for any } i \in B(t),$$
where B(t) is the set of backlogged connections at time t.
The VirtualClock scheduler, on the other hand, initializes the potential of a connection to the real time when it becomes backlogged, so that
$$P(t) = t. \qquad (2.1)$$
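To make the role of the system potential concrete, here is a small sketch (ours, with hypothetical function names) of the two choices of P(t) just described, together with the update applied when an idle connection becomes backlogged; the fall-back to real time when nothing is backlogged is an assumption of the sketch.

```python
def system_potential_gps(backlogged_potentials, t):
    """GPS-like choice: the system potential tracks the (common) potential of the
    currently backlogged connections; fall back to real time when none is backlogged."""
    return min(backlogged_potentials) if backlogged_potentials else t

def system_potential_virtualclock(backlogged_potentials, t):
    """VirtualClock choice, equation (2.1): the system potential is real time."""
    return t

def potential_on_arrival(P_i, system_potential):
    """When an idle connection becomes backlogged, credit the service it missed by
    lifting its potential to the system potential; the max keeps it non-decreasing."""
    return max(P_i, system_potential)
```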
The utility of the system potential function P(t) is in estimating the amount of service missed by a connection while it was idle. In an ideal server like GPS, the system potential is always equal to the potential of the connections that are currently backlogged and are thus receiving service. However, this approach requires that all connections can receive service at the same time. In a packet-by-packet scheduler we need to relax this constraint, since only one connection can be serviced at a time. In the next section we will formulate the necessary conditions that the system potential function must satisfy in order for the server to have zero latency.
If the potential of a newly backlogged connection is estimated to be higher than the potential of the connections currently being serviced, the former may have to wait for all the other connections before it can be serviced. The self-clocked fair queueing (SCFQ) algorithm is a self-contained approach to estimating the system potential function. The potential of the system is estimated by the finish potential of the packet that is currently being serviced. This approximation may assign to the system potential function a value greater than the potential of some backlogged connections. Thus, a connection may not receive service immediately when a busy period starts. This behavior is different from that in GPS, where an idle connection starts to receive service immediately when it becomes backlogged.
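In the same sketch form (ours, not the paper's), the SCFQ estimate of the system potential reduces to reading the timestamp of the packet in service:

```python
def system_potential_scfq(packet_in_service, reset_value=0.0):
    """SCFQ estimate: the system potential is the finish potential (timestamp) of the
    packet currently in service; when the link is idle the state can be reset."""
    return packet_in_service.finish_potential if packet_in_service else reset_value
```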
III. RATE-PROPORTIONAL SERVERS
We now use the concept of potential introduced in the last section to define a general class of schedulers, which we call Rate-Proportional Servers (RPS). We will first define these schedulers based on the fluid model and later extend the definition to the packet-by-packet version. We denote the set of backlogged connections at time t by B(t). We will later show how the choice of the function P(t) influences the delay and fairness behavior of the scheduler.
Definition 5: A rate-proportional server has the following properties:
1. Rate $\rho_i$ is allocated to connection i, and
$$\sum_{i=1}^{V} \rho_i \le r,$$
where r is the total service rate of the server.
2. A potential function $P_i(t)$ is associated with each connection i in the system, describing the state of the connection at time t. This function must satisfy the following properties:
(a) When a connection is not backlogged, its potential remains constant.
(b) If a connection becomes backlogged at time $\tau$, then
$$P_i(\tau) = \max\bigl(P_i(\tau^-), P(\tau^-)\bigr). \qquad (3.1)$$
(c) For every time $t > \tau$ that the connection remains backlogged, the potential function of the connection is increased by the normalized service offered to that connection during the interval $(\tau, t]$. That is,
$$P_i(t) = P_i(\tau) + \frac{W_i(\tau, t)}{\rho_i}.$$
3. The system potential function P(t) describes the state of the system at time t. Two main conditions must be satisfied by the function P(t):
(a) For any interval $(t_1, t_2]$ during a system busy period,
$$P(t_2) - P(t_1) \ge t_2 - t_1. \qquad (3.2)$$
(b) The system potential is always less than or equal to the potential of all backlogged connections at time t. That is,
$$P(t) \le \min_{j \in B(t)} \bigl(P_j(t)\bigr). \qquad (3.3)$$
4. Connections are serviced at each instant t according to their instantaneous potentials as per the following rules:
(a) Among the set of backlogged connections, only the set of connections with the minimum potential at time t is serviced.
(b) Each connection in this set is serviced with an instantaneous rate proportional to its reservation, so as to increase the potentials of the connections in this set at the same rate.
The above definition specifies the properties of the system potential function for constructing a zero-latency server, but does not define it precisely. In practice, the system potential function must be chosen such that the scheduler can be implemented efficiently. When we introduce the frame-based fair queueing algorithm in the next section, it will become clear how this definition can be used to design a practical scheduling algorithm.
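The following sketch (our own illustration, not code from the paper) simulates a discretized fluid rate-proportional server according to Definition 5: at each small time step it serves only the backlogged connections of minimum potential and splits the link rate r among them in proportion to their reservations. With a small enough step this approximates the continuous dynamics, since the potentials of the served set rise at a common rate.

```python
def fluid_rps_step(potentials, backlogged, rho, r, dt, eps=1e-9):
    """Advance a discretized fluid RPS by one step of length dt.

    potentials: dict conn -> current potential P_i(t)
    backlogged: set of connections with queued traffic
    rho:        dict conn -> reserved rate rho_i (with sum(rho) <= r)
    Returns a dict conn -> amount of service granted in this step.
    """
    service = {c: 0.0 for c in potentials}
    if not backlogged:
        return service
    p_min = min(potentials[c] for c in backlogged)
    active = [c for c in backlogged if potentials[c] <= p_min + eps]  # rule 4(a)
    rho_sum = sum(rho[c] for c in active)
    for c in active:
        share = r * dt * rho[c] / rho_sum   # rule 4(b): proportional split of the link
        service[c] = share
        potentials[c] += share / rho[c]     # rule 2(c): normalized service
    return service
```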
GPS multiplexing is a rate-proportional server where the system potential is always equal to the potential of the backlogged connections. Since the service rate offered to the connections is proportional to their reservations at every instant, the normalized service they receive during an interval $(t_1, t_2]$ is never less than $(t_2 - t_1)$. Thus, the amount of service received by a connection i, backlogged during the interval $(t_1, t_2]$, is given by
$$W_i(t_1, t_2) \ge \rho_i (t_2 - t_1),$$
and therefore,
$$\frac{W_i(t_1, t_2)}{\rho_i} \ge t_2 - t_1.$$
VirtualClock is a rate-proportional server as well. Consider a server where the system potential function is defined as $P(t) = t$. It is easy to verify that such a server satisfies all the properties of a rate-proportional server. Consider a packet-by-packet server that transmits packets in increasing order of their finishing potentials. Such a server is equivalent to the packet-by-packet VirtualClock server.
We now proceed to show that every rate-proportional server is a zero-latency server. This will establish that this class of servers provides the same upper bounds on end-to-end delay as GPS. To prove this result, we first introduce the following definitions:
Definition 6: A session-i active period is a maximal interval of time during a system busy period over which the potential of the session is not less than the potential of the system. Any other period will be considered an inactive period for session i.
The concept of an active period is useful in analyzing the behavior of a rate-proportional scheduler. When a connection is in an inactive period, it cannot be backlogged and therefore cannot be receiving any service. On the other hand, an active period need not be the same as a backlogged period for the connection. Since, in a rate-proportional server, the potential of a connection can be below the system potential only when the connection is idle, a transition from the inactive to the active state can occur only upon the arrival of a packet of a connection that is currently idle and whose potential is below that of the system. A connection in an active period may not receive service throughout the active period, since a rate-proportional server services only the connections with the minimum potential at each instant. However, it always receives service at the beginning of the active period, since its potential is set equal to the system potential at that time.
Since $\mathcal{LR}$-servers are defined in terms of busy periods, it is necessary to establish the correspondence between busy periods and active periods in a rate-proportional server. We will now show that the beginning of a busy period is the beginning of an active period as well.
Lemma 1: If $\tau$ is the beginning of a session-i busy period in a rate-proportional server, then $\tau$ is also the beginning of an active period for session i.
A proof of this and subsequent lemmas and theorems can be found in [14]. When connection i becomes active, its potential is the minimum among all backlogged connections,
enabling it to receive service immediately. However, if a subsequent connection j becomes active during the busy period of connection i, then the service of i may be temporarily suspended until the potentials of i and j become equal. In the following lemma, we derive a lower bound on the amount of service received by connection i during an active period.
Lemma 2: Let $\tau$ be the time at which a connection i becomes active in a rate-proportional server. Then, at any time $t > \tau$ that belongs in the same active period, the service offered to connection i is
$$W_i(\tau, t) \ge \rho_i (t - \tau).$$
This lemma is proved in [14]. Intuitively, this result asserts that the service of a backlogged connection is suspended only if it has received more service than its allocated rate earlier during the active period.
A session busy period may actually consist of multiple session active periods. In order to prove that a rate-proportional server is an $\mathcal{LR}$ server with zero latency, we need to prove that for every time t after the beginning of the j-th busy period at time $\tau$,
$$W_{i,j}(\tau, t) \ge \rho_i (t - \tau).$$
The above lemmas lead us to one of our key results:
Theorem 2: A rate-proportional server belongs to the class $\mathcal{LR}$ and has zero latency.
The main argument for proving this theorem is that during inactive periods the connection is not backlogged and is thus receiving no service. By Lemma 2, the connection can receive less than its allocated bandwidth only during an inactive period. However, since no packets are waiting to be serviced in an inactive period, the connection busy period must have ended by then. The formal proof can be found in [14].
Thus, the definition of rate-proportional servers provides us with a tool to design scheduling algorithms with zero latency. Since both GPS and VirtualClock can be considered as rate-proportional servers, by Theorem 2, they have the same worst-case delay behavior.
A. Packet-by-Packet Rate-Proportional Servers
A packet-by-packet rate-proportional server can be defined in terms of the fluid model as one that transmits packets in increasing order of their finishing potential. Let us assume that when a packet from connection i finishes service in the fluid server, the potential of connection i is $TS_i$. We can use this finishing potential to timestamp packets and schedule them in increasing order of their timestamps. We call such a server a packet-by-packet rate-proportional server (PRPS).
In the following, we denote the maximum packet size of session i as $L_i$ and the maximum packet size among all the sessions as $L_{max}$.
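A schematic of the packet-selection rule follows (our illustration; the timestamps $TS_i$ are assumed to be supplied by the reference fluid system as described above):

```python
import heapq

class PRPSQueue:
    """Schematic packet-by-packet rate-proportional scheduler: each packet is
    stamped with its finishing potential in the fluid reference system, and
    packets are transmitted in increasing timestamp order."""

    def __init__(self):
        self._heap = []
        self._seq = 0                 # FIFO tie-breaker for equal timestamps

    def enqueue(self, packet, finishing_potential):
        heapq.heappush(self._heap, (finishing_potential, self._seq, packet))
        self._seq += 1

    def next_packet(self):
        """Return the queued packet with the smallest timestamp, or None if empty."""
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet
```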
In order to analyze the performance of a packet-by-packet rate-proportional server, we will bound the difference in service offered between the packet-by-packet server and the fluid server when the same pattern of arrivals is applied to both servers. Let us assume that the service offered to session i during the interval $(\tau, t]$ by the fluid server is $W_i^F(\tau, t)$ and by the packet-by-packet server is $W_i^P(\tau, t)$. Let us assume that the k-th packet leaves the system under the PRPS service discipline at time $t_k^P$. The same packet leaves the RPS server at time $t_k^F$. Using an approach similar to the one used for GPS servers [1], we can prove the following lemma:
Lemma 3: For all packets in a packet-by-packet rate-proportional server,
$$t_k^P \le t_k^F + \frac{L_{max}}{r}.$$
If we include the partial service received by packets in transmission, the maximum lag in service for a session i in the packet-by-packet server occurs at the instant when a packet starts service. Let us denote by $\hat{W}_i(t)$ the service offered to connection i at time t if this partial service is included. At the instant when the k-th packet starts service in PRPS,
$$W_i^F(0, t) - \hat{W}_i^P(0, t) \le L_{max}.$$
Thus, we can state the following corollary:
Corollary 1: At any time t,
$$W_i^F(0, t) - W_i^P(0, t) \le L_{max}.$$
In order to be complete, we also have to bound the amount by which the service of a session in the packet-by-packet server can be ahead of that in the fluid server. Packets are serviced in PRPS in increasing order of their finishing potentials. If packets from multiple connections have the same finishing potential, then one of them will be selected for transmission first by the packet-by-packet server, causing the session to receive more service temporarily than in the fluid server. In order to bound this additional service, we need to determine the service that the connection receives in the fluid server. The latter, in turn, requires knowledge of the potentials of the other connections sharing the same outgoing link. We will use the following lemma to derive such an upper bound.
Lemma 4: Let (0, t] be a server-busy period in the fluid server. Let i be a session backlogged in the fluid server at time t such that i received more service in the packet-by-packet server in the interval (0, t]. Then there is another session j, with $P_j(t) \le P_i(t)$, that received more service in the fluid server than in the packet-by-packet server during the interval (0, t].
A proof of this lemma can be found in [14]. We will now use the above lemma and a method similar to the one presented in [15] for the PG
