IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 1, NO. 3, JUNE 1993
A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case

Abhay K. Parekh, Member, IEEE, and Robert G. Gallager, Fellow, IEEE
Abstract--The problem of allocating network resources to the users of an integrated services network is investigated in the context of rate-based flow control. The network is assumed to be a virtual circuit, connection-based packet network. We show that the use of Generalized Processor Sharing (GPS), when combined with Leaky Bucket admission control, allows the network to make a wide range of worst-case performance guarantees on throughput and delay. The scheme is flexible in that different users may be given widely different performance guarantees, and is efficient in that each of the servers is work conserving. We present a practical packet-by-packet service discipline, PGPS (first proposed by Demers, Shenker, and Keshav [7] under the name of Weighted Fair Queueing), that closely approximates GPS. This allows us to relate results for GPS to the packet-by-packet scheme in a precise manner.

In this paper, the performance of a single-server GPS system is analyzed exactly from the standpoint of worst-case packet delay and burstiness when the sources are constrained by leaky buckets. The worst-case session backlogs are also determined. In the sequel to this paper, these results are extended to arbitrary topology networks with multiple nodes.
I. INTRODUCTION

This paper and its sequel [17] focus on a central problem in the control of congestion in high-speed integrated services networks. Traditionally, the flexibility of data networks has been traded off with the performance guarantees given to its users. For example, the telephone network provides good performance guarantees but poor flexibility, while packet-switched networks are more flexible but only provide marginal performance guarantees. Integrated services networks must carry a wide range of traffic types and still be able to provide performance guarantees to real-time sessions such as voice and video. We will investigate an approach to reconcile these apparently conflicting demands when the short-term demand for link usage frequently exceeds the usable capacity.

We propose the combined use of a packet service discipline based on Generalized Processor Sharing and Leaky Bucket rate control to provide flexible, efficient, and fair use of the links.
Manuscript received June 1992; revised February and April 1992; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor Moshe Sidi. This paper was presented in part at IEEE INFOCOM '92. The research of A. Parekh was partly funded by a Vinton Hayes Fellowship and a Center for Intelligent Control Systems Fellowship. The research of R. Gallager was funded by the National Science Foundation under 8802991-NCR and by the Army Research Office under DAAL03-86-K-0171.
A. K. Parekh is with the IBM T. J. Watson Research Center, Yorktown Heights, NY 10598.
R. G. Gallager is with the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA.
IEEE Log Number 9211033.
Neither Generalized Processor Sharing nor its packet-based version, PGPS, is new. Generalized Processor Sharing is a natural generalization of uniform processor sharing [14], and the packet-based version (while developed independently by us) was first proposed in [7] under the name of Weighted Fair Queueing. Our contribution is to suggest the use of PGPS in the context of integrated services networks and to combine this mechanism with Leaky Bucket admission control in order to provide performance guarantees in a flexible environment. A major part of our work is to analyze networks of arbitrary topology using these specialized servers, and to show how the analysis leads to implementable schemes for guaranteeing worst-case packet delay. In this paper, however, we will restrict our attention to sessions at a single node, and postpone the analysis of arbitrary topologies to the sequel.
Our approach can be described as a strategy for rate-based flow control. Under rate-based schemes, a source's traffic is parametrized by a set of statistics such as average rate, maximum rate, and burstiness, and is assigned a vector of values corresponding to these parameters. The user also requests a certain quality of service that might be characterized, for example, by tolerance to worst-case or average delay. The network checks to see if a new source can be accommodated and, if so, takes actions (such as reserving transmission links or switching capacity) to ensure the desired quality of service. Once a source begins sending traffic, the network ensures that the agreed-upon values of the traffic parameters are not violated.

Our analysis will concentrate on providing guarantees on throughput and worst-case packet delay. While packet delay in the network can be expressed as the sum of the processing, queueing, transmission, and propagation delays, we will focus exclusively on how to limit queueing delay.
We will assume that rate admission control is done through leaky buckets [20]. An important advantage of using leaky buckets is that this allows us to separate the packet delay into two components: delay in the leaky bucket and delay in the network. The first of these components is independent of the other active sessions, and can be estimated by the user if the statistical characterization of the incoming data is sufficiently simple (see [1, Sect. 6.3] for an example). The traffic entering the network has been "shaped" by the leaky bucket in a manner that can be succinctly characterized (we will do this in Section V), and so the network can upper bound the second component of packet delay through this characterization.
This upper bound is independent of the statistics of the incoming data, which is helpful in the usual case where these statistics are either complex or unknown. A similar approach to the analysis of interconnection networks has been taken by Cruz [5]. From this point on, we will not consider the delay in the leaky bucket.
Generalized Processor Sharing (GPS) is defined and explained in Section II. In Section III, we present the packet-based scheme, PGPS, and show that it closely approximates GPS. Results obtained in this section allow us to translate session delay and buffer requirement bounds derived for a GPS server system to a PGPS server system. We propose a virtual time implementation of PGPS in the next section. Then, PGPS is compared to weighted round robin, virtual clock multiplexing [21], and stop-and-go queueing [9]-[11]. Having established PGPS as a desirable multiplexing scheme, we turn our attention to the rate enforcement function in Section V. The Leaky Bucket is described and proposed as a desirable strategy for admission control. We then proceed with an analysis, in Sections VI-VIII, of a single GPS server system in which the sessions are constrained by leaky buckets. The results obtained here are crucial in the analysis of the arbitrary topology, multiple node networks that we will treat in the sequel to this paper.
II. GPS MULTIPLEXING

The choice of an appropriate service discipline at the nodes of the network is key to providing effective flow control. A good scheme should allow the network to treat users differently, in accordance with their desired quality of service. However, this flexibility should not compromise the fairness of the scheme, i.e., a few classes of users should not be able to degrade service to other classes to the extent that performance guarantees are violated. Also, if one assumes that the demand for high-bandwidth services is likely to keep pace with the increase in usable link bandwidth, time and frequency multiplexing are too wasteful of the network resources to be considered candidate multiplexing disciplines. Finally, the service discipline must be analyzable so that performance guarantees can be made in the first place. We now present a flow-based multiplexing discipline called Generalized Processor Sharing that is efficient, flexible, and analyzable, and that therefore seems very appropriate for integrated services networks. However, it has the significant drawback of not transmitting packets as entities. In Section III, we will present a packet-based multiplexing discipline that is an excellent approximation to GPS even when the packets are of variable length.
A Generalized Processor Sharing (GPS) server is work conserving and operates at a fixed rate r. By work conserving, we mean that the server must be busy if there are packets waiting in the system. It is characterized by positive real numbers φ_1, φ_2, ..., φ_N. Let S_i(τ, t) be the amount of session i traffic served in an interval (τ, t]. A session i is backlogged at time t if a positive amount of that session's traffic is queued at time t. Then, a GPS server is defined as one for which

    S_i(τ, t) / S_j(τ, t) ≥ φ_i / φ_j,    j = 1, 2, ..., N    (1)

for any session i that is continuously backlogged in the interval (τ, t]. Summing (1) over all sessions j gives

    S_i(τ, t) Σ_j φ_j ≥ (t − τ) r φ_i,

and session i is guaranteed a rate of

    g_i = (φ_i / Σ_j φ_j) r.    (2)
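The guaranteed rate in (2) depends only on the weight assignment and the server rate, so it can be computed directly. The short Python sketch below illustrates this; the weights and server rate used are made-up example values, not parameters taken from any particular system.

# Illustration of (2): guaranteed session rates under GPS.
# The weights phi and server rate r below are arbitrary example values.

def guaranteed_rates(phi, r):
    """Return g_i = phi_i / sum_j(phi_j) * r for each session i."""
    total = sum(phi)
    return [p * r / total for p in phi]

print(guaranteed_rates([1.0, 1.0, 2.0], r=1.0))   # -> [0.25, 0.25, 0.5]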
`
GPS is an attractive multiplexing scheme for a number of reasons:

- Define ρ_i to be the session i average rate. Then, as long as ρ_i ≤ g_i, the session can be guaranteed a throughput of ρ_i, independent of the demands of the other sessions. In addition to this throughput guarantee, a session i backlog will always be cleared at a rate of at least g_i.
- The delay of an arriving session i bit can be bounded as a function of the session i queue length, independent of the queues and arrivals of the other sessions. Schemes such as FCFS, LCFS, and Strict Priority do not have this property.
- By varying the φ_i's, we have the flexibility of treating the sessions in a variety of different ways. For example, when all the φ_i's are equal, the system reduces to uniform processor sharing. As long as the combined average rate of the sessions is less than r, any assignment of positive φ_i's yields a stable system. For example, a high-bandwidth, delay-insensitive session i can be assigned a g_i much less than its average rate, thus allowing for better treatment of the other sessions.
- Most importantly, it is possible to make worst-case network queueing delay guarantees when the sources are constrained by leaky buckets. We will present our results on this later. Thus, GPS is particularly attractive for sessions sending real-time traffic such as voice and video.
Fig. 1 illustrates generalized processor sharing. Variable-length packets arrive from both sessions on infinite-capacity links and appear as impulses to the system. For i = 1, 2, let A_i(0, t) be the amount of session i traffic that arrives at the system in the interval (0, t] and, similarly, let S_i(0, t) be the amount of session i traffic that is served in the interval (0, t]. We assume that the server works at rate 1. When φ_1 = φ_2 and both sessions are backlogged, they are each served at rate 1/2 (e.g., in the interval [1, 6]). When 2φ_1 = φ_2 and both sessions are backlogged, session 1 is served at rate 1/3 and session 2 at rate 2/3. Notice how increasing the relative weight φ_2 leads to better treatment of that session in terms of both backlog and delay: the delay to session 2 goes down by one time unit, and the delay to session 1 goes up by one time unit. Also, notice that under both choices of φ_i the system is empty at time 13, since the server is work conserving under GPS.

It should be clear from the example that the delays experienced by a session's packets can be reduced by increasing the value of φ for that session. This reduction, though, may be at the expense of a corresponding increase in delay for packets from the other sessions. Fig. 2 demonstrates that this may not be the case when the better-treated session is steady.
TABLE I
HOW GPS AND PGPS COMPARE FOR THE EXAMPLE IN FIG. 1
(For each packet, the table lists its arrival time and size, and its departure time under GPS and under PGPS for the two weight assignments φ_1 = φ_2 and 2φ_1 = φ_2. The lower portion of the table gives the packet departure times under both schemes.)
`
Fig. 1. An example of generalized processor sharing.
`
Thus, when combined with appropriate rate enforcement, the flexibility of GPS multiplexing can be used effectively to control packet delay.
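The fluid behavior illustrated in Fig. 1 follows directly from (1): between events, every backlogged session i is served at rate φ_i r / Σ_j φ_j, where the sum runs over the currently backlogged sessions. The Python sketch below is our own illustrative simulation of this rule; the arrival pattern in the usage comment is a made-up example, not the one of Fig. 1.

# Fluid simulation of a GPS server: between events, each backlogged session i is
# served at rate phi_i * r / (sum of phi_j over the backlogged sessions j).
# arrivals[i] is a list of (arrival_time, size) pairs for session i, sorted by time.

def gps_departures(arrivals, phi, r=1.0):
    """Return, for each session, a list of (arrival_time, departure_time) pairs."""
    n = len(arrivals)
    pending = [list(a) for a in arrivals]      # packets that have not yet arrived
    queue = [[] for _ in range(n)]             # queued packets: [arrival, remaining_size]
    done = [[] for _ in range(n)]
    now = 0.0
    while True:
        for i in range(n):                     # admit packets that have arrived by now
            while pending[i] and pending[i][0][0] <= now + 1e-12:
                t, size = pending[i].pop(0)
                queue[i].append([t, float(size)])
        busy = [i for i in range(n) if queue[i]]
        next_arrival = min((p[0][0] for p in pending if p), default=None)
        if not busy:
            if next_arrival is None:
                return done
            now = next_arrival                 # idle period: jump to the next arrival
            continue
        total_phi = sum(phi[i] for i in busy)
        rate = {i: phi[i] * r / total_phi for i in busy}
        # next event: an arrival, or a head-of-line packet finishing service
        dt = min(queue[i][0][1] / rate[i] for i in busy)
        if next_arrival is not None:
            dt = min(dt, next_arrival - now)
        for i in busy:
            queue[i][0][1] -= rate[i] * dt
            if queue[i][0][1] <= 1e-9:
                arrival, _ = queue[i].pop(0)
                done[i].append((arrival, now + dt))
        now += dt

# Example (made-up data): session 1 sends unit-size packets at t = 0 and t = 1,
# session 2 sends a single packet of size 3 at t = 0, with phi_1 = phi_2.
# gps_departures([[(0, 1), (1, 1)], [(0, 3)]], phi=[1, 1]) gives session 1
# departures at t = 2 and t = 4, and the session 2 departure at t = 5.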
`
III. A PACKET-BY-PACKET TRANSMISSION SCHEME - PGPS

A problem with GPS is that it is an idealized discipline that does not transmit packets as entities. It assumes that the server can serve multiple sessions simultaneously and that the traffic is infinitely divisible. In this section, we present a simple packet-by-packet transmission scheme that is an excellent approximation to GPS even when the packets are of variable length. Our idea is identical to the one used in [7]. We will adopt the convention that a packet has arrived only after its last bit has arrived.
Fig. 2. The effect of increasing φ_i for a steady session i.

Let F_p be the time at which packet p will depart (finish service) under Generalized Processor Sharing. Then, a very good approximation of GPS would be a work-conserving scheme that serves packets in increasing order of F_p. Now, suppose that the server becomes free at time τ. The next packet to depart under GPS may not have arrived at time τ and, since the server has no knowledge of when this packet will arrive, there is no way for the server to be both work conserving and to serve the packets in increasing order of F_p. The server picks the first packet that would complete service in the GPS simulation if no additional packets were to arrive after time τ. Let us call this scheme PGPS, for packet-by-packet Generalized Processor Sharing. As stated earlier, this mechanism was originally called Weighted Fair Queueing [7].

Table I shows how PGPS performs for the example in Fig. 1. Notice that when φ_1 = φ_2, the first packet to complete service under GPS is the session 1 packet that arrives at time 1. However, the PGPS server is forced to begin serving the long session 2 packet at time 0, since there are no other packets in the system at that time. Thus, the session 1 packet arriving at time 1 departs the system at time 4, i.e., 1 time unit later than it would depart under GPS.
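Stated compactly, whenever the PGPS server becomes free it begins service on the packet, among those that have already arrived, with the smallest GPS finish time. The Python sketch below is our own paraphrase of this rule; it takes the GPS finish times as given (for example, from a fluid simulation such as the one sketched in Section II) rather than computing them, and the tuple layout of a packet is an assumption made purely for the illustration.

# PGPS ordering rule: when the server frees up, serve the queued packet with the
# smallest finish time in the simulated GPS system.  A packet is represented here
# as a tuple (arrival_time, size, gps_finish_time); the server rate is r.

def pgps_departures(packets, r=1.0):
    """Return a list of (packet, pgps_departure_time) in order of service."""
    remaining = sorted(packets, key=lambda p: p[0])     # order by arrival time
    now = 0.0
    schedule = []
    while remaining:
        arrived = [p for p in remaining if p[0] <= now]
        if not arrived:                                 # server idle: jump to next arrival
            now = min(p[0] for p in remaining)
            continue
        nxt = min(arrived, key=lambda p: p[2])          # smallest GPS finish time
        remaining.remove(nxt)
        now += nxt[1] / r                               # transmit the whole packet
        schedule.append((nxt, now))
    return schedule

For the φ_1 = φ_2 example of Table I, this rule starts the long session 2 packet at time 0 simply because it is the only packet present, which is exactly the effect described above.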
A natural issue to examine at this point is how much later packets may depart the system under PGPS relative to GPS. First, we present a useful property of GPS systems.
Lemma 1: Let p and p' be packets in a GPS system at time τ, and suppose that packet p completes service before packet p' if there are no arrivals after time τ. Then, packet p will also complete service before packet p' for any pattern of arrivals after time τ.

Proof: The sessions to which packets p and p' belong are both backlogged from time τ until one of the packets completes transmission. By (1), the ratio of the service received by these sessions is independent of future arrivals.

A consequence of this lemma is that if PGPS schedules a packet p at time τ before another packet p' that is also backlogged at time τ, then in the simulated GPS system packet p cannot leave later than packet p'. Thus, the only packets that are delayed more in PGPS are those that arrive too late to be transmitted in their GPS order. Intuitively, this means that only the packets that have a small delay under GPS are delayed more under PGPS.
Now let F̂_p be the time at which packet p departs under PGPS. We show that

Theorem 1: For all packets p,

    F̂_p − F_p ≤ L_max / r,

where L_max is the maximum packet length and r is the rate of the server.

Proof: Since both GPS and PGPS are work-conserving disciplines, their busy periods coincide, i.e., the GPS server is in a busy period iff the PGPS server is in a busy period. Hence, it suffices to prove the result for each busy period. Consider any busy period and let the time that it begins be time zero. Let p_k be the kth packet in the busy period to depart under PGPS, and let its length be L_k. Also, let t_k be the time that p_k departs under PGPS and u_k be the time that p_k departs under GPS. Finally, let a_k be the time that p_k arrives. We now show that

    t_k ≤ u_k + L_max / r    for k = 1, 2, ....

Let m be the largest integer that satisfies both 0 < m ≤ k − 1 and u_m > u_k. Thus,

    u_m > u_k ≥ u_i    for m < i < k.    (3)

Then, packet p_m is transmitted before packets p_{m+1}, ..., p_k under PGPS but after all of these packets under GPS. If no such integer m exists, then set m = 0. Now, for the case m > 0, packet p_m begins transmission at t_m − (L_m / r); so, from Lemma 1,

    min{a_{m+1}, ..., a_k} > t_m − (L_m / r).    (4)

Since p_{m+1}, ..., p_{k−1} arrive after t_m − (L_m / r) and depart before p_k does under GPS,

    u_k ≥ (1/r)(L_k + L_{k−1} + ... + L_{m+1}) + t_m − (L_m / r).

Since t_k = t_m + (1/r)(L_{m+1} + ... + L_k), this gives u_k ≥ t_k − (L_m / r) ≥ t_k − (L_max / r). If m = 0, then p_{k−1}, ..., p_1 all leave the GPS server before p_k does, and so u_k ≥ t_k. ∎
`
Note that if N maximum-size packets leave simultaneously in the reference system, they can be served in arbitrary order in the packet-based system. Thus, F_p − F̂_p can be as large as (N − 1) L_max / r, even if the reference system is tracked perfectly.
Let S_i(τ, t) and Ŝ_i(τ, t) be the amount of session i traffic (in bits, not packets) served under GPS and PGPS, respectively, in the interval [τ, t].

Theorem 2: For all times τ and sessions i,

    S_i(0, τ) − Ŝ_i(0, τ) ≤ L_max.
`
Proof: The slope of Ŝ_i alternates between r, when a session i packet is being transmitted, and 0, when session i is not being served. Since the slope of S_i also obeys these limits, the difference S_i(0, t) − Ŝ_i(0, t) reaches its maximal value when session i packets begin transmission under PGPS. Let t be some such time, and let L be the length of the packet going into service. Then, the packet completes transmission at time t + (L / r). Let τ be the time at which the given packet completes transmission under GPS. Then, since session i packets are served in the same order under both schemes,

    S_i(0, τ) = Ŝ_i(0, t + L/r).    (5)

From Theorem 1,

    τ ≥ (t + L/r) − L_max/r,    (6)

so

    S_i(0, t + (L − L_max)/r) ≤ S_i(0, τ) = Ŝ_i(0, t + L/r) = Ŝ_i(0, t) + L.    (7)

Since the slope of S_i is at most r, S_i(0, t) ≤ S_i(0, t + (L − L_max)/r) + (L_max − L), and the theorem follows.
Let Q̂_i(τ) and Q_i(τ) be the session i backlog (in units of traffic) at time τ under PGPS and GPS, respectively. Then, it immediately follows from Theorem 2 that

Corollary 1: For all times τ and sessions i,

    Q̂_i(τ) − Q_i(τ) ≤ L_max.

- Theorem 1 generalizes a result shown for the uniform processor sharing case by Greenberg and Madras [12]. Notice that Theorem 1 and Corollary 1 can be used to translate bounds on GPS worst-case packet delay and backlog into corresponding bounds on PGPS.
- Variable packet lengths are easily handled by PGPS. This is not true of weighted round robin.
- The results derived so far can be applied to provide an alternative solution to a problem studied in [4], [19], [2], [8], [3]: There are N input links to a multiplexer; the peak rate of the ith link is C_i, and the rate of the multiplexer is C ≥ Σ_{i=1}^N C_i. Since up to L_max bits from a packet may be queued from any link before the packet has "arrived," at least L_max bits of buffer must be allocated to each link. In fact, in [3] it is shown that at least 2L_max bits are required, and that a class of buffer policies called Least Time to Reach Bound (LTRB) meets this bound. It is easy to design a PGPS policy that meets this bound as well: Setting φ_i = C_i, it is clear that the resulting GPS server ensures
that no more than L_max bits are ever queued at any link.
The bound of Corollary 1 guarantees that no more than 2L_max bits need to be allocated per link under PGPS. In fact, if L_i is the maximum allowable packet size for link i, then the bound on the link i buffer requirement is L_i + L_max. Further, various generalizations of the problem can be solved: For example, suppose the link speeds are arbitrary, but no more than f_i(t) + r_i t bits can arrive on link i in any interval of length t (for each i). Then, if Σ_i r_i < C, setting φ_i = r_i for each i yields a PGPS service discipline for which the buffer requirement is L_max + max_{t≥0}(f_i(t) − r_i t) bits for each link i.
- There is no constant c > 0 such that

    Ŝ_i(0, t) − S_i(0, t) ≤ c L_max    (8)

holds for all sessions i over all patterns of arrivals. To see this, let K = ⌊c + 2⌋, φ_1 = K, φ_2 = ... = φ_N = 1, and fix all packet sizes at L_max. At time zero, K − 1 session 1 packets arrive and one packet arrives from each of the other sessions. No more packets arrive after time zero. Denote the (K − 1)st session 1 packet to depart GPS (and PGPS) as packet p. Then F_p = (K − 1)(N + K − 1) L_max / (K r), and S_i(0, F_p) = ((K − 1)/K) L_max for i = 2, ..., N. Thus, the first K − 1 packets to depart the GPS system are the session 1 packets, and packet p leaves PGPS at time (K − 1) L_max / r. Consequently,

    Ŝ_1(0, (K − 1) L_max / r) = (K − 1) L_max

and

    S_1(0, (K − 1) L_max / r) = K (K − 1) L_max / (N + K − 1).

This yields

    Ŝ_1(0, (K − 1) L_max / r) − S_1(0, (K − 1) L_max / r) = (K − 1) L_max (1 − K/(N + K − 1)).    (9)

For any given K, the RHS of (9) can be made to approach (K − 1) L_max arbitrarily closely by increasing N.
`
A. Virtual Time Implementation of PGPS

In this section, we will use the concept of virtual time to track the progress of GPS; this will lead to a practical implementation of PGPS. Our interpretation of virtual time generalizes the innovative one considered in [7] for uniform processor sharing. In the following, we assume that the server works at rate 1.

Denote as an event each arrival and departure from the GPS server, and let t_j be the time at which the jth event occurs (simultaneous events are ordered arbitrarily). Let the time of the first arrival of a busy period be denoted as t_1 = 0. Now observe that, for each j = 2, 3, ..., the set of sessions that are busy in the interval (t_{j−1}, t_j) is fixed, and we may denote this set as B_j. Virtual time V(t) is defined to be zero for all times when the server is idle. Consider any busy period, and let the time that it begins be time zero. Then, V(t) evolves as follows:

    V(0) = 0,
    V(t_{j−1} + τ) = V(t_{j−1}) + τ / (Σ_{i∈B_j} φ_i),    τ ≤ t_j − t_{j−1},  j = 2, 3, ....    (10)
`
The rate of change of V, namely ∂V(t_{j−1} + τ)/∂τ, is 1 / (Σ_{i∈B_j} φ_i), and each backlogged session i receives service at rate φ_i ∂V(t_{j−1} + τ)/∂τ. Thus, V can be interpreted as increasing at the marginal rate at which backlogged sessions receive service.

Now suppose that the kth session i packet arrives at time a_i^k and has length L_i^k. Denote the virtual times at which this packet begins and completes service by S_i^k and F_i^k, respectively. Defining F_i^0 = 0 for all i, we have

    S_i^k = max{F_i^{k−1}, V(a_i^k)},
    F_i^k = S_i^k + L_i^k / φ_i.    (11)
`
There are three attractive properties of the virtual time interpretation from the standpoint of implementation. First, the virtual time finishing times can be determined at the packet arrival time. Second, the packets are served in order of virtual time finishing time. Finally, we need only update virtual time when there are events in the GPS system. However, the price to be paid for these advantages is some overhead in keeping track of the sets B_j, which is essential in the updating of virtual time.
Define Next(t) to be the real time at which the next packet will depart the GPS system after time t if there are no more arrivals after time t. Thus, the next virtual time update after t will be performed at Next(t) if there are no arrivals in the interval [t, Next(t)]. Now, suppose a packet arrives at some time t (let it be the jth event) and that the time of the event just prior to t is τ (if there is no prior event, i.e., if the packet is the first arrival in a busy period, then set τ = 0). Then, since the set of busy sessions is fixed between events, V(t) may be computed from (10) and the packet stamped with its virtual time finishing time. Next(t) is the real time corresponding to the smallest virtual time packet finishing time at time t. This real time may be computed from (10), since the set of busy sessions, B_j, remains fixed over the interval [t, Next(t)]: Let F_min be the smallest virtual time finishing time of a packet in the system at time t. Then, from (10),

    F_min = V(t) + (Next(t) − t) / (Σ_{i∈B_j} φ_i)
    ⇒ Next(t) = t + (F_min − V(t)) Σ_{i∈B_j} φ_i.
Given this mechanism for updating virtual time, PGPS is defined as follows: When a packet arrives, virtual time is updated and the packet is stamped with its virtual time finishing time. The server is work conserving and serves packets in increasing order of timestamp.
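The bookkeeping in (10) and (11) can be written down directly. The Python sketch below is our own illustrative rendering, not taken from the paper: it advances V between events at rate 1/Σ_{i∈B_j} φ_i, removes sessions from the busy set as their last stamped packet finishes in virtual time, resets V when the system empties, and stamps each arriving packet with the virtual finishing time that the PGPS server then uses to order transmissions. The class and attribute names are ours.

# Virtual-time bookkeeping for PGPS, following (10) and (11).  Server rate is 1.
# phi maps each session to its weight.  arrive() returns the packet's virtual
# finishing time; the PGPS server transmits queued packets in increasing stamp order.

class PGPSVirtualTime:
    def __init__(self, phi):
        self.phi = dict(phi)
        self.V = 0.0            # current virtual time
        self.t_last = 0.0       # real time of the last virtual-time update
        self.busy = {}          # session -> virtual time at which it empties under GPS
        self.F_last = {}        # session -> virtual finish time of its latest packet

    def _advance(self, t):
        """Bring V forward from t_last to real time t, per (10)."""
        while self.busy and self.t_last < t:
            denom = sum(self.phi[i] for i in self.busy)
            v_exit = min(self.busy.values())          # next GPS departure, in virtual time
            t_exit = self.t_last + (v_exit - self.V) * denom
            if t_exit > t:
                self.V += (t - self.t_last) / denom
                self.t_last = t
                return
            self.V, self.t_last = v_exit, t_exit
            for i in [s for s, f in self.busy.items() if f <= self.V + 1e-12]:
                del self.busy[i]
        if not self.busy:                             # system idle: a new busy period begins
            self.V = 0.0
            self.F_last.clear()
            self.t_last = t

    def arrive(self, t, session, length):
        """Stamp a packet of the given length arriving on `session` at real time t."""
        self._advance(t)
        start = max(self.F_last.get(session, 0.0), self.V)      # (11)
        finish = start + length / self.phi[session]
        self.F_last[session] = finish
        self.busy[session] = finish
        return finish

As noted in the text, the only overhead beyond computing the per-packet stamp is maintaining the busy set B_j.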
IV. COMPARING PGPS TO OTHER SCHEMES
`
Under weighted round robin, every session i has an integer weight w_i associated with it. The server polls the sessions according to a precomputed sequence in an attempt to serve session i at a rate of w_i / (Σ_j w_j). If an empty buffer is encountered, the server moves to the next session in the order instantaneously. When an arriving session i packet just misses its slot in a frame, it cannot be transmitted before the next session i slot. If the system is heavily loaded in the sense that almost every slot is utilized, the packet may have to wait almost N slot times to be served, where N is the number of sessions sharing the server. Since PGPS approximates GPS to within one packet transmission time regardless of the arrival patterns, it is immune to such effects. PGPS also handles variable-length packets in a much more systematic fashion than does weighted round robin. However, if N or the packet sizes are small, then it is possible to approximate GPS well by weighted round robin. Hahne [13] has analyzed round robin in the context of providing fair rates to users of networks that utilize hop-by-hop window flow control.
Zhang proposes an interesting scheme called virtual clock multiplexing [21]. Virtual clock multiplexing allows a guaranteed rate and (average) delay for each session, independent of the behavior of the other sessions. However, if a session produces a large burst of data, even while the system is lightly loaded, that session can be "punished" much later when the other sessions become active. Under PGPS, the delay of a session i packet can be bounded in terms of the session i queue size seen by that packet upon arrival, even in the absence of any rate control. This enables sessions to take advantage of lightly loaded network conditions. We illustrate this difference with a numerical example.

Suppose there are two sessions that submit fixed-size packets of one unit each. The rate of the server is one, and the packet arrival rate is 1/2 for each session. Starting at time zero, 1000 session 1 packets begin to arrive at a rate of 1 packet/second. No session 2 packets arrive in the interval [0, 900) but, at time 900, 450 session 2 packets begin to arrive at a rate of one packet/second. Now, if the sessions are to be treated equally, the virtual clock for each session will tick at a rate of 1/2, and the PGPS weight assignment will be φ_1 = φ_2. Since both disciplines are work conserving, they will serve session 1 continuously in the interval [0, 900).

At time 900−, there are no packets in queue from either session; the session 1 virtual clock will read 1800 and the session 2 virtual clock will read 900. The 450 session 2 packets that begin arriving at this time will be stamped 900, 902, 904, ..., 1798, while the 100 session 1 packets that arrive after time 900 will be stamped 1800, 1802, ..., 1998. Thus, all of the session 2 packets will be served under Virtual Clock before any of the session 1 packets are served. The session 1 packets are being punished because the session used the server exclusively in the interval [0, 900). Note, however, that this exclusive use of the server was not at the expense of any session 2 packets. Under PGPS, the sessions are served in round-robin fashion from time 900 on, which results in much less delay to the session 1 packets.
`
The lack of a punishment aspect is an attractive feature of PGPS since, in our scheme, the admission of packets is regulated at the network periphery through leaky buckets, and it does not seem necessary to punish users at the internal nodes as well. Note, however, that in this example PGPS guarantees a throughput of 1/2 to each session even in the absence of access control.
Stop-and-Go Queueing is proposed in [9]-[11] and is based on a network-wide time slot structure. It has two advantages over our approach: it provides better jitter control and is probably easier to implement. A finite number of connection types are defined, where a type g connection is characterized by a fixed frame size of T_g. Since each connection must conform to a predetermined connection type, the scheme is somewhat less flexible than PGPS. The admission policy under which delay and buffer size guarantees can be made is that no more than r_i T_g bits may be submitted during any type g frame. If sessions 1, 2, ..., N are served by a server of capacity 1, it is stipulated that Σ_{i=1}^N r_i ≤ 1, where the sum is taken only over the real-time sessions. The delay guarantees grow linearly with T_g, so in order to provide low delay one has to use a small slot size. The service discipline is not work conserving and is such that each packet may be delayed up to 2T_g time units, even when there is only one active session at the server. Observe that for a single-session PGPS system in which the peak rate does not exceed the rate of the server, each arriving packet is served immediately upon arrival. Also, since it is work conserving, PGPS will provide better average delay than stop-and-go for a given access control scheme.
It is clear that r_i is the average rate at which source i can send data over a single slot. The relationship between delay and slot size may force Stop-and-Go to allocate bandwidth by peak rate in order to satisfy delay-sensitive sessions. This may also happen under PGPS, but not to the same degree. To see this, consider an on/off periodic source that fluctuates between the values C − ε and 0 (as usual, ε is small). The on period is equal to the off period; say they are each B seconds in duration. We assume that B is large. Clearly, the average rate of this session is 0.5(C − ε). We are interested in providing this session low delay under Stop-and-Go and under PGPS. To do this under Stop-and-Go, one has to pick a slot size smaller than B, which forces r = C − ε; the remaining capacity of the server that can be allocated is ε. Under PGPS, we allocate a large value of φ to the session to bring its delay down to the desired level; however, now the remaining capacity that can be allocated is 0.5(C + ε). Now observe that if there is a second on/off session with identical on and off periods as the first session, but which is relatively less delay sensitive, then PGPS can carry both sessions (since the combined sustainable rate is less than C) whereas Stop-and-Go cannot.
`
V. LEAKY BUCKET

Fig. 3 depicts the Leaky Bucket scheme [20] that we will use to describe the traffic that enters the network. Tokens or permits are generated at a fixed rate, ρ, and packets can be released into the network only after removing the required number of tokens from the token bucket.
Fig. 3. A Leaky Bucket. (Tokens enter at rate ρ_i; incoming bursty traffic is buffered and released into the network at a rate of at most C_i.)
`
Fig. 4. A_i(t) and l_i(t).
`
There is no bound on the number of packets that can be buffered, but the token bucket contains at most σ bits worth of tokens. In addition to securing the required number of tokens, the traffic is further constrained to leave the bucket at a maximum rate of C ≥ ρ. The constraint imposed by the leaky bucket is as follows: If A_i(τ, t) is the amount of session i flow that leaves the leaky bucket and enters the network in the time interval (τ, t], then

    A_i(τ, t) ≤ min{(t − τ) C_i, σ_i + ρ_i (t − τ)}    for all t ≥ τ ≥ 0.    (12)
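As an illustration of this constraint, a leaky bucket with parameters (σ_i, ρ_i, C_i) can be simulated directly: tokens accrue at rate ρ_i up to a depth of σ_i, and in any short interval the bucket releases no more traffic than the available tokens allow and no faster than the peak rate C_i. The Python sketch below is our own simplified rendering; it works with fluid amounts and a small time step rather than individual packets, and the arrival function in the usage comment is a made-up example.

# Discrete-time fluid approximation of a leaky bucket with parameters (sigma, rho, C):
# tokens of depth at most sigma accrue at rate rho, and the shaped output in each step
# is limited by the buffered traffic, the available tokens, and the peak rate C.

def shape(arrival_rate, sigma, rho, C, dt=0.01, horizon=10.0):
    """arrival_rate: function t -> instantaneous rate of the incoming (bursty) traffic.
    Returns a list of (t, cumulative shaped output A(0, t))."""
    tokens = sigma            # bucket starts full
    backlog = 0.0             # traffic waiting in front of the bucket
    total_out = 0.0
    trace = []
    t = 0.0
    while t < horizon:
        backlog += arrival_rate(t) * dt
        tokens = min(sigma, tokens + rho * dt)
        released = min(backlog, tokens, C * dt)     # enforce tokens and peak rate
        backlog -= released
        tokens -= released
        total_out += released
        t += dt
        trace.append((t, total_out))
    return trace

# Example (made-up source): a burst of rate 5 for one second, then silence.
# shape(lambda t: 5.0 if t < 1.0 else 0.0, sigma=2.0, rho=1.0, C=4.0)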
`
Fig. 5. A_i(0, t), S_i(0, t), Q_i(t), and D_i(t).
`
We may now express l_i(t) as

    l_i(t) = σ_i + K_i(t) − A_i(0, t).    (15)

From (15) and (14), we obtain the useful inequality

    A_i(τ, t) ≤ l_i(τ) + ρ_i (t − τ)    for all t ≥ τ ≥ 0.
VI. ANALYSIS

In this section, we analyze the worst-case performance of single-node GPS systems for sessions that operate under Leaky Bucket constraints, i.e., with the session traffic constrained as in (12). There are N sessions, and the only assumptions we make about the incoming traffic are that A_i ~ (σ_i, ρ_i, C_i) for i = 1, ..., N.
