Computer Communication Review

ACM SIGCOMM
Special Interest Group on Data Communication

The SIGCOMM Quarterly Publication

Volume 23, Number 4
October 1993

SIGCOMM '93 CONFERENCE PROCEEDINGS

Communications Architectures, Protocols and Applications

September 13-17, 1993
San Francisco, California, USA

The Association for Computing Machinery
1515 Broadway
New York, N.Y. 10036

Copyright ©1993 by the Association for Computing Machinery, Inc. Copying without fee is permitted provided that the copies are not made or distributed for direct commercial advantage, and credit to the source is given. Abstracting with credit is permitted. For other copying of articles that carry a code at the bottom of the first page, copying is permitted provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 27 Congress Street, Salem, MA 01970. For permission to republish, write to: Director of Publications, Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

ISBN: 0-89791-619-0

Additional copies may be ordered prepaid from:

ACM Order Department
P.O. Box 12114
Church Street Station
New York, N.Y. 10257

Phone: 1-800-342-6626 (U.S.A. and Canada)
       1-212-626-0500 (all other countries)
Fax: 1-212-944-1318
E-mail: acmpubs@acm.org

ACM Order Number: 533930

Printed in the U.S.A.

simple if certain independence assumptions hold. In this case, the analysis gives rise to so-called product-form solutions, i.e. the queue size distribution for an entire network of queues is equal to the product of the queue size distributions of the individual queues [14]. Using this result, a number of parameters including the queue size and delay distributions can be easily calculated. However, product-form networks cannot incorporate features of real-life networks such as the correlations introduced when traffic streams merge and split, the regulation of traffic by the routing and flow control mechanisms, or the packet losses due to buffer overflow. Although progress in these areas has been recently reported (e.g. [5]), it should be pointed out that, due to the lack of analytic solutions, many studies of packet delay and loss behavior have been conducted with simulation and experimental approaches.

Regarding simulation approaches, recent work has examined the impact of routing and flow control mechanisms on end-to-end delay. For example, reference [25] concludes that both link state and distance vector routing yield similar average packet delay statistics in a NSFNET-like network. References [28, 29] investigate the dynamic behavior of TCP connections. In realistic situations (i.e. for connections with so-called two-way traffic), it is found that the interactions between data and acknowledgement packets generate a clustering of the acknowledgement packets, which in turn gives rise to rapid fluctuations in queue lengths. These results emphasize the importance of studying the dynamics, i.e. the time-dependent behavior, of computer networks.

Regarding experimental approaches, systematic measurements of packet delay and loss were carried out on the ARPANET as early as 1971 [14, ch. 6]. They examined the variations of packet delay for different paths, different times of day and days of the week, etc. Other measurements were taken to determine how delays across the ARPANET were influenced by packet length. The results were used to assess whether TCP performance could be improved by including a dependence on packet length in the retransmission timeout algorithm [15]. Several other studies have addressed timeout adjustment in TCP, and they have proposed improvements to take into account packet losses, packet retransmissions, and the variance of packet round trip delays [12, 13].

The NSFNET replaced the ARPANET in 1990. Recent studies have measured the delay and loss behavior in the NSFNET, and more generally in the Internet. They have examined this behavior over different time scales.

Merit Network Inc. publishes monthly statistics of packet delay between the nodes of the NSFNET. These statistics are obtained from measurements performed at 15 minute intervals. They are used in [6] to examine the distribution of median delay between nodes of the NSFNET. Unfortunately, the Merit statistics are based on measurements performed between the exterior interfaces of the backbone nodes. Thus, they might not accurately characterize end-to-end delay over paths which span a combination of backbone and regional or international networks.

The behavior of end-to-end round trip delays over somewhat shorter time scales is examined in [19]. There, groups of 10 ICMP echo packets [26] are sent periodically from a source node to a destination node and echoed back to the source node, with a 1 minute interval between successive groups. Packets within a group are sent at regular 1 second intervals. Round trip delays are measured for each packet, and then averaged over a group. Various paths, i.e. source-destination pairs, are considered. The results indicate that the delay distribution for all paths is best modeled by a constant plus a gamma distribution, where the parameters of the gamma distribution depend on the path (e.g. a path over a regional network vs. a path over the NSFNET backbone) and the time of day. A spectral analysis of the average delays shows a clear diurnal cycle, suggesting the presence of a base congestion level which changes slowly with time. Furthermore, packet losses and reorderings are positively correlated with various statistics of delay.

The behavior of end-to-end round trip delays over even shorter time scales is examined in [21, 22]. There, small UDP packets are sent every 39.06 ms from a source node to a destination node, and echoed back to the source node. The authors show how their measurements can be used to detect problems in the Internet. For example, they observed in May 1992 that round trip delays would increase dramatically every 90 seconds. They identified the problem as being caused by a 'debug' option in some gateway software. They identified other problems caused by synchronized routing updates, by faulty Ethernet interfaces, etc. [22]. Their measurements were also used to observe the dynamics of the Internet, e.g. the changes in round trip delays caused by route changes [21].

Despite all the efforts and results described above, the end-to-end performance of the Internet remains an area which deserves more research attention. For example, there is no clear consensus yet on how "well" the Internet performs, or on how to characterize its performance.

In this paper, we use measurements of end-to-end delay and loss to characterize the behavior of the Internet. We obtain these measurements with the UDP echo tool used in [21, 22], which provides the round trip delays of UDP packets at regular time intervals. By varying the interval between successive packets, we can examine the delay and loss behavior of the Internet over different time scales.

Our observations agree with results obtained by others using simulation and experimental approaches. For example, our estimates of Internet traffic are compatible with the hypothesis of a mix of bulk traffic using large packet size, and interactive traffic characterized by smaller packet size. We also observe compression (or clustering) of the probe packets and rapid fluctuations of queueing delays over small intervals. Our results also show interesting and less expected behavior. For example, we find that the losses of probe packets are essentially random unless the probe traffic uses a large fraction of the available bandwidth.

The rest of the paper is organized as follows. In Section 2, we describe the data collection process, i.e. how the measurements of packet delay and loss are obtained. In Section 3, we outline our strategy for analyzing the measurements. In Section 4, we analyze the characteristics of the measured packet delays. In Section 5, we analyze the characteristics of the measured packet losses. Section 6 concludes the paper.

2 Data collection

Recent measurements indicate that the number of hosts in the Internet is fast approaching the 1 million mark. Clearly, it is impossible to study the delay and loss characteristics for all possible connections, i.e. for all source-destination pairs. In this paper, we examine one specific connection in detail. This connection links INRIA in France to the University of Maryland (UMd) in the United States. The routes taken by the packets sent over the connection can be obtained either with the route record option of ping, or with traceroute [26]. Table 1 shows the route between INRIA and UMd as obtained with traceroute in July 1992. Nodes 5 and 6 are distinct nodes in the Ithaca Nodal Switching System. Nodes 4 and 5 are the endpoints of the transatlantic link between France and the United States. At the time the experiments were carried out (July 1992), the transatlantic link was the bottleneck link, with a bandwidth equal to 128 kb/s.

 1  tom.inria.fr
 2  t8-gw.inria.fr
 3  sophia-gw.atlantic.fr
 4  icm-sophia.icp.net
 5  Ithaca.NY.NSS.NSF.NET
 6  Ithaca1.NY.NSS.NSF.NET
 7  nss-SURA-eth.sura.net
 8  sura8-umd-c1.sura.net
 9  csc2hub-gw.umd.edu
10  avwhub-gw.umd.edu

Table 1: Route between INRIA and the University of Maryland in July 1992

Packet delays and losses on the INRIA-UMd connection are obtained using NetDyn, a measurement tool developed by Dheeraj Sanghi [22]. This tool sends UDP packets at regular intervals from a source host to a destination host via an intermediate host. Throughout the rest of the paper, we refer to these packets as probe packets, or simply probes. Upon receipt of a probe packet from the source, the intermediate host immediately echoes the packet to the destination host. The user can specify the number of probe packets to be sent, the size of the packets, and the interval between successive packets sent by the source. In our experiments, we send probe packets of 32 bytes each. The interval between successive packets ranges over the following values: 8, 20, 50, 100, 200, and 500 ms. Each experiment lasts 10 minutes.

A packet includes three 6-byte timestamp fields. The source timestamp is written when the packet is sent by the source host. The echo timestamp is written when the packet is received by the intermediate host. The destination timestamp is written when the packet is received by the destination host. Furthermore, each packet has a unique packet number in order to detect packet losses.

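To make the packet format concrete, here is a minimal sketch of a NetDyn-style probe sender. It is an illustration only, not the actual NetDyn code: the field layout, the echo host name, and the port number are assumptions of the example (we pack a 4-byte sequence number and the 6-byte source timestamp, then pad the packet to 32 bytes; the echo and destination timestamps would be filled in by the other hosts).

    # Hypothetical sketch of a NetDyn-style probe sender, not the actual tool.
    # Assumed layout: 4-byte sequence number + 6-byte source timestamp,
    # padded to the 32-byte probe size used in the experiments.
    import socket
    import struct
    import time

    ECHO_HOST = ("echo.example.edu", 4000)  # assumed intermediate (echo) host
    DELTA = 0.050                           # interval between probes: 50 ms
    NUM_PROBES = 12000                      # 10 minutes at 50 ms per probe

    def pack_probe(seq, send_time_ms):
        # A 6-byte (48-bit) timestamp has no direct struct code, so split it
        # into a 16-bit high part and a 32-bit low part.
        hi, lo = divmod(send_time_ms, 2**32)
        pkt = struct.pack("!IHI", seq, hi, lo)
        return pkt.ljust(32, b"\x00")       # room for echo/destination stamps

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(NUM_PROBES):
        now_ms = time.time_ns() // 1_000_000
        sock.sendto(pack_probe(seq, now_ms), ECHO_HOST)
        time.sleep(DELTA)
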
If the source, intermediate, and destination hosts are geographically distant, then their local clocks may not be synchronized, and hence the timestamps in the UDP probe packets would be difficult to interpret. To avoid this problem, we let the source host be the same as the destination host. Furthermore, we measure only the difference between the source timestamp and the destination timestamp, i.e. we measure only round trip delays. In our experiments, we use a DECstation 5000 as a source host. Its clock resolution is 3.906 ms.

We have taken measurements of end-to-end packet delay and loss on connections other than the INRIA-UMd connection, e.g. connections between UMd and MIT, between UMd and the University of Pittsburgh, between INRIA and universities in Europe, etc. Even though the physical characteristics of these connections are very different, we have found that the observations made on the basis of the measurements taken on the INRIA-UMd connection essentially hold for the other connections.

3 Data analysis strategy

In this section, we present our approach to analyzing the measurements obtained with the tool described in Section 2. Recall that in each experiment, the source sends probe packets at regular intervals. We denote by s_n the time at which packet n is sent by the source, by r_n the time at which it is received by the source from the echo host, and by rtt_n the packet round trip delay. If the packet is lost, then r_n and hence rtt_n are undefined. In this paper, we let rtt_n = 0 if packet n is lost. Otherwise, we have rtt_n = r_n - s_n. We denote by δ the interval between the send times of two successive packets, i.e. δ = s_{n+1} - s_n for all n.

Figure 1 shows the evolutions of rtt_n as a function of n in the range 0 ≤ n ≤ 800 when δ = 50 ms. Notice the large number of packet losses (the loss probability for this experiment turns out to be equal to 9%).

Figure 1: Evolutions of rtt_n vs. n when δ = 50 ms

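The bookkeeping just described is simple enough to state in a few lines of code. The sketch below is our own restatement, not part of the measurement tool; it assumes the receiver records r_n keyed by packet number, with lost probes simply absent.

    # Compute the series rtt_n from send times s_n and returned times r_n.
    # By the convention above, a lost probe is recorded as rtt_n = 0.
    def round_trip_delays(send_times, recv_times):
        # send_times: list of s_n; recv_times: dict mapping n to r_n
        rtt = []
        for n, s_n in enumerate(send_times):
            r_n = recv_times.get(n)
            rtt.append(r_n - s_n if r_n is not None else 0.0)
        return rtt
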
The evolutions of rtt_n as shown in Figure 1 are often referred to as time series plots, by which is meant a finite set of observations of a random process in which the parameter is time. A large body of principles and techniques referred to as time series analysis has been developed to analyze time series, the goal being to infer as much as possible about the whole process from the observations. Two important problems considered in time series analysis are the model fitting problem and the prediction problem. In the first problem, the goal is to obtain a model which fits the observations. In the second problem, the goal is to predict a future value of a process given a record of past observations.

One problem we examine in this paper is the model fitting problem. However, we do not use the standard procedures from time series analysis, for the following reason. It turns out that much of the work on this problem has been devoted to fitting observations to a variety of general types of models, the most well-known being the so-called AR (autoregressive), MA (moving average), and ARMA (autoregressive moving average) models. The standard model fitting procedures do not require any assumption about the underlying structure of the observed system, and hence they are suitable even for dealing with those systems where no background information is available. In our case, however, much is known about the system under study, namely the connection over which the probe packets are sent. Examples of available information include the number of hops, the mix of traffic expected at the intermediate nodes [4], etc. Therefore, in this paper, our approach is to use this information to interpret our observations and to suggest a specific model for the system behavior. A similar approach is advocated in the literature.

In parallel with this work, we are examining the model fitting problem and the prediction problem using standard procedures from time series analysis. Specifically, we examine whether ARMA models are adequate to model queueing delays in communication networks. This has consequences for the performance of predictive control mechanisms, since most such mechanisms described so far use these or related models (e.g. [16]).

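For concreteness, the sketch below shows the flavor of such a standard fitting procedure on the simplest member of this family, an AR(1) model rtt_{n+1} ≈ a·rtt_n + c estimated by least squares. It illustrates the generic approach we choose not to rely on here, and skipping pairs that involve a lost probe is our own assumption.

    # Least-squares fit of an AR(1) model rtt_{n+1} ~ a * rtt_n + c.
    # Pairs involving a lost probe (rtt = 0 by convention) are skipped.
    def fit_ar1(rtt):
        pairs = [(x, y) for x, y in zip(rtt, rtt[1:]) if x > 0 and y > 0]
        n = len(pairs)
        sx = sum(x for x, _ in pairs)
        sy = sum(y for _, y in pairs)
        sxx = sum(x * x for x, _ in pairs)
        sxy = sum(x * y for x, y in pairs)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
        c = (sy - a * sx) / n                          # intercept
        return a, c
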
4 Analysis of packet delay

Figure 1 above shows a time series plot of measured round trip delays, i.e. the evolutions of rtt_n as a function of n. In this section, we find it convenient to examine instead a so-called phase plot of the round trip delays. In a phase plot, a marker is printed at coordinates (x, y) if there exists a value n such that x = rtt_n and y = rtt_{n+1}. The (x, y) plane is referred to as the phase plane. Figure 2 shows a phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 50 ms. The structure of the phase plot is clearly very different from the structure of the time series plot. To understand this structure, it is convenient to introduce a simple model which captures the essential features of our experiments. Refer to Figure 3.

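Constructing the phase plot from a measured delay series is mechanical; a minimal sketch follows. Skipping pairs that involve a lost probe (rtt = 0) is our convention here, since such pairs carry no delay information.

    # One marker per n, at coordinates (rtt_n, rtt_{n+1}).
    def phase_points(rtt):
        return [(x, y) for x, y in zip(rtt, rtt[1:]) if x > 0 and y > 0]
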
Figure 2: Phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 50 ms

Figure 3: A model for our experiments

In our model, we use a constant delay D to model the fixed component of the round trip delay of the probe packets, and a single server queue with finite buffer and FIFO service discipline to model the variable component of these delays. We denote by µ the service rate, expressed in bit/s, of the server. We denote by K the buffer size. The arrival stream at the queue is the superposition of two streams. The probe stream is modeled as a periodic stream of fixed-size packets. We denote by P the length, expressed in bits, of the probe packets. The Internet stream is in turn the superposition of many streams which share common Internet resources with the probe connection. Throughout the rest of the paper, we refer to packets from the Internet stream as Internet packets.

We assume that the Internet stream contributes a quantity of b_n bits to the queue between the arrival times of the probe packets n and n+1. b_n is a random variable which characterizes the traffic pattern of the Internet stream. We denote by w_n the waiting time (not including the service time) of the probe packet n.

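A toy simulation of this model helps build intuition for the phase plots that follow. The sketch below anticipates the aggregate recurrence derived later in this section, w_{n+1} = (w_n + (P + b_n)/µ - δ)+; the traffic mix (occasional FTP-sized bursts, otherwise nothing) is an arbitrary assumption for illustration, and the finite buffer K is ignored.

    # Toy simulation of the Figure 3 model with the INRIA-UMd parameters.
    import random

    MU = 128_000.0   # service rate of the bottleneck, bit/s
    P = 32 * 8       # probe length, bits
    D = 0.140        # fixed round trip component, s
    DELTA = 0.050    # probe spacing, s

    def simulate_rtts(num_probes):
        rtts, w = [], 0.0
        for _ in range(num_probes):
            rtts.append(D + w + P / MU)
            # Internet workload in (n, n+1]: one 488-byte packet, sometimes.
            b = 488 * 8 if random.random() < 0.3 else 0.0
            w = max(w + (P + b) / MU - DELTA, 0.0)  # buffer never empties mid-slot
        return rtts
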
We now explain the structure of the phase plot in Figure 2. Consider the situation where the workload seen by consecutive probe packets is constant. This situation occurs if the Internet traffic is light, i.e. if the number of Internet packets in the buffer is small, and if these packets are small, e.g. Telnet packets. Then the queueing delays of consecutive probe packets are approximately constant and we can write

    w_{n+1} = w_n + ε_n

where ε_n is a random process with mean 0 and low variance. Since rtt_{n+1} = D + w_{n+1} + P/µ and rtt_n = D + w_n + P/µ, we obtain

    rtt_{n+1} - rtt_n = ε_n    (1)

Thus, in this case, the points on the phase plane are centered around the line rtt_{n+1} = rtt_n, and they stay close to the minimum delay point with coordinates (D, D). In Figure 2, we find D ≈ 140 ms.

Consider now the situation where δ is small and a large workload of Internet packets (e.g. one or more FTP packets) is received by the queue between the arrival instants of two consecutive probes. Let B denote the size of this Internet workload (expressed in bits) ahead of probe packet n+1. Then the queueing delay of probe packet n+1 is

    w_{n+1} = w_n + B/µ

and hence

    rtt_{n+1} - rtt_n = B/µ    (2)

If w_{n+1} > δ, then one or more probe packets will accumulate behind probe packet n+1, waiting for the Internet workload to be processed by the server. Let k denote the number of such packets. Assume that no Internet packet is received at the queue between the arrival times of probe packets n+1 and n+k. Then probe packets n+1 through n+k will leave the queue at regular intervals, specifically P/µ seconds apart. Therefore, we have

    rtt_{n+2} - rtt_{n+1} = (r_{n+2} - s_{n+2}) - (r_{n+1} - s_{n+1})
                          = (r_{n+2} - r_{n+1}) - (s_{n+2} - s_{n+1})
                          = P/µ - δ    (3)

Similarly,

    rtt_{n+3} - rtt_{n+2} = ... = rtt_{n+k} - rtt_{n+k-1} = P/µ - δ

and hence the points on the phase plane corresponding to the packets n to n+k will be located on the straight line rtt_{n+1} = rtt_n + P/µ - δ. This straight line is the thin dashed line in Figure 2. If δ ≤ P/µ, then the probe traffic alone completely saturates the queue. Therefore, it is reasonable to keep δ > P/µ in all experiments.

The existence of points on the line rtt_{n+1} = rtt_n + P/µ - δ in the phase plane indicates that probe packets accumulate behind large Internet packets. We refer to this phenomenon as probe compression because of its similarity with the phenomenon of ACK compression, which has been observed in simulations [29] and in measurements on the NSFNET [18]. We note that the line y = x + P/µ - δ intersects the x-axis at x = δ - P/µ. In Figure 2, we find this point to be about 48 ms. Since δ = 50 ms and P = 32 bytes, we obtain µ ≈ 130 kb/s. This confirms that the transatlantic link between France and the United States, with a bandwidth equal to 128 kb/s, is the bottleneck link on the path from INRIA to UMd.

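The arithmetic behind this estimate can be replayed directly: reading the x-axis crossing x0 off the phase plot gives µ = P/(δ - x0). With the values above:

    # Bottleneck estimate from the compression line y = x + P/mu - delta,
    # which crosses the x-axis at x0 = delta - P/mu.
    P = 32 * 8              # probe size, bits
    delta = 0.050           # probe spacing, s
    x0 = 0.048              # crossing read off Figure 2, s
    mu = P / (delta - x0)   # ~128000 bit/s; reading x0 slightly low off
    print(mu)               # the plot yields the quoted ~130 kb/s
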
We now examine the packet delay when δ is very large, i.e. when probe packets are sent infrequently. Figure 4 shows the phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 500 ms. In this case, we have δ - P/µ = 490 ms. We observe that only two points on the phase plot are located on the line rtt_{n+1} = rtt_n - 490, indicating that consecutive probes almost never accumulate behind one another. This is expected, since the maximum queueing delay measured in this experiment is 620 ms (corresponding to a round trip delay of 760 ms and a propagation delay of 140 ms), i.e. barely larger than the interarrival time between successive probe packets. Equation (1) indicates that for large values of δ, the points in the phase plane should be scattered around the diagonal. This is indeed what we observe in Figure 4.

Figure 4: Phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 500 ms

We have examined the round trip delays of probe packets over connections other than the INRIA-UMd connection. In all cases, we have found that the structure of the phase plots of rtt_n is similar to that described above. To illustrate this, we next present the phase plots of round trip delays measured in May 1993 between UMd and the University of Pittsburgh. Table 2 shows the path taken by the probe packets at the time of the experiments. It is not clear which link in the path is the bottleneck link. However, it is very likely that the bottleneck bandwidth is much higher than the bottleneck bandwidth between INRIA and UMd in July 1992, namely 128 kb/s.

 1  lena.cs.umd.edu
 2  avwhub-gw.umd.edu
 3  csc2hub-gw.umd.edu
 4  192.221.38.5
 5  en-0.enss136.t3.nsf.net
 6  t3-1.Washington-DC-cnss58.t3.ans.net
 7  t3-3.Washington-DC-cnss56.t3.ans.net
 8  t3-0.New-York-cnss32.t3.ans.net
 9  t3-1.Cleveland-cnss40.t3.ans.net
10  t3-0.Cleveland-cnss41.t3.ans.net
11  t3-0.enss132.t3.ans.net
12  externals.gw.pitt.edu
13  136.142.2.54
14  hub-eh.gw.pitt.edu

Table 2: Route between the University of Maryland and the University of Pittsburgh in May 1993

Figure 5 shows the phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 8 ms. The figure also shows the straight lines rtt_{n+1} = rtt_n and rtt_{n+1} = rtt_n - 8. Figure 6 shows the phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 50 ms. The somewhat regular spacing between the points in the phase plane is caused by the 3 ms clock resolution of the source host at UMd. When δ is small, we observe the same probe compression phenomenon described earlier. When δ is large, we observe that the points in the phase plane are scattered around the line rtt_{n+1} = rtt_n.

Figure 5: Phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 8 ms

Figure 6: Phase plot of rtt_n in the range 0 ≤ n ≤ 800 when δ = 50 ms

Throughout the rest of the paper, we analyze only those measurements taken on the INRIA-UMd connection. Next, we present an exact analysis of the queueing model shown in Figure 3. The analysis is based on two successive applications of Lindley's recurrence equation [14]. Lindley's recurrence equation expresses the relationship between the waiting times of two successive customers in a single channel queue. Specifically, let w_n denote the waiting time of packet n, y_n denote the service time of packet n, and x_n denote the interarrival time between packets n and n+1. Then

    w_{n+1} = w_n + y_n - x_n   if w_n + y_n - x_n > 0
    w_{n+1} = 0                 otherwise

A "graphical proof" of this equation is shown in Figure 7. Throughout the rest of the paper, we use x+ to denote max(x, 0). With this notation, the above equation can be rewritten as

    w_{n+1} = (w_n + y_n - x_n)+

Figure 7: A proof of Lindley's recurrence equation

We now go back to our queueing model shown in Figure 3. We assume that the first probe packet arrives at the queue at time δ, and hence that probe packet n arrives at time nδ. We also assume that the Internet stream contributes b_n bits between the arrival times of probe packets n and n+1. We further assume that all b_n bits arrive at the queue at the same time t_n (clearly nδ ≤ t_n ≤ (n+1)δ). Let wb_n denote the waiting time of the Internet packet. Applying Lindley's recurrence equation to wb_n and w_n, we obtain

    wb_n = (w_n + P/µ - t_n)+    (4)

Applying Lindley's recurrence equation to w_{n+1} and wb_n, we obtain

    w_{n+1} = (wb_n + b_n/µ - (δ - t_n))+    (5)

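Equations (4) and (5) transcribe directly into code. In the sketch below, t_n is measured from the arrival of probe n (so it lies between 0 and δ), which is how the operands of the two equations line up; this offset reading is our interpretation.

    # Two successive applications of Lindley's recurrence, equations (4)-(5).
    def next_probe_wait(w_n, b_n, t_n, P, mu, delta):
        wb_n = max(w_n + P / mu - t_n, 0.0)               # equation (4)
        return max(wb_n + b_n / mu - (delta - t_n), 0.0)  # equation (5)
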
Substituting equation (4) into equation (5), we obtain

    w_{n+1} = ((w_n + P/µ - t_n)+ + b_n/µ - (δ - t_n))+

The term w_n + P/µ - t_n is positive if the buffer does not empty during the interval [nδ, nδ + t_n]. Then the above equation becomes

    w_{n+1} = (w_n + P/µ - t_n + b_n/µ - (δ - t_n))+
            = (w_n + (P + b_n)/µ - δ)+

The term w_n + (P + b_n)/µ - δ is positive if the buffer does not empty during the interval [nδ + t_n, (n+1)δ]. Then the above equation becomes

    w_{n+1} = w_n + (P + b_n)/µ - δ

and hence

    b_n = µ(w_{n+1} - w_n + δ) - P    (6)

Thus the probability distribution of b_n, i.e. the probability distribution of the Internet workload over an interval of length δ, can be estimated from the distribution of w_{n+1} - w_n. Recall, however, that the above equality holds only if the buffer does not empty during the interval [nδ, (n+1)δ]. If the buffer does empty during this interval, then equation (6) might not hold. However, it is clear that for a given Internet traffic (i.e. for a given sequence of b_n), the probability that the buffer does empty in the interval [nδ, (n+1)δ] increases as δ increases. Therefore, it is reasonable to estimate b_n using equation (6) if δ is sufficiently small, typically if the product δµ is smaller than some average value of b_n.

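Under that proviso, equation (6) gives a one-line estimator for the Internet workload seen between consecutive probes. The sketch below applies it to a series of measured waiting times; negative estimates, which signal intervals where the buffer emptied, are left in so the caller can discard them.

    # Estimate b_n from waiting times via equation (6):
    #   b_n = mu * (w_{n+1} - w_n + delta) - P
    # Valid when the buffer does not empty in [n*delta, (n+1)*delta];
    # a negative result flags an interval where that assumption failed.
    def internet_workloads(w, mu, P, delta):
        return [mu * (w2 - w1 + delta) - P for w1, w2 in zip(w, w[1:])]
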
Figure 8 shows the distribution of w_{n+1} - w_n + δ for δ = 20 ms. From equation (6), we see that

    w_{n+1} - w_n + δ = (b_n + P)/µ

i.e. w_{n+1} - w_n + δ is the Internet workload, expressed in ms, received by the server in the interval [nδ, (n+1)δ]. We note that w_{n+1} - w_n + δ is also the interarrival time between the probe packets n and n+1 when they return to the source after their journey in the Internet.

Figure 8: Distribution of w_{n+1} - w_n + δ for δ = 20 ms

The same arguments used earlier to explain the structure of the phase plot in Figure 2 can be used to explain the structure of the distribution in Figure 8. Consider for example the leftmost peak in the distribution. This peak is centered around the value P/µ, i.e. it corresponds to those packets for which w_{n+1} - w_n = P/µ - δ. These are the packets that accumulated in the buffer behind a large Internet packet (cf. Equation 3). The second leftmost peak is centered around the value δ. Thus, it corresponds to those packets for which w_{n+1} = w_n, i.e. to those packets which experience very small queueing delays in the queue (cf. Equation 1). The third leftmost peak corresponds to those packets which were the first packets in a series of probe packets to accumulate behind a large Internet packet. Using equation (6), we can find the size b_n of this packet. We obtain

    b_n = µ(w_{n+1} - w_n + δ) - P = 128 * 32.5 - 32 * 8 = 3904 bits ≈ 488 bytes

i.e. approximately the size of one FTP packet. Similarly, we find that the fourth leftmost peak corresponds to those packets which accumulated behind 2 FTP packets, etc.

Figure 9 shows the distribution of w_{n+1} - w_n + δ for δ = 100 ms. The structure of the distribution is similar to that found when δ = 20 ms. However, we note that the height of the leftmost peak relative to that of the other peaks is much smaller than in Figure 8. This is expected, since the number of consecutive probe packets that accumulate behind one or more FTP packets decreases as δ increases (i.e. probe compression becomes less frequent as δ increases).

Figure 9: Distribution of w_{n+1} - w_n + δ for δ = 100 ms

5 Analysis of packet loss

The structure of the loss process in a network is typically characterized by the packet loss probability. We let ulp denote the (unconditional) loss probability for the probe packets. Thus, ulp is defined by

    ulp = P(rtt_n = 0)

Table 3 presents the measured values of ulp for different values of δ (ignore clp and plg for the moment).

    δ (ms)    ulp      clp      plg
    8         0.23     0.60     2.5
    20        0.16     0.42     1.7
    50        0.12     0.27     1.4
    100       0.10     0.18     1.2
    200       0.11     0.18     1.2
    500       0.097    0.09     1.1

Table 3: Variations of ulp, clp, and plg for different values of δ

The loss probability increases when δ becomes very small because the contribution of the probe packets to the buffer queue length becomes non-negligible. Furthermore, if a probe packet is lost at time t because of buffer overflow, then the next probe packet, which arrives at time t + δ, will also be lost if δ is less than the service time of the packet in service. Clearly, the probability of this occurrence increases as δ decreases.

This indicates that probe losses might be correlated and occur in bursts. It is well-known that the burstiness of the packet loss process has a significant impact on the performance of various network protocols. For example, correlated losses decrease the effectiveness of open-loop error control schemes such as forward error correction (FEC) schemes, but they increase the effectiveness of closed-loop control schemes such as automatic repeat request (ARQ) schemes. In this paper, we characterize the burstiness of the probe packet loss by the conditional probability that a probe packet is lost given that the previous probe packet was lost. A related performance measure is the number of consecutively lost probe packets, which we refer to as the packet loss gap. We denote by clp the conditional probe loss probability, i.e.

    clp = P(rtt_{n+1} = 0 | rtt_n = 0)

We denote by plg the packet loss gap. Assuming that the sequence of rtt_n is stationary and ergodic, plg can be expressed in terms of clp by²

    plg = 1/(1 - clp)

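The three loss statistics are straightforward to compute from a measured rtt series under the rtt_n = 0 loss convention; a minimal sketch follows.

    # ulp = P(rtt_n = 0), clp = P(rtt_{n+1} = 0 | rtt_n = 0), plg = 1/(1 - clp).
    def loss_stats(rtt):
        lost = [r == 0.0 for r in rtt]
        ulp = sum(lost) / len(lost)
        after_loss = [b for a, b in zip(lost, lost[1:]) if a]
        clp = sum(after_loss) / len(after_loss) if after_loss else 0.0
        plg = float("inf") if clp == 1.0 else 1.0 / (1.0 - clp)
        return ulp, clp, plg
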
Table 3 presents the measured values of clp and plg for different values of δ (refer to the beginning of the section). We observe that clp is greater than ulp for all values of δ. This can be explained as follows. The conditional loss probability is the probability that a given probe packet, say packet n+1, is lost given that packet n is lost, i.e. given that the buffer occupancy upon arrival of probe packet n is equal to K. The unconditional loss probability is the probability that packet n+1 is lost, irrespective of the buffer occupancy upon arrival of probe n. Clearly, the loss probability for probe n+1 increases with the buffer occupancy upon arrival of probe n. Therefore, clp ≥ ulp. For large values of δ, clp and ulp are almost identical. This is expected, since the states of the buffer seen by two successive probe packets become less and less correlated as δ increases. Our results show that the losses of probe packets are essentially random as long as the probe traffic uses less than 10% of the available capacity of the connection over which the probes are sent.

We observe that ulp stabilizes around 10% as δ increases. It is not completely clear why the stationary loss probability would be so large. Losses of probe packets are certainly caused in part by buffer overflows (most likely occurring at the endpoints of the transatlantic link). In [17], it is reported that faulty Ethernet and FDDI interface cards randomly drop packets, with a loss rate reaching up to 3% in the Suranet mid-level network. Since the path of the connection between INRIA and the University of Maryland crosses over Suranet, it is likely that a fraction of the observed probe losses are caused by such faulty interface cards.

² A proof of this can be obtained using results from Palm probabilities (see e.g. [1]).

It is important to note that the loss gap stays close to 1 even for small values of δ. This result has important consequences for the design of audio and video applications over the Internet. Audio applications send audio packets at regular intervals. The interval length depends on the audio sampling frequency, the number
