Pricing the Internet

by

Jeffrey K. MacKie-Mason
`Hal R. Varian
`University of Michigan
`
`April 1993
`Current version: February 10, 1994
`
Abstract. This paper was prepared for the conference “Public Access to the Internet,” JFK School
of Government, May 26–27, 1993. We describe the technology and cost structure of the NSFNET
backbone of the Internet, and discuss how one might price Internet access and use. We argue that
usage-based pricing is likely to be necessary to control congestion on the Internet and propose a
particular implementation of usage-based pricing using a “smart market”.
`
`Keywords. Networks, Internet, NREN, NII.
`
`Address. Hal R. Varian, Jeffrey K. MacKie-Mason, Department of Economics, University of
`Michigan, Ann Arbor, MI 48109-1220. E-mail: jmm@umich.edu, halv@umich.edu.
`
`
`
`
`Pricing the Internet
`Jeffrey K. MacKie-Mason
`Hal R. Varian
`
On December 23, 1992, the National Science Foundation (NSF) announced that it would cease funding
`
`the ANS T3 Internet backbone in the near future. This is a major step in the transition from a
`
`government-funded to a commercial Internet. This movement has been welcomed by private
`
`providers of telecommunication services and businesses seeking access to the Internet.
`
`No one is quite sure about how this privatization will work; in particular, it is far from clear
`
`how use of the privatized Internet will be priced. Currently, the several Internet backbone networks
`
`are public goods with exclusion: usage is essentially free to all authorized users. Most users are
`
`connected to a backbone through a “pipe” for which a fixed access fee is charged, but the user’s
`
`organization nearly always covers the access fee as overhead without any direct charge to the user.
`
`None of the backbones charge fees that depend at the margin on the volume of data transmitted. The
`
`result is that the Internet is characterized by “the problem of the commons,” and without instituting
`
new mechanisms for congestion control, it is likely soon to suffer from severe “overgrazing.” We
`
`shall propose an efficient pricing structure to manage congestion, encourage network growth, and
`
`guide resources to their most valuable uses.
`
`We first describe the Internet’s technology and cost structure, since a feasible and efficient
`
`pricing scheme must reflect both technology and costs. We then describe congestion problems in
`
`the network, and some past proposals to control it. We turn to pricing by first describing in general
`
`terms the advantages and disadvantages of using pricing to control congestion, followed by the
`
`details of our proposed pricing structure. We devote particular attention to a novel feature of our
`
`proposal: the use of a “smart market” to price congestion in real time.
`
`We wish to thank Guy Almes, Eric Aupperle, Hans-Werner Braun, Paul Green, Dave Katz, Mark Knopper, Ken Latta,
`Dave McQueeny, Jeff Ogden, Chris Parkin, Scott Shenker and Paul Southworth for helpful discussions, advice and data.
`We are also grateful to James Keller and Miriam Avins for extensive, helpful editorial advice. MacKie-Mason was visiting
`the Department of Economics, University of Oslo when this paper was completed.
`
` Most users of the NSFNET backbone do not pay a pipeline fee to ANS, the service provider, but instead pay for a
`connection to their “regional” or mid-level network, which then is granted a connection to the NSFNET.
`
`
`
`
`1. Internet Technology and Costs
`
`The Internet is a network of networks. We shall focus on backbone networks, although most of our
`
`pricing ideas apply equally well to mid-level and local area networks. There are essentially four
`
`competing backbones for the Internet: ANSnet, PSInet, Alternet, and SprintLINK. ANS is a non-
`
`profit that was formed in 1990 to manage the publicly-funded NSFNET for research and educational
`
`users. ANSnet now provides the backbone service for NSFNET, as well as backbone service for
`
`commercial users through its subsidiary, ANS CO+RE, Inc. PSInet and Alternet are independent
`
`commercial providers of backbone Internet services to commercial and non-commercial users.
`
`Sprint, of course, is a major telecommunications provider as well as a provider of Internet transport
`
`services.
`
`The Internet networks use packet-switching communications technology based on the TCP/IP
`
`protocols. While much of the traffic moves across lines leased from telephone common carriers,
`
`packet-switching technology is quite different from the circuit-switching used for voice telephony.
`
`When a telephone user dials a number, a dedicated path is set up between the caller and the
`
`called number. This path, with a fixed amount of network resources, is held open; no other caller
`
`can use those resources until the call is terminated. A packet-switching network, by contrast,
`
`uses “statistical multiplexing”: each circuit is shared by many users, and no open connection is
`
`maintained for a particular communications session. A data stream is broken up into small chunks
`
`called “packets.” When a packet is ready, the computer sends it onto the network. When one
`
`computer is not sending a packet, the network line is available for packets from other computers.
`
`The TCP (Transmission Control Protocol) specifies how to break up a datastream into packets and
`
`reassemble it; the IP (Internet Protocol) provides the necessary information for various computers
`
`on the Internet (the routers) to move the packet to the next link on the way to its final destination.
`
`The data in a packet may be 1500 bytes or so. Recently the average packet on NSFNET carries
`
`about 200 bytes of data (packet size has been steadily increasing). On top of these 200 bytes the
`
` In addition, a new alliance called CoREN has been formed between eight regional networks and MCI. This represents
`a move away from the traditional backbone structure towards a mesh-structured set of overlapping interconnections.
`
 Some telephone lines are multiplexed, but they are synchronous: 1/Nth of the line is dedicated to each open circuit,
no matter how lightly that circuit is used.
`
`
`
`
`TCP/IP headers add about 40; thus about 17% of the traffic carried on the Internet is simply header
`
`information.
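
This overhead figure follows from simple arithmetic; the short sketch below (in Python) reproduces it using the packet and header sizes just quoted.

```python
# Back-of-envelope check of the header-overhead figure quoted in the text.
avg_data_bytes = 200   # average data per NSFNET packet (from the text)
header_bytes = 40      # combined TCP + IP headers, roughly 20 bytes each

overhead = header_bytes / (avg_data_bytes + header_bytes)
print(f"header share of traffic: {overhead:.1%}")   # about 16.7%, i.e. roughly 17%
```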
`
`Packetization allows for the efficient use of expensive communications lines. Consider a
`
`typical interactive terminal session to a remote computer: most of the time the user is thinking.
`
`The network is needed only after a key is struck or when a reply is returned. Holding an open
`
`connection would waste most of the capacity of the network link. Instead, the computer waits until
`
`after a key is struck, at which point it puts the keystroke information in a packet which is sent across
`
`the network. The rest of the time the network links are free to be used for transporting packets
`
`from other users.
`
`The other distinguishing feature of Internet technology is that it is “connectionless.” This
`
`means that there is no end-to-end setup for a session; each packet is independently routed to its
`
`destination. When a packet is ready, the host computer sends it on to another computer, known
`
`as a router (or switch). The router examines the destination address in the header and passes the
`
`packet along to another router, chosen by a route-finding algorithm. A packet may go through 30
`
`or more routers in its travels from one host computer to another. Because routing is dynamically
`
`calculated, it is entirely possible for different packets from a single session to take different routes
`
`to the destination.
`
`The postal service is a good metaphor for the technology of the Internet (Krol (1992), pp. 20–
`
`23). A sender puts a message into an envelope (packet), and that envelope is routed through a
`
`series of postal stations, each determining where to send the envelope on its next hop. No dedicated
`
`pipeline is opened end-to-end, and thus there is no guarantee that envelopes will arrive in the
`
 Some interactive terminal programs collect keystrokes until an Enter or Transmit key is struck, then send
the entire “line” off in a packet. However, most Internet terminal sessions use the telnet program, which sends each
`keystroke immediately in a separate packet.
`
` Some packet-switching networks are “connection-oriented” (notably, X.25 networks, such as Tymnet and frame-relay
`networks). In such a network a connection is set up before transmission begins, just as in a circuit-switched network.
`A fixed route is defined, and information necessary to match packets to their session and defined route is stored in
`memory tables in the routers. Thus, connectionless networks economize on router memory and connection set-up time,
`while connection-oriented networks economize on routing calculations (which have to be redone for every packet in a
`connectionless network).
`
` Dynamic routing contributes to the efficient use of the communications lines, because routing can be adjusted to balance
`load across the network. The other main justification for dynamic routing is network reliability, since it gives each packet
alternative routes to its destination should some links fail. This was especially important to the military, which funded
`most of the early TCP/IP research to improve the ARPANET.
`
`
`
`
`sequence they were sent, or follow exactly the same route.
`
`The TCP protocol enables packets to be identified and reassembled in the correct order. TCP
`
`prefaces the data in a packet with a header containing the source and destination ports, the sequence
`
`number of the packet, an acknowledgment flag, and so on. The header takes up 20 or more bytes.
`
`TCP sends the packet to a router, a computer that is in charge of forwarding packets to their next
`
`destination. At the routers, IP adds another header (another 20 or more bytes) containing source
`
`and destination addresses and other information needed for routing the packet. The router then
`
`calculates the best next link for the packet to traverse, and sends it on. The best link may change
`
`minute by minute, as the network configuration changes. Routes can be recalculated immediately
`
`from the routing table if a route fails. The routing table in a switch is updated nearly continuously.
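
To make the forwarding step concrete, the toy sketch below shows a router choosing a next hop by longest-prefix match against a small routing table. The addresses and next-hop names are invented for illustration; real backbone routers maintain far larger tables that are updated continuously by routing protocols rather than by hand.

```python
# Toy illustration of the per-packet forwarding decision described above.
# Prefixes and next-hop names are invented; real tables are built and
# updated continuously by routing protocols.
import ipaddress

routing_table = {
    "35.0.0.0/8": "cleveland-node",      # hypothetical entry
    "192.12.88.0/24": "newyork-node",    # hypothetical entry
}

def next_hop(dst_ip):
    """Return the next hop for a destination, using longest-prefix match."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(net), hop)
               for net, hop in routing_table.items()
               if dst in ipaddress.ip_network(net)]
    if not matches:
        return None                       # no route: the packet would be dropped
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("35.1.1.1"))               # -> "cleveland-node"
```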
`
`Over the past five years, the speed of the NSFNET backbone has increased from 56 Kbps to
`
`45 Mbps (“T3” service). The newer backbones have also upgraded to 45 Mbps. These lines
`
`can move about 1,400 pages of text per second; a 20-volume encyclopedia can be sent across the
`
`Internet in half a minute. Many regional networks still provide T1 (1.5Mbps) service, but these too
`
`are being upgraded.
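
These illustrations are back-of-envelope figures; the sketch below reproduces them under our own assumptions of roughly 4,000 characters per page of text and roughly 8.5 megabytes per encyclopedia volume (neither figure appears in the text).

```python
# Rough check of the T3 throughput illustrations in the text.
t3_bits_per_second = 45_000_000
bytes_per_second = t3_bits_per_second / 8        # about 5.6 million bytes per second

chars_per_page = 4_000                            # assumed plain-text page
print(bytes_per_second / chars_per_page)          # about 1,400 pages per second

encyclopedia_bytes = 20 * 8_500_000               # assumed 20 volumes of ~8.5 MB each
print(encyclopedia_bytes / bytes_per_second)      # about 30 seconds
```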
`
`The transmission speed of the Internet is remarkably high. We recently tested the transmission
`
`delay at various times of day and night for sending a packet to Norway from Ann Arbor, Michigan.
`
`Each packet traversed 16 links: the IP header was read and modified 16 times, and 16 different
`
`routers calculated the best next link. Despite the many hops and substantial packetization and
`
`routing, the longest delay on one representative weekday was only 0.333 seconds (at 1:10 PM
`
`EST); the shortest delay was 0.174 seconds (at 5:13 PM EST).
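
The probes themselves are simple round-trip timings of the sort produced by the standard ping utility. The sketch below shows one way such a measurement could be scripted; the host name is illustrative only, and the parsing assumes the usual Unix ping output format.

```python
# One-shot round-trip delay probe using the standard Unix ping utility.
# The host name is illustrative; output parsing assumes "time=... ms".
import re
import subprocess

def round_trip_ms(host):
    """Send a single ICMP echo request and return the round-trip time in milliseconds."""
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    match = re.search(r"time=([\d.]+) ms", result.stdout)
    return float(match.group(1)) if match else None

print(round_trip_ms("ifi.uio.no"))   # e.g. 120.3 (milliseconds), or None on failure
```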
`
`Current backbone network costs
`
`The postal service is a good metaphor for packet-switching technology, not for the cost structure of
`
`Internet services. Most of the costs of providing the Internet are more-or-less independent of the
`
` Routing is based on a dynamic knowledge of which links are up and a static “cost” assigned to each link. Currently
routing does not take congestion into account. Routes can change when hosts are added to or deleted from the network
`(including failures), which happens often with about 2 million hosts and over 21,000 subnetworks.
`
` “Kbps” is thousand (kilo) bits per second; “Mbps” is million (mega) bits per second.
`
 While preparing the final manuscript we repeated our delay experiment for 20 days in October–November, 1993. The
delay times between Ann Arbor and Norway then ranged from 0.153 seconds to 0.303 seconds.
`
`
`
`
`level of usage of the network; i.e., most of the costs are fixed costs. If the network is not saturated
`
`the incremental cost of sending additional packets is essentially zero.
`
`The NSF in 1993 spent about $11.5 million to operate the NSFNET and provided $7 million
`
`per year in grants to help operate the regional networks. NSF grants also help colleges and
`
`universities connect to the NSFNET. Using the conservative estimate of 2 million hosts and 20
`
`million users, this implies that the 1993 NSF Internet subsidy was less than $10 per year per host,
`
`or less than $1 per user.
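
The per-host and per-user figures are simple divisions of the combined subsidy by the estimated populations, as the short sketch below makes explicit.

```python
# The per-host and per-user subsidy figures follow from simple division.
nsf_backbone_support = 11_500_000    # 1993 NSFNET operating support ($)
nsf_regional_grants = 7_000_000      # annual grants to regional networks ($)
hosts = 2_000_000                    # conservative estimates from the text
users = 20_000_000

subsidy = nsf_backbone_support + nsf_regional_grants
print(subsidy / hosts)   # 9.25  -> less than $10 per host per year
print(subsidy / users)   # 0.925 -> less than $1 per user per year
```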
`
Total salaries and wages for NSFNET increased by about two-thirds (68% in nominal terms)

over 1988–1991, a period during which the number of packets delivered increased by a factor
`
`of 128. It is hard to calculate total costs because of large in-kind contributions by IBM and
`
`MCI during the initial years of the NSFNET project, but it appears that total costs for the 128-fold
`
`increase in packets have increased by a factor of about 3.2.
`
Two components account for most of the costs of providing a backbone network: communications

lines and routers. Lease payments for lines and routers accounted for nearly 80% of the
`
`1992 NSFNET costs. The only other significant cost is for the Network Operations Center (NOC),
`
`which accounts for roughly 7% of total cost. Thus we focus on the costs of lines and routers.
`
`We have estimated costs for the network backbone as of 1992–93. A T3 (45 Mbps) trunk line
`
`running 300 miles between two metropolitan central stations could be leased for about $32,000
`
`per month. The cost to purchase a router capable of managing a T3 line was approximately
`
` In a postal service most of the cost is in labor, which varies quite directly with the volume of the mail.
`
` The regional network providers generally set their charges to recover the remainder of their costs, but there is also
`some subsidization from state governments at the regional level.
`
` This, of course, represents only backbone costs for NSFNET users. Total costs, including LAN and regional network
`costs, are higher.
`
` Since packet size has been slowly increasing, the amount of data transported has increased even more.
`
` A NOC monitors traffic flow at all nodes in the network and troubleshoots problems.
`
` We estimated costs for the network backbone only, defined to be links between common carrier Points of Presence
`(POPs) and the routers that manage those links. We did not estimate the costs for the feeder lines to the mid-level or regional
`networks where the data packets usually enter and leave the backbone, nor for the terminal costs of setting up the packets
`or tearing them apart at the destination.
`
`
`
`
`$100,000. Assuming another $100,000 for service and operation costs, and 50-month amortization
`
`at a nominal 10% rate yields a rental cost of about $4900 per month for the router.
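
The implied monthly rental follows from the standard annuity formula applied to the $200,000 of combined purchase and service costs; the sketch below reproduces the calculation, treating the nominal 10% rate as an annual rate compounded monthly.

```python
# Implied monthly rental for the router, using the standard annuity formula
# and the purchase, service, and amortization assumptions in the text.
principal = 100_000 + 100_000        # purchase price plus service and operations ($)
months = 50                          # amortization period
r = 0.10 / 12                        # nominal 10% annual rate, applied monthly

payment = principal * r / (1 - (1 + r) ** -months)
print(f"${payment:,.0f} per month")  # roughly $4,900
```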
`
`The costs of both lines and switching have been dropping rapidly for over three decades. In the
`
`1960s, digital computer switching was more expensive (per packet) than lines (Roberts (1974)),
`
`but switching has since become substantially cheaper. In Table 1 we show estimated 1992 costs for
`
`transporting 1 million bits of data through the NSFNET backbone and compare these to estimates
`
`for earlier years. As can be seen, in 1992 lines cost about eight times as much as routers.
`
Table 1.
Communications and Router Costs
(Nominal $ per million bits)

Year    Lines      Routers    Transmission Speed
1960    1.00                  2.4 kbps
1962               10.00
1963    0.42                  40.8 kbps
1964    0.34                  50.0 kbps
1967    0.33                  50.0 kbps
1970               0.168
1971               0.102
1974    0.11       0.026      56.0 kbps
1992    0.00094    0.00007    45 Mbps

Notes: 1. Costs are based on sending one million bits of data approximately 1200 miles on a path that traverses five routers.
Sources: 1960–74 from Roberts (1974). 1992 calculated by the authors using data provided by Merit Network, Inc.
`
`The structure of the NSFNET backbone directly reflects its costs: lots of cheap routers manage
`
`a limited number of expensive lines. We illustrate a portion of the network in Figure 1. Each
`
`numbered square is an RS6000 router; the numbers listed beside a router are links to regional
`
`networks. In general, each packet moves through two separate routers at the entry and exit nodes.
`
`For example, if we send a message from the University of Michigan to Bell Laboratories, it will
`
`traverse link 131 to Cleveland, where it passes through two routers (41 and 40). The packet goes
`
`to New York, where it moves through another two routers (32 and 33) before leaving the backbone
`
on link 137 to the JVNCnet regional network to which Bell Labs is connected. Two T3 lines are

traversed using four routers.
`
`
`
`
[Figure 1 appears here: a partial map of the NSFNET T3 backbone showing RS/6000 and Cisco routers at the Cleveland, New York, Hartford, and Washington, DC nodes, the T3 links between them, and the numbered links to regional networks.]

Figure 1. Network Map Fragment
`
`Relation between technology and costs
`
`Line and switching costs have been exponentially declining at about 30% per year (see the semi-log
`
`plot in Figure 2). But more interesting than the rapid decline is the change from expensive routers
`
`to expensive transmission links. Indeed, it was the crossover around 1970 (Figure 2) that created
`
`a role for packet-switching networks. When lines were cheap relative to switches it made sense
`
`to have many lines feed into relatively few switches, and to open an end-to-end circuit for each
`
`connection. In that way, each connection wastes transmission capacity (lines are held open whether
`
`data is flowing or not) but economizes on switching (one set-up per connection).
`
`When switches become cheaper than lines the network is more efficient if data streams are
`
`broken into small packets and sent out piecemeal, allowing many users to share a single line. Each
`
`packet must be examined at each switch along the way to determine its type and destination, but
`
`this uses the relatively cheap switch capacity. The gain is that when one source is quiet, packets
`
`from other sources use the same (relatively expensive) lines.
`
`
`
`
[Figure 2 appears here: a semi-log plot of communications line and router costs (nominal $ per million bits across four nodes) from 1955 to 1995; both series decline exponentially at about 30% per year.]

Figure 2. Trends in costs for communications links and routers.
`
`2. Congestion Problems
`
`The Internet is an extremely effective way to move information; for users, the Internet usually
`
`seems to work reliably and instantly. Sometimes, however, the Internet becomes congested, and
`
`there is simply too much traffic for the routers and lines to handle. At present, the only two ways
`
the Internet can deal with congestion are to drop packets, so that some information must be resent
`
`by the application, or to delay traffic. These solutions impose external social costs: Sally sends
`
a packet that crowds out Elena’s packet; Elena suffers delay, but Sally does not pay for the cost she
`
`imposes on Elena.
`
`In essence, this is the classic problem of the commons. When villagers have shared, unlimited
`
`access to a common grazing field, each will graze his cows without recognizing the costs imposed
`
`on the others. Without some mechanism for congestion control, the commons will be overgrazed.
`
`Likewise, as long as users have access to unlimited Internet usage, they will tend to “overgraze”,
`
`creating congestion that results in delays and dropped packets for other users.
`
`This section examines the extent of congestion, and explores some recent work on controlling
`
`congestion. Our proposal, which is based on charging per-packet prices that vary according to the
`
`degree of congestion, is explained later in the paper.
`
`The Internet experienced severe congestion in 1987. Even now congestion problems are
`
`relatively common in parts of the Internet (although not yet on the T3 backbone). According to
`
`
`
`
Kahin (1992): “... problems arise when prolonged or simultaneous high-end uses start degrading
`
`service for thousands of ordinary users. In fact, the growth of high-end use strains the inherent
`
`adaptability of the network as a common channel” (page 11). Some contemplated uses, such
`
`as real-time video and audio transmission, will lead to substantial increases in the demand for
`
bandwidth, and congestion problems will only get worse unless there is a substantial increase in
`
`bandwidth. For example, Smarr and Catlett write that:
“If a single remote visualization process were to produce 100 Mbps bursts, it would take
only a handful of users on the national network to generate over 1 Gbps load. As the remote
visualization services move from three dimensions to [animation] the single-user bursts
will increase to several hundred Mbps ... Only for periods of tens of minutes to several
hours over a 24-hour period are the high-end requirements seen on the network. With
these applications, however, network load can jump from average to peak instantaneously.”
Smarr and Catlett (1992), page 167.
`
`This has happened. For example, during the weeks of November 9 and 16, 1992, some packet
`
`audio/visual broadcasts caused severe delay problems, especially at heavily-used gateways to
`
`the NSFNET backbone and in several mid-level networks. Today even ordinary use is causing
`
`significant delays in many of the regional networks around the world as demand grows faster than
`
`capacity.
`
`Of course, deliveries can be delayed for a number of other reasons. For example, if a router
`
`fails then packets must be resent by a different route. However, in a multiply-connected network,
`
`the speed of rerouting and delivery of failed packets measures one aspect of congestion, or the
`
`scarcity of the network’s delivery bandwidth.
`
`To characterize congestion on the Internet, we timed the delay in delivering packets to seven
`
`sites around the world. We ran our test hourly for 37 days during February and March 1993. Figure
`
`3 and Figure 4 show our results from four of our hourly probes. Median and maximum delivery
`
`delays are not always proportional to distance: the delay from Michigan to New York was generally
`
`longer than to Berkeley, and delays from Michigan to Nova Scotia, Canada, were often longer than
`
to Oslo, Norway (Figure 3).
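
Figures 3 and 4 summarize these measurements by destination and time of day. The sketch below shows one way the raw probe records can be reduced to the statistics plotted there (median, maximum, and standard deviation of round-trip time); the record format is our own, and the sample values are invented for illustration.

```python
# Reducing raw delay probes to the statistics shown in Figures 3 and 4.
# Records are (site, hour_of_day, round_trip_ms); the sample values are invented.
import statistics
from collections import defaultdict

records = [
    ("uio.no", 16, 178.0), ("uio.no", 16, 192.0), ("uio.no", 4, 161.0),
    ("nyu.edu", 16, 95.0), ("nyu.edu", 16, 240.0), ("nyu.edu", 4, 71.0),
]

by_site_hour = defaultdict(list)
for site, hour, ms in records:
    by_site_hour[(site, hour)].append(ms)

for (site, hour), delays in sorted(by_site_hour.items()):
    print(site, hour,
          statistics.median(delays),    # Figure 3, lower panel
          max(delays),                  # Figure 3, upper panel
          statistics.pstdev(delays))    # Figure 4
```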
`
`There is substantial variability in Internet delays. For example, the maximum and median
`
`delays vary greatly according to the time of day. There appears to be a large 4PM peak problem on
`
` We use the term bandwidth to refer to the overall capacity of the network to transport a data flow, usually measured
`in bits per second. The major bottleneck in backbone capacity today is in fact switch technology, not the bandwidth of the
`lines.
`
`
`
`
[Figure 3 appears here: two bar charts of round-trip times in milliseconds by destination (University of Washington, ATT Bell Labs, UC Berkeley, New York University, University of Arizona at Tucson, Nova Scotia, and University of Oslo) and by time of day (4 am, 8 am, 12 pm, 4 pm EST); the upper panel shows maximum delay, the lower panel median delay.]

Figure 3. Maximum and Median Transmission Delays on the Internet
`
`the east coast for packets to New York and Nova Scotia, but much less for ATT Bell Labs (in New
`
Jersey). The time-of-day variation is also evident in Figure 5 (borrowed from Claffy, Polyzos,
`
`and Braun (1992)).
`
`In Figure 4 we measure delay variation by the standard deviation of delays by time of day for
`
`each destination. Delays to Nova Scotia, Canada were extraordinarily variable, yet delays to Oslo
`
were no more variable than transmissions to New Jersey (ATT). Variability in delay fluctuates
`
`widely across times of day, as we would expect in a system with bursty traffic, but follows no
`
` The high maximum delay for the University of Washington at 4PM is correct, but appears to be aberrant. The maximum
`delay was 627 msec; the next two highest delays (in a sample of over 2400) were about 250 msecs each. After dropping
`this extreme outlier, the University of Washington looks just like UC Berkeley.
`
` Note that the Claffy et al. data were for the old, congested T1 network. We reproduce their figure to illustrate the
`time-of-day variation in usage; the actual levels of link utilization are generally much lower in the current T3 backbone.
`Braun and Claffy (1993) show time-of-day variations in T3 traffic between the US and three other countries.
`
`
`
`
[Figure 4 appears here: a bar chart of the standard deviation of round-trip times in milliseconds, by destination and by time of day (4 am, 8 am, 12 pm, 4 pm EST).]

Figure 4. Variability in Internet Transmission Delays

[Figure 5 appears here: utilization of the most heavily used backbone link in each fifteen-minute interval over the course of a day, reproduced from Claffy et al. (1992).]

Figure 5. Utilization of Most Heavily Used Link in Each Fifteen Minute Interval (Claffy et al. (1992))
`
`obvious pattern.
`
`How much delay is involved, and who is inconvenienced? As seen in Figure 3, during our
`
`experiment we never experienced delays of more than 1 second in round trip time except to the
`
`site in Nova Scotia. Is that too trivial a delay to be concerned about? Probably “yes”, for e-mail.
`
`
`
`
`However, delays of that magnitude can be quite costly for some users and some applications.
`
`For example, two-way voice communications must be limited to delays of 25 ms (500 ms with
`
echo cancellers); likewise for interactive video. Even a simple use like a terminal session can
`
`be very unpleasant if each character transmitted takes a second before echoing back to the screen.
`
`And of course, without a better mechanism for congestion control, we expect delays to increase in
`
`frequency and duration.
`
`Controlling congestion
`
`NSFNET usage has been growing at about 6% per month, or doubling every twelve months.
`
`Although the cost of adding capacity is declining rapidly, we think it is very likely that congestion
`
will continue to be a problem, especially as new very-high-bandwidth uses (such as real-time

broadcast video) become common. It is becoming increasingly important to consider how congestion in
`
`networks such as the Internet should be controlled, and much work is needed. As Kleinrock (1992)
`
`writes, “One of the least understood aspects of today’s networking technology is that of network
`
`control, which entails congestion control, routing control, and bandwidth access and allocation.”
`
`There is some literature on network congestion control; see Gerla and Kleinrock (1988) for an
`
`overview. Most researchers have focused on schemes that offer different priorities and qualities
`
`of service, depending on users’ needs. For example, users could send e-mail with a low priority,
`
`allowing it to be delayed during congested periods so that more time-critical traffic could get
`
`through.
`
`In fact, IP packets contain fields called “Precedence” and “Type of Service” (TOS), but most
`
`commercial routers do not currently use these fields. To facilitate the use of the TOS field,
`
 We should also note that our experiment underestimated the delay that many applications might experience. We were
`sending probes consisting of a single packet. Some real data flows involve hundreds or thousands of packets, such as in
`terminal sessions, file transfers and multimedia transmissions. For these flows, periodic delays can be much longer due to
`the flow-control protocols implemented in the applications.
`
` An “ms” or millisecond is one one-thousandth of a second.
`
` The compound growth rate in bytes transported has been 5.8% per month from March 1991 to September 1993, and
`6.4% per month from September 1992 to September 1993. This probably underestimates growth in Internet usage because
`traffic on other backbone routes has been growing faster.
`
 In 1986 the NSFNET experienced severe congestion and there was some experimentation with routing based on
`the IP precedence field and the type of application. When the NSFNET was upgraded to T1 capacity, priority queuing was
`abandoned for end-user traffic.
`
`
`
`
`its interpretation will probably be changed to the form described in Almquist (1992). Almquist
`
`proposes that the user be able to request that the network minimize delay, maximize total flow,
`
`maximize reliability, or minimize monetary cost. Prototype algorithms to provide such service
`
`are described in Prue and Postel (1988); a related proposal to ease congestion is in Bohn, Braun,
`
`Claffy, and Wolff (1993). In this scheme a router looks up the destination address and examines
`
`the possible routes. Each route has a TOS number; the router looks for one that matches the TOS
`
`number of the packet.
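
For concreteness, the sketch below shows how an application can request one of these service types through a modern sockets interface; the bit values are those defined in Almquist (1992) (RFC 1349), and whether any router along the path honors the request is, as noted, another matter. The host and port are illustrative only.

```python
# Requesting an IP Type of Service through the sockets API. The TOS bit
# values follow Almquist (1992) / RFC 1349; routers are free to ignore them.
import socket

TOS_MINIMIZE_DELAY = 0x10
TOS_MAXIMIZE_THROUGHPUT = 0x08
TOS_MAXIMIZE_RELIABILITY = 0x04
TOS_MINIMIZE_MONETARY_COST = 0x02

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the network to minimize delay for this connection's packets.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_MINIMIZE_DELAY)
s.connect(("example.com", 80))    # illustrative destination
```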
`
`To an economist, this is too inflexible. In particular, the TOS value “minimize monetary cost”
`
`seems strange. Of course senders would want to minimize monetary cost for a given quality of
`
`service: that is an objective, not a constraint. Also, it is unfortunate that TOS numbers do not allow
`
`for inequality relations. Normally, one would think of specifying the maximum amount that one
`
`would be willing to pay for delivery, with the assumption that less expensive service (other things
`
`being equal) would be better.
`
`As Almquist (1992) explains, “There was considerable debate over what exactly this value
`
`[minimize monetary cost] should mean.” However, he goes on to say:
`
`“It seems likely that in the future users may need some mechanism to express the
`maximum amount they are willing to pay to have a packet delivered. However, an IP option
`would be a more appropriate mechanism, since there are precedents for having IP options
`that all routers are required to honor, and an IP option could include parameters such as the
`maximum amount the user was willing to pay. Thus, the TOS value defined in this memo
`merely requests that the network ‘minimize monetary cost.”’ Almquist (1992)
`
Almquist’s remarks reflect the limited attention to pricing in most research to date, especially

pricing to control congestion. But without pricing it is hard to imagine how priority schemes could
`
`be implemented. What is to stop an e-mail user from setting the highest priority if it costs
`
`nothing? What political or organizational authority should be allowed to dictate the relative priority
`
`to give college student real-time multimedia rap sessions versus elementary school interactive
`
`classrooms? Cocchi, Estrin, Shenker, and Zhang (1992) and Shenker (1993) make the important
`
`point that if applications require different combinations of network characteristics (responsiveness,
`
`reliability, throughput, etc.), then some sort of pricing will be needed to sort out users’ demands
`
`for these characteristics.
`
` Enforcing externally determined priorities may be impossible anyway since bytes are bytes and it is difficult to monitor
`anything about the content of a data stream.
`
`
`
`
`Faulhaber (1992) has considered some of the economic issues related to pricing access to the
`
`Internet. He suggests that “transactions among institutions are most efficiently based on capacity
`
`per unit time. We would expect the ANS to charge mid-level networks or institutions a monthly or
`
`annual fee that varied with the size of the electronic pipe provided to them. If the cost of providing
`
the pipe to an institution were higher than to a mid-level network ... the fee would be higher.”
`
`Faulhaber’s suggestion makes sense for recovering the cost of a dedicated line, one that connects
`
`an institution to the Internet backbone. But we don’t think that it is appropriate for charging for
`
`backbone traffic itself because the bandwidth on the backbone is inherently a shared resource—
`
`many packets “compete” for the same bandwidth. There is an overall constraint on capacity, but
`
`there is no such thing as an individual’s capacity on the backbone.
`
`Although it is appropriate to charge a flat fee to cover the costs of a network connection, it is
`
`important to charge for network usage when the network is congested. During times of congestion
`
`bandwidth is a scarce resource. Conversely, when the network is not congested the marginal cost
`
`of transporting additional packets is essentially zero; it is therefore appropriate to charge users a
`
`very low or no price for packets when the system is not congested.
`
`One problem with usage-sensitive pricing is the cost of accounting and