The economics of the Internet: Utility, utilization, pricing, and Quality of Service

Andrew Odlyzko
AT&T Labs - Research
amo@research.att.com

July 7, 1998.
Abstract. Can high quality be provided economically for all transmissions on the Internet? Current work assumes that it cannot, and concentrates on providing differentiated service levels. However, an examination of patterns of use and economics of data networks suggests that providing enough bandwidth for uniformly high quality transmission may be practical. If this turns out not to be possible, only the simplest schemes that require minimal involvement by end users and network administrators are likely to be accepted. On the other hand, there are substantial inefficiencies in the current data networks, inefficiencies that can be alleviated even without complicated pricing or network engineering systems.
1. Introduction
The Internet has traditionally treated all packets equally, and charging has involved only a fixed monthly fee for the access link to the network. However, there are signs of an imminent change. There is extensive work on provision of Quality of Service (QoS), with some transmissions getting preferential treatment. (For a survey of this area and references, see the recent book [FergusonH].) Differential service will likely require more complicated pricing schemes, which will introduce yet more complexity.

The motivation behind the work on QoS is the expectation of continued or worsening congestion. As Ferguson and Huston say (p. 9 of [FergusonH]):

    ... it sometimes is preferable to simply throw bandwidth at congestion problems. On a global scale, however, overengineering is considered an economically prohibitive luxury. Within a well-defined scope of deployment, overengineering can be a cost-effective alternative to QoS structures.
The argument of this paper is that overengineering (providing enough capacity to meet peak demands) on a global scale may turn out not to be prohibitively expensive. It may even turn out to be the cheapest approach when one considers the costs of QoS solutions for the entire information technologies (IT) industry.
Overengineering has been traditional in corporate networks. Yet much of the demand for QoS is coming from corporations. It appears to be based on the expectation that overengineering will not be feasible in the future. “There's going to come a time when more bandwidth is just not going to be available... and you'd better be able to manage the bandwidth you have,” according to one network services manager [JanahTD].

The abandonment of the simple traditional model of the Internet would be a vindication for many serious scholars who have long argued that usage-sensitive pricing schemes and differential service would provide for more efficient allocation of resources. (See [McKnightB] for references and surveys of this work.) The need for usage-sensitive pricing has seemed obvious to many on the general grounds of the “tragedy of the commons”. As Gary Becker, a prominent economist, said recently (in advocating car tolls to alleviate traffic jams and the costs they impose on the economy [Becker]):

    An iron law of economics states that demand always expands beyond the supply of free goods to cause congestion and queues.
It may indeed be an iron law of economics that demand for free goods will always expand to exceed supply. The question is, will it do so anytime soon? An iron law of astrophysics states that the Sun will become a red giant and expand to incinerate the Earth, but we do not worry much about that event. Furthermore, the law of astrophysics is much better grounded in both observation and theoretical modeling than the law of economics. For example, consider Table 1 (based on data from tables 12.2 and 18.1 of [FCC]). It shows a dramatic increase in the total length of toll calls per line. Such calls are paid by the minute of use, and their growth was presumably driven largely by decreasing prices, as standard economic theory predicts. On the other hand, local calls in the U.S. (which are almost universally not metered, but paid for by a fixed monthly charge, in contrast to many other countries) have stayed at about 40 minutes per day per line in the last two decades. The increase of over 62% in the total volume of local calls was accompanied by a corresponding increase in the number of lines. There is little evidence in this table of that “iron law of economics” that causes demand to exceed supply, and which, had it applied, surely should have led to continued growth in local calling per line. (There is also little evidence of the harm that Internet access calls are supposed to be causing to the local telephone companies. This is not to say there may not have been problems in some localities in California, for example, or that there won't be any in the future. However, at least through 1996 the increasing use of networked computers has not been a problem in aggregate.)
An obvious guess as to why we have stable patterns of voice calls is that people have limited time, and so, with flat-rate pricing, their demand for local calls had already been satisfied by 1980. However, that is not what the data in Table 1 shows. While the total volume of local calls went up almost 63% between 1980 and 1996, population increased only 16.5%, so minutes of local calls per person (including modem and fax calls) increased by 40%. Thus demand for local calls has been growing vigorously, but it was satisfied by a comparable increase in lines. Families and businesses decided, on average, to spend more on additional phone lines instead of using more of the “free good” that was already available. Somewhat analogous phenomena appear to operate in data networking, and may make it feasible to provide high quality undifferentiated service on the Internet.
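The arithmetic behind this comparison is easy to check. The short Python sketch below uses only the growth rates quoted above, not the underlying FCC minute counts (which are not reproduced here):

    # Rough check of the Table 1 reasoning; the growth rates are the ones
    # quoted in the text, not independent data.
    local_volume_growth = 1.63    # total local call volume, 1996 vs. 1980
    population_growth = 1.165     # U.S. population, 1996 vs. 1980

    per_person_growth = local_volume_growth / population_growth - 1.0
    print("local minutes per person grew by about %.0f%%" % (100 * per_person_growth))
    # -> about 40%; since minutes per line stayed near 40 per day, the number
    #    of lines must have grown roughly in step with the total volume.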
In data networks, at first sight there does appear to be extensive evidence for that “iron law of economics.” Comprehensive statistics are not available, but it appears that Internet traffic has been doubling each year for at least the last 15 years, with the exception of the two years 1995 and 1996, when it appears to have grown by a factor of about 10 each year [CoffmanO]. Almost every data link that has ever been installed was saturated sooner or later, and usually it was sooner rather than later.
An instructive example is that of the traffic between the University of Waterloo and the Internet, shown in Fig. 1. The Waterloo connection started out as a 56 Kbps link, was upgraded to 128 Kbps in July 1993, then to 1.5 Mbps in July 1994, and most recently to 5 Mbps in April 1997 [Waterloo]. Based on current usage trends, this link will be saturated by the end of 1998, and will need to be upgraded, or else some rationing scheme will have to be imposed. (A partial rationing scheme is already in effect, since the link is heavily utilized and often saturated during the day.)
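To make the saturation estimate concrete, here is a minimal projection sketch. The starting traffic level is a placeholder, not a figure from [Waterloo]; only the assumption of annual doubling comes from the discussion above:

    # Minimal projection sketch (assumed numbers): if traffic doubles each
    # year, how long until an upgraded link fills up?
    from math import log2

    link_capacity_mbps = 5.0    # capacity installed in April 1997
    traffic_mbps = 1.5          # hypothetical average traffic at installation
    annual_growth = 2.0         # traffic roughly doubles each year

    years = log2(link_capacity_mbps / traffic_mbps) / log2(annual_growth)
    print("link saturates after about %.1f years" % years)
    # With these assumed numbers, saturation comes after about 1.7 years,
    # i.e. toward the end of 1998 if growth continues at its usual pace.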
The University of Waterloo statistics could be regarded as clinching the case for QoS and usage-sensitive pricing. They show a consistent pattern of demand expanding to exceed supply. However, I suggest a different view. The volume of data traffic in Fig. 1 grows at a regular pace, just about doubling each year. The 12-fold jump in network bandwidth from 128 Kbps to 1.5 Mbps in July 1994 did not cause traffic to jump suddenly by a factor of 12. Instead, it continued to grow at its usual pace. The students did not go wild and saturate the link by downloading more pictures. Similarly, statistics for traffic on the Internet backbones show steady growth, aside from an anomalous period of extremely rapid increase in 1995 and 1996 [CoffmanO], and the NSFNet backbone in particular had traffic almost exactly doubling each year from the beginning of 1991 to the end of 1994. And why should an increase in traffic be seen as a problem? We are on the way to an Information Society, and so in principle we should expect growth in data traffic.
How much capacity to provide should depend on the value and the price of the service. To decide what is feasible or desirable, we have to consider the economics of the Internet. Unfortunately, the available sources (such as those in the book [McKnightB] or those currently available online through the links at [MacKieM, Varian]) are not adequate. The information they contain is often dated, and it usually covers only the Internet backbones. However, these backbones are a small part of the entire data networking universe. Sections 2 to 6 attempt to partially fill the gap in published information about the economics of the Internet.
Fig. 2 is a sketch of the Internet, with the label “Internet” attached just to the backbones (as the term is often used). As will be shown in Section 2 (based largely on the companion papers [CoffmanO, Odlyzko2]), these backbones are far smaller than the aggregate of corporate private line networks, whether measured in bandwidth or cost (although not necessarily in traffic). (See Table 2 for the sizes of data networks in the U.S. It is taken from [CoffmanO], and effective bandwidth, explained in that reference, compensates for most data packets traveling over more than a single link.) The private line networks, in turn, are dwarfed by the LANs (local area networks) and academic and corporate campus networks. Most of the pricing and differentiated service schemes that are being considered, though, are aimed at Internet backbones or private line WAN links. We need to consider how they would interact with the other data networks and the systems and people those networks serve.
Most of the effort on QoS schemes is based on the assumption of endemic congestion. However, when we examine the entire Internet, we find that most of it is uncongested. That the LANs are lightly used has been common knowledge. However, it appears to be widely believed that long distance data links are heavily utilized. The paper [Odlyzko2] (see Section 3 for a summary) shows that this belief is incorrect. Even the backbone links are not used all that intensively, and the corporate private line networks are very lightly utilized. There are some key choke points (primarily the public exchange points, the NAPs and MAEs, and the international links) that are widely regarded as major contributors to poor Internet performance, but there is even some dispute about their significance. (In general, while there have been numerous studies of the performance of the Internet, some very careful, such as [Paxson], there is still no consensus as to what causes the poor observed performance.)
What is not in dispute is that a large fraction of the problems that cause complaints from users are not caused by any deficiencies in transmission. Delays in delivery of email are frequent, but are almost always caused by mail server problems, as even trans-Atlantic messages do get through expeditiously. A large fraction of Web-surfing complaints are caused by server overloads or other problems. There are myriad other problems that arise, such as those concerned with DNS, firewalls, and route flapping. A key question is whether QoS would help solve those other problems, or would aggravate them, by making the entire system more complicated, increasing the computational burden on the routers, and increasing the numbers and lengths of queues.
Many QoS schemes require end-to-end coordination in the network, giving up on the stateless nature of the Internet, which has been one of its greatest strengths. Essentially all QoS schemes have the defect that they require extensive involvement by network managers to make them work. However, it is already a major deficiency of the Internet that, instead of being the dumb network it is often portrayed as, it requires a huge number of network experts at the edges to make it work [Odlyzko3]. Instead of throwing hardware and bandwidth at the problem, QoS would require scarce human resources.
The evidence presented in this paper, combined with that of [Odlyzko2], shows that the current system, irrationally chaotic as it might seem, does work pretty well. There appear to be only a small number of choke points in the system, which should not be too expensive to eliminate. Further, there are some obvious inefficiencies in the system that can be exploited. By moving away from private lines to VPNs (Virtual Private Networks) over the public Internet, one could provide excellent service for everybody through better use of aggregation of traffic and complementarity of usage patterns. The bulk of the work on QoS may be unnecessary.
Anania and Solomon wrote a paper in 1988 (which was widely circulated and discussed at that time, but was only published recently in [AnaniaS]) that took the unorthodox approach of arguing for a flat-rate approach to broadband pricing. That paper was about pricing of what are now called ATM services, which have QoS built in, but many of Anania and Solomon's arguments also imply the desirability of a simple undifferentiated service. My work presents some additional arguments and extensive evidence of the extent to which the traditional undifferentiated service, flat-price system can work.
QoS does have a role to play. There will always be local bottlenecks as well as emergency situations that will require special treatment. Even when local network and server resources are ample, there will often be a need to ration access to scarce human resources, such as technical support personnel. Even in the network, methods such as Fair Queueing [FergusonH] can be valuable in dealing with local traffic anomalies, for example. Implementing them would represent a departure from the totally undifferentiated service model, but a mild one, and one that can be implemented inside the network, invisible to the users, and without requiring end-to-end coordination. My argument is that we need to make the network appear as simple as possible to the users, to minimize their costs.
Sections 2 through 12 describe the economics of the Internet. The conclusion is that with some exceptions, the system does work pretty well as is. There are bottlenecks, but there are also inefficiencies that can be exploited to eliminate the bottlenecks. Users in general behave sensibly, and although their demands for bandwidth are growing rapidly, these demands are reasonably regular and predictable. It appears likely that unit prices for transmission capacity will decline drastically (although total spending on high bandwidth connections will surely grow), which should make it economically feasible to meet the growing demand.
It is impossible to predict with any certainty how the Internet will evolve, especially since its evolution depends on many factors, not only on basic computing and networking technology and the possible appearance of the proverbial “next killer app,” but also on government regulation and sociology. Still, some conclusions can be drawn from the study of the current system. The complexity of the entire Internet is already so great that the greatest imperative should be to keep the system as simple as possible. The costs of implementing involved QoS or pricing schemes are large and should be avoided. Section 13 outlines three scenarios that appear most likely. One is the continuation of the current flat rate pricing structure with almost uniform best-effort treatment of all packets, and enough bandwidth to provide high quality transmission. That scenario is likely to materialize if transmission prices decline rapidly enough. If they don't, the second scenario might arise, still with flat rate pricing and undifferentiated service, but with pricing reflecting the expected usage of a customer. Finally, if even greater constraints on traffic are needed, ones that would provide congestion controls, approaches such as the Paris Metro Pricing (PMP) scheme of [Odlyzko1] might have to be used. PMP is the least intrusive usage-sensitive pricing scheme possible, and my prediction is that if any usage-sensitive pricing is introduced, it will eventually evolve towards (or degenerate into) PMP. None of these three scenarios would meet the conventional standards for economic optimality. However, the main conclusion of this paper is that optimality is unattainable, and we should seek the simplest scheme that works and provides the necessary transmission quality.
2. The Internet and other networks
There are many excellent technical books and journal articles describing the technologies of the Internet (cf. [Keshav]). There is also a huge literature on how the Internet will change our economy and society (cf. [Gates]). On the other hand, practically nothing has been published on how the Internet is used, and how much it costs. It is as if we had shelves full of books telling us how to build internal combustion engines, and a comparable set of books on the effects of the automobile on suburban sprawl, income inequality, and other socioeconomic issues, but nothing about how many cars there were, or how much they cost to operate.
This section attempts to partially fill this gap in the knowledge of the economics of data networks, but the picture it presents can only be a sketchy one. Still, it should help illuminate the major economic factors that are driving the evolution of the Internet.
The Internet (sometimes called the global Internet) refers to the entire collection of interconnected networks around the world that share a common addressing scheme. As such, it includes all of the elements shown in Fig. 2, which is a grossly simplified sketch of the data networking universe. The element called “Internet” in Fig. 2 is really just the public Internet, the core of the network consisting of the backbones and associated lines that are accessible to general users. WANs (Wide Area Networks) consist of some of the clouds in that figure (which are made up of LANs and campus networks) connected via either private line networks, or via public Frame Relay and ATM data networks provided by telecommunications carriers, or else via the public Internet. Fig. 2 omits many important elements of the data networking universe, such as regional ISPs.
This paper will concentrate on data networks in North America, primarily in the United States. Just as in the companion papers [CoffmanO, Odlyzko2], the justification is that most of the spending on data traffic is in the U.S. [DataComm]. Further, U.S. usage, influenced by lower prices than in most of the world [ITU], foreshadows what the rest of the world will be doing within a few years, as prices are reduced.
Data networks do not operate in isolation. To see them in the proper perspective, let us note that total spending on information technologies (IT) in the U.S. was about $600 billion in 1997, approximately 8% of gross domestic product. The IT sector of the economy is credited with stimulating the high growth rate of the economy of the last few years, low unemployment, and low inflation [DOC].
Data communications cost about $80 billion in the U.S. in 1997, or 13% of total IT spending, according to [DataComm]. Table 2 is a brief summary of the statistics on where this spending was directed, based on the more detailed information in [DataComm] (which also covers the rest of the world). These statistics show that transmission accounted for only $16 billion, 20% of the total for data communications, and 2.6% of the total for all of IT. Thus data lines are a small part of the entire IT picture, and any scheme that attempts to improve their performance has to be weighed against the costs it might impose on the rest of the system. It is better to double the spending on transmission than to increase the average cost of all other IT systems by 3%.
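The comparison in the last sentence can be made explicit; a minimal sketch using the 1997 figures cited above:

    # Back-of-the-envelope comparison using the 1997 spending figures above.
    it_total = 600e9       # total U.S. IT spending
    transmission = 16e9    # spending on data transmission lines

    extra_if_transmission_doubles = transmission                      # +$16B
    extra_if_rest_of_it_up_3pct = 0.03 * (it_total - transmission)    # ~+$17.5B

    print("double transmission spending: +$%.1fB" % (extra_if_transmission_doubles / 1e9))
    print("3%% increase on the rest of IT: +$%.1fB" % (extra_if_rest_of_it_up_3pct / 1e9))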
Let us also note that U.S. spending on phone services from telecommunications carriers was around $200 billion in 1997. Of this total, about $80 billion was for long distance calls, but about $30 billion of that was for access charges, paid to the local carriers. Thus it is more appropriate to say that $50 billion was for long distance services, and $150 billion for local services. In any event, today much more is being spent on voice phone services than on data. In the future, as broadband services grow, we can expect the balance to shift towards data. In particular, looking at total communications spending and how it is still dominated by voice, it is reasonable to expect substantial growth in spending on data transmission.
The core of the Internet, namely the backbones and their access links, is surprisingly inexpensive. There are many large estimates for total Internet spending, but those are misleading. There were about 20 million residential accounts with online services such as AOL at the end of 1997. At $20/month, they generated revenues of around $5 billion per year. However, most of that revenue is used to cover local access costs (the modems, customer service, and marketing expenses of the ISPs). The backbones are only a small part of the cost picture for residential customers (cf. [Leida]). In the statistics of [DataComm] (and of Table 2) they apparently are included in the “Commercial Internet services” category, which came to $1.5 billion in 1997. We now derive two other estimates that are both in that range. According to industry analysts [IDC], MCI's Internet revenues (which include only a small contribution from residential customers, and are dominated by corporate and regional ISP links to the MCI network) came to $251 million in 1997 (a 103% increase over 1996), and were running at an annual rate of $328 million in the last quarter of 1997. Since MCI is estimated to carry between 20 and 30 percent of the backbone traffic, we can estimate total revenues from all backbone operations at between $1.1 and $1.6 billion at an annual rate at the end of 1997. (With revenues doubling each year, it is not adequate to look at annual statistics.) Yet another, rough way to estimate the costs of Internet backbones is to take their size, around 2,100 T3 equivalents at the end of 1997 [CoffmanO], and apply to that the $20,000 per month average cost of a T3 line [VS]. This produces an estimate of about $500 million per year for the main backbone links. When we add some additional costs for the access lines from carriers' Points of Presence to their backbones (cf. [Leida]), and apply the general estimate that for large carriers, transmission costs are about half of total costs, we arrive at an estimate of about $1.5 billion for the costs of the core of the Internet. (Costs and revenues are not the same, especially in the Internet arena, where red ink is plentiful as various players attempt to build market share, but within the huge uncertainty bounds we are working with, that should not matter much.)
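Both rough estimates in this paragraph reduce to simple arithmetic. A sketch using only the figures cited above (no independent data):

    # Estimate 1: scale MCI's Internet revenue run-rate by its share of traffic.
    mci_annual_rate = 328e6               # annualized rate, last quarter of 1997
    share_high, share_low = 0.30, 0.20    # MCI assumed to carry 20-30% of backbone traffic
    print("revenue estimate: $%.1fB - $%.1fB per year"
          % (mci_annual_rate / share_high / 1e9, mci_annual_rate / share_low / 1e9))

    # Estimate 2: backbone size times the average price of a T3 line.
    t3_equivalents = 2100       # backbone capacity, end of 1997
    t3_monthly_cost = 20000.0   # average cost of a T3 line, $/month
    link_cost = t3_equivalents * t3_monthly_cost * 12
    print("backbone link cost: $%.2fB per year" % (link_cost / 1e9))
    # ~$0.5B for the links; adding access-line costs and doubling (transmission
    # being roughly half of a large carrier's total costs) gives the ~$1.5B figure.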
Compared to the Internet backbones, the total cost of private line networks is at least 6 times as large. Furthermore, the aggregate bandwidth of leased lines is also much greater than that of the public backbones, although the traffic they carry appears to be comparable in volume. (See tables 2 and 5, taken from [CoffmanO].) This helps explain why the evolution of the Internet is increasingly dominated by corporate networks.
Just as with switched voice networks, data network costs are dominated by the local part. However, there is much more heterogeneity in data than in voice. In the foreseeable future, large academic and corporate networks are likely to have 2.4 Kbps wireless links along with 14.4 and 28.8 Kbps modems, megabit xDSL and cable modem links, and gigabit fiber optic cables. In the local campus wired environment, overengineering with Ethernet, Fast Ethernet, Gigabit Ethernet, and similar tools appears certain to be the preferred solution. However, there will still be challenges of interconnecting the other transmission components (whether slower or faster), as well as all the servers and other equipment that require the bandwidth. Network managers will have a hard time making everything interoperate satisfactorily even without worrying about QoS.
The Internet backbones are small and inexpensive compared to the rest of the Internet. However, they are the heart of the Internet, just like a human heart that is small but crucial for the life of the body. The role of the backbones will likely become even more important in the future as a result of several related developments. One is that they are being traversed by an increasing fraction of data traffic. The traditional 80/20 rule, which said that 80% of the traffic stayed inside a local or campus network, is breaking down. Ferguson and Huston [FergusonH] even mention some networks where as much as 75% of the traffic goes over long distance links. (We do not know how far that traffic goes, and in particular whether there will continue to be a strong distance dependence in the future. See [CoffmanO] for a more detailed discussion.) Another reason for the increasingly important role of the Internet backbones is that they are supplanting private line networks as corporate WAN links, and with the development of extranets, will be playing a crucial role in the functioning of the whole economy. Thus there is reason to worry about the costs of the backbones, as they might become a larger fraction of the total networking pie. On the other hand, if one can overengineer only one part of the Internet, then it is best to do it to the core, as without high quality transmission at the core, other parts of the network will only be able to offer poor service.
3. Network utilization
Network utilization rates are seldom discussed, yet they are the main factor determining the costs of data services. A line that is used at 5% of capacity costs twice as much per byte of transmitted data as one whose average utilization rate is 10%.
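The inverse relationship between utilization and cost per byte is worth stating explicitly; a minimal sketch with an arbitrary line cost and capacity:

    # Cost per byte actually carried scales inversely with average utilization.
    def cost_per_byte(monthly_line_cost, capacity_bytes_per_month, utilization):
        # Illustrative only: monthly cost divided by bytes actually transmitted.
        return monthly_line_cost / (capacity_bytes_per_month * utilization)

    c_at_5 = cost_per_byte(20000.0, 1e12, 0.05)
    c_at_10 = cost_per_byte(20000.0, 1e12, 0.10)
    print(c_at_5 / c_at_10)    # -> 2.0: the 5%-utilized line costs twice as much per byte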
Although there is no simple relation between the quality perceived by customers and how heavily their networks are used, the less heavily loaded the network, the better the service. Even the notoriously congested trans-Atlantic links do appear to provide good performance for applications as demanding as packet telephony in the early hours of Sunday morning. What this says is that even without any new QoS technologies, one can provide excellent quality by lowering utilization. Thus the main problem of the Internet is not a technical one, but an economic one: whether one can afford to have lightly utilized networks. As an example, the experimental vBNS network, discussed in [Odlyzko2], provides quality sufficient for even the most demanding applications, but it is expensive (or would be expensive, were it operated on a commercial basis), running at an average utilization of its links of around 3%.
The main question for the future of the Internet is whether customers are willing to pay for high quality by having low utilization rates, or whether many links will be congested, with QoS providing high quality for some select fraction of data transfers. We can learn much about the likely evolution of data networks from observation of usage patterns of existing networks. When we consistently see lightly utilized links where customers can obtain higher utilization rates and lower costs by switching to lower capacity lines, we can deduce that they do want high quality data transport, and are willing to pay for it.
Table 4 shows utilization rates (averaged over a full week) for various networks. It is based on [Odlyzko2], except for the entry for local phone lines, which is derived from the data in Table 1 (which is based on [FCC]). A surprising result is that the long distance switched network is by far the most efficient in terms of utilizing transmission capacity. For most people, an even more surprising feature of the data is the low utilization rate of private line networks.
The paper [Odlyzko2] discusses the reasons data networks are lightly utilized. Lumpy capacity is a major one. Rapid and unpredictable growth is another. Small private networks are yet another. Perhaps the main reason, though, is the bursty nature of data traffic. This traffic is bursty on both short and long time scales, and customers do value such bursty transmission. This means that we cannot reasonably expect data networks to approach the efficiency with which the switched voice network uses transmission capacity.
Utilization rates can also provide guidance as to the extent to which QoS measures might improve the perceived quality of networking. Practically all Internet users find service much better at 5 in the morning than at noon. However, traffic on the backbones in the early hours of the morning is still about half that during the peak hours, as is seen in Fig. 10 of this paper and several figures in [Odlyzko2]. Further, the traffic mix does not seem to vary much between trough and peak periods [ThompsonMW]. Therefore we can conclude that during peak periods, no QoS measure is likely to give transmissions whose priorities are around the median of all traffic better service than the current undifferentiated service provides for all traffic early in the morning. (There could be some improvements in jitter, for example, but algorithms that provide such improvements could be used inside the network, invisibly to the users, to provide similar improvements for undifferentiated service.)
An important point in considering the low utilization rates of data networks is that they are not caused primarily by inefficiency or incompetence. Customers choose the capacities of their lines, and their choices tell us what they want and what they are willing to pay for. This point will be treated at greater length in the following sections.
4. Inefficiency is good (if you can afford it)
Efficiency in utilization of transmission lines or switches should not be the main criterion for evaluating how good a network is. The crucial question is how well customers' needs are satisfied.
Consider figures 3 and 4, which show the utilization of dial-in modems at Columbia University and the University of Toronto. (These figures are based on detailed data supplied by those institutions. Graphs for more recent periods, separated out further by 14.4 and 28.8 Kbps modem pools, can be found at [Columbia, Toronto].) The average utilizations (over the periods shown in the figures) were 52% at the University of Toronto and 78% at Columbia University. Clearly Columbia was utilizing its modems more efficiently. Was it providing better service, though? Its modems were completely busy for more than 12 hours a day. (The slight drops below 100% in the utilization in Fig. 3 are misleading, since they represent only a little idle time, largely due to the resetting of modems after a session is terminated.) Clearly much demand is unsatisfied, and there are many frustrated potential users who do not accomplish their work. Further, the high 78% utilization rate is misleading, since many users are probably staying online for longer periods than they would if they had assurance they could get a new connection when they wanted it. Toronto, with a lower utilization rate, managed to accommodate all demands except for a brief period on Monday night.
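The contrast between a high average utilization and good service can be illustrated with a toy daily profile. The hourly figures below are invented, not Columbia's or Toronto's data; the point is only that a pool pinned at 100% busy for half the day shows a high average that mostly measures turned-away demand:

    # Toy 24-hour utilization profile for a modem pool (invented numbers).
    hourly_busy_fraction = [0.25] * 6 + [0.6] * 3 + [1.0] * 12 + [0.8] * 3

    average = sum(hourly_busy_fraction) / len(hourly_busy_fraction)
    saturated_hours = sum(1 for f in hourly_busy_fraction if f >= 1.0)
    print("average utilization %.0f%%, saturated for %d hours a day"
          % (100 * average, saturated_hours))
    # A high average here signals blocked callers, not efficient service.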
The University of Toronto managed to satisfy essentially all demands of its students and faculty for modem connections and still achieve a 52% utilization rate. This rate is extremely high. There are many examples of low utilization rates. The family car is typically used around 5% of the time. The fax in our office or home, the PC on the desktop, and the road we drive on are all designed for peak usage, and are idle most of the time. We are willing to pay for this inefficiency because the costs are low compared to the benefits we receive. As costs decrease, we usually accept lower efficiency. For example, in the early days of computing, programs were written in assembly language. Later, as the industry advanced, there was a shift towards compiled programs. They typically run at half the speed of assembly coded versions, but they make software easier to write and more portable. With further advances in computing power, the industry was willing to jump on the Java bandwagon, even though the early versions of Java typically ran a hundred times slower than compiled programs. With over 99% of computing cycles devoted to running screen savers, this was a worthwhile tradeoff.
When capital cos