`
(12) United States Patent
England et al.

(10) Patent No.: US 7,039,715 B2
(45) Date of Patent: May 2, 2006

(54) METHODS AND SYSTEMS FOR A RECEIVER TO ALLOCATE BANDWIDTH AMONG INCOMING COMMUNICATIONS FLOWS

(75) Inventors: Paul England, Bellevue, WA (US); Cormac E. Herley, Bellevue, WA (US)

(73) Assignee: Microsoft Corporation, Redmond, WA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 745 days.

(21) Appl. No.: 10/152,112

(22) Filed: May 21, 2002

(65) Prior Publication Data: US 2003/0221008 A1, Nov. 27, 2003

(51) Int. Cl.: G06F 15/16 (2006.01)

(52) U.S. Cl.: 709/232; 709/233; 370/235

(58) Field of Classification Search: 709/206, 207, 223, 224, 225, 226, 228, 232, 233, 234, 235; 370/235. See application file for complete search history.

(56) References Cited
`
U.S. PATENT DOCUMENTS

5,815,492 A      9/1998  Berthaud et al.
5,956,341 A      9/1999  Galand et al.
6,006,264 A *   12/1999  Colby et al. ............ 709/226
6,075,772 A      6/2000  Brown et al.
6,343,085 B1     1/2002  Krishnan et al.
6,502,131 B1 *  12/2002  Vaid et al. ............. 709/226
6,690,678 B1 *   2/2004  Basso et al. ............ 370/468
6,754,700 B1 *   6/2004  Gordon et al. ........... 709/233
6,948,104 B1 *   9/2005  Herley et al. ........... 714/712
2002/0181395 A1 12/2002  Foster et al. ........... 370/229

OTHER PUBLICATIONS

"Four Steps to Application Performance Across the Network" by Packeteer, Inc., dated Nov. 2001.

* cited by examiner

Primary Examiner: Marc D. Thompson
(74) Attorney, Agent, or Firm: Wolf, Greenfield & Sacks, P.C.
`
`(57)
`
`ABSTRACT
`
Disclosed are methods and systems for a receiver to autonomously allocate bandwidth among its incoming communications flows. The incoming flows are assigned priorities. When it becomes necessary to alter the allocation of bandwidth among the flows, the receiver selects one of the lower priority flows. The receiver then causes the selected flow to delay sending acknowledgements of messages received to the senders of the messages. In most modern protocols, senders are sensitive to the time it takes to receive acknowledgements of the messages they send. When the acknowledgement time increases, the sender assumes that the receiver is becoming overloaded. The sender then slows down the rate at which it sends messages to the receiver. This lowered sending rate in turn reduces the amount of bandwidth used by the flow as it comes into the receiver. This frees up bandwidth which can then be used by higher priority flows.
`
`41 Claims, 13 Drawing Sheets
`
Automatically assign default priorities to the communications flows coming into the receiver 102 over the common communications link 108. Optionally, assign default priorities to applications potentially, but not presently, associated with incoming communications flows.
`
Monitor each incoming communications flow and record its present bandwidth use.
`
`Associate an application with each incoming communications flow.
`
`Create a list of the associated applications, optionally including the
`potential ones. The list shows the default priorities and the present
`bandwidth use of the associated incoming communications flows
`(where appropriate).
`
`
`
`
`
`
`
`Display the list to a user of the receiver 102.
`
Receive input from the user setting priorities for applications on the list. Derive priorities for the incoming communications flows from the priorities assigned by the user to the associated applications.
`
`
`
`Cloudflare - Exhibit 1082, page 1
`
`
`
U.S. Patent    May 2, 2006    Sheet 1 of 13    US 7,039,715 B2

FIG. 1
`
`
`
`
Sheet 2 of 13

FIG. 2
`
`
`
Sheet 3 of 13
`
`FIG. 3
Assign priorities to the communications flows coming into the receiver 102 over the common communications link 108.
300
`
`
`
`No
`
`
`
Is it time to re-allocate bandwidth among the incoming communications flows?
302
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
Select one or more of the lower priority incoming communications flows.
304

Increase the average response time of the selected incoming communications flows.
306
`
Is it time to "undo" a bandwidth re-allocation performed previously?
308
`
`
`
`Undo a previously performed increase in the average
`response time of an incoming communications flow.
`310
`
`
`
`
Sheet 4 of 13

FIG. 4a
`
`Automatically assign default priorities to the communications flows
`coming into the receiver 102 over the common communications link
`108. Optionally, assign default priorities to applications potentially,
`but not presently, associated with incoming communications flows.
400
`
`Monitor each incoming communications flow and record its present
`bandwidth use.
`402
`
`
`
Associate an application with each incoming communications flow.
404
`Create a list of the associated applications, optionally including the
`potential ones. The list shows the default priorities and the present
`bandwidth use of the associated incoming communications flows
`(where appropriate).
`406
Display the list to a user of the receiver 102.
408
`Receive input from the user setting priorities for applications on the
`list. Derive priorities for the incoming communications flows from the
`priorities assigned by the user to the associated applications.
`410
`
`
`
`
`
`
`
`
Sheet 5 of 13

FIG. 4b (exemplary screen display for setting priorities; the rotated drawing text includes labels such as "Real-time", "Interactive", "Background", "Browser #1", "Browser #2", "Download from server", "Telephony (not presently running)", and "Receiver 102")
`
`
`
`
`
`
`
`
`
Sheet 6 of 13

FIG. 5a
`
Set a threshold value for an amount of the incoming bandwidth of the common communications link 108.
`500
`
`Measure the total amount of incoming bandwidth of the common
`communications link 108 that is presently being used.
`502
`
`Compare the measured amount of bandwidth used with the
`bandwidth threshold value.
`504
`
`
`
`
`
`
`
If the measured amount exceeds the threshold value, and if one incoming communications flow has a priority higher than the priority of another incoming communications flow, then choose to re-allocate bandwidth among the incoming communications flows.
506
`
`
`
`Cloudflare - Exhibit 1082, page 7
`
`
`
Sheet 7 of 13
`
FIG. 5b
`
`
`
Monitor traffic on an outgoing communications flow that is associated with an interactive-type incoming communications flow.
510
`
If the monitoring reveals an indication (such as an HTTP GET() command) that the incoming communications flow will soon need more bandwidth, and if another incoming communications flow has a priority lower than the priority of this incoming communications flow, then choose to re-allocate bandwidth among the incoming communications flows.
512
`
`
`
`
Sheet 8 of 13

FIG. 5c
`
`Set a desired value range for a Quality of Service (QOS) parameter
`of an incoming communications flow.
`516
Measure the present value of the QOS parameter of the incoming communications flow.
518
`Compare the measured QOS parameter value with the desired
`QOS parameter value range.
`520
`
`
`
`
`
`
`
`
`
`
`
If the measured QOS parameter does not lie within the desired QOS parameter range, and if another incoming communications flow has a priority lower than the priority of this incoming communications flow, then choose to re-allocate bandwidth among the incoming communications flows.
522
`
`
`
`
`
`
Sheet 9 of 13

FIG. 6
`
Find an incoming communications flow with a priority as low as or lower than that of any other incoming flow.
600
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`List all of the incoming communications flows that have a priority
`equal to the priority of the incoming flow found in step 600.
`602
`
`For each incoming flow listed in step 602, measure the amount of
`bandwidth used by the incoming flow.
`604
`
`Select an incoming flow listed in step 602 whose measured
`bandwidth use is greater than or equal to the measured bandwidth
`use of any other incoming flow listed in step 602.
`606
`
`
`
`
`
`
Sheet 10 of 13
`
`Application
`Program 700
`
`FIG. 7
`
`Application
`Program 702
`
`Application
`Program 704
`
`Layered Service Provider 706
`
`User Mode
`Kernel Mode
`
`
`
Input/Output Manager 710
`
`
`
`Network Protocol Stack and Protocol Buffers 708
`
`Hardware Abstraction Layer 712
`
`Network Interface 714
`
Common Communications Link 108
`
`
`
`
Sheet 11 of 13

FIG. 8a
`
Set a threshold value for an amount of the incoming bandwidth of the common communications link 108.
`800
`
`Measure the total amount of incoming bandwidth of the common
`communications link 108 that is presently being used.
`802
`
`Compare the measured amount of bandwidth used with the
`bandwidth threshold value.
`804
`
`If the measured amount is below the threshold value, and if at least
`one incoming communications flow previously had its response time
`increased, then choose to undo a previous bandwidth re-allocation.
`806
`
`
`
`
`
`
Sheet 12 of 13

FIG. 8b
`
`
`
Set a period of repose for an interactive-type incoming communications flow.
`810
`
`Monitor traffic on an outgoing communications flow that is
`associated with the interactive-type incoming communications flow.
`812
`
If monitoring for a period of time equal to the period of repose
`reveals no indication (such as an HTTP GET() command) that the
`incoming communications flow will soon need more bandwidth, and
`if at least one incoming communications flow previously had its
`response time increased, then choose to undo a previous
`bandwidth re-allocation.
`814
`
`
`
`
Sheet 13 of 13

FIG. 8c
`
`Set a desired value range for a Quality of Service (QOS) parameter
`of an incoming communications flow.
`818
Measure the present value of the QOS parameter of the incoming communications flow.
820
`Compare the measured QOS parameter value with the desired
`QOS parameter value range.
`822
`
`
`
`
`
`
`
`If the measured QOS parameter lies within the desired QOS
`parameter range, and if at least one incoming communications flow
`previously had its response time increased, then choose to undo a
`previous bandwidth re-allocation.
`824
`
`
`
`
`
`
METHODS AND SYSTEMS FOR A RECEIVER TO ALLOCATE BANDWIDTH AMONG INCOMING COMMUNICATIONS FLOWS
`
`TECHNICAL FIELD
`
The present invention is related generally to computer communications, and, more particularly, to controlling quality of service characteristics of incoming communications flows.
`
`10
`
`BACKGROUND OF THE INVENTION
`
`2
`time comes to play the Voice contained in the packet, the
`listener may hear a “pop.” When the late packet finally does
`arrive, it is worthless and is discarded. Real-time flows are
`often characterized by a fairly constant use of bandwidth
`over time. (The bandwidth used may vary somewhat over
`time with differing efficiencies achieved by compression
`algorithms.) Downloading live music is another example of
`a real-time communications flow, in this case involving only
`a receiving flow that is sensitive to latency. Receiver-side
`buffering can be used to relieve some, but not all, of the
`sensitivity to latency.
`For a second benchmark communications example, con
`sider a Web browser. The browser's communications flow is
`termed “interactive” because the amount of bandwidth
`demanded and the latency desired depend upon the user's
`actions at any one time. When the user clicks on an icon or
`otherwise requests a new page of information, the response
`may involve a large amount of information being sent to the
`user's computing device. The information is preferably
`delivered as quickly as possible so that the user does not
`have to wait long for it. After receiving the information,
`however, the user typically spends a while reviewing the
`information before making another request. During this
`period of user review, the browser's bandwidth demands are
`very low or nil. Thus, an interactive communications flow
`may be characterized by periods of little or no bandwidth
`demand interspersed with periods where large bandwidth
`and low latency are desirable.
`The third benchmark communications flow example
`involves a file download. The user requests that a large
`amount of information be sent to the computing device.
`Unlike in the interactive flow example, the user is not staring
`at the screen of the device waiting for the download to
`complete. Rather, the user is paying attention to other
`communications flows. Because this type of communica
`tions flow is not directly tied to the user's immediate
`perceptions, it is termed a “background flow. While this
`type of flow may demand enormous amounts of bandwidth,
`the demand may be satisfactorily met with Small amounts of
`bandwidth spread over a long period of time.
`When a user's computing device is simultaneously receiv
`ing examples of all three benchmark flows, it is clear that a
`“fair allocation of bandwidth does not satisfy the user's
`requirements. Instead, an ideal allocation of bandwidth
`would give real-time flows as much bandwidth as they need
`as soon as they need it. Interactive flows would receive the
`remainder of the bandwidth when responding to the user's
`requests for information and would receive little or no
`bandwidth otherwise. Background flows would use any
`bandwidth not needed by the real-time and interactive flows.
`An ideal allocation would change moment by moment with
`changes in the bandwidth demands of the flows and would
`not waste any bandwidth due to allocation inefficiencies.
`Such an ideal allocation is possible when all of the
`communications flows coming into the user's computing
`device originate at one sending device. The sender controls
`all of the flows and can allocate bandwidth accordingly. This
`case is the rare exception, however. A major benefit of
`today's communications environment is the proliferation of
`content providers and a user's ability to receive content from
`multiple providers to create a combined presentation unique
`to the user. This case of multiple, simultaneous providers is
`one consideration leading to the development of QOS (Qual
`ity of Service) protocols.
`QOS protocols are used by senders and receivers to
`negotiate various aspects of their communications. When
`fully deployed, QOS protocols would be very useful for
`
`15
`
`30
`
`40
`
`Today's communications environment is rich with infor
`mation providers, with the World Wide Web being the
`outstanding example. Modern communications technologies
`allow a Web user to download a software file to his com
`puting device from one Web site, listen to a live music
`broadcast from another, all the while browsing through other
`Web sites searching for meaningful content. At the same
`time, the user may hold a telephone conversation, possibly
`with a live video feed, with another user using the Web to
`provide the communications connection. The user's com
`puting device may also serve as a "gateway,” providing
`25
`communications services for another local computing
`device. This latter situation is common in home environment
`where all communications come through a central desktop
`computing device which then shares its communications
`capabilities with other devices in the home. Each of these
`activities creates one or more “flows’ of communications
`coming into the user's computing device. In the typical set
`up, the user's device has just one communications link
`handling all of these flows simultaneously. The connection
`is typically a modern connection, or more and more com
`35
`monly, a DSL (Digital Subscriber Line) or cable modern
`link.
`The communications link has a limited total capacity, or
`“bandwidth,” which it shares among all of the communica
`tions flows coming into the user's computing device. Typical
`modern communications protocols Support this sharing and,
`when the sum of the bandwidth demands of all of the
`incoming flows exceeds the total bandwidth available on the
`shared communications link, the protocols allocate the band
`width. This allocation is performed automatically by the
`protocols and eventually arrives at a more-or-less “fair”
`distribution of bandwidth among the competing communi
`cations flows. However, a “fair distribution is rarely what
`the user wants. In a first example, when the user is working
`from home on one computing device that serves as a
`gateway for a second device, then the user may wish his
`work activities to take precedence in their bandwidth
`demands over a second user's entertainment activities.
`Another reason for not wanting a “fair distribution of
`bandwidth among the incoming communications flows is
`55
`based on differences in bandwidth characteristics among
`various flows. The extent to which these differing charac
`teristics are Supported strongly affects the user's perception
`of the flows quality. To illustrate, consider three “bench
`mark' communications flows. First, a telephone conversa
`tion is termed “real-time” because the listening parties are
`very sensitive to latency, that is, to delays in the communi
`cations process. Both the sending and the receiving flows
`display this sensitivity to latency. A packet of a remote
`speaker's voice information whose delivery is delayed by
`just half a second, for example, cannot be played upon its
`arrival. Rather, if the packet has not yet arrived when the
`
`45
`
`50
`
`60
`
`65
`
`
`
`
`3
`allocating bandwidth among competing incoming commu
`nications flows. If even a few devices do not yet implement
`full QOS protocols, however, the benefits of QOS can
`quickly become elusive. The user's computing device would
`not be able to depend upon the fact that all of its commu
`nications peers adhere to QOS and so would have to make
`other arrangements. Full QOS deployment is taking place
`only very slowly for many reasons. First, QOS protocols
`must be standards agreed upon by all participating parties.
`The protocol standardization process is slow because the
`needs of all participants must be accommodated without
`impeding the advanced capabilities of a few participants.
`Second, once the new QOS standards are set, all commu
`nicating devices must be upgraded to implement the stan
`dards, a process that can take years. Finally, because of their
`complexity, a full suite of QOS protocols may require more
`processing power to implement than some of today's Smaller
`devices can afford.
`What is needed is a way for a receiver computing device
`to autonomously change the allocation of bandwidth among
`its incoming communications flows.
`
`10
`
`15
`
`SUMMARY OF THE INVENTION
`
`25
`
`30
`
`35
`
`40
`
In view of the foregoing, the present invention provides methods and systems for a receiver computing device, without the benefit of communicating with sending computing devices via QOS protocols, to autonomously allocate bandwidth among the communications flows coming into it. The receiver's incoming communications flows are assigned priorities, such as "real-time" priority, "interactive" priority, and "background" or "back-off" priority. When it becomes necessary to alter the allocation of bandwidth among the incoming communications flows, the receiver selects one or more of the lower priority flows. The receiver then causes the selected flows to delay sending acknowledgements of messages received to the senders of the messages. In most modern protocols, senders are sensitive to the time it takes to receive acknowledgements of the messages they send. When the acknowledgement time increases, the sender assumes that the receiver is becoming overloaded. The sender then "backs off," or slows down the rate at which it sends messages to the receiver. This lowered sending rate in turn reduces the amount of bandwidth used by the communications flow as it comes into the receiver. This frees up some bandwidth which can then be used by higher priority communications flows. Thus, the receiver changes the allocation of incoming bandwidth by moving some bandwidth from lower priority communications flows to higher priority flows, all without requesting a bandwidth change from the senders.
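The leverage behind this approach comes from how window-based senders pace themselves: roughly one window of data per round-trip time. The toy model below is not from the patent; it is a standard simplification that shows why inflating the acknowledgement time lowers the incoming rate.

```python
def approx_sender_rate_bps(window_bytes, rtt_seconds):
    """A window-limited sender transmits at most one congestion window
    per round-trip time, so rate ~ window / RTT. Delaying ACKs at the
    receiver inflates the RTT the sender observes and thus its rate.
    Simplified steady-state model; real TCP dynamics are richer."""
    return 8 * window_bytes / rtt_seconds
```

Under this model, doubling the observed round-trip time (for example, by delaying acknowledgements) halves the sender's rate.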
One aspect of the present invention presents several methods for setting priorities among the incoming communications flows. Some methods are automatic, such as basing priority upon the type of application receiving the incoming communications flow or monitoring the incoming communications flow for characteristics indicative of a particular priority. Other methods include presenting a list of incoming communications flows to the user and allowing the user to set priorities. These methods may be used together, for example by automatically setting default priorities which may then be changed by the user.

Another aspect of the present invention concerns how to decide whether a re-allocation of bandwidth is necessary. In some simple cases, it may be appropriate to set a threshold target of total bandwidth use. For example, when the total amount of bandwidth in use exceeds 95% of the capacity of the communications link, then bandwidth is re-allocated from the lower priority communications flows to the higher priority flows. More sophisticated and efficient mechanisms include monitoring the actual communications characteristics of the higher priority communications flows to see if, for example, their latency targets are being met and, if not, then allocating more bandwidth to them. For an incoming communications flow of the interactive type, the corresponding outgoing flow can be monitored for indications, such as an HTTP (HyperText Transfer Protocol) GET() command, that the incoming flow will soon require much more bandwidth. The bandwidth is then re-allocated proactively.
In a third aspect, the present invention presents different ways of delaying message acknowledgements on a selected, lower priority communications flow. A Layered Service Provider (LSP) may be placed between the protocol stack, run in the kernel of the receiver's operating system, and the application receiving the incoming communications flow. To delay acknowledgements, the LSP can be directed to insert "sleep" commands that increase the amount of time it takes for the application to receive incoming messages or to increase the amount of time it takes for outgoing acknowledgements to be sent out over the communications link.
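The "sleep" insertion described above can be sketched as a shim around the application's receive path. This is an assumed illustration, not the patent's LSP implementation; it also assumes that slowing the application's reads slows the acknowledgements the sender observes, per the delayed-ACK behavior of the underlying stack.

```python
import time

def delayed_recv(recv_fn, nbytes, extra_delay_s):
    """Hypothetical LSP-style shim: sleep before handing data to the
    application, increasing the time it takes the application to
    receive incoming messages (and, indirectly, the acknowledgement
    time seen by the sender)."""
    data = recv_fn(nbytes)
    time.sleep(extra_delay_s)   # the inserted "sleep" command
    return data
```

In practice the shim would wrap the socket's receive call transparently, so the application needs no changes.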
In yet another aspect, the present invention decides when to "undo" a bandwidth re-allocation performed previously. This is appropriate because the re-allocation methods available to the receiver may sometimes introduce inefficiencies in overall bandwidth use and should therefore be undone when their purpose has been served.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
While the appended claims set forth the features of the present invention with particularity, the invention, together with its objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:

FIG. 1 is a block diagram showing an exemplary communications environment with a receiver computing device acting as a communications gateway for another computing device, an Internet Service Provider's Access Point server, and a few sender computing devices on the Internet;

FIG. 2 is a schematic diagram generally illustrating an exemplary computing system that supports the present invention;

FIG. 3 is a flow chart illustrating an exemplary method for a receiver to allocate bandwidth among its incoming communications flows;

FIG. 4a is a flow chart depicting one method available to a user for setting priorities for communications flows coming into a receiver, and FIG. 4b is an exemplary screen display usable with the method depicted in FIG. 4a;

FIGS. 5a through 5c are flow charts showing a few methods that a receiver can use to decide whether to re-allocate bandwidth among its incoming communications flows; in FIG. 5a the decision is based on the total amount of bandwidth used by all incoming flows, FIG. 5b's method monitors an interactive flow for indications that it will soon need more bandwidth, and in FIG. 5c QOS parameters are set and monitored;

FIG. 6 is a flow chart showing an exemplary method a receiver can use to select one of its incoming communications flows for reduced bandwidth use;

FIG. 7 is a schematic diagram of an exemplary system on a receiver for allocating bandwidth among the receiver's incoming communications flows; and
`
`
`
`
FIGS. 8a through 8c are flow charts depicting various methods that a receiver may use to decide whether to "undo" the effects of a bandwidth re-allocation made previously; FIG. 8a's method monitors the total amount of bandwidth used by all incoming flows, in FIG. 8b's method the decision is based on a passage of time during which an interactive flow does not make a request for more bandwidth, and in FIG. 8c QOS parameters are monitored.
`
`DETAILED DESCRIPTION OF THE
`INVENTION
`
`10
`
`6
`Point server 110. The present invention is applicable in
`situations beyond the Internet 112, but the Internet 112 is a
`good exemplary environment for discussing the invention
`because the Internet 112 provides a wealth and variety of
`content. For example, using its link to the Internet 112, the
`receiver 102 may be downloading a file. Such as an ency
`clopaedia article, from the file server 114. At the same time,
`a user of the receiver 102 may be listening to a live concert
`broadcast by the real-time music server 116. Also at the
`same time, users of the receiver 102 and of the secondary
`receiver 104 may be browsing the subset of the Internet 112
`called the World WideWeb, searching for relevance among
`the Web sites 118. All of the communications coming into
`the home 100, including the file download, the live concert
`broadcast, and the browsed Web content, flow over the
`common link 108 and share, sometimes haphazardly, its
`limited bandwidth. Because haphazard sharing does not
`always accord with the needs of the users of the receivers
`102 and 104, the invention is directed to methods for these
`users to directly influence the sharing.
`The receiver 102 of FIG. 1 may be of any architecture.
`FIG. 2 is a block diagram generally illustrating an exemplary
`computer system that Supports the present invention. The
`computer system of FIG. 2 is only one example of a suitable
`environment and is not intended to Suggest any limitation as
`to the scope of use or functionality of the invention. Neither
`should the receiver 102 be interpreted as having any depen
`dency or requirement relating to any one or combination of
`components illustrated in FIG. 2. The invention is opera
`tional with numerous other general-purpose or special
`purpose computing environments or configurations.
`Examples of well known computing systems, environments,
`and configurations Suitable for use with the invention
`include, but are not limited to, personal computers, servers,
`hand-held or laptop devices, multiprocessor Systems, micro
`processor-based systems, set-top boxes, programmable con
`Sumer electronics, network PCs, minicomputers, mainframe
`computers, and distributed computing environments that
`include any of the above systems or devices. In its most
`basic configuration, the receiver 102 typically includes at
`least one processing unit 200 and memory 202. The memory
`202 may be volatile (such as RAM), non-volatile (such as
`ROM or flash memory), or some combination of the two.
`This most basic configuration is illustrated in FIG. 2 by the
`dashed line 204. The receiver 102 may have additional
`features and functionality. For example, the receiver 102
`may include additional storage (removable and non-remov
`able) including, but not limited to, magnetic and optical
`disks and tape. Such additional storage is illustrated in FIG.
`2 by removable storage 206 and non-removable storage 208.
`Computer-storage media include Volatile and non-volatile,
`removable and non-removable, media implemented in any
`method or technology for storage of information Such as
`computer-readable instructions, data structures, program
`modules, or other data. Memory 202, removable storage
`206, and non-removable storage 208 are all examples of
`computer-storage media. Computer-storage media include,
`but are not limited to, RAM, ROM, EEPROM, flash
`memory, other memory technology, CD-ROM, digital ver
`satile disks, other optical storage, magnetic cassettes, mag
`netic tape, magnetic disk storage, other magnetic storage
`devices, and any other media that can be used to store the
`desired information and that can be accessed by the receiver
`102. Any such computer-storage media may be part of the
`receiver 102. The receiver 102 may also contain communi
`cations channels 210 that allow it to communicate with other
`computing devices. Communications channels 210 are
`
`15
`
`25
`
`30
`
`35
`
`Turning to the drawings, wherein like reference numerals
`refer to like elements, the present invention is illustrated as
`being implemented in a suitable computing environment.
`The following description is based on embodiments of the
`invention and should not be taken as limiting the invention
`with regard to alternative embodiments that are not explic
`itly described herein.
`In the description that follows, the present invention is
`described with reference to acts and symbolic representa
`tions of operations that are performed by one or more
`computing devices, unless indicated otherwise. As such, it
`will be understood that such acts and operations, which are
`at times referred to as being computer-executed, include the
`manipulation by the processing unit of the computing device
`of electrical signals representing data in a structured form.
`This manipulation transforms the data or maintains them at
`locations in the memory system of the computing device,
`which reconfigures or otherwise alters the operation of the
`device in a manner well understood by those skilled in the
`art. The data structures where data are maintained are
`physical locations of the memory that have particular prop
`erties defined by the format of the data. However, while the
`invention is being described in the foregoing context, it is
`not meant to be limiting as those of skill in the art will
`appreciate that various of the acts and operations described
`hereinafter may also be implemented in hardware.
`The exemplary home environment 100 of FIG. 1 contains
`a receiver computing device 102 and a secondary receiver
`computing device 104 tied together by a local area network
`(LAN) 106. These computing devices are called “receivers'
`because the present invention focuses on methods accessible
`to receivers of incoming communications flows to allocate
`bandwidth among those flows without explicitly communi
`cating with the senders that are delivering content over those
`flows. Despite their designation, the receivers 102 and 104
`may be simultaneously sending content to other computing
`devices.
`The home 100 is shown as having one common commu
`50
`nications link 108 that serves both receivers 102 and 104.
`The common link 108 may be based on any number of
`technologies, different technologies providing different
`characteristics, especially different amounts of bandwidth.
`The common link 108 may, for example, be a temporary
`dial-up modern line or wireless connection or a more
`permanent DSL or cable modern link. In any case, while the
`common link 108 is directly connected to the receiver 102,
`the bandwidth of the common link 108 is shared between
`both receivers 102 and 104 in the home 100. The r