Published in: Proceedings of SIGCOMM '95, the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 299-313. ACM, New York, NY, USA, 1995. ISBN 0-89791-711-1. DOI: 10.1145/217382.217455. Also appears in ACM SIGCOMM Computer Communication Review, Volume 25, October 1995.
The Case for Persistent-Connection HTTP

Jeffrey Mogul
Digital Equipment Corporation Western Research Laboratory
250 University Avenue, Palo Alto, CA 94301
mogul@wrl.dec.com
Abstract

The success of the World-Wide Web is largely due to the simplicity, hence ease of implementation, of the Hypertext Transfer Protocol (HTTP). HTTP, however, makes inefficient use of network and server resources, and adds unnecessary latencies, by creating a new TCP connection for each request. Modifications to HTTP have been proposed that would transport multiple requests over each TCP connection. These modifications have led to debate over their actual impact on users, on servers, and on the network. This paper reports the results of log-driven simulations of several variants of the proposed modifications, which demonstrate the value of persistent connections.
1 Introduction

People use the World Wide Web because it gives quick and easy access to a tremendous variety of information in remote locations. Users do not like to wait for their results; they tend to avoid or complain about Web pages that take a long time to retrieve. Users care about Web latency.

Perceived latency comes from several sources. Web servers can take a long time to process a request, especially if they are overloaded or have slow disks. Web clients can add delay if they do not quickly parse the retrieved data and display it for the user. Latency caused by client or server slowness can in principle be solved simply by buying a faster computer, or faster disks, or more memory.

The main contributor to Web latency, however, is network communication. The Web is useful precisely because it provides remote access, and transmission of data across a distance takes time. Some of this delay depends on bandwidth; you can reduce this delay by buying a higher-bandwidth link. But much of the latency seen by Web users comes from propagation delay, and you cannot improve propagation delay past a certain point, no matter how much money you have. While caching can help, many Web accesses are "compulsory misses."

If we cannot increase the speed of light, we should at least minimize the number of network round-trips required for an interaction. The Hypertext Transfer Protocol (HTTP), as it is currently used in the Web, incurs many more round trips than necessary (see section 3).

Several researchers have proposed modifying HTTP to eliminate unnecessary network round-trips [21, 27]. Some people have questioned the impact of these proposals on network, server, and client performance. This paper reports on simulation experiments, driven by traces collected from an extremely busy Web server, that support the proposed HTTP modifications. According to these simulations, the modifications will improve users' perceived performance, network loading, and server resource utilization.

The paper begins with an overview of HTTP (section 2) and an analysis of its flaws (section 3). Section 4 describes the proposed HTTP modifications, and section 5 describes some of the potential design issues of the modified protocol. Later sections describe the design of the simulation experiments and present the results.
2 Overview of the HTTP protocol

The HTTP protocol is layered over a reliable bidirectional byte stream, normally TCP [23]. Each HTTP interaction consists of a request sent from the client to the server, followed by a response sent from the server to the client. Request and response parameters are expressed in a simple ASCII format (although HTTP may convey non-ASCII data).

An HTTP request includes several elements: a method, such as GET, PUT, POST, etc.; a Uniform Resource Locator (URL); a set of Hypertext Request (HTRQ) headers, with which the client specifies things such as the kinds of documents it is willing to accept, authentication information, etc.; and an optional Data field, used with certain methods such as PUT.

The server parses the request and takes action according to the specified method. It then sends a response to the client, including a status code to indicate if the request succeeded (or if not, why not), a set of object headers (meta-information about the object returned by the server), and a Data field containing the file requested or the output generated by a server-side script.

URLs may refer to numerous document types, but the primary format is the Hypertext Markup Language (HTML) [2]. HTML supports the use of hyperlinks (links to other documents). HTML also supports the use of inlined images: URLs referring to digitized images, usually in the Graphics Interchange Format (GIF) or JPEG format, which should be displayed along with the text of the HTML file by the user's browser. For example, if an HTML page includes a corporate logo and a photograph of the company's president, this would be encoded as two inlined images. The browser would therefore make three HTTP requests to retrieve the HTML page and the two images.
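To make the request/response structure concrete, here is a rough sketch in modern Python (an editorial illustration, not part of the original paper) that issues one GET over a raw TCP socket and splits the reply into status line, headers, and data; the host and path are placeholders, and a real client would also handle errors, redirects, and non-ASCII data.

    import socket

    def simple_get(host, path, port=80):
        """Fetch one document over a fresh TCP connection, one request per
        connection, as in the protocol described above."""
        with socket.create_connection((host, port)) as sock:
            # Request line plus HTRQ-style headers, ended by a blank line.
            request = (
                f"GET {path} HTTP/1.0\r\n"
                f"Host: {host}\r\n"
                f"Accept: text/html, image/gif\r\n"
                f"\r\n"
            )
            sock.sendall(request.encode("ascii"))

            # Read until the server closes the connection, which is how the
            # end of the response is marked here.
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)

        response = b"".join(chunks)
        header_blob, _, body = response.partition(b"\r\n\r\n")
        status_line, *header_lines = header_blob.split(b"\r\n")
        return status_line, header_lines, body

    # Example (hypothetical names):
    #   status, headers, body = simple_get("www.example.com", "/index.html")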
3 Analysis of HTTP's inefficiencies

We now analyze the way the interaction between HTTP clients and servers appears on the network, with emphasis on how this affects latency.

Figure 3-1 depicts the exchanges at the beginning of a typical interaction: the retrieval of an HTML document with at least one uncached inlined image. In this figure, time runs down the page, and long diagonal arrows show packets sent from client to server or vice versa. These arrows are marked with TCP packet types; note that most of the packets carry acknowledgements, but the packets marked "ACK" carry only an acknowledgement and no new data. "FIN" and "SYN" packets in this example never carry data, although in principle they sometimes could.
[Figure 3-1: Packet exchanges and round-trip times for HTTP. The timeline shows the client opening a TCP connection (SYN exchange), sending the HTTP request for the HTML file at 1 RTT, parsing the HTML and opening a second TCP connection at 2 RTT, sending the HTTP request for the first inlined image at 3 RTT, and the image beginning to arrive at 4 RTT; the server reads from disk before sending each response.]
Shorter vertical arrows show local delays at either client or server; the causes of these delays are given in italics. Other client actions are shown in roman type, to the left of the Client timeline.

Also to the left of the Client timeline, horizontal dotted lines show the "mandatory" round-trip times (RTTs) through the network, imposed by the combination of the HTTP and TCP protocols. These mandatory round-trips result from the dependencies between various packet exchanges, marked with solid arrows. The packets shown with gray arrows are required by the TCP protocol, but do not directly affect latency because the receiver is not required to wait for them before proceeding with other activity.

The mandatory round trips are:

1. The client opens the TCP connection, resulting in an exchange of SYN packets as part of TCP's three-way handshake procedure.

2. The client transmits an HTTP request to the server; the server may have to read from its disk to fulfill the request, and then transmits the response to the client. In this example, we assume that the response is small enough to fit into a single data packet, although in practice it might not. The server then closes the TCP connection, although usually the client need not wait for the connection termination before continuing.

3. After parsing the returned HTML document to extract the URLs for the inlined images, the client opens a new TCP connection to the server, resulting in another exchange of SYN packets.

4. The client again transmits an HTTP request, this time for the first inlined image. The server obtains the image file and starts transmitting it to the client.

Therefore, the earliest time at which the client could start displaying the first inlined image would be four network round-trip times after the user requested the document. Each additional inlined image requires at least two further round trips. In practice, for documents larger than can fit into a small number of packets, additional delays will be encountered.
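As a concrete restatement of this accounting (an editorial sketch, not from the paper), the following lines count the mandatory round trips when every request uses its own TCP connection: two round trips for the HTML document (SYN handshake plus request/response) and two more for each inlined image.

    def http_round_trips(n_images):
        """Minimum mandatory round trips with one TCP connection per request."""
        rtts_for_html = 2      # SYN exchange + HTML request/response
        rtts_per_image = 2     # new SYN exchange + image request/response
        return rtts_for_html + rtts_per_image * n_images

    # The first image can begin arriving only after http_round_trips(1) == 4
    # round-trip times; each additional image adds at least two more.
    for n in (1, 2, 5, 10):
        print(n, "images:", http_round_trips(n), "round trips")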
3.1 Other inefficiencies

In addition to requiring at least two network round trips per document or inlined image, the HTTP protocol, as currently used, has other inefficiencies.

Because the client sets up a new TCP connection for each HTTP request, there are costs in addition to network latencies:

- Connection setup requires a certain amount of processing overhead at both the server and the client. This typically includes allocating new port numbers and resources, and creating the appropriate data structures. Connection teardown also requires some processing time, although perhaps not as much.

- The TCP connections may be active for only a few seconds, but the TCP specification requires that the host that closes the connection remember certain per-connection information for four minutes [23]. (Many implementations violate this specification and use a much shorter timer.) A busy server could end up with its tables full of connections in this "TIME_WAIT" state, either leaving no room for new connections, or perhaps imposing excessive connection-table management costs.

Current HTTP practice also means that most of these TCP connections carry only a few thousand bytes of data. We looked at retrieval size distributions for two different servers. In one, the mean size of 200,000 retrievals was 12,925 bytes, with a median of 1,770 bytes (ignoring 12,727 zero-length retrievals, the mean was 13,767 bytes and the median was 1,946 bytes). In the other, the mean of 1,491,876 retrievals was 2,394 bytes, and the median 958 bytes (ignoring 83,406 zero-length retrievals, the mean was 2,535 bytes, the median 1,025 bytes). In the first sample, 45% of the retrievals were for GIF files; the second sample included more than 70% GIF files. The increasing use of JPEG images will tend to reduce image sizes.
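The retrieval-size statistics quoted above are straightforward to recompute from a server log; a minimal sketch (illustrative only, assuming the transfer sizes have already been extracted from the log) is:

    from statistics import mean, median

    def size_stats(sizes):
        """Mean and median retrieval size, with and without zero-length retrievals."""
        nonzero = [s for s in sizes if s > 0]
        return {
            "mean": mean(sizes),
            "median": median(sizes),
            "mean_nonzero": mean(nonzero),
            "median_nonzero": median(nonzero),
        }

    # 'sizes' would be the list of transfer lengths taken from an HTTP access log.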
TCP does not fully utilize the available network bandwidth for the first few round-trips of a connection. This is because modern TCP implementations use a technique called slow-start to avoid network congestion. The slow-start approach requires the TCP sender to open its "congestion window" gradually, doubling the number of packets each round-trip time. TCP does not reach full throughput until the effective window size is at least the product of the round-trip delay and the available network bandwidth. This means that slow-start restricts TCP throughput, which is good for congestion avoidance but bad for short-connection completion latency. A long-distance TCP connection may have to transfer tens of thousands of bytes before achieving full bandwidth.
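An illustrative calculation (an editorial sketch, not from the paper) of how long slow-start keeps a connection below full throughput, assuming the congestion window starts at one segment and doubles each round trip:

    def slow_start_cost(bandwidth_bytes_per_s, rtt_s, segment_bytes=1460):
        """Round trips until the doubling window covers the bandwidth-delay
        product, and bytes sent while the connection is still ramping up."""
        bdp_segments = (bandwidth_bytes_per_s * rtt_s) / segment_bytes
        window, rtts, bytes_sent = 1, 0, 0
        while window < bdp_segments:
            bytes_sent += window * segment_bytes
            window *= 2           # slow-start doubles the window each RTT
            rtts += 1
        return rtts, bytes_sent

    # Example: a 1.5 Mbit/s path with a 70 ms round-trip time.
    print(slow_start_cost(1.5e6 / 8, 0.070))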
4 Proposed HTTP modifications

The simplest change proposed for the HTTP protocol is to use one TCP connection for multiple requests. These requests could be for both inlined images and independent Web pages. A client would open an HTTP connection to a server, and then send requests along this connection whenever it wishes. The server would send responses in the opposite direction.

This "persistent-connection" HTTP (P-HTTP) avoids most of the unnecessary round trips in the current HTTP protocol. For example, once a client has retrieved an HTML file, it may generate requests for all the inlined images and send them along the already-open TCP connection, without waiting for a new connection establishment handshake, and without first waiting for the responses to any of the individual requests. We call this "pipelining." Figure 4-1 shows the timeline for a simple non-pipelined example.
[Figure 4-1: Packet exchanges and round-trip times for a P-HTTP interaction. Over an already-open connection, the client sends the HTTP request for the HTML file at 0 RTT, parses the HTML and sends the request for the first inlined image at 1 RTT, and the image begins to arrive at 2 RTT; the server reads from disk before sending each response.]
HTTP allows the server to mark the end of a response in one of several ways, including simply closing the connection. In P-HTTP, the server would use one of the other mechanisms: either sending a Content-length header before the data, or transmitting a special delimiter after the data.

While a client is actively using a server, normally neither end would close the TCP connection. Idle TCP connections, however, consume end-host resources, and so either end may choose to close the connection at any point. One would expect a client to close a connection only when it shifts its attention to a new server, although it might maintain connections to a few servers. A client might also be helpful and close its connections after a long idle period. A client would not close a TCP connection while an HTTP request is in progress, unless the user gets bored with a slow server.

A server, however, cannot easily control the number of clients that may want to use it. Therefore, servers may have to close idle TCP connections to maintain sufficient resources for processing new requests. For example, a server may run out of TCP connection descriptors, or may run out of processes or threads for managing individual connections. When this happens, a server would close one or more idle TCP connections. One might expect a least-recently used (LRU) policy to work well. A server might also close connections that have been idle for more than a given idle timeout, in order to maintain a pool of available resources.

A server would not close a connection in the middle of processing an HTTP request. However, a request may have been transmitted by the client but not yet received when the server decides to close the connection. Or the server may decide that the client has failed, and time out a connection with a request in progress. In any event, clients must be prepared for TCP connections to disappear at arbitrary times, and must be able to re-establish the connection and retry the HTTP request. A prematurely closed connection should not be treated as an error; an error would only be signalled if the attempt to re-establish the connection fails.
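The client side of this behavior can be sketched as follows (an editorial illustration in modern Python, not the paper's implementation): requests are pipelined on one connection, each response is delimited by a Content-Length header rather than by closing the connection, and if the connection is closed prematurely the client re-opens it and retries the requests that were not answered.

    import socket

    def read_response(f):
        """Read one response delimited by its Content-Length header.
        Returns None if the connection was closed before a response arrived."""
        status = f.readline()
        if not status:
            return None
        length = 0
        while True:
            line = f.readline()
            if line in (b"\r\n", b"\n", b""):
                break
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":", 1)[1])
        return status, f.read(length)

    def pipelined_get(host, paths, port=80):
        """Pipeline all requests over one persistent connection; retry any
        unanswered requests if the server closes the connection early."""
        remaining, results = list(paths), []
        while remaining:                     # a real client would bound retries
            with socket.create_connection((host, port)) as sock:
                requests = b"".join(
                    b"GET %s HTTP/1.0\r\nHost: %s\r\n\r\n"
                    % (p.encode(), host.encode())
                    for p in remaining)
                sock.sendall(requests)
                f = sock.makefile("rb")
                while remaining:
                    response = read_response(f)
                    if response is None:     # closed early: not an error, just
                        break                # re-open and retry the rest
                    results.append(response)
                    remaining.pop(0)
        return results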
4.1 Protocol negotiation

Since millions of HTTP clients and tens of thousands of HTTP servers are already in use, it would not be feasible to insist on a globally instantaneous transition from the current HTTP protocol to P-HTTP. Neither would it be practical to run the two protocols in parallel, since this would limit the range of information available to the two communities. We would like P-HTTP servers to be usable by current-HTTP clients.

We would also like current-HTTP servers to be usable by P-HTTP clients. One could define the modified HTTP so that when a P-HTTP client contacts a server, it first attempts to use the P-HTTP protocol; if that fails, it then falls back on the current HTTP protocol. This adds an extra network round-trip, and seems wasteful.

P-HTTP clients instead can use an existing HTTP design feature that requires a server to ignore HTRQ fields it does not understand. A client would send its first HTTP request using one of these fields to indicate that it speaks the P-HTTP protocol. A current-HTTP server would simply ignore this field and close the TCP connection after responding. A P-HTTP server would instead leave the connection open, and indicate in its reply headers that it speaks the modified protocol.
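A sketch of this negotiation (editorial illustration only; the paper does not define the exact HTRQ field, so the header names below are placeholders):

    # Placeholder header names; old servers must ignore request fields they
    # do not understand, which is what the negotiation relies on.
    REQUEST_FIELD = b"Pragma: persistent-connection"
    REPLY_TOKEN = b"persistent-connection"

    def first_request(host, path):
        """The client's first request advertises that it speaks P-HTTP
        (host and path are byte strings here)."""
        return (b"GET %s HTTP/1.0\r\nHost: %s\r\n%s\r\n\r\n"
                % (path, host, REQUEST_FIELD))

    def server_speaks_phttp(reply_headers):
        """True if the reply headers acknowledge the modified protocol; if so,
        the client keeps the connection open for further requests, otherwise
        it expects the server to close the connection after this response."""
        return any(REPLY_TOKEN in line.lower() for line in reply_headers)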
4.2 Implementation status

We have already published a study of an experimental implementation of the P-HTTP protocol. In that paper, we showed that P-HTTP required only minor modifications to existing client and server software, and that the negotiation mechanism worked effectively. The modified protocol yielded significantly lower retrieval latencies than HTTP, over both WAN and LAN networks. Since this implementation has not yet been widely adopted, however, we were unable to determine how its large-scale use would affect server and network loading.
5 Design issues

A number of concerns have been raised regarding P-HTTP. Some relate to the feasibility of the proposal; others simply reflect the need to choose parameters appropriately. Many of these issues were raised in electronic mail by members of the IETF working group on HTTP; these messages are available in an archive [12].

The first two issues discussed in this section relate to the correctness of the modified protocol; the rest address its performance.
5.1 Effects on reliability

Several reviewers have mistakenly suggested that allowing the server to close TCP connections at will could impair reliability. The proposed protocol does not allow the server to close connections arbitrarily; a connection may only be closed after the server has finished responding to one request and before it has begun to act on a subsequent request. Because the act of closing a TCP connection is serialized with the transmission of any data by the server, the client is guaranteed to receive any response sent before the server closes the connection.

A race may occur between the client's transmission of a new request and the server's termination of the TCP connection. In this case, the client will see the connection closed without receiving a response. Therefore, the client will be fully aware that the transmitted request was not received, and can simply re-open the connection and retransmit the request. Similarly, since the server will not have acted on the request, it is safe to use this protocol even with non-idempotent operations, such as the use of forms to order products.

Regardless of the protocol used, a server crash during the execution of a non-idempotent operation could potentially cause an inconsistency. The cure for this is not to complicate the network protocol, but rather to insist that the server commit such operations to stable storage before responding. The NFS specification [26] imposes the same requirement.
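The stable-storage rule can be illustrated with a short server-side sketch (an editorial example; the file name and record format are arbitrary placeholders): the non-idempotent operation is forced to disk before the response is sent, so a crash after the commit still leaves a durable record.

    import json
    import os

    def handle_order(order, log_path="orders.log"):
        """Commit a non-idempotent operation to stable storage before replying
        (log_path and the record format are illustrative placeholders)."""
        record = json.dumps(order)
        with open(log_path, "a") as log:
            log.write(record + "\n")
            log.flush()
            os.fsync(log.fileno())    # durable before any response is sent
        return b"HTTP/1.0 200 OK\r\nContent-Length: 0\r\n\r\n"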
5.2 Interactions with current proxy servers

Many users reach the Web via proxy servers, or relays. A proxy server accepts HTTP requests for any URL, parses the URL to determine the actual server for that URL, makes an HTTP request to that server, obtains the reply, and returns the reply to the original client. This technique is used to transit firewall security barriers, and may also be used to provide centralized caching for a community of users [11, 22].

Section 4.1 described a technique that allows P-HTTP systems to interoperate with HTTP systems without adding extra round-trips. What happens to this scheme if both the client and server implement P-HTTP, but a proxy between them implements HTTP [28]? The server believes that the client wants it to hold the TCP connection open, but the proxy expects the server to terminate the reply by closing the connection. Because the negotiation between client and server is done using HTRQ fields that existing proxies must ignore, the proxy cannot know what is going on. The proxy will wait "forever" (probably many minutes), and the user will not be happy. (Footnote 1: If the proxy forwards response data as soon as it is pushed by the server, then the user would not actually perceive any extra delay. This is because P-HTTP servers always indicate the end of a response using a content-length or a delimiter, so the P-HTTP client will detect the end of the response even if the proxy does not.)

P-HTTP servers could solve this problem by using an adaptive timeout scheme, in which the server observes client behavior to discover which clients are safely able to use P-HTTP. The server would keep a list of client IP addresses; each entry would also contain an idle timeout value, initially set to a small value such as one second. If a client requests the use of P-HTTP, the server would hold the connection open, but only for the duration of the per-client idle timeout. If a client ever transmits a second request on the same TCP connection, the server would increase the associated idle timeout from the default value to a maximum value.

Thus, a P-HTTP client reaching the server through an HTTP-only proxy would encounter 1-second additional delays, and would never see a reply to a second request transmitted on a given TCP connection. The client could use this lack of a second reply to realize that an HTTP-only proxy is in use, and subsequently the client would not attempt to negotiate use of P-HTTP with this server. A P-HTTP client, whether it reaches the server through a P-HTTP proxy or not, might see the TCP connection closed too soon; but if it ever makes multiple requests in a brief interval, the server's timeout would increase and the client would gain the full benefit of P-HTTP.

The simulation results described later in this paper suggest that this approach should yield most of the benefit of P-HTTP. It may fail in actual use, however; for example, some HTTP-only proxies may forward multiple requests received on a single connection, without being able to return multiple replies. This would trick the server into holding the connection open, but would prevent the client from receiving the replies.
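The adaptive-timeout bookkeeping can be sketched as a small per-client table (editorial illustration; the one-second default follows the text, while the maximum value and the data-structure details are assumptions):

    import time

    DEFAULT_IDLE_TIMEOUT = 1.0    # seconds, the small initial value from the text
    MAXIMUM_IDLE_TIMEOUT = 60.0   # an assumed maximum value

    class IdleTimeoutTable:
        """Per-client-IP idle timeouts used to decide how long the server
        holds a persistent connection open for that client."""

        def __init__(self):
            self.timeouts = {}    # client IP address -> current idle timeout

        def timeout_for(self, client_ip):
            return self.timeouts.setdefault(client_ip, DEFAULT_IDLE_TIMEOUT)

        def saw_second_request(self, client_ip):
            # A second request on the same connection proves no HTTP-only
            # proxy is interfering, so allow a much longer idle period.
            self.timeouts[client_ip] = MAXIMUM_IDLE_TIMEOUT

        def should_close(self, client_ip, last_activity):
            return time.time() - last_activity > self.timeout_for(client_ip)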
5.3 Connection lifetimes

One obvious question is whether the servers would have too many open connections in the persistent-connection model. The glib answer is no, because a server could close an idle connection
at any time, and so would not necessarily have more connections open than in the current model. This answer evades the somewhat harder question of whether a connection would live long enough to carry significantly more than one HTTP request, or whether the servers would be closing connections almost as fast as they do now.

Intuition suggests that locality of reference will make this work. That is, clients tend to send a number of requests to a server in relatively quick succession, and as long as the total number of clients simultaneously using a server is small, the connections should be useful for multiple HTTP requests. The simulations, described later, support this.
5.4 Server resource utilization

HTTP servers consume several kinds of resources, including CPU time, active connections (and associated threads or processes), and protocol control block (PCB) table space (for both open and TIME_WAIT connections). How would the persistent-connection model affect resource utilization?

If an average TCP connection carries more than one successful HTTP transaction, one would expect this to reduce server CPU time requirements. The time spent actually processing requests would probably not change, but the time spent opening and closing connections, and launching new threads or processes, would be reduced. For example, some HTTP servers create a new process for each connection. Measurements suggest that the cost of process creation accounts for a significant fraction of the total CPU time, and so persistent connections should avoid much of this cost.

Because we expect a P-HTTP server to close idle connections as needed, a busy server (one on which idle connections never last long enough to be closed by the idle timeout mechanism) will use up as many connections as the configuration allows. Therefore, the maximum number of open connections (and threads or processes) is a parameter to be set, rather than a statistic to be measured.

The choice of the idle timeout parameter (that is, how long an idle TCP connection should be allowed to exist) does not affect server performance under heavy load from many clients. It can affect server resource usage if the number of active clients is smaller than the maximum-connection parameter. This may be important if the server has other functions besides HTTP service, or if the memory used for connections and processes could be applied to better uses, such as file caching.

The number of PCB table entries required is the sum of two components: a value roughly proportional to the number of open connections (states including ESTABLISHED, CLOSING, etc.), and a value proportional to the number of connections closed in the past four minutes (TIME_WAIT connections). For example, on a server that handles 100 connections per second, each with a duration of one second, the PCB table will contain a few hundred entries related to open connections, and 24,000 TIME_WAIT entries. However, if this same server followed the persistent-connection model, with a mean of ten HTTP requests per active connection, the PCB table would contain only 2,400 TIME_WAIT entries.
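The PCB arithmetic above can be written out directly (an illustrative recalculation; the four-minute figure is the TIME_WAIT interval required by the TCP specification cited earlier):

    TIME_WAIT_SECONDS = 4 * 60    # per-connection state kept for four minutes

    def time_wait_entries(requests_per_second, requests_per_connection=1):
        """Steady-state TIME_WAIT PCB entries for a given request rate when
        each connection carries requests_per_connection HTTP requests."""
        closes_per_second = requests_per_second / requests_per_connection
        return closes_per_second * TIME_WAIT_SECONDS

    print(time_wait_entries(100))                              # 24000.0
    print(time_wait_entries(100, requests_per_connection=10))  #  2400.0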
PCB tables may be organized in a number of different ways [16]. Depending on the data structures chosen, a huge number of TIME_WAIT entries may or may not affect the cost of looking up a PCB-table entry, which must be done once for each received TCP packet. Many existing systems derived from 4.2BSD use a linear-list PCB table [15], and so could perform quite badly under a heavy connection rate. In any case, PCB entries consume storage.

The simulation results presented later show that persistent-connection HTTP significantly reduces the number of PCB table entries required.
5.5 Server congestion control

An HTTP client has little information about how busy any given server might be. This means that an overloaded HTTP server can be bombarded with requests that it cannot immediately handle, leading to even greater overload and congestive collapse. (A similar problem afflicts naive implementations of NFS [14].) The server could cause the clients to slow down somewhat, by accepting their TCP connections but not immediately processing the associated requests. This might require the server to maintain a very large number of TCP connections in the ESTABLISHED state, especially if clients attempt to use several TCP connections at once (see section 6).

Once a P-HTTP client has established a TCP connection, however, the server can automatically benefit from TCP's flow-control mechanisms, which prevent the client from sending requests faster than the server can process them. So while P-HTTP cannot limit the rate at which new clients attack an overloaded server, it does limit the rate at which any given client can make requests. The simulation results presented later, which imply that even very busy HTTP servers see only a small number of distinct clients during any brief interval, suggest that controlling the per-client arrival rate should largely solve the server congestion problem.
5.6 Network resources

HTTP interactions consume network resources. Most obviously, HTTP consumes bandwidth, but IP also imposes per-packet costs on the network, and may include per-connection costs (e.g., for firewall decision-making). How would a shift to P-HTTP change consumption patterns?

The expected reduction in the number of TCP connections established would certainly reduce the number of "overhead" packets, and would presumably reduce the total number of packets transmitted. The reduction in header traffic may also reduce the bandwidth load on low-bandwidth links, but would probably be insignificant for high-bandwidth links.

The shift to longer-lived TCP connections should improve the congestion behavior of the network, by giving the TCP end-points better information about the state of the network. TCP senders will spend proportionately less time in the "slow-start" regime [13], and more time in the "congestion avoidance" regime. The latter regime is generally less likely to cause network congestion.
At the same time, a shift to longer TCP connections (hence larger congestion windows), and more rapid server responses, will increase short-term bandwidth requirements, compared to current HTTP usage. In the current HTTP, requests are spaced several round-trip times apart; in P-HTTP, many requests and replies could be streamed at full network bandwidth. This may affect the behavior of the network.
5.7 Users' perceived performance

The ultimate measure of the success of a modified HTTP is its effect on the users' perceived performance (UPP). Broadly, this can be expressed as the time required to retrieve and display a series of Web pages. This differs from simple retrieval latency, since it includes the cost of rendering text and images. A design that minimizes mean retrieval latency may not necessarily yield the best UPP.

For example, if a document contains both text and several inlined images, it may be possible to render the text before fully retrieving all of the images, if the user agent can discover the image "bounding boxes" early enough. Doing so may allow the user to start reading the text before the complete images arrive, especially if some of the images are initially off-screen. Thus, the order in which the client receives information from the server can affect UPP.

Human factors researchers have shown that users of interactive systems prefer response times below two to four seconds [25]; delays of this magnitude cause their attention to wander. Two seconds represents just 28 cross-U.S. round-trips, at the best-case RTT of about 70 msec.

Users may also be quite sensitive to high variance in UPP. Generally, users desire predictable performance [17]; that is, a user may prefer a system with a moderately high mean retrieval time and low variance, to one with a lower mean retrieval time but much higher variance. Since congestion or packet loss can increase the effective RTT to hundreds or thousands of milliseconds, this leaves HTTP very few round-trips to spare.
6 Competing and complementary approaches

Persistent-connection HTTP is not the only possible solution to the latency problem. The Netscape browser takes a different approach, using the existing HTTP protocol but often opening multiple connections in parallel. For example, if an HTML file includes ten inlined images, Netscape opens an HTTP connection to retrieve the HTML file, then might open ten more connections in parallel, to retrieve the ten image files. By parallelizing the TCP connection overheads, this approach eliminates a lot of the unnecessary latency, without requiring implementation of a new protocol.

The multi-connection approach has several drawbacks. First, it seems to increase the chances for network congestion; apparently for this reason, Netscape limits the number of parallel connections (a user-specifiable limit, defaulting to four). Several parallel TCP connections are more likely to self-congest than one connection.

Second, the Netscape approach does not allow the TCP end-points to learn the state of the network. That is, while P-HTTP eliminates the cost of slow-start after the first request in a series, Netscape must pay this cost for every HTML file and for every group of parallel image retrievals.
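The multi-connection pattern can be sketched as follows (editorial illustration; the limit of four follows the text, and simple_get is the single-request helper sketched after section 2):

    from concurrent.futures import ThreadPoolExecutor

    PARALLEL_CONNECTIONS = 4    # the default limit described above

    def fetch_with_parallel_connections(host, html_path, image_paths):
        """Fetch the HTML first, then the inlined images over up to four
        parallel connections, each carrying a single request."""
        html = simple_get(host, html_path)
        with ThreadPoolExecutor(max_workers=PARALLEL_CONNECTIONS) as pool:
            images = list(pool.map(lambda p: simple_get(host, p), image_paths))
        return html, images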
The multi-connection approach sometimes allows Netscape to render the text surrounding at least the first N images (where N is the number of parallel connections) before much of the image data arrives. Some image formats include bounding-box information at the head of the file; Netscape can use this to render the text long before the entire images are available, thus improving UPP.

This is not the only way to discover image sizes early in the retrieval process. For example, P-HTTP could include a new method allowing the client to request a set of image bounding boxes before requesting the images. Or the HTML format could be modified to include optional image-size information, as has been proposed for HTML version 3.0 [24]. Either alternative could provide the bounding-box information even sooner than the multi-connection approach. All such proposals have advantages and disadvantages, and are the subject of continuing debate in the IETF working group on HTTP.
Several people have suggested using Transaction TCP (T/TCP) ...