TCP offload is a dumb idea whose time has come

Jeffrey C. Mogul
Hewlett-Packard Laboratories
Palo Alto, CA 94304
JeffMogul@acm.org

Abstract

Network interface implementors have repeatedly attempted to offload TCP processing from the host CPU. These efforts met with little success, because they were based on faulty premises. TCP offload per se is neither of much overall benefit nor free from significant costs and risks. But TCP offload in the service of very specific goals might actually be useful. In the context of the replacement of storage-specific interconnects by commoditized network hardware, TCP offload (and more generally, offloading the transport protocol) appropriately solves an important problem.
Introduction

TCP [18] has been the main transport protocol for the Internet Protocol stack for twenty years. During this time, there has been repeated debate over the implementation costs of the TCP layer.

One central question of this debate has been whether it is more appropriate to implement TCP in host CPU software, or in the network interface subsystem. The latter approach is usually called "TCP Offload" (the category is sometimes referred to as a "TCP Offload Engine," or TOE), although it in fact includes all protocol layers below TCP, as well.
Typical reasons given for TCP offload include the reduction of host CPU requirements for protocol stack processing and checksumming, fewer interrupts to the host CPU, fewer bytes copied over the system bus, and the potential for offloading computationally expensive features such as encryption.

TCP offload poses some difficulties, including both purely technical challenges (either generic to all transports or specific to TCP), and some more subtle issues of technology deployment.

In some variants of the argument in favor of TCP offload, proponents assert the need for transport-protocol offload but recognize the difficulty of doing this for TCP, and have proposed deploying new transport protocols that support offloading. For example, the XTP protocol [8] was originally designed specifically for efficient implementation in VLSI, although later revisions of the specification [23] omit this rationale.

To this day, TCP offload has never firmly caught on in the commercial world (except sometimes as a stopgap to add TCP support to immature systems [16]), and has been scorned by the academic community and Internet purists. This paper starts by analyzing why TCP offload has repeatedly failed.

The lack of prior success with TCP offload does not, however, necessarily imply that this approach is categorically without merit. Indeed, the analysis of past failures points out that novel applications of TCP might benefit from TCP offload, but for reasons not clearly anticipated by early proponents. TCP offload does appear to be appropriately suited when used in the larger context in which storage-interconnect hardware, such as SCSI or Fibre Channel, is on the verge of being replaced by Ethernet-based hardware and specific upper-level protocols (ULPs), such as iSCSI. These protocols can exploit "Remote Direct Memory Access" (RDMA) functionality provided by network interface subsystems. This paper ends by analyzing how TCP offload (and more generally, offloading certain transport protocols) can prove useful, not as a generic protocol implementation strategy, but as a component in an RDMA design.

This paper is not a defense of RDMA. Rather, it argues that the choice to use RDMA more clearly justifies offloading the transport protocol than has any previous application.

Why TCP offload is a dumb idea

TCP offload has been unsuccessful in the past for two kinds of reasons: fundamental performance issues, and difficulties resulting from the complexities of deploying TCP offload in practice.

Fundamental performance issues

Although TCP offload is usually justified as a performance improvement, in practice the performance benefits are either minimized or actually negated, for many reasons:

Limited processing requirements:

Processing TCP headers simply doesn't (or shouldn't) take many cycles. Jacobson [11] showed how to use "header prediction" to process the common case for a TCP connection in very few instructions. The overhead of the TCP protocol per se does not justify offloading. Clark et al. [9] showed more generally that TCP should not be expensive to implement.
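To make the fast path concrete, here is a minimal C sketch of the header-prediction test (the struct layout and field names are invented for illustration; this is not Jacobson's actual 4.3BSD code): an established connection's in-order, plain-ACK segment with an unchanged window can bypass nearly all of the general-case processing.

/*
 * Minimal sketch of the TCP header-prediction test.  An incoming segment
 * on an established connection takes the cheap "fast path" when it is
 * exactly what we expect next: no unusual flags, the in-order sequence
 * number, and an unchanged advertised window.  Hypothetical structs.
 */
#include <stdint.h>
#include <stdbool.h>

struct tcp_hdr {
    uint32_t seq;
    uint32_t ack;
    uint16_t flags;    /* SYN, FIN, RST, URG, ACK, ... */
    uint16_t window;
};

struct tcp_conn {
    uint32_t rcv_nxt;  /* next sequence number we expect to receive */
    uint16_t snd_wnd;  /* last window the peer advertised           */
};

#define TH_FIN 0x01
#define TH_SYN 0x02
#define TH_RST 0x04
#define TH_ACK 0x10
#define TH_URG 0x20

/* Returns true if the segment can be handled by the fast path. */
bool tcp_header_predicted(const struct tcp_conn *tc, const struct tcp_hdr *th)
{
    /* Only a plain ACK segment qualifies for the fast path. */
    if (th->flags & (TH_SYN | TH_FIN | TH_RST | TH_URG))
        return false;
    if (!(th->flags & TH_ACK))
        return false;

    /* Must be the in-order segment we expect, with an unchanged window. */
    return th->seq == tc->rcv_nxt && th->window == tc->snd_wnd;
}

The point of the reference is that this common-case test, plus the handful of bookkeeping updates that follow it, is only a few dozen instructions on the host CPU.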
Moore's Law:

Adding a transport protocol implementation to a Network Interface Controller (NIC) requires considerably more hardware complexity than a simple MAC-layer-only NIC. Complexity increases time-to-market, and because Moore's Law rapidly increases the performance of general-purpose CPU chips, complex special-purpose NIC chips can fall behind CPU performance. The TOE can become the bottleneck, especially if the vendor cannot afford to utilize the latest fab. (On the other hand, using a general-purpose CPU as a TOE could lead to a poor tradeoff between cost and performance [1].)

Partridge [17] pointed out that the Moore's Law issue could be irrelevant once each NIC chip is fast enough to handle packets at full line rate; further improvements in NIC performance might not matter (except to reduce power consumption). Sarkar et al. [21], however, showed that current protocol-offload NIC system products are not yet fast enough. Their results also imply that any extra latency imposed by protocol offload in the NIC will hurt performance for real applications. Moore's Law considerations may plague even "full-line-rate" NICs until they are fast enough to avoid adding much delay.

Complex interfaces to TOEs:

O'Dell [14] has observed that "the problem has always been that the protocol for talking to the front-end processor and gluing it onto the API was just as complex (often more so, in fact) as the protocol being 'offloaded'." Similarly, Partridge [16] observed that "The idea was that you passed your data over the bus to an NIC that did all the TCP work for you. However, it didn't give a performance improvement because to a large degree, it recreated TCP over the bus. That is, for each write, you had to add a bus header, including context information (identifying the process and TCP connection IDs) and then ship the packet down to the board. On inbound, you had to pass up the process and TCP connection info and then the kernel had to demux the bus unit of data to the right process (and do all that nasty memory alignment stuff to put it into the process's buffer in the right place)." While better approaches are now known, in general TOE designers had trouble designing an efficient host interface.
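To illustrate Partridge's point, the following is a purely hypothetical C sketch of the per-operation descriptors a host might exchange with a TOE across the bus; none of these names come from any real product. The resemblance to a small transport protocol in its own right is exactly the problem being described.

/*
 * Hypothetical host/TOE bus interface.  Every write carries connection-
 * identifying context down to the board, and every inbound indication must
 * be demultiplexed back to the right process; in effect, TCP is recreated
 * across the bus.
 */
#include <stdint.h>

struct toe_send_request {
    uint32_t conn_id;     /* NIC-side handle for the offloaded TCP connection */
    uint32_t pid;         /* owning process, for completion routing           */
    uint64_t buf_phys;    /* physical address of the payload buffer           */
    uint32_t buf_len;     /* payload length in bytes                          */
    uint32_t flags;       /* e.g. push, urgent, "notify on completion"        */
};

struct toe_recv_indication {
    uint32_t conn_id;     /* host must demux this back to a socket            */
    uint32_t pid;
    uint64_t buf_phys;    /* where the NIC placed the data                    */
    uint32_t buf_len;
    uint32_t status;      /* success, connection reset, etc.                  */
};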
Suboptimal buffer management:

Although a TOE can deliver a received TCP data segment to a chosen location in memory, this still leaves ULP protocol headers mingled with ULP data, unless complex features are included in the TOE interface.

Connection management:

The TOE must maintain connection state for each TCP connection, and must coordinate this state with the host operating system. Especially for short-lived connections, any savings gained from less host involvement in processing data packets is wasted by this extra connection management overhead.

Resource management:

If the transport protocol resides in the NIC, the NIC and the host OS must coordinate responsibility for resources such as data buffers, TCP port numbers, etc.
The ownership problem for TCP buffers is more complex than the seemingly analogous problem for packet buffers, because outgoing TCP buffers must be held until acknowledged, and received buffers sometimes must be held pending reassembly. Resource management becomes even harder during overload, when host OS policy decisions must be supported. None of these problems are insuperable, but they reduce the benefits of offloading.

Event management:

Much of the cost of processing a short TCP connection comes from the overhead of managing application-visible events [2]. Protocol offload does nothing to reduce the frequency of such events, and so fails to solve one of the primary costs of running a busy Web server (for example).

Much simpler NIC extensions can be effective:

Numerous projects have demonstrated that instead of offloading the entire transport protocol, a NIC can be more simply extended so as to support extremely efficient TCP implementations. These extensions typically eliminate the need for memory copies, and/or offload the TCP checksum (eliminating the need for the CPU to touch the data in many cases, and thus avoiding data cache pollution). For example, Dalton et al. [10] described a NIC supporting a single-copy host OS implementation of TCP. Chase et al. [7] summarize several approaches to optimizing end-system TCP performance.
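For concreteness, this is the standard ones'-complement Internet checksum (RFC 1071) that the host CPU must otherwise compute over every payload byte; a simple byte-at-a-time sketch, whereas production stacks use word-at-a-time and incremental variants. Offloading this one computation is what lets the CPU avoid touching the data on the common path, which is the cache-pollution argument above.

/*
 * Ones'-complement Internet checksum (RFC 1071) over an arbitrary buffer.
 * When a NIC computes this on the wire, the host CPU need not read the
 * payload at all on the common path.
 */
#include <stdint.h>
#include <stddef.h>

uint16_t inet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    /* Sum the data as a sequence of big-endian 16-bit words. */
    while (len > 1) {
        sum += ((uint32_t)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len == 1)                      /* pad a trailing odd byte with zero */
        sum += (uint32_t)p[0] << 8;

    /* Fold the carries back in, then take the ones' complement. */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}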
These criticisms of TCP offload apply most clearly when one starts with a well-tuned, highly scalable host OS implementation of TCP. TCP offload might be an expedient solution to the problems caused by second-rate host OS implementations, but this is not itself an architectural justification for TOE.

Deployment issues

Even if TCP offload were justified by its performance, it creates significant deployment, maintenance, and management problems:

Scaling issues:

Some servers must maintain huge numbers of connections [2]. Modern host operating systems now generally place no limits except those based on RAM availability. If the TOE implementation has lower limits (perhaps constrained by on-board RAM), this could limit system scalability. Scaling concerns also apply to the IP routing table.

Bugs:

Protocol implementations have bugs. Mature implementations have fewer bugs, but still require patches from time to time. Updating the firmware of a programmable TOE could be more difficult than updating a host OS. Clearly, non-programmable TOEs are even worse in this respect [1].

Quality Assurance (QA):

System vendors must test complete systems prior to shipping them. Use of TOE increases the number of complex components to be tested, and (especially if the TOE comes from a different supplier) increases the difficulty of locating bugs.
Finger-pointing:

When a TCP-related bug appears in a traditional system, it is not hard to decide whether the NIC is at fault, because non-TOE NICs perform fairly simple functions. With a system using TCP offloading, deciding whether the bug is in the NIC or the host could be much harder.

Subversion of NIC software:

O'Dell has argued that the programmability of TOE NICs offers a target for malicious modifications [14]. This argument is somewhat weakened by the reality that many (if not most) high-speed NICs are already reprogrammable, but the extra capabilities of a TOE NIC might increase the options for subversion.

System management interfaces:

System administrators prefer to use a consistent set of management interfaces (UIs and commands). Especially if the TOE and OS come from different vendors, it might be hard to provide a consistent, integrated management interface. Also, TOE NICs might not provide as much state visibility to system managers as can be provided by host OS TCP implementations.

Concerns about NIC vendors:

NIC vendors have typically been smaller than host OS vendors, with less sophistication about overall system design and fewer resources to apply to support and maintenance. If a TOE NIC vendor fails or exits the market, customers can be left without support.

While none of these concerns are definitive arguments against TOE, they have tended to outweigh the limited performance benefits.
Analysis: mismatched applications

While it might appear from the preceding discussion that TCP offload is inherently useless, a more accurate statement would be that past attempts to employ TCP offload were mismatched to the applications in question.

Traditionally, TCP has been used either for WAN networking applications (email, FTP, Web) or for relatively low-bandwidth LAN applications (Telnet, X11). Often, as is the case with email and the Web, the TCP connection lifetimes are quite short, and the connection count at a busy (server) system is high.

Because these are seen as the important applications of TCP, they are often used as the rationale for TCP offload. But these applications are exactly those for which the problems of TCP offload (scalability to large numbers of connections, per-connection overhead, low ratio of protocol processing cost to intrinsic network costs) are most obvious. In other words, in most WAN applications, the end-host TCP-related costs are insignificant, except for the connection-management costs that are either unsolved or worsened by TOE.
The implication of this observation is that the sweet spot for TCP offload is not for traditional TCP applications, but for applications that involve high-bandwidth, low-latency, long-duration connections.

Why TCP offload's time has come

Computers generate high data rates on three kinds of channels (besides networks): graphics systems, storage systems, and interprocessor interconnects. Historically, these rates have been provided by special-purpose interface hardware, which trades flexibility and price for high bandwidth and high reliability.

For storage especially, the cost and limitations of special-purpose connection hardware are increasingly hard to justify, in the face of much cheaper Gbit/sec (or faster) Ethernet hardware. Replacing fabrics such as SCSI and Fibre Channel with switched Ethernet connections between storage and hosts promises increased configuration flexibility, more interoperability, and lower prices.

However, replicating traditional storage-specific performance using traditional network protocol stacks would be difficult, not because of protocol processing overheads, but because of data copy costs - especially since host busses are now often the main bottleneck. Traditional network implementations require one or more data copies, especially to preserve the semantics of system calls such as read() and write(). These APIs allow applications to choose when and how data buffers appear in their address spaces. Even with in-kernel applications (such as NFS), complete copy avoidance is not easy.
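The point about API semantics can be seen in the ordinary socket receive path. In the hedged sketch below (error handling trimmed), the caller names the destination buffer only at the moment of the call, typically after the data has already arrived, so the kernel generally has no choice but to copy from its own receive buffers into that address.

/*
 * Ordinary sockets illustrate why copies are hard to avoid: the destination
 * buffer is chosen by the application at call time, so arriving data sits in
 * kernel buffers until this call copies it across the user/kernel boundary.
 */
#include <sys/types.h>
#include <sys/socket.h>

ssize_t receive_block(int sock, void *app_buf, size_t len)
{
    /* app_buf is wherever the application wants the data; the network
     * stack buffered the arriving segments somewhere else, so this call
     * implies at least one copy. */
    return recv(sock, app_buf, len, MSG_WAITALL);
}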
Several OS designs have been proposed to support traditional APIs and kernel structures while avoiding all unnecessary copies. For example, Brustoloni [4,5] has explored several solutions to these problems.

Nevertheless, copy-avoidance designs have not been widely adopted, due to significant limitations. For example, when network maximum segment size (MSS) values are smaller than VM page sizes, which is often the case, page-remapping techniques are insufficient (and page-remapping often imposes overheads of its own). Brustoloni also points out that "many copy avoidance techniques for network I/O are not applicable or may even backfire if applied to file I/O" [4]. Other designs that eliminate unnecessary copies, such as I/O-Lite [15], require the use of new APIs (and hence force application changes). Dalton et al. [10] list some other difficulties with single-copy techniques.

Remote Direct Memory Access (RDMA) offers the possibility of sidestepping the problems with software-based copy-avoidance schemes. The NIC hardware (or at any rate, software resident on the NIC) implements the RDMA protocol. The kernel or application software registers buffer regions via the NIC driver, and obtains protected buffer reference tokens called region IDs. The software exchanges these region IDs with its connection peer, via RDMA messages sent over the transport connection. Special RDMA message directives ("verbs") enable a remote system to read or write memory regions named by the region IDs. The receiving NIC recognizes and interprets these directives, validates the region IDs, and performs protected data transfers to or from the named regions. [*]
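The sequence just described might look roughly like the following C sketch. The rnic_* functions and the region_id_t type are invented here for illustration (imagine a hypothetical RNIC vendor library); they are not the interfaces defined by the RDMA Consortium or the IETF RDDP group.

/*
 * Hypothetical sketch of buffer registration, region-ID exchange, and a
 * remote write, following the steps described in the text.
 */
#include <stdint.h>
#include <stddef.h>

typedef uint32_t region_id_t;   /* protected buffer reference token ("region ID") */

/* Assumed RNIC library entry points (hypothetical): */
region_id_t rnic_register_region(void *buf, size_t len, int writable);
int rnic_send(int conn, const void *msg, size_t len);          /* ordinary message */
int rnic_post_rdma_write(int conn, region_id_t remote_region,  /* RDMA-write verb  */
                         uint64_t remote_offset,
                         const void *local_buf, size_t len);

/*
 * Receiving side: register a buffer and advertise its region ID to the peer.
 * From then on the peer's RNIC can place data straight into 'buf'; the local
 * CPU is not involved in the transfer itself.
 */
region_id_t advertise_receive_buffer(int conn, void *buf, size_t len)
{
    region_id_t id = rnic_register_region(buf, len, /* writable = */ 1);
    rnic_send(conn, &id, sizeof id);   /* exchange the token over the connection */
    return id;
}

/*
 * Sending side: having learned the peer's region ID, write the payload
 * directly into the remote buffer with a single RDMA-write directive.
 */
int push_payload(int conn, region_id_t peer_region, const void *payload, size_t len)
{
    return rnic_post_rdma_write(conn, peer_region, /* remote_offset = */ 0,
                                payload, len);
}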
In effect, RDMA provides the same low-overhead access between storage and memory currently provided by traditional DMA-based disk controllers.

(Some people have proposed factoring an RDMA protocol into two layers. A Direct Data Placement (DDP) protocol simply allows a sender to cause the receiving NIC to place data in the right memory locations. To this DDP functionality, a full RDMA protocol adds a remote-read operation: system A sends a message to system B, causing the NIC at B to transfer data from one of B's buffers to one of A's buffers without waking up the CPU at B. David Black [3] argues that a DDP protocol by itself can provide sufficient copy avoidance for many applications. Most of the points I will make about RDMA also apply to a DDP-only approach.)

An RDMA-enabled NIC (RNIC) needs its own implementation of all lower-level protocols, since to rely on the host OS stack would defeat the purpose. Moreover, in order for RDMA to substitute for hardware storage interfaces, it must provide highly reliable data transfer, so RDMA must be layered over a reliable transport such as TCP or SCTP [22]. This forces the RNIC to implement the transport layer.

Therefore, offloading the transport layer becomes valuable not for its own sake, but rather because it allows offloading of the RDMA layer. And offloading the RDMA layer is valuable because, unlike traditional TCP applications, RDMA applications are likely to use a relatively small number of low-latency, high-bandwidth transport connections, precisely the environment where TCP offloading might be beneficial. Also, RDMA allows the RNIC to separate ULP data from ULP control (i.e., headers) and therefore simplifies the received-buffer placement problems of pure TCP offload.
For example, Magoutis et al. [13] show that the RDMA-based Direct Access File System can outperform even a zero-copy implementation of NFS, in part because RDMA also helps to enable user-level implementation of the file system client. Also, storage access implies the use of large ULP messages, which amortize offloading's increased per-packet costs while reaping the reduced per-byte costs.

Although much of the work on RDMA has focussed on storage systems, high-bandwidth graphics applications (e.g., streaming HDTV videos) have similar characteristics. A video-on-demand connection might use RDMA both at the server (for access to the stored video) and at the client (for rendering the video).
Implications for operating systems

Because RDMA is explicitly a performance optimization, not a source of functional benefits, it can only succeed if its design fits comfortably into many layers of a complete system: networking, I/O, memory architecture, operating system, and upper-level application. A misfit with any of these layers could obviate any benefits.

In particular, an RNIC design done without any consideration for the structures of real operating systems will not deliver good performance and flexibility.
Experience from an analogous effort, to offload DES cryptography, showed that overlooking the way that software will use the device can eliminate much of the potential performance gain [12]. Good hardware design is certainly not impossible, but it requires co-development with the operating system support.

RDMA aspects requiring such co-development include:

Getting the semantics right:

RDMA introduces many issues related to buffer ownership, operation completion, and errors. Members of the various groups trying to design RDMA protocols (including the RDMA Consortium [19] and the IETF's RDDP Working Group [20]) have had difficulty resolving many basic issues in these designs. These disagreements might imply the lack of sufficiently mature principles underlying the mixed use of remotely- and locally-managed buffers.
OS-to-RDMA interfaces:

These interfaces include, for example, buffer allocation; mapping and protection of buffers; and handling exceptions beyond what the RNIC can deal with (such as routing and ARP information for a new peer address).

Application-to-RDMA interfaces:

These interfaces include, for example, buffer ownership; notification of RDMA completion events; and bidirectional interfaces to RDMA verbs.

Network configuration and management:

RNICs will require IP addresses, subnet masks, etc., and will have to report statistics for use by network management tools. Ideally, the operating system should provide a "single system image" for network management functions, even though it includes several independent network stack implementations.

Defenses against attacks:

An RNIC acts as an extension of the operating system's protection mechanisms, and thus should defend against subversions of these mechanisms. The RNIC could refuse access to certain regions of memory known to store kernel code or data structures, except in narrowly-defined circumstances (e.g., bootstrapping).
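As an illustration of that last point, the following hypothetical sketch shows the kind of check an RNIC might apply before honoring an incoming RDMA-write directive. The data structures are invented, but the intent (validate the region ID, the requesting connection, the permissions, and the bounds, and never expose kernel memory) follows the defenses described above.

/*
 * Hypothetical validation of an inbound RDMA-write directive by an RNIC.
 */
#include <stdint.h>
#include <stdbool.h>

struct rnic_region {
    uint32_t id;            /* region ID handed out at registration time       */
    uint32_t owner_conn;    /* connection allowed to reference this region     */
    uint64_t base, len;     /* registered host-memory range                    */
    bool     remote_write;  /* was remote-write permission granted?            */
    bool     kernel_mem;    /* marked at registration for kernel buffers       */
};

/* Look up a region by ID; returns NULL if it was never registered. */
const struct rnic_region *rnic_lookup_region(uint32_t id);

bool rdma_write_allowed(uint32_t conn, uint32_t region_id,
                        uint64_t offset, uint64_t len)
{
    const struct rnic_region *r = rnic_lookup_region(region_id);

    if (r == NULL || r->owner_conn != conn || !r->remote_write)
        return false;               /* unknown, foreign, or read-only region   */
    if (r->kernel_mem)
        return false;               /* never let a peer scribble on the kernel */
    if (offset > r->len || len > r->len - offset)
        return false;               /* out of bounds (overflow-safe check)     */
    return true;
}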
Since the RNIC includes a TCP implementation, there will be temptation to use that as a pure TOE path for non-RDMA TCP connections, instead of the kernel's own stack. This temptation must be resisted, because it will lead to over-complex RNICs, interfaces, and host OS modifications. However, an RNIC might easily support certain simple features that have been proposed [5] for copy-avoidance in OS-based network stacks.

Difficulties
RDMA introduces several tricky problems, especially in the area of security. Prior storage-networking designs assumed a closed, physically secure network, but IP-based RDMA potentially leaves a host vulnerable to the entire world.

Offloading the transport protocol exacerbates the security problem by adding more opportunities for bugs. Many (if not most) security holes discovered recently are implementation bugs, not specification bugs. Even if an RDMA protocol design can be shown to be secure, this does not imply that all of its implementations would be secure. Hackers actively find and exploit bugs, and an RDMA bug could be much more severe than traditional protocol-stack bugs, because it might allow unbounded and unchecked access to host memory.

RDMA security therefore cannot be provided by sprinkling some IPSec pixie dust over the protocol; it will require attention to all layers of the system.

The use of TCP below RDMA is controversial, because it requires TCP modifications (or a thin intermediate layer whose implementation is entangled with the TCP layer) in order to reliably mark RDMA message boundaries. While SCTP is widely accepted as inherently better than TCP as a transport for RDMA, some vendors believe that TCP is adequate, and intend to ship RDMA/TCP implementations long before offloaded SCTP layers are mature. This paper's main point is not that TCP offload is a good idea, but rather that transport-protocol offload is appropriate for RNICs. TCP might simply represent the best available choice for several years.
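To see why a framing shim is needed at all, recall that TCP presents an undifferentiated byte stream. The sketch below shows the simplest possible approach, a length prefix on each record; the framing layer actually proposed for RDMA over TCP (MPA) is considerably more involved, since it also inserts periodic markers so the receiver can recover message boundaries after loss or reordering, but the underlying problem is the same.

/*
 * Minimal length-prefix framing over a TCP socket: the receiver can find
 * record boundaries only because the sender marks them explicitly.
 */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>    /* htonl / ntohl */

/* Prepend a 4-byte length so the receiver can find the record boundary. */
int send_framed(int sock, const void *msg, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (send(sock, &hdr, sizeof hdr, 0) != (ssize_t)sizeof hdr)
        return -1;
    return send(sock, msg, len, 0) == (ssize_t)len ? 0 : -1;
}

/* Read exactly one framed record; returns its length, or -1 on error. */
ssize_t recv_framed(int sock, void *buf, uint32_t buf_len)
{
    uint32_t hdr;
    if (recv(sock, &hdr, sizeof hdr, MSG_WAITALL) != (ssize_t)sizeof hdr)
        return -1;
    uint32_t len = ntohl(hdr);
    if (len > buf_len)
        return -1;                     /* record larger than caller's buffer */
    if (recv(sock, buf, len, MSG_WAITALL) != (ssize_t)len)
        return -1;
    return (ssize_t)len;
}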
Conclusions

TCP offload has been "a solution in search of a problem" for several decades. This paper identifies several inherent reasons why general-purpose TCP offload has repeatedly failed. However, as hardware trends change the feasibility and economics of network-based storage connections, RDMA will become a significant and appropriate justification for TOEs.

RDMA's remotely-managed network buffers could be an innovation analogous to novel memory consistency models: an attempt to focus on necessary features for real applications, giving up the simplicity of a narrow interface for the potential of significant performance scaling. But as in the case of relaxed consistency, we may see a period where variants are proposed, tested, evolved, and sometimes discarded. The principles that must be developed are squarely in the domain of operating systems.

Acknowledgments

I would like to thank David Black, Craig Partridge, and especially Jeff Chase, as well as the anonymous reviewers, for their helpful comments.
Bibliography

1. B. S. Ang. An evaluation of an attempt at offloading TCP/IP protocol processing onto an i960RN-based iNIC. Tech. Rep. HPL-2001-8, HP Labs, Jan. 2001.

2. G. Banga and J. C. Mogul. Scalable kernel performance for Internet servers under realistic loads. In Proc. 1998 USENIX Annual Technical Conf., pages 1-12, New Orleans, LA, June 1998.

3. D. Black. Personal communication, 2003.

4. J. Brustoloni. Interoperation of copy avoidance in network and file I/O. In Proc. INFOCOM '99, pages 534-542, New York, NY, Mar. 1999.

5. J. Brustoloni and P. Steenkiste. Effects of buffering semantics on I/O performance. In Proc. OSDI-II, pages 277-291, Seattle, WA, Oct. 1996.

6. J. S. Chase. High Performance TCP/IP Networking (Mahbub Hassan and Raj Jain, editors), chapter 13, TCP implementation. Prentice-Hall. In preparation.

7. J. S. Chase, A. J. Gallatin, and K. G. Yocum. End-system optimizations for high-speed TCP. IEEE Communications, 39(4):68-74, Apr. 2001.

8. G. Chesson. XTP/PE overview. In Proc. IEEE 13th Conf. on Local Computer Networks, pages 292-296, Oct. 1988.

9. D. D. Clark, V. Jacobson, J. Romkey, and H. Salwen. An analysis of TCP processing overhead. IEEE Communications Magazine, 27(6):23-29, June 1989.

10. C. Dalton, G. Watson, D. Banks, C. Calamvokis, A. Edwards, and J. Lumley. Afterburner: Architectural support for high performance protocols. IEEE Network Magazine, 7(4):36-43, 1993.

11. V. Jacobson. 4BSD TCP header prediction. Computer Communication Review, 20(2):13-15, Apr. 1990.

12. M. Lindemann and S. W. Smith. Improving DES coprocessor throughput for short operations. In Proc. 10th USENIX Security Symp., Washington, DC, Aug. 2001.

13. K. Magoutis, S. Addetia, A. Fedorova, M. Seltzer, J. Chase, A. Gallatin, R. Kisley, R. Wickremesinghe, and E. Gabber. Structure and performance of the Direct Access File System. In Proc. USENIX 2002 Annual Tech. Conf., pages 1-14, Monterey, CA, June 2002.

14. M. O'Dell. Re: how bad an idea is this? Message on TSV mailing list, Nov. 2002.

15. V. S. Pai, P. Druschel, and W. Zwaenepoel. IO-Lite: a unified I/O buffering and caching system. ACM Trans. Computer Systems, 18(1):37-66, Feb. 2000.

16. C. Partridge. Re: how bad an idea is this? Message on TSV mailing list, Nov. 2002.

17. C. Partridge. Personal communication, 2003.

18. J. B. Postel. Transmission Control Protocol. RFC 793, Information Sciences Institute, Sept. 1981.

19. RDMA Consortium. http://www.rdmaconsortium.org.

20. Remote Direct Data Placement Working Group. http://www.ietf.org/html.charters/rddp-charter.html.

21. P. Sarkar, S. Uttamchandani, and K. Voruganti. Storage over IP: Does hardware support help? In Proc. 2nd USENIX Conf. on File and Storage Technologies, pages 231-244, San Francisco, CA, Mar. 2003.

22. R. Stewart, Q. Xie, K. Morneault, C. Sharp, H. Schwarzbauer, T. Taylor, I. Rytina, M. Kalla, L. Zhang, and V. Paxson. Stream Control Transmission Protocol. RFC 2960, Network Working Group, Oct. 2000.

23. T. Strayer. Xpress Transport Protocol, Rev. 4.0b. XTP Forum, 1998.
Footnotes

[*] Much of this paragraph was adapted, with permission, from a forthcoming book chapter by Jeff Chase [6].
This paper was originally published in the Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, May 18-21, 2003, Lihue, Hawaii, USA.