`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`______________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`______________________
`
`
`CAVIUM, INC.
`Petitioner
`
`v.
`
`ALACRITECH, INC.
`Patent Owner
`
`________________________
`
Case IPR No. Unassigned
`U.S. Patent No. 7,124,205
`Title: NETWORK INTERFACE DEVICE THAT FAST-PATH PROCESSES
`SOLICITED SESSION LAYER READ COMMANDS
`________________________
`
`DECLARATION OF BILL LIN IN SUPPORT OF PETITION
`FOR INTER PARTES REVIEW OF
`U.S. PATENT NO. 7,124,205
`UNDER 37 C.F.R. § 1.68
`
`
`
`
`Mail Stop “PATENT BOARD”
`Patent Trial and Appeal Board
`U.S. Patent and Trademark Office
`P.O. Box 1450
`Alexandria, VA 22313-1450
`
`
`
`Petition for Inter Partes Review of 7,124,205
`Ex. 1003 (“Lin Decl.”)
`
TABLE OF CONTENTS

Page

I. INTRODUCTION AND QUALIFICATIONS ............................................... 1
II. MATERIALS RELIED ON IN FORMING MY OPINION........................... 2
III. UNDERSTANDING OF THE GOVERNING LAW ..................................... 3
A. Anticipation ........................................................................................... 3
B. Invalidity by Obviousness ..................................................................... 4
IV. LEVEL OF ORDINARY SKILL IN THE ART ............................................. 5
V. OVERVIEW OF THE TECHNOLOGY ......................................................... 7
A. Layered Network Protocols ................................................................... 7
B. Offloading Protocol Processing .......................................................... 10
1. Offloaded Protocols .................................................................. 14
2. Portions of the Protocol Offloaded ........................................... 14
3. Partial Offload and Fast Paths ................................................... 15
4. Offload Implementation ............................................................ 18
5. Protocol Offload Summary ....................................................... 21
C. Additional Background Technologies for Protocol Offloading ......... 21
1. DMA ......................................................................................... 21
2. Interrupts ................................................................................... 23
D. Block-Level Storage Area Networks .................................................. 24
E. File-Level Network-Attached Storage ................................................ 24
1. SMB and NetBIOS .................................................................... 24
VI. OVERVIEW OF THE 205 PATENT ............................................................ 25
VII. 205 PATENT PROSECUTION HISTORY .................................................. 30
VIII. CLAIM CONSTRUCTIONS ........................................................................ 31
A. Legal Standard ..................................................................................... 31
IX. THE PRIOR ART .......................................................................................... 31
A. Thia ...................................................................................................... 31
B. SMB ..................................................................................................... 36
C. Carmichael ........................................................................................... 40
X. OBVIOUSNESS COMBINATIONS – MOTIVATIONS TO COMBINE ... 41
A. Thia in Combination with SMB .......................................................... 41
B. Thia in Combination with SMB and Carmichael ................................ 43
XI. GROUNDS OF INVALIDITY ..................................................................... 46
`
`
`
`
`
`
I, Bill Lin, hereby declare as follows:

I. INTRODUCTION AND QUALIFICATIONS

1. My name is Bill Lin. I have been retained on behalf of Petitioner Cavium, Inc. (“Cavium”) to provide this Declaration concerning technical subject matter relevant to the petition for inter partes review (“Petition”) concerning U.S. Patent No. 7,124,205 (Ex.1001, “the 205 Patent”). I reserve the right to supplement this Declaration in response to additional evidence that may come to light.

2. I am over 18 years of age. I have personal knowledge of the facts stated in this Declaration and could testify competently to them if asked to do so.

3. My compensation is not based on the resolution of this matter. My findings are based on my education, experience, and background in the fields discussed below.

4. I am a Professor of Electrical and Computer Engineering and an Adjunct Professor of Computer Science and Engineering at the University of California, San Diego (UCSD). I have over 25 years of experience in research and development in the areas of computer networking and computer design. I have also testified as an expert witness and consultant in patent and intellectual property litigations as well as inter partes reviews.

5. I received a Bachelor of Science in Electrical Engineering and Computer Sciences from the University of California, Berkeley in May 1985; a Master of Science in Electrical Engineering and Computer Sciences from the University of California, Berkeley in May 1988; and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in May 1991.
`
`6.
`
`I am a named inventor on five patents in the fields of computer
`
`networking and computer design, and I have published over 170 journal articles and
`
`conference papers in top-tier venues and publications.
`
`7.
`
`I have also served or am currently serving as Associate Editor or Guest
`
`Editor for 3 ACM or IEEE journals, as General Chair on 4 ACM or IEEE
`
`conferences, on the Organizing or Steering Committees for 6 ACM or IEEE
`
`conferences, and on the Technical Program Committees of over 44 ACM or IEEE
`
`conferences.
`
`8. My Curriculum Vitae, which is filed as a separate Exhibit (Ex.1004),
`
`contains further details on my education, experience, publications, and other
`
`qualifications to render this opinion as expert.
`
`II. MATERIALS RELIED ON IN FORMING MY OPINION
`9.
`In addition to reviewing U.S. Patent No. 7,124,205, I also reviewed and
`
`considered the prosecution history of the 205 Patent (Ex.1002). I reviewed all of the
`
`references forming the grounds of both Petitions regarding the 205 Patent.
`
`Specifically, for the 205 Patent, Thia (Ex.1015), SMB (Ex.1055), SatranI (Ex.1056),
`
`2
`
`
`
`Petition for Inter Partes Review of 7,124,205
`Ex.1003 (“Lin Decl.”)
`
`
`
`SatranII (Ex.1057), and Carmichael (Ex.1053). I also considered the background
`
`materials cited herein.
`
III. UNDERSTANDING OF THE GOVERNING LAW

10. I understand that a patent claim is invalid if it is anticipated or obvious in view of the prior art. I further understand that invalidity of a claim requires that the claim be anticipated or obvious from the perspective of a person of ordinary skill in the relevant art at the time the invention was made.

A. Anticipation

11. I have been informed that a patent claim is invalid as anticipated under 35 U.S.C. § 102 if each and every element of a claim, as properly construed, is found either explicitly or inherently in a single prior art reference.

12. I have been informed that a claim is invalid under 35 U.S.C. § 102(a) if the claimed invention was known or used by others in the U.S., or was patented or published anywhere, before the applicant’s invention. I further have been informed that a claim is invalid under 35 U.S.C. § 102(b) if the invention was patented or published anywhere, or was in public use, on sale, or offered for sale in this country, more than one year prior to the filing date of the patent application (the critical date). I further have been informed that a claim is invalid under 35 U.S.C. § 102(e) if an invention described by that claim was disclosed in a U.S. patent granted on an application for a patent by another that was filed in the U.S. before the date of invention for such a claim.
`
B. Invalidity by Obviousness

13. I have been informed that a patent claim is invalid as “obvious” under 35 U.S.C. § 103 if it would have been obvious to one of ordinary skill in the art, taking into account (1) the scope and content of the prior art, (2) the differences between the prior art and the claims, (3) the level of ordinary skill in the art, and (4) any so-called “secondary considerations” of non-obviousness, which include: (i) “long felt need” for the claimed invention, (ii) commercial success attributable to the claimed invention, (iii) unexpected results of the claimed invention, and (iv) “copying” of the claimed invention by others. I further understand that it is improper to rely on hindsight in making the obviousness determination. My analysis of the prior art is made as of the time the invention was made.

14. I have been informed that a claim can be obvious in light of a single prior art reference or multiple prior art references. I further understand that exemplary rationales that may support a conclusion of obviousness include:

(A) Combining prior art elements according to known methods to yield predictable results;

(B) Simple substitution of one known element for another to obtain predictable results;

(C) Use of a known technique to improve similar devices (methods, or products) in the same way;

(D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results;

(E) “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;

(F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art; and

(G) Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention.
`
IV. LEVEL OF ORDINARY SKILL IN THE ART

15. I have been informed that factors that may be considered in determining the level of ordinary skill in the art may include: (A) “type of problems encountered in the art;” (B) “prior art solutions to those problems;” (C) “rapidity with which innovations are made;” (D) “sophistication of the technology;” and (E) “educational level of active workers in the field.” I also understand that every factor may not be present in a given case, and that one or more factors may predominate. Here, the 205 Patent is directed to an apparatus and methods for network protocol offload. In my experience, systems such as those capable of protocol offload are not designed by a single person but instead require a design team with wide-ranging skills and experience, including computer architecture, network design, software development, and hardware development. Moreover, the design team typically would have comprised individuals with advanced degrees and some industry experience, or significant industry experience.

16. In my opinion, a person of ordinary skill in the art at the time of the claimed inventions would be a person with at least the equivalent of a B.S. degree in computer science, computer engineering, or electrical engineering with at least five years of industry experience, including experience in computer architecture, network design, network protocols, software development, and hardware development. An individual with an advanced degree in a relevant field, such as computer or electrical engineering, would require less experience in the development and use of memory devices and systems.

17. I reserve the right to amend or supplement this declaration if the Board adopts a definition of a person of ordinary skill other than that described above, which may change my conclusion or analysis. But should the Board adopt a higher standard, it would not change my opinion that the claims are invalid.

18. My opinion below explains how a person of ordinary skill in the art would have understood the technology described in the references I have identified herein around the 1997 time period, which is the approximate date when the application to which the 205 Patent claims priority was filed. I was a person of at least ordinary skill in the art in 1997.
`
V. OVERVIEW OF THE TECHNOLOGY

19. Along with the 205 Patent, Petitioner has also filed for Inter Partes Review of U.S. Patent Nos. 7,673,072, 7,237,036, 7,337,241, and 8,805,948 (“Related Patents”). These patents belong to the same family as the 205 Patent and are directed to substantially the same technology. Dr. Horst has provided declarations in support of the petitions for IPR for the Related Patents, and, as part of those declarations, Dr. Horst prepared an overview of the technology. I have reviewed this overview and I have adopted and incorporated (with his permission) language from his overview into my declaration.1

A. Layered Network Protocols

20. Computer networks based on layered protocol architectures have been well-known since at least the 1980s. As explained in Dr. Horst’s Declaration (see ¶ 22), the primary goal of computer networking is to provide fast, reliable data communications between computer systems. Interoperability has been accomplished through adherence to standards, and performance has steadily increased through new technology and optimizations of hardware and software.

1 Note that I did not review any other sections of Dr. Horst’s declaration. Nor did I assist Dr. Horst with his declaration (and vice versa).
`
21. As explained in Dr. Horst’s Declaration (see Ex.1092, ¶¶ 23-24), the two most dominant models of protocol layers are the OSI model and the TCP/IP model.2 The OSI model defines a seven-layer protocol stack whereas the TCP/IP model defines a simpler four-layer protocol stack. The OSI model includes physical, data link, network, transport, session, presentation, and application layers. The figure below shows the relationship between the OSI layering and the TCP/IP layering.

2 References on TCP/IP use different terminology to describe the layer under IP. The data link layer is also called the “host-to-network layer” in Tanenbaum96 and the “interface layer” in Stevens2. Some Alacritech patents use “data link layer,” “link layer,” and “MAC layer.” Prior art references use many of these terms and also sometimes use the name of a specific implementation (e.g., Ethernet, ATM).

[Figure: the seven OSI layers shown alongside the four TCP/IP layers.]

Available at http://mitigationlog.com/how-tcpip-and-reference-osi-model-works/.
22. As explained in Dr. Horst’s Declaration (see ¶ 25), at a conceptual level, each layer is responsible only for its respective functions. This enables, for example, hiding the complexity of the physical data connection (that is, actually transmitting the data onto the physical wires) from the layers above the physical, data link, and network layers. Likewise, the lower layers must transmit the data on the physical wires, but need not worry about what application the data belongs to or how the user data has been partitioned into individual packets.

23. In the figure above, the bottom “host-to-network” layer of the TCP/IP model has roughly the same functions as the “data link” and “physical” layers of the OSI reference model. The next two layers up, the “transport” and “Internet” layers, have roughly the same functions as their similarly named counterparts in the OSI model (namely, the “transport” and “network” layers).
`
`24.
`
`In the TCP/IP model, the “network” (Internet) layer is called the
`
`“Internet Protocol” (IP) layer, which provides for Internet addressing and routing
`
`functions. The “transport” layer is concerned with conveying a message from one
`
`application to another over a network. The two most widely used transport protocols
`
`are TCP and UDP, either of which can be used to transport application-layer
`
`messages. Whereas TCP provides a connection-oriented service to its applications
`
`that includes guaranteed delivery and congestion control, UDP provides a simpler
`
`connectionless service that does not provide for reliability or flow control.
`
`25. Above the “transport” and “network” layers, the OSI reference model
`
`has two additional layers, the session and presentation layers. The session layer
`
`controls the connections between computers, and the presentation layer establishes
`
`context between application-layer entities. These two layers are not present in the
`
`TCP/IP model.
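The layer correspondence described in the preceding paragraphs can be summarized in a short sketch. This is purely my illustration of the mapping discussed above; the names follow the figure, not any exhibit.

```python
# Rough correspondence between TCP/IP model layers and OSI layers,
# as discussed above (illustrative only, not code from any reference).
TCPIP_TO_OSI = {
    "application":     ["application"],
    "transport":       ["transport"],
    "internet":        ["network"],
    "host-to-network": ["data link", "physical"],
}

# OSI layers with no TCP/IP counterpart, per the preceding paragraph.
OSI_ONLY = ["session", "presentation"]

def osi_layers_covered() -> int:
    """Count the OSI layers accounted for by the mapping plus the
    two OSI-only layers."""
    return sum(len(v) for v in TCPIP_TO_OSI.values()) + len(OSI_ONLY)
```

Together, the mapped layers and the two OSI-only layers account for all seven layers of the OSI reference model.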
`
B. Offloading Protocol Processing

26. The idea of offloading protocol processing from the host in order to increase performance was known at least as early as 1974, as shown by RFC 647 (Ex.1019) and RFC 929 (Ex.1009). In RFC 647, front-end protocol offload was considered for standardization. RFC 647 also discusses rigid and flexible front-end (FE) alternatives.
`
“FRONT-ENDING”
In what might be thought of as the greater network community, the consensus is so broad that the front-ending is desirable that the topic needs almost no discussion here. Basically, a small machine (a PDP-11 is widely held to be most suitable) is interposed between the IMP and the host in order to shield the host from the complexities of the NCP.

Ex.1019, RFC 647 at .002.
27. Similarly, RFC 929 dealt with a possible standard for interfacing between an Outboard Processing Environment (OPE) and a host.

There are two fundamental motivations for doing outboard processing. One is to conserve the Hosts' resources (CPU cycles and memory) in a resource sharing intercomputer network, by offloading as much of the required networking software from the Hosts to Outboard Processing Environments (or "Network Front-Ends") as possible. The other is to facilitate procurement of implementations of the various intercomputer networking protocols for the several types of Host in play in a typical heterogeneous intercomputer network, by employing common implementations in the OPE.

Ex.1009, RFC 929 at .002.

The interaction between the Host and the OPE must be capable of providing a suitable interface between processes (or protocol interpreters) in the Host and the off-loaded protocol interpreters in the OPE. This interaction must not, however, burden the Host more heavily than would have resulted from supporting the protocols inboard, lest the advantage of using an OPE be overridden.

Id. at .003.
28. The 1984 proposal to standardize offload implementations in RFC 929 is evidence that there was already much activity in offload implementations at that time.

The mediation level parameter is an indication of the role the Host wishes the OPE to play in the operation of the protocol. The extreme ranges of this mediation would be the case where the Host wished to remain completely uninvolved, and the case where the Host wished to make every possible decision. The specific interpretation of this parameter is dependent upon the particular off-loaded protocol.

The concept of mediation level can best be clarified by means of example. A full inboard implementation of the Telnet protocol places several responsibilities on the Host. These responsibilities include negotiation and provision of protocol options, translation between local and network character codes and formats, and monitoring the well-known socket for incoming connection requests. The mediation level indicates whether these responsibilities are assigned to the Host or to the OPE when the Telnet implementation is outboard. If no OPE mediation is selected, the Host is involved with all negotiation of the Telnet options, and all format conversions.

With full OPE mediation, all option negotiation and all format conversions are performed by the OPE. An intermediate level of mediation might have ordinary option negotiation, format conversion, and socket monitoring done in the OPE, while options not known to the OPE are handled by the Host.

The parameter is represented with a single ASCII digit. The value 9 represents full OPE mediation, and the value 0 represents no OPE mediation. Other values may be defined for some protocols (e.g., the intermediate mediation level discussed above for Telnet). The default value for this parameter is 9.

Ex.1009, RFC 929 at .015-.016.
`
29. More than a decade passed between the publication of RFC 929 and the priority date of the earliest Alacritech provisional application, and during that time, protocol offload was the subject of many papers and systems of the type anticipated by RFC 929. As explained in Dr. Horst’s declaration (see ¶ 54), the implementations can be separated into three general categories: 1) the set of protocols to be offloaded (e.g., TCP/IP, VMTP, OSI), 2) the portions of the protocol that are offloaded (e.g., full offload, partial offload, fast path offload, no offload), and 3) the offload implementation (e.g., parallel processor, custom processor, standard microprocessor). The references discussed below include many different combinations of these three factors. However, all of the combinations discussed were the result of design choices from a small number of options. It would have been obvious to modify one of the three factors of the particular implementation and produce predictable results. Or to put it differently, those of skill in the art would have recognized that the extent of offloading could be changed for a given implementation.
`
1. Offloaded Protocols

30. As explained in Dr. Horst’s Declaration (see ¶¶ 55-60), protocol offload implementations for different protocol stacks were known by the mid-1990s. These included OSI protocol offload (for example, Thia (Ex.1015) and Woodside (Ex.1038)), TCP/IP protocol offload (for example, Bach (Ex.1020), Erickson (Ex.1005), Morris (Ex.1021), Cooper (Ex.1022), Kung (Ex.1023), Rütsche (Ex.1017), and Chesson (Ex.1024)), VMTP and XTP protocol offload (for example, Kanakia (Ex.1025) and Chesson (Ex.1024)), and multi-protocol offload (for example, Erickson (Ex.1005), Kung (Ex.1023), and Cooper (Ex.1026)).
`
`2.
`Portions of the Protocol Offloaded
`31. As explained in Dr. Horst’s Declaration (see ¶ 61), the portion of the
`
`protocol offloaded can be between full and partial offload.
`
`32. As explained in Dr. Horst’s Declaration (see ¶¶ 62-63), one type of
`
`offload is checksum offload.
`
`33. A checksum offload is described, for example, in Dalton Afterburner.
`
`14
`
`
`
`
`
`Petition for Inter Partes Review of 7,124,205
`Ex.1003 (“Lin Decl.”)
`
`
`To support the use of the on-card memory as clusters, we have written
`a small number of functions. The most important is a special copy
`routine, functionally equivalent to the BSD function bcopy. It is
`optimized for moving data over the I/O bus, and also optionally uses
`the card’s built-in unit to calculate the IP checksum of the data it moves.
`Another function converts a single-copy cluster into a chain of normal
`clusters and mbufs; it also calculates the checksum.
`
`Ex.1027, Dalton at .011 (emphasis added).
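For background, the IP checksum that such offload hardware computes is the standard Internet checksum: a ones'-complement sum of the data taken as 16-bit words, with carries folded back in and the result complemented. The following is a minimal sketch of that calculation for illustration; it is my own code, not code from Dalton.

```python
def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit Internet checksum (RFC 1071) over `data`.

    This is the same ones'-complement sum that checksum-offload
    hardware calculates on behalf of the host.
    """
    if len(data) % 2:                             # pad odd-length input
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit big-endian word
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF                        # ones'-complement result
```

A packet whose checksum field has already been filled in sums to zero under this same routine, which is how a receiver (or offload card) verifies it.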
`
34. As explained in Dr. Horst’s Declaration (see ¶¶ 64-65), other types of offloads included full offload (for example, Murphy (Ex.1028), Bach (Ex.1020), MacLean (Ex.1029), Cooper (Ex.1022), Rütsche92 (Ex.1017), Rütsche93 (Ex.1018)) and multi-level offload (for example, Chesson (Ex.1024)).
`
`3.
`Partial Offload and Fast Paths
`35. The performance of TCP/IP, or for that matter most communication
`
`protocols, can be improved by adapting the header prediction algorithm that was
`
`proposed in 1988 by Van Jacobson, which led to many different types of partial
`
`offloads, including a TCP/IP implementation (i.e., BSD 4.3 Reno) in which the code
`
`is partitioned into one module for the commonly executed path (the fast path) and
`
`another module to handle the more complex cases and exception handling (the slow
`
`path).
`
`15
`
`
`
`Petition for Inter Partes Review of 7,124,205
`Ex.1003 (“Lin Decl.”)
`
`
`36. For example the BSD 4.4-Lite distribution included code for
`
`
`
`implementing the header prediction algorithm.
`
`Most IP packets carry no options. Of the 20-byte header, 14 of the bytes
`will be the same for all IP packets sent by a particular TCP connection.
`The IP length, ID, and checksum fields (6 bytes total) will probably be
`different for each packet. Also, if a packet carries any options, all
`packets for that TCP connection will be likely to carry the same options.
`
`The Berkeley implementation of UNIX makes some use of this
`observation, associating with each connection a template of the IP and
`TCP headers with a few of the fixed fields filled in. To get better
`performance, we designed an IP layer that created a template with all
`the constant fields filled in. When TCP wished to send a packet on that
`connection, it would call IP and pass it the template and the length of
`the packet. Then IP would block-copy the template into the space for
`the IP header, fill in the length field, fill in the unique ID field, and
`calculate the IP header checksum.
`
`This idea can also be used with TCP, as was demonstrated in an earlier,
`very simple TCP implemented by some of us at MIT [6]. In that TCP,
`which was designed to support remote login, the entire state of the
`output side, including the unsent data, was stored as a preformatted
`output packet. This reduced the cost of sending a packet to a few lines
`of code.
`
`A more sophisticated example of header prediction involves applying
`the idea to the input side. In the most recent version of TCP for Berkeley
`
`16
`
`
`
`
`
`Petition for Inter Partes Review of 7,124,205
`Ex.1003 (“Lin Decl.”)
`
`
`UNIX, one of us (Jacobson) and Mike Karels have added code to
`precompute what values should be found in the next incoming packet
`header for the connection. If the packets arrive in order, a few simple
`comparisons suffice to complete header processing.
`
`Ex.1030, Clark at .003.
37. As explained in Dr. Horst’s Declaration (see ¶¶ 68-69), the 1995 book by Stevens (Stevens2) walks through the Jacobson BSD header prediction code, including the conditions for selecting the fast or slow path.

38. Stevens2 identifies six conditions for using the fast path:

1. The connection must be established.
2. The following four control flags must not be on: SYN, FIN, RST, or URG. The ACK flag must be on.
3.-6. [Conditions to assure that the received segments are in-order]

Ex.1013, Stevens2 at .962-.963.
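The gist of these conditions can be sketched as a single predicate. This is my own illustration, not the actual BSD source: the function and state names are hypothetical, conditions 3-6 are collapsed into one in-order sequence-number comparison, and only the flag constants follow the standard TCP header bit assignments.

```python
# Standard TCP header control-flag bit values.
FIN, SYN, RST, ACK, URG = 0x01, 0x02, 0x04, 0x10, 0x20

ESTABLISHED = "ESTABLISHED"  # hypothetical connection-state label

def fast_path_eligible(conn_state: str, tcp_flags: int,
                       seg_seq: int, expected_seq: int) -> bool:
    """Sketch of the fast-path test described above.

    Condition 1: the connection must be established.
    Condition 2: SYN, FIN, RST, and URG must be off; ACK must be on.
    Conditions 3-6: summarized here as a single in-order check.
    """
    if conn_state != ESTABLISHED:
        return False
    if tcp_flags & (SYN | FIN | RST | URG):
        return False                       # any of these forces the slow path
    if not (tcp_flags & ACK):
        return False                       # ACK must be set
    return seg_seq == expected_seq         # stand-in for in-order conditions
```

A pure ACK segment on an established connection arriving in order takes the fast path; anything else (a SYN, an out-of-order segment, and so on) falls through to the slow path.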
`
39. Many other works built on the Jacobson BSD header prediction code, such as Biersack (Ex.1016), which describes a TCP protocol offload with fast and slow paths, as well as Thia (Ex.1015), which built on the Jacobson BSD header prediction algorithm by implementing a fast path using an OSI protocol offload.

40. As explained in Dr. Horst’s Declaration (see ¶¶ 70-71), the Jacobson header prediction code forms the basis of what Alacritech offloads to its intelligent network interface card according to its 1997 provisional application. See also Ex.1031, Alacritech 1997 Provisional Application at .057.
`
4. Offload Implementation

41. As explained in Dr. Horst’s Declaration (see ¶¶ 72-73), offload implementations include dedicating one or more processors to protocol processing. For example, Tanenbaum96 discussed offloading to an interface card.

The hardware and/or software within the transport layer that does the work is called the transport entity. The transport entity can be in the operating system kernel, in a separate user process, in a library package bound into network applications, or on the network interface card.

Ex.1006, Tanenbaum96 at .498 (emphasis added).
`
42. As explained in Dr. Horst’s Declaration (see ¶¶ 74-81), several groups proposed offload implementations based on multiprocessor configurations or dedicated microprocessor implementations, including the Nectar system, the parallel protocol system, and a Gb/s Multimedia Protocol Adapter, as well as offload adapters based on microprocessors, as discussed in Kanakia (Ex.1025), MacLean (Ex.1029), and Rütsche92 (Ex.1017).

The Nectar communication processor together with its host can be viewed as a (heterogeneous) shared-memory multiprocessor. Dedicating one processor of a multiprocessor host to communication tasks can achieve some of the benefits of the Nectar approach, but this constrains the choice of host operating system and hardware. In contrast, the Nectar communication processor has been used with a variety of hosts and host operating systems.

Ex.1022, Cooper at .006.

In this paper our goal is to demonstrate that a careful implementation of a standard transport protocol stack on a general purpose multiprocessor architecture allows efficient use of the bandwidth available in today’s high-speed networks. As an example, we chose to implement the TCP/IP protocol suite on our 4-processor prototype of the PPE.

Ex.1017, Rütsche92 at .009.

In this paper we present a new multiprocessor communication subsystem architecture, the Multimedia Protocol Adapter (MPA), which is based on the experience with the Parallel Protocol Engine (PPE) [Kaiserswerth 92] and is designed to connect to a 622 Mb/s ATM network. The MPA architecture exploits the inherent parallelism between the transmitter and receiver parts of a protocol and provides support for the handling of new multimedia protocols.

Ex.1018, Rütsche93 at .001.

The prototype Network Adapter Board (NAB) has been designed using Motorola's MC68020 as the on-board processor, running at 16 Mhz clock rate; it uses about 200 hundred standard MSI and LSI components. The current version is designed for connecting two VMP multiprocessor systems with a 100 megabit/sec point-to-point connection.

Ex.1025, Kanakia at .010.

The internal functions and data flows of the protocol accelerator shown in Figure 2. We use a dual CPU approach to protocol processing, with one CPU subsystem dedicated to the transmission, and the other to the reception. The transmit and receive CPUs are both 68020 (25 MHz) based, each with its own private resources: ROM, parallel I/O, interrupt circuitry and 128 kilobytes of random access memory (RAM). In addition there is 128 kilobytes of RAM shared by both CPUs which is also accessible to the two host busses, VME and VSB.

Ex.1029, MacLean at .004.

The selection of the inmos2 T9000 [inmos 91] is based on our good experience with the transputer family of processors in the PPE. The most significant improvements of the T9000 over the T425 fo