THIS IS TO CERTIFY THAT ANNEXED HERETO IS A TRUE COPY FROM THE RECORDS OF THE UNITED STATES PATENT AND TRADEMARK OFFICE OF THOSE PAPERS OF THE BELOW IDENTIFIED PATENT APPLICATION THAT MET THE REQUIREMENTS TO BE GRANTED A FILING DATE UNDER 35 USC 111.
By Authority of the
COMMISSIONER OF PATENTS AND TRADEMARKS

E. BORNETT
Certifying Officer
Pl 485983
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office

November 03, 2001

APPLICATION NUMBER: 09/675,484
FILING DATE: September 29, 2000
PCT APPLICATION NUMBER: PCT/US01/30150
UTILITY PATENT APPLICATION TRANSMITTAL

Attorney Docket No.: ALA-010A

TO THE ASSISTANT COMMISSIONER FOR PATENTS:

Transmitted herewith is a patent application identified as follows:

Inventors: Daryl D. Starr, Clive M. Philbrick, and Laurence B. Boucher.

Title: INTELLIGENT NETWORK STORAGE INTERFACE SYSTEM

( X ) Original Patent Application.

Enclosed are:

    50 pages Specification
     4 pages Claims
     1 page Abstract
    23 sheets Drawings
( X ) check in the amount of $858.00
( X ) self-addressed, stamped postcard
CLAIMS AS FILED

FOR                                          NO.    EXTRA    RATE
Total Claims                                                  $18.00
Independent Claims                                            $78.00
Multiple Dependent Claims (if applicable)
Assignment Recording Fee
Basic Filing Fee
Total Filing Fee
Respectfully submitted,

I hereby certify that this is being deposited with the U.S. Postal Service “Express Mail Post Office to Addressee” service under 37 CFR § 1.10 on the date indicated below and is addressed to: Box PATENT APPLICATION, Assistant Commissioner for Patents, Washington, DC 20231.

Date of Deposit:

Express Mail Label No.: EK532538280US

Mark Lauer
Reg. No. 36,578
7041 Koll Center Parkway
Suite 280
Pleasanton, CA 94566
Tel: (925) 484-9295
Fax: (925) 484-9291

Please Use the Address Identified by Customer Number 24501 for all Correspondence
`
INTELLIGENT NETWORK STORAGE INTERFACE SYSTEM

Inventors

Daryl D. Starr, Clive M. Philbrick, Laurence B. Boucher.

Background
Over the past decade, advantages of and advances in network computing have encouraged tremendous growth of computer networks, which has in turn spurred more advances, growth and advantages. With this growth, however, dislocations and bottlenecks have occurred in utilizing conventional network devices. For example, a CPU of a computer connected to a network may spend an increasing proportion of its time processing network communications, leaving less time available for other work. In particular, demands for moving file data between the network and a storage unit of the computer, such as a disk drive, have accelerated. Conventionally such data is divided into packets for transportation over the network, with each packet encapsulated in layers of control information that are processed one layer at a time by the CPU of the receiving computer. Although the speed of CPUs has constantly increased, this protocol processing of network messages such as file transfers can consume most of the available processing power of the fastest commercially available CPU.

This situation may be even more challenging for a network file server whose primary function is to store and retrieve files, on its attached disk or tape drives, by transferring file data over the network. As networks and databases have grown, the volume of information stored at such servers has exploded, exposing limitations of such server-attached storage. In addition to the above-mentioned problems of protocol processing by the host CPU, limitations of parallel data channels such as conventional small computer system interface (SCSI) interfaces have become apparent as storage needs have increased. For example, parallel SCSI interfaces restrict the number of storage devices that can be attached to a server and the distance between the storage devices and the server.
As noted in the book by Tom Clark entitled “Designing Storage Area Networks,” (copyright 1999) incorporated by reference herein, one solution to the limits of server-attached parallel SCSI storage devices involves attaching other file servers to an existing local area network (LAN) in front of the network server. This network-attached storage (NAS) allows access to the NAS file servers from other servers and clients on the network, but may not increase the storage capacity dedicated to the original network server. Conversely, NAS may increase the protocol processing required by the original network server, since that server may need to communicate with the various NAS file servers. In addition, each of the NAS file servers may in turn be subject to the strain of protocol processing and the limitations of storage interfaces.
Storage area networking (SAN) provides another solution to the growing need for file transfer and storage over networks, by replacing daisy-chained SCSI storage devices with a network of storage devices connected behind a server. Instead of conventional network standards such as Ethernet or Fast Ethernet, SANs deploy an emerging networking standard called Fibre Channel (FC). Due to its relatively recent introduction, however, many commercially available FC devices are incompatible with each other. Also, an FC network may dedicate bandwidth for communication between two points on the network, such as a server and a storage unit, the bandwidth being wasted when the points are not communicating.

NAS and SAN as known today can be differentiated according to the form of the data that is transferred and stored. NAS devices generally transfer data files to and from other file servers or clients, whereas device level blocks of data may be transferred over a SAN. For this reason, NAS devices conventionally include a file system for converting between files and blocks for storage, whereas a SAN may include storage devices that do not have such a file system.
Alternatively, NAS file servers can be attached to an Ethernet-based network dedicated to a server, as part of an Ethernet SAN. Marc Farley further states, in the book “Building Storage Networks,” (copyright 2000) incorporated by reference herein, that it is possible to run storage protocols over Ethernet, which may avoid Fibre Channel incompatibility issues. Increasing the number of storage devices connected to a server by employing a network topology such as SAN, however, increases the amount of protocol processing that must be performed by that server. As mentioned above, such protocol processing already strains the most advanced servers.
An example of conventional processing of a network message such as a file transfer illustrates some of the processing steps that slow network data storage. A network interface card (NIC) typically provides a physical connection between a host and a network or networks, as well as providing media access control (MAC) functions that allow the host to access the network or networks. When a network message packet sent to the host arrives at the NIC, MAC layer headers for that packet are processed and the packet undergoes cyclical redundancy checking (CRC) in the NIC. The packet is then sent across an input/output (I/O) bus such as a peripheral component interconnect (PCI) bus to the host, and stored in host memory. The CPU then processes each of the header layers of the packet sequentially by running instructions from the protocol stack. This requires a trip across the host memory bus initially for storing the packet and then subsequent trips across the host memory bus for sequentially processing each header layer. After all the header layers for that packet have been processed, the payload data from the packet is grouped in a file cache with other similarly-processed payload packets of the message. The data is reassembled by the CPU according to the file system as file blocks for storage on a disk or disks. After all the packets have been processed and the message has been reassembled as file blocks in the file cache, the file is sent, in blocks of data that may be each built from a few payload packets, back over the host memory bus and the I/O bus to host storage for long term storage on a disk, typically via a SCSI bus that is bridged to the I/O bus.
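To make that sequence concrete, the sketch below models the layer-at-a-time processing just described, in which the CPU revisits the packet already sitting in host memory once per header layer before the payload can finally be handed to the file cache. The header layouts and the file_cache_append helper are assumptions for illustration, not code from any actual protocol stack.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    /* Minimal header layouts, assumed for illustration only. */
    struct eth_hdr { uint8_t dst[6], src[6]; uint16_t type; };
    struct ip_hdr  { uint8_t ver_ihl, tos; uint16_t tot_len, id, frag;
                     uint8_t ttl, proto; uint16_t csum; uint32_t saddr, daddr; };
    struct tcp_hdr { uint16_t sport, dport; uint32_t seq, ack;
                     uint8_t data_off, flags; uint16_t win, csum, urg; };

    /* Stand-in for handing payload to the file cache for reassembly. */
    static void file_cache_append(const uint8_t *payload, size_t len)
    {
        (void)payload;
        printf("appended %zu payload bytes to the file cache\n", len);
    }

    /* Conventional slow-path processing: the CPU walks the packet, already
     * sitting in host memory, once per header layer before the payload is
     * finally reached and copied yet again into the file cache. */
    int process_packet_slow_path(const uint8_t *pkt, size_t len)
    {
        struct eth_hdr eth; struct ip_hdr ip; struct tcp_hdr tcp;
        size_t off = 0;

        if (len < off + sizeof eth) return -1;        /* link layer (MAC)      */
        memcpy(&eth, pkt + off, sizeof eth);
        off += sizeof eth;

        if (len < off + sizeof ip) return -1;         /* network layer (IP)    */
        memcpy(&ip, pkt + off, sizeof ip);
        off += (size_t)(ip.ver_ihl & 0x0f) * 4;       /* IP header length      */

        if (len < off + sizeof tcp) return -1;        /* transport layer (TCP) */
        memcpy(&tcp, pkt + off, sizeof tcp);
        off += (size_t)(tcp.data_off >> 4) * 4;       /* TCP header length     */

        if (off > len) return -1;
        file_cache_append(pkt + off, len - off);      /* payload at last       */
        return 0;
    }

Each of those per-layer passes stands for another crossing of the host memory bus over data that has already crossed the I/O bus once, which is precisely the overhead the fast-path described later is meant to avoid.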
Alternatively, for storing the file on a SAN, the reassembled file in the file cache is sent in blocks back over the host memory bus and the I/O bus to an I/O controller configured for the SAN. For the situation in which the SAN is an FC network, a specialized FC controller is provided which can send the file blocks to a storage device on the SAN according to Fibre Channel Protocol (FCP). For the situation in which the file is to be stored on a NAS device, the file may be directed or redirected to the NAS device, which processes the packets much as described above but employs the CPU, protocol stack and file system of the NAS device, and stores blocks of the file on a storage unit of the NAS device.
Thus, a file that has been sent to a host from a network for storage on a SAN or NAS connected to the host typically requires two trips across an I/O bus for each message packet of the file. In addition, control information in header layers of each packet may cross the host memory bus repeatedly as it is temporarily stored, processed one layer at a time, and then sent back to the I/O bus. Retrieving such a file from storage on a SAN in response to a request from a client also conventionally requires significant processing by the host CPU and file system.
Summary

An interface device such as an intelligent network interface card (INIC) for a local host is disclosed that provides hardware and processing mechanisms for accelerating data transfers between a network and a storage unit, while control of the data transfers remains with the host. The interface device includes hardware circuitry for processing network packet headers, and can use a dedicated fast-path for data transfer between the network and the storage unit, the fast-path set up by the host. The host CPU and protocol stack avoid protocol processing for data transfer over the fast-path, releasing host bus bandwidth from many demands of the network and storage subsystem. The storage unit, which may include a redundant array of independent disks (RAID) or other configurations of multiple drives, may be connected to the interface device by a parallel channel such as SCSI or by a serial channel such as Ethernet or Fibre Channel, and the interface device may be connected to the local host by an I/O bus such as a PCI bus. An additional storage unit may be attached to the local host by a parallel interface such as SCSI.
A file cache is provided on the interface device for storing data that may bypass the host, with organization of data in the interface device file cache controlled by a file system on the host. With this arrangement, data transfers between a remote host and the storage units can be processed over the interface device fast-path without the data passing between the interface device and the local host over the I/O bus. Also in contrast to conventional communication protocol processing, control information for fast-path data does not travel repeatedly over the host memory bus to be temporarily stored and then processed one layer at a time by the host CPU. The host may thus be liberated from involvement with a vast majority of data traffic for file reads or writes on host controlled storage units.
Additional interface devices may be connected to the host via the I/O bus, with each additional interface device having a file cache controlled by the host file system, and providing additional network connections and/or being connected to additional storage units. With plural interface devices attached to a single host, the host can control plural storage networks, with a vast majority of the data flow to and from the host-controlled networks bypassing host protocol processing, travel across the I/O bus, travel across the host bus, and storage in the host memory. In one example, storage units may be connected to such an interface device by a Gigabit Ethernet network, offering the speed and bandwidth of Fibre Channel without the drawbacks, and benefiting from the large installed base and compatibility of Ethernet-based networks.
Brief Description of the Drawings

FIG. 1 is a plan view diagram of a network storage system including a host computer connected to plural networks by an intelligent network interface card (INIC) having an I/O controller and file cache for a storage unit attached to the INIC.

FIG. 2 is a plan view diagram of the functioning of an INIC and host computer in transferring data between plural networks according to the present invention.

FIG. 3 is a flowchart depicting a sequence of steps involved in receiving a message packet from a network by the system of FIG. 1.

FIG. 4 is a flowchart depicting a sequence of steps involved in transmitting a message packet to a network in response to a request from the network by the system of FIG. 1.

FIG. 5 is a plan view diagram of a network storage system including a host computer connected to plural networks and plural storage units by plural INICs managed by the host computer.

FIG. 6 is a plan view diagram of a network storage system including a host computer connected to plural LANs and plural SANs by an intelligent network interface card (INIC) without an I/O controller.

FIG. 7 is a plan view diagram of one of the SANs of FIG. 6, including Ethernet-SCSI adapters coupled between a network line and a storage unit.

FIG. 8 is a plan view diagram of one of the Ethernet-SCSI adapters of FIG. 6.

FIG. 9 is a plan view diagram of a network storage system including a host computer connected to plural LANs and plural SANs by plural INICs managed by the host computer.

FIG. 10 is a diagram of hardware logic for the INIC embodiment shown in FIG. 1, including a packet control sequencer and a fly-by sequencer.

FIG. 11 is a diagram of the fly-by sequencer of FIG. 10 for analyzing header bytes as they are received by the INIC.

FIG. 12 is a diagram of the specialized host protocol stack of FIG. 1 for creating and controlling a communication control block for the fast-path as well as for processing packets in the slow path.

FIG. 13 is a diagram of a Microsoft® TCP/IP stack and Alacritech command driver configured for NetBios communications.

FIG. 14 is a diagram of a NetBios communication exchange between a client and server having a network storage unit.

FIG. 15 is a diagram of hardware functions included in the INIC of FIG. 1.

FIG. 16 is a diagram of a trio of pipelined microprocessors included in the INIC of FIG. 15, including three phases with a processor in each phase.

FIG. 17A is a diagram of a first phase of the pipelined microprocessor of FIG. 16.

FIG. 17B is a diagram of a second phase of the pipelined microprocessor of FIG. 16.

FIG. 17C is a diagram of a third phase of the pipelined microprocessor of FIG. 16.

FIG. 18 is a diagram of a plurality of queue storage units that interact with the microprocessor of FIG. 16 and include SRAM and DRAM.

FIG. 19 is a diagram of a set of status registers for the queue storage units of FIG. 18.

FIG. 20 is a diagram of a queue manager that interacts with the queue storage units and status registers of FIG. 18 and FIG. 19.

FIGS. 21A-D are diagrams of various stages of a least-recently-used register that is employed for allocating cache memory.

FIG. 22 is a diagram of the devices used to operate the least-recently-used register of FIGS. 21A-D.

FIG. 23 is another diagram of the INIC of FIG. 15.

FIG. 24 is a more detailed diagram of the receive sequencer 2105 of FIG. 23.
Detailed Description

An overview of a network data communication system in accordance with the present invention is shown in FIG. 1. A host computer 20 is connected to an interface device such as intelligent network interface card (INIC) 22 that may have one or more ports for connection to networks such as a local or wide area network 25, or the Internet 28. The host 20 contains a processor such as central processing unit (CPU) 30 connected to a host memory 33 by a host bus 35, with an operating system, not shown, residing in memory 33, for overseeing various tasks and devices, including a file system 23. Also stored in host memory 33 is a protocol stack 38 of instructions for processing of network communications and an INIC driver 39 that communicates between the INIC 22 and the protocol stack 38. A cache manager 26 runs under the control of the file system 23 and an optional memory manager 27, such as the virtual memory manager of Windows® NT or 2000, to store and retrieve file portions, termed file streams, on a host file cache 24.
The host 20 is connected to the INIC 22 by an I/O bus 40, such as a PCI bus, which is coupled to the host bus 35 by a host I/O bridge 42. The INIC includes an interface processor 44 and memory 46 that are interconnected by an INIC bus 48. INIC bus 48 is coupled to the I/O bus 40 with an INIC I/O bridge 50. Also connected to INIC bus 48 is a set of hardware sequencers 52 that provide upper layer processing of network messages. Physical connection to the LAN/WAN 25 and the Internet 28 is provided by conventional physical layer hardware PHY 58. Each of the PHY 58 units is connected to a corresponding unit of media access control (MAC) 60, the MAC units each providing a conventional data link layer connection between the INIC and one of the networks.
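As a reading aid for FIG. 1, the blocks introduced so far can be grouped into two C structures, one per side of the I/O bus. The declarations are purely illustrative: the field names, the assumed port count, and the use of opaque types are conveniences of this sketch rather than structures defined by the application.

    #define NUM_NET_PORTS 2      /* assumed: one port for LAN/WAN 25, one for the Internet 28 */

    /* Opaque stand-ins for blocks named in FIG. 1. */
    struct cpu;                  /* CPU 30                         */
    struct file_system;          /* file system 23                 */
    struct cache_manager;        /* cache manager 26               */
    struct protocol_stack;       /* protocol stack 38              */
    struct inic_driver;          /* INIC driver 39                 */
    struct interface_processor;  /* interface processor 44         */
    struct hw_sequencers;        /* hardware sequencers 52         */
    struct phy_unit;             /* physical layer hardware PHY 58 */
    struct mac_unit;             /* media access control MAC 60    */

    struct host20 {                          /* host computer 20, on host bus 35 */
        struct cpu            *cpu30;
        void                  *mem33;        /* host memory 33: holds protocol stack 38,
                                                INIC driver 39 and host file cache 24    */
        struct file_system    *fs23;
        struct cache_manager  *cm26;
        struct protocol_stack *stack38;
        struct inic_driver    *drv39;
    };

    struct inic22 {                          /* INIC 22, on INIC bus 48 */
        struct interface_processor *proc44;
        void                       *mem46;   /* INIC memory 46          */
        struct hw_sequencers       *seq52;
        struct phy_unit            *phy58[NUM_NET_PORTS];
        struct mac_unit            *mac60[NUM_NET_PORTS];
    };

    /* Host 20 and INIC 22 are coupled by I/O bus 40 (e.g. PCI) through host
     * I/O bridge 42 and INIC I/O bridge 50. */

Nothing in this sketch is functional; it only fixes the vocabulary of reference numerals used throughout the rest of the description.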
A host storage unit 66, such as a disk drive or collection of disk drives and corresponding controller, may be coupled to the I/O bus 40 by a conventional I/O controller 64, such as a SCSI adapter. A parallel data channel 62 connects controller 64 to host storage unit 66. Alternatively, host storage unit 66 may be a redundant array of independent disks (RAID), and I/O controller 64 may be a RAID controller. An I/O driver 67, e.g., a SCSI driver module, operating under command of the file system 23 interacts with controller 64 to read or write data on host storage unit 66. Host storage unit 66 preferably contains the operating system code for the host 20, including the file system 23, which may be cached in host memory 33.
An INIC storage unit 70, such as a disk drive or collection of disk drives and corresponding controller, is coupled to the INIC bus 48 via a matching interface controller, INIC I/O controller 72, which in turn is connected by a parallel data channel 75 to the INIC storage unit. INIC I/O controller 72 may be a SCSI controller, which is connected to INIC storage unit 70 by a parallel data channel 75. Alternatively, INIC storage unit 70 may be a RAID system, and I/O controller 72 may be a RAID controller, with multiple or branching data channels 75. Similarly, I/O controller 72 may be a SCSI controller that is connected to a RAID controller for the INIC storage unit 70. In another implementation, INIC storage unit 70 is attached to a Fibre Channel (FC) network 75, and I/O controller 72 is an FC controller. Although INIC I/O controller 72 is shown connected to INIC bus 48, I/O controller 72 may instead be connected to I/O bus 40. INIC storage unit 70 may optionally contain the boot disk for the host 20, from which the operating system kernel is loaded. INIC memory 46 includes frame buffers 77 for temporary storage of packets received from or transmitted to a network such as LAN/WAN 25. INIC memory 46 also includes an interface file cache, INIC file cache 80, for temporary storage of data stored on or retrieved from INIC storage unit 70. Although INIC memory 46 is depicted in FIG. 1 as a single block for clarity, memory 46 may be formed of separate units disposed in various locations in the INIC 22, and may be composed of dynamic random access memory (DRAM), static random access memory (SRAM), read only memory (ROM) and other forms of memory.
The file system 23 is a high level software entity that contains general knowledge of the organization of information on storage units 66 and 70 and file caches 24 and 80, and provides algorithms that implement the properties and performance of the storage architecture. The file system 23 logically organizes information stored on the storage units 66 and 70, and respective file caches 24 and 80, as a hierarchical structure of files, although such a logical file may be physically located in disparate blocks on different disks of a storage unit 66 or 70. The file system 23 also manages the storage and retrieval of file data on storage units 66 and 70 and file caches 24 and 80. I/O driver 67 software operating on the host 20 under the file system interacts with controllers 64 and 72 for respective storage units 66 and 70 to manipulate blocks of data, i.e., read the blocks from or write the blocks to those storage units. Host file cache 24 and INIC file cache 80 provide storage space for data that is being read from or written to the storage units 66 and 70, with the data mapped by the file system 23 between the physical block format of the storage units 66 and 70 and the logical file format used for applications. Linear streams of bytes associated with a file and stored in host file cache 24 and INIC file cache 80 are termed file streams. Host file cache 24 and INIC file cache 80 each contain an index that lists the file streams held in that respective cache.
The file system 23 includes metadata that may be used to determine addresses of file blocks on the storage units 66 and 70, with pointers to addresses of file blocks that have been recently accessed cached in a metadata cache. When access to a file block is requested, for example by a remote host on LAN/WAN 25, the host file cache 24 and INIC file cache 80 indexes are initially referenced to see whether a file stream corresponding to the block is stored in their respective caches. If the file stream is not found in the file caches 24 or 80, then a request for that block is sent to the appropriate storage unit address denoted by the metadata. One or more conventional caching algorithms are employed by cache manager 26 for the file caches 24 and 80 to choose which data is to be discarded when the caches are full and new data is to be cached. Caching file streams on the INIC file cache 80 greatly reduces the traffic over both I/O bus 40 and data channel 75 for file blocks stored on INIC storage unit 70.
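The lookup order just described (consult the cache indexes first, fall back to metadata-addressed storage reads, and let the cache manager evict when full) can be summarized in a short routine. Everything below is an illustrative assumption: the helper functions, identifiers and the choice of which cache and storage unit to use are stand-ins, not interfaces defined by the application.

    #include <stdint.h>
    #include <stddef.h>

    struct file_stream;      /* a cached linear stream of file bytes */

    /* Hypothetical helpers standing in for the cache indexes, the metadata
     * cache and the storage-unit driver. */
    struct file_stream *cache_index_find(int cache_id, uint64_t file_id, uint64_t block);
    uint64_t metadata_block_address(uint64_t file_id, uint64_t block);
    struct file_stream *storage_read_block(int storage_unit, uint64_t addr);
    void cache_insert(int cache_id, struct file_stream *fs);  /* cache manager 26 may evict here */

    enum { HOST_FILE_CACHE_24, INIC_FILE_CACHE_80 };
    enum { HOST_STORAGE_66, INIC_STORAGE_70 };

    /* Resolve a requested file block, preferring the file caches. */
    struct file_stream *lookup_file_block(uint64_t file_id, uint64_t block)
    {
        struct file_stream *fs;

        /* 1. Consult both cache indexes for a file stream covering the block. */
        if ((fs = cache_index_find(HOST_FILE_CACHE_24, file_id, block)) != NULL)
            return fs;
        if ((fs = cache_index_find(INIC_FILE_CACHE_80, file_id, block)) != NULL)
            return fs;

        /* 2. Miss: use the file-system metadata to find the block address on
         *    the appropriate storage unit and read the block in (the INIC
         *    storage unit is chosen here purely as an example). */
        uint64_t addr = metadata_block_address(file_id, block);
        fs = storage_read_block(INIC_STORAGE_70, addr);

        /* 3. Cache the new stream; the cache manager applies a conventional
         *    replacement policy (for example LRU) if the cache is full. */
        cache_insert(INIC_FILE_CACHE_80, fs);
        return fs;
    }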
When a network packet that is directed to the host 20 arrives at the INIC 22, the headers for that packet are processed by the sequencers 52 to validate the packet and create a summary or descriptor of the packet, with the summary prepended to the packet and stored in frame buffers 77 and a pointer to the packet stored in a queue. The summary is a status word (or words) that describes the protocol types of the packet headers and the results of checksumming. Included in this word is an indication whether or not the frame is a candidate for fast-path data flow. Unlike prior art approaches, upper layer headers containing protocol information, including transport and session layer information, are processed by the hardware logic of the sequencers 52 to create the summary. The dedicated logic circuits of the sequencers allow packet headers to be processed virtually as fast as the packets arrive from the network.
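The summary word can be pictured as a small set of bit flags. The encoding below is an assumption made for illustration only (the application does not specify bit positions); it simply records the recognized protocol types, the checksum results and the fast-path candidate indication described above.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed encoding of a one-word packet summary / descriptor. */
    #define SUM_PROTO_IP       0x00000001u   /* IP header recognized           */
    #define SUM_PROTO_TCP      0x00000002u   /* TCP header recognized          */
    #define SUM_IP_CSUM_OK     0x00000100u   /* IP checksum validated          */
    #define SUM_TCP_CSUM_OK    0x00000200u   /* TCP checksum validated         */
    #define SUM_FASTPATH_CAND  0x80000000u   /* frame is a fast-path candidate */

    /* Model of the word the sequencers produce: protocol types, checksum
     * results, and the fast-path candidate indication. */
    uint32_t make_summary(int is_ip, int is_tcp, int ip_csum_ok, int tcp_csum_ok)
    {
        uint32_t sum = 0;
        if (is_ip)       sum |= SUM_PROTO_IP;
        if (is_tcp)      sum |= SUM_PROTO_TCP;
        if (ip_csum_ok)  sum |= SUM_IP_CSUM_OK;
        if (tcp_csum_ok) sum |= SUM_TCP_CSUM_OK;

        /* A frame qualifies as a fast-path candidate only if every recognized
         * layer validated cleanly. */
        if (is_ip && is_tcp && ip_csum_ok && tcp_csum_ok)
            sum |= SUM_FASTPATH_CAND;
        return sum;
    }

    int main(void)
    {
        uint32_t s = make_summary(1, 1, 1, 1);
        printf("summary = 0x%08x, fast-path candidate = %d\n",
               (unsigned)s, (s & SUM_FASTPATH_CAND) != 0);
        return 0;
    }

In hardware this word would be assembled on the fly by the sequencers 52; the C routine only models the result.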
The INIC then chooses whether to send the packet to the host memory 33 for “slow-path” processing of the headers by the CPU 30 running protocol stack 38, or to send the packet data directly to either INIC file cache 80 or host file cache 24, according to a “fast-path.” The fast-path may be selected for the vast majority of data traffic having plural packets per message that are sequential and error-free, and avoids the time consuming protocol processing of each packet by the CPU, such as repeated copying of the data and repeated trips across the host memory bus 35. For the fast-path situation in which the packet is moved directly into the INIC file cache 80, additional trips across the host bus 35 and the I/O bus 40 are also avoided. Slow-path processing allows any packets that are not conveniently transferred by the fast-path of the INIC 22 to be processed conventionally by the host 20.
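The choice between the two paths reduces to a small dispatch step. The sketch below reuses the summary flag from the previous example and assumes hypothetical helper names for the two delivery mechanisms; it illustrates the decision, not the INIC's actual firmware.

    #include <stdint.h>
    #include <stddef.h>

    struct ccb;   /* communication control block, created by the host protocol stack */

    /* Hypothetical helpers for the two delivery paths. */
    struct ccb *fastpath_lookup(uint32_t summary);                 /* hash of cached CCBs          */
    void dma_payload_to_file_cache(const void *pkt, size_t len,
                                   struct ccb *c);                 /* via DMA, bypassing CPU 30    */
    void forward_to_host_slow_path(const void *pkt, size_t len);   /* to host memory 33 / stack 38 */

    #define SUM_FASTPATH_CAND 0x80000000u   /* same assumed flag as the previous sketch */

    void dispatch_packet(const void *pkt, size_t len, uint32_t summary)
    {
        struct ccb *c;

        if ((summary & SUM_FASTPATH_CAND) != 0 &&
            (c = fastpath_lookup(summary)) != NULL) {
            /* Fast-path: packet data goes directly to INIC file cache 80 or
             * host file cache 24, avoiding per-packet protocol processing and
             * the repeated crossings of host memory bus 35. */
            dma_payload_to_file_cache(pkt, len, c);
        } else {
            /* Slow-path: the whole packet is sent to host memory 33 and
             * processed conventionally by CPU 30 running protocol stack 38. */
            forward_to_host_slow_path(pkt, len);
        }
    }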
In order to provide fast-path capability at the host 20, a connection is first set up with the remote host, which may include handshake, authentication and other connection initialization procedures. A communication control block (CCB) is created by the protocol stack 38 during connection initialization procedures for connection-based messages, such as typified by TCP/IP or SPX/IPX protocols. The CCB includes connection information, such as source and destination addresses and ports. For TCP connections a CCB comprises source and destination media access control (MAC) addresses, source and destination IP addresses, source and destination TCP ports and TCP variables such as timers and receive and transmit windows for sliding window protocols. After a connection has been set up, the CCB is passed by INIC driver 39 from the host to the INIC memory 46 by writing to a command register in that memory 46, where it may be stored along with other CCBs in CCB cache 74. The INIC also creates a hash table corresponding to the cached CCBs for accelerated matching of the CCBs with packet summaries.
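For a TCP connection, the CCB fields listed above suggest a structure along the following lines, filed in a small hash table so that packet summaries can be matched quickly. The field layout, hash function and table size are assumptions made for this sketch.

    #include <stdint.h>

    #define CCB_HASH_BUCKETS 256                 /* assumed table size */

    struct ccb {                                 /* communication control block */
        uint8_t  src_mac[6], dst_mac[6];         /* MAC addresses               */
        uint32_t src_ip, dst_ip;                 /* IP addresses                */
        uint16_t src_port, dst_port;             /* TCP ports                   */
        uint32_t rcv_wnd, snd_wnd;               /* sliding-window state        */
        uint32_t retransmit_timer_ms;            /* example timer variable      */
        struct ccb *hash_next;                   /* chain within a hash bucket  */
    };

    static struct ccb *ccb_hash[CCB_HASH_BUCKETS];   /* index over CCB cache 74 */

    /* Simple 4-tuple hash; an implementation could equally hash the packet
     * summary, as the description above suggests. */
    static unsigned ccb_hash_index(uint32_t sip, uint32_t dip,
                                   uint16_t sp, uint16_t dp)
    {
        uint32_t h = sip ^ dip ^ (((uint32_t)sp << 16) | dp);
        h ^= h >> 16;
        return h % CCB_HASH_BUCKETS;
    }

    /* Called after the host writes a CCB down through the command register:
     * file it in the hash table for fast matching against packet summaries. */
    void ccb_cache_insert(struct ccb *c)
    {
        unsigned i = ccb_hash_index(c->src_ip, c->dst_ip, c->src_port, c->dst_port);
        c->hash_next = ccb_hash[i];
        ccb_hash[i]  = c;
    }

A real CCB would carry considerably more TCP state (timers, window and sequence variables, and the destination buffer list discussed below); the point of the sketch is only the keyed lookup that lets a packet summary be matched without host involvement.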
When a message, such as a file write, that corresponds to the CCB is received by the INIC, a header portion of an initial packet of the message is sent to the host 20 to be processed by the CPU 30 and protocol stack 38. This header portion sent to the host contains a session layer header for the message, which is known to begin at a certain offset of the packet, and optionally contains some data from the packet. The processing of the session layer header by a session layer of protocol stack 38 identifies the data as belonging to the file and indicates the size of the message, which are used by the file system to determine whether to cache the message data in the host file cache 24 or INIC file cache 80, and to reserve a destination for the data in the selected file cache. If any data was included in the header portion that was sent to the host, it is then stored in the destination. A list of buffer addresses for the destination in the selected file cache is sent to the INIC 22 and stored in or along with the CCB. The CCB also maintains state information regarding the message, such as the length of the message and the number and order of packets that have been processed, providing protocol and status information regarding each of the protocol layers, including which user is involved and storage space for per-transfer information.
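The host-side handling of that initial header portion (parse the session layer header, let the file system pick a file cache and reserve a destination, then hand the list of buffer addresses back to be stored with the CCB) might look roughly like the sketch below. The helper names and types are hypothetical, not an interface defined by the application.

    #include <stdint.h>
    #include <stddef.h>

    #define MAX_DEST_BUFS 16     /* assumed limit for illustration */

    struct dest_list {           /* destination buffer addresses kept with the CCB */
        int      cache_id;       /* host file cache 24 or INIC file cache 80       */
        size_t   nbufs;
        uint64_t buf_addr[MAX_DEST_BUFS];
    };

    /* Hypothetical helpers standing in for the session layer, the file system
     * and the INIC driver command interface. */
    int     session_parse(const uint8_t *hdr, size_t len, uint64_t *file_id, size_t *msg_len);
    int     file_system_pick_cache(uint64_t file_id, size_t msg_len);
    size_t  file_cache_reserve(int cache_id, size_t msg_len, uint64_t *bufs, size_t max);
    void    inic_store_dest_with_ccb(uint32_t ccb_id, const struct dest_list *d);

    /* Handle the header portion of an initial packet that the INIC passed up. */
    int host_handle_initial_header(uint32_t ccb_id, const uint8_t *hdr, size_t len)
    {
        uint64_t file_id;
        size_t   msg_len;
        struct dest_list d;

        if (session_parse(hdr, len, &file_id, &msg_len) != 0)
            return -1;                           /* not a recognized session header */

        d.cache_id = file_system_pick_cache(file_id, msg_len);
        d.nbufs    = file_cache_reserve(d.cache_id, msg_len, d.buf_addr, MAX_DEST_BUFS);

        /* Hand the reserved destination back to the INIC, which stores it in
         * or along with the CCB for fast-path delivery of later packets. */
        inic_store_dest_with_ccb(ccb_id, &d);
        return 0;
    }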
Once the CCB indicates the destination, fast-path processing of packets corresponding to the CCB is available. After the above-mentioned processing of a subsequently received packet by the sequencers 52 to generate the packet summary, a hash of the packet summary is compared with the hash table, and if necessary with the CCBs stored in CCB cache 74, to determine whether the packet belongs to a message for which a fast-path connection has been set up. Upon matching the packet summary with the CCB, assuming no exception conditions exist, the data of the packet, without network or transport layer headers, is sent by direct memory access (DMA) unit 68 to the destination in file cache 80 or file cache 24 denoted by the CCB.
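Continuing the earlier sketches, the matching step itself can be expressed as a walk of one hash bucket followed by a DMA of the payload alone; exception conditions are collapsed into a single check, and all helper names are assumed.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    struct ccb;                                  /* as sketched earlier */

    /* Hypothetical helpers; the hash table and bucket layout follow the
     * earlier CCB sketch. */
    unsigned    summary_hash(uint32_t summary);
    struct ccb *bucket_first(unsigned index);
    struct ccb *bucket_next(struct ccb *c);
    bool        ccb_matches_packet(const struct ccb *c, const uint8_t *pkt);
    bool        exception_condition(const struct ccb *c, const uint8_t *pkt); /* e.g. out-of-order */
    size_t      header_length(const uint8_t *pkt);      /* network + transport headers */
    void        dma_to_destination(const struct ccb *c,
                                   const uint8_t *data, size_t len);          /* DMA unit 68 */
    void        forward_to_host_slow_path(const void *pkt, size_t len);

    /* Deliver one packet whose summary marked it as a fast-path candidate. */
    void fastpath_receive(const uint8_t *pkt, size_t len, uint32_t summary)
    {
        for (struct ccb *c = bucket_first(summary_hash(summary)); c != NULL;
             c = bucket_next(c)) {
            if (!ccb_matches_packet(c, pkt))
                continue;                        /* different connection in same bucket */
            if (exception_condition(c, pkt))
                break;                           /* let the host handle the exception   */

            /* Strip the network and transport layer headers and DMA only the
             * data to the file-cache destination denoted by the CCB. */
            size_t hlen = header_length(pkt);
            dma_to_destination(c, pkt + hlen, len - hlen);
            return;
        }
        forward_to_host_slow_path(pkt, len);     /* no match or exception: slow path */
    }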
At some point after all the data from the message has been cached as a file stream in INIC file cache 80 or host file cache 24, the file stream of data is then sent, by DMA unit 68 under control of the file system 23, from that file cache to the INIC storage unit 70 or host storage unit 66. Commonly, file streams cached in host file cache 24 are stored on host storage unit 66, while file streams cached in INIC file cache 80 are stored on INIC storage unit 70, but this arrangement is not necessary. Subsequent requests for file transfers may be handled by the same CCB, assuming the requests involve identical source and destination IP addresses and ports, with an initial packet of a write request being processed by the host CPU to determine a location in the host file cache 24 or INIC file cache 80 for storing the message. It is also possible for the file system to be configured to earmark a location on INIC storage unit 70 or host storage unit 66 as the destination for storing data from a message received from a remote host, bypassing the file caches.
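The eventual write-back of a completed file stream is a cache-to-storage transfer driven by the file system and performed by DMA unit 68. A minimal sketch, with hypothetical helpers for the block mapping and the DMA:

    #include <stdint.h>
    #include <stddef.h>

    struct file_stream;      /* a cached linear stream of file bytes */

    /* Hypothetical helpers: the file system maps the stream onto block
     * addresses, and DMA unit 68 moves the data to the chosen storage unit. */
    size_t file_system_map_blocks(const struct file_stream *fs,
                                  uint64_t *block_addr, size_t max_blocks);
    int    dma_cache_to_storage(int storage_unit, const struct file_stream *fs,
                                const uint64_t *block_addr, size_t nblocks);

    enum { HOST_STORAGE_66, INIC_STORAGE_70 };

    /* Flush a fully received file stream from a file cache to a storage unit,
     * e.g. from INIC file cache 80 to INIC storage unit 70. */
    int flush_file_stream(const struct file_stream *fs, int storage_unit)
    {
        uint64_t blocks[64];                     /* assumed per-call limit */
        size_t n = file_system_map_blocks(fs, blocks, 64);
        return dma_cache_to_storage(storage_unit, fs, blocks, n);
    }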
An approximation for promoting a basic understanding of the present invention is depicted in FIG. 2, which segregates the main paths for information flow for the network data storage system of FIG. 1 by showing the primary type of information for each path. FIG. 2 shows information flow paths consisting primarily of control information with thin arrows, information flow paths consisting primarily of data with thick white arrows, and information flow paths consisting of both control information and data with thick black arrows. Note that host 20 is primarily involved with control information flows, while the INIC storage unit 70 is primarily involved with data transfer.

Information flow between a network such as LAN/WAN 25 and the INIC 22 may include control information and data, and so is shown with thick black arrow 85. Examples of information flow 81 between a network such as LAN/WAN 25 and the INIC 22 include control information, such as connection initialization dialogs and acknowledgements, as well as file reads or writes, which are sent as packets containing file data encapsulated in control information. The sequencers 52 process control information from file writes and pass data and control information to and from INIC frame buffers 77, and so those transfers are represented with thick black arrow 88. Control information regarding the data stored in frame buffers 77 is operated on by the processor 44, as shown by thin arrow 90, and control information such as network connection initialization packets and session layer headers are sent to the protocol stack 38, as shown by thin arrow 92. When a connection has been set up by the host, control information regarding that connection, such as a CCB, may be passed between host protocol stack 38 and INIC memory 46, as shown by thin arrow 94. Temporary storage of data being read from or written to INIC storage unit 70 is provided by INIC file cache 80 and frame buffers 77, as illustrated by thick white arrows 96 and 98. Control and knowledge of all file streams that are stored on INIC file cache 80 is provided by file system 23, as shown by thin arrow 91. In an embodiment for which host storage unit 66 does not store network accessible data, file system information is passed between host file cache 24 and host storage unit 66, as shown by arrow 81. Other embodiments, not