IN THE UNITED STATES PATENT AND TRADEMARK OFFICE
PATENT TRIAL & APPEAL BOARD

In re Patent of: Geoffrey B. Hoese et al.
U.S. Patent No.: 7,051,147
Issue Date: May 23, 2006
Appl. No.: 10/658,163
Filing Date: September 9, 2003
Title: Storage Router and Method for Providing Virtual Local Storage

DECLARATION OF PROFESSOR JEFFREY S. CHASE, Ph.D.

I, Prof. Jeffrey S. Chase, Ph.D., declare as follows:

I. Background and Qualifications
(1.) My name is Jeffrey S. Chase. I am a Professor at Duke University in the Computer Science Department. I have studied and practiced in the field of computer science for over 30 years, and have taught Computer Science at Duke since 1995.

(2.) I received my Doctor of Philosophy (Ph.D.) degree in the field of Computer Science from the University of Washington in 1995. I received my Master of Science (M.S.) degree in Computer Science from the University of Washington and my Bachelor of Arts (B.A.) degree in Mathematics and Computer Science from Dartmouth College.
(3.) Before and during graduate school, I worked as a software design engineer at Digital Equipment Corporation, developing operating system kernel functionality for storage systems and network storage. During the period 1985-1992, I helped develop the Network File System software for Digital’s Unix operating system, was the lead developer for file storage elements of Digital’s first multiprocessor Unix kernel, and was a kernel engineer for a hierarchical storage system product.
(4.) Upon receiving my Ph.D. degree, I joined the faculty of Duke University in the Department of Computer Science as an Assistant Professor. As a professor I teach undergraduate and graduate courses in operating systems, distributed systems, and networking. I have also participated in a number of industry collaborations. For example, during the 2003-2004 academic year, I worked as a visiting scholar at Hewlett-Packard Laboratories, resulting in two patents directed towards the design of storage systems. Other industry appointments include a collaborative project in 1996 with AT&T Corporation (resulting in U.S. Patent No. 5,944,780 entitled “Network with Shared Caching”) and in 2001 with a group at IBM Corporation (resulting in U.S. Patent No. 6,785,794 entitled “Differentiated Storage Resource Provisioning”), as well as additional collaborations with IBM resulting, to date, in six additional patents. All of these collaborations and patents involved managing services that provide storage to multiple client hosts over a network.

(5.) I have supervised the research of 12 Ph.D. dissertations in the field of Computer Science, and along with my graduate students, published over 100 peer-reviewed technical publications in scientific journals or conferences in the field of Computer Science. In addition to Ph.D. dissertations, I have also supervised the research of 20 graduate students who earned Master’s degrees at Duke.
(6.) In addition to classroom and research activities, I give external presentations each year relating to networked systems and network storage. In 1999, I received an invitation to present in a special session on network storage at the 1999 IEEE Symposium on High-Performance Interconnects. I have served on the editorial program committee for the leading annual academic conference on storage systems (the Symposium on File and Storage Technologies, or FAST) multiple times and chaired the FAST conference in 2003, and have had similar roles in dozens of other related academic venues.

(7.) As part of my research, I have developed new techniques for accessing remote storage devices through high-speed networks, and for managing virtual storage within networked storage arrays, similar to the goals of U.S. Patent No. 7,051,147 (“the ‘147 patent”). An exemplary list of publications relevant to this topic, which also highlights my familiarity with the concept of managing virtual local storage within storage networks (i.e., the underlying concept of the ‘147 patent), is provided below. These papers are limited to the period from the mid-1990s up to 2002, and are in reverse chronological order.
K. Magoutis, S. Addetia, A. Fedorova, M. Seltzer, J. Chase, A. Gallatin, R. Kisley, R. Wickremesinghe, and E. Gabber. Structure and Performance of the Direct Access File System. In the 2002 USENIX Annual Technical Conference, June 2002. Best paper award. Acceptance ratio: 25/105.

K. Yocum and J. Chase. Payload Caching: High-Speed Data Forwarding for Network Intermediaries. In the 2001 USENIX Technical Conference, June 2001. Acceptance ratio for USENIX-01: 24/82.

D. Anderson, J. Chase, and A. Vahdat. Interposed Request Routing for Scalable Network Storage. In the Fourth ACM/USENIX Symposium on Operating System Design and Implementation (OSDI), October 2000. Acceptance ratio for OSDI-2000: 24/111. Award paper.

D. Anderson and J. Chase. Failure-Atomic File Access in an Interposed Network Storage System. In the Ninth IEEE International Symposium on High Performance Distributed Computing (HPDC-9), August 2000. Acceptance ratio for HPDC-9: 32/103.

J. Chase, D. Anderson, A. Gallatin, A. Lebeck, and K. Yocum. Network I/O with Trapeze. In the 1999 IEEE Symposium on High-Performance Interconnects (Hot Interconnects-7), August 1999. Invited for a special session on network storage.

G. Voelker, T. Kimbrel, M. Feeley, A. Karlin, J. Chase, and H. Levy. Implementing Cooperative Prefetching and Caching in a Globally-Managed Memory System. In Proceedings of the 1998 ACM Conference on Performance Measurement, Modeling, and Evaluation (SIGMETRICS), June 1998. Acceptance ratio for SIGMETRICS-98: 25/135.

D. Anderson, J. Chase, S. Gadde, A. Gallatin, K. Yocum, and M. Feeley. Cheating the I/O Bottleneck: Network Storage with Trapeze/Myrinet. In Proceedings of the 1998 USENIX Technical Conference, June 1998. Acceptance ratio for USENIX-98: 23/87.

K. Yocum, J. Chase, A. Gallatin, and A. Lebeck. Cut-Through Delivery in Trapeze: An Exercise in Fast Messaging. In Proceedings of the Sixth IEEE International Symposium on High Performance Distributed Computing (HPDC-6), August 1997. Acceptance ratio for HPDC-97: 36/76. (10 pages).
(8.) I am involved with several professional organizations, including memberships in both the Association for Computing Machinery (ACM) and USENIX.

(9.) A copy of my latest curriculum vitae (C.V.) is attached to this declaration as Appendix A.

II. Description of the Relevant Field and the Relevant Timeframe
(10.) I have carefully reviewed the ‘147 patent.

(11.) For convenience, all of the information that I considered in arriving at my opinions is listed in Appendix B.

(12.) Based on my review of these materials, I believe that the relevant field for purposes of the ‘147 patent is virtual local storage provided on remote storage devices using a storage router. I have been informed that the relevant timeframe is on or before December 31, 1997.

(13.) As described in Section I above, I have extensive experience in computer science, networking, and storage systems. Based on my experience, I have a good understanding of the relevant field in the relevant timeframe.

III. The Person of Ordinary Skill in the Relevant Field in the Relevant Timeframe
(14.) I have been informed that “a person of ordinary skill in the relevant field” is a hypothetical person to whom an expert in the relevant field could assign a routine task with reasonable confidence that the task would be successfully carried out. I have been informed that the level of skill in the art is evidenced by prior art references. The prior art discussed herein demonstrates that a person of ordinary skill in the field, at the time the ‘147 patent was effectively filed, was aware of block storage systems (disks, RAID, and the SCSI command abstraction), storage volume management concepts, and networking technologies.

(15.) Based on my experience, I have an understanding of the capabilities of a person of ordinary skill in the relevant field. I have supervised and directed many such persons over the course of my career. Further, I had those capabilities myself at the time the patent was filed.
IV. The ‘147 Patent

(16.) The ‘147 patent describes a storage router that allows a computer device access to a storage region on a remote storage device. See Exh. 1001 at Abstract and Fig. 3 (reproduced below). As shown in Fig. 3, a number of workstations can connect to the storage router to access remote storage devices. See id. at 4:30-39. Although not defined or adequately described within the ‘147 patent, one of skill in the art at the time of the ‘147 patent would understand that a storage router is embodied as network equipment configured to receive commands (e.g., read, write, etc.) from a computing device and direct the commands to the appropriate storage device for processing. A storage router can provide a number of features for implementing remote storage access by a number of connected devices, such as bus arbitration, access protection (e.g., locking such that no simultaneous access may occur), storage device virtualization, data mirroring, and data caching for accelerated access.

[Fig. 3 of the ‘147 patent]
(17.) According to the ‘147 patent, the storage router enables this access through a native low level block protocol (NLLBP) interface, which is described as a protocol that “does not involve the overhead of high level protocols and file systems required by network servers.” See id.; id. at 5:14-17. As such, rather than referring to a file name or other abstraction, the storage router routes access commands that are represented at the data block (e.g., single access size) level, that is, the level at which a storage device reads or writes a portion of data. In essence, prior to reaching the storage router, another computing device, potentially the requesting computing device, converts an abstract request (e.g., save an update to a file name as entered by a user of the computing device) to a block-level request encapsulated in NLLBP. In operation, the performance speed of the storage router is increased (and the storage router avoids being a bottleneck) because the storage router does not have to manage levels of file service abstraction, the command already being issued at the NLLBP access level. See id. The NLLBP messages received by the storage router from the computing device are used to map an access request to a storage region of a storage device. See id. at 9:11-14.
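For illustration only, the distinction can be sketched in simplified form (the field names and values below are hypothetical and are not drawn from the ‘147 patent): a file-level request names a file that a file system must first resolve, whereas a block-level (NLLBP-style) request already identifies a storage target, a starting block address, and a block count, so a storage router can forward it without any file-system processing.

```python
from dataclasses import dataclass

# A file-level request must first be resolved by a file system (directories,
# metadata, block allocation) before any disk blocks can be read or written.
@dataclass
class FileRequest:
    path: str        # e.g. "/home/user/report.txt"
    offset: int      # byte offset within the file
    length: int      # number of bytes

# A block-level (NLLBP-style) request already names the storage target and the
# block addresses, so a router can forward it without file-system processing.
@dataclass
class BlockRequest:
    opcode: str      # "READ" or "WRITE"
    lun: int         # logical unit presented to the host
    lba: int         # starting logical block address
    blocks: int      # number of fixed-size blocks

def route(req: BlockRequest, lun_to_device: dict) -> str:
    """Forward a block-level command to the storage device backing the LUN."""
    device = lun_to_device[req.lun]
    return f"forward {req.opcode} {req.blocks} block(s) at LBA {req.lba} to {device}"

# The host has already translated its file operation into block terms.
print(route(BlockRequest("READ", lun=0, lba=2048, blocks=8), {0: "disk-array-A"}))
```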
(18.) Further, the ‘147 Patent describes providing virtual storage regions to connected computing devices by mapping a logical storage address used by a computing device to a physical storage address upon a remote storage device, such that each computing device is allocated particular region(s) of storage. See id. at 4:63-66. The storage router may implement access controls that limit a computer device’s access to only those storage regions allocated to the particular computer device. See id. at 4:41-48. The logical storage addressing may be implemented through use of logical unit (LUN) addressing on the side of the computing device. See id. at 8:47-50.
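A simplified sketch of this style of mapping and access control follows; the host names, LUN numbers, and device names are hypothetical and are not taken from the ‘147 patent. Each host sees only the regions allocated to it, addressed by LUN, and a request outside its allocation is rejected.

```python
# Hypothetical allocation map: (host, LUN) -> (device, starting block, region size).
ALLOCATIONS = {
    ("workstation-A", 0): ("disk-1", 0,       100_000),
    ("workstation-A", 1): ("disk-2", 0,       100_000),
    ("workstation-B", 0): ("disk-2", 100_000, 100_000),
}

def resolve(host: str, lun: int, logical_block: int):
    """Map a host's logical address to a physical location, enforcing allocation."""
    if (host, lun) not in ALLOCATIONS:
        raise PermissionError(f"{host} has no storage allocated at LUN {lun}")
    device, base, size = ALLOCATIONS[(host, lun)]
    if not 0 <= logical_block < size:
        raise ValueError("logical block falls outside the allocated region")
    return device, base + logical_block

print(resolve("workstation-A", 1, 42))   # ('disk-2', 42)
print(resolve("workstation-B", 0, 42))   # ('disk-2', 100042): same disk, different region
```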
(19.) The independent claims of the ‘147 patent recite mapping between a device and storage devices, implementing access controls between the devices and storage devices, and allowing the devices to access the storage devices using an NLLBP. See id. at Claims 1, 6, 10, 14, 21, 28, and 34.

V. Scientific Principles Underlying the ‘147 Patent
(20.) The ‘147 patent family adds Fibre Channel (or other similar serial interconnect) host interfaces to a network storage volume server. See Exh. 1001 at 2:24-35; 2:49-54. FC networks were designed for the purpose of accessing remote storage devices at high speeds, and were introduced commercially for that purpose before the December 31, 1997 priority date of the ‘147 patent. The use of a FC network allows high-speed access to “remote” storage devices over the network at distances greater than SCSI interconnect technologies were capable of at the time. See id. at 6:31-43. For this purpose, FC networks typically encapsulate the SCSI command set over a FC-specific transport protocol. The combination of the SCSI command set with Fibre Channel Protocol transport, including disk target identification using “logical units” or LUNs, is commonly known in the art as FCP. See id. at 5:50-56 and Claims 15, 22, 29, and 36.
(21.) The ‘147 patent family refers to this use of the SCSI command set over FCP transport as a “native low level block protocol” (NLLBP). In various arguments in the ‘147 patent family file histories, the patentees represent to the USPTO and to Courts that this use of an NLLBP over a distance-capable serial network (FC) is a distinguishing point of novelty of the ‘147 patent and yields its distinguishing benefits. In particular, it is argued that the NLLBP avoids performance costs associated with various other networking systems used to transport SCSI command sets or equivalent block read/write requests on virtual storage volumes in prior-art systems. They distinguish the “storage router” of the ‘147 patent from prior-art network storage servers primarily on this basis. See id. at 2:23-34.
(22.) The ‘147 patent also discloses a “Method for Providing Virtual Local Storage.” This method involves the allocation of virtual local storage regions from the storage devices for access by various hosts. These virtual local network storage regions are variously known in the art as virtual or logical partitions, virtual or logical disks, storage volumes, or logical units. A host accesses a virtual local storage region in the same way that it accesses a local disk, by issuing read and write requests at specified virtual block addresses. A controller system uses tables or other mapping structures to control access to the volumes and to translate between host virtual disk addresses and the physical disk addresses where the data reside.
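The table-driven translation described above can be sketched as follows; the extent table here is hypothetical and is not taken from the ‘147 patent or any reference. The host issues a read or write at a virtual block address on its volume, and the controller looks up the physical disk and physical block where the data actually reside.

```python
# Hypothetical extent table for one virtual volume: each entry maps a range of
# virtual blocks onto a contiguous run of physical blocks on some disk.
EXTENTS = [
    # (first virtual block, block count, physical disk, first physical block)
    (0,    5000, "disk-3", 20_000),
    (5000, 5000, "disk-7", 0),
]

def translate(virtual_block: int):
    """Translate a virtual block address to (physical disk, physical block)."""
    for start, count, disk, phys_start in EXTENTS:
        if start <= virtual_block < start + count:
            return disk, phys_start + (virtual_block - start)
    raise ValueError("virtual block is not mapped for this volume")

print(translate(4999))   # ('disk-3', 24999)
print(translate(5000))   # ('disk-7', 0)
```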
(23.) A large number of prior-art systems offered support for virtual local storage on the network in a similar form. Volume management capabilities were common in RAID systems and related software support at the time; the CRD-5500 RAID controller discussed below is one example. In addition, some well-known research works investigated the topic in depth. I cite two prominent examples with which I am most familiar. One example is “Petal: Distributed Virtual Disks,” by Lee and Thekkath, in the Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 1996 (“Petal”). Petal establishes that the concepts of virtual storage volume allocation on network storage with per-host access controls were well known in the art. The USPTO considered Petal as prior art for a sibling patent of the ‘147 patent in an earlier proceeding. See Exh. 1024 at Office Action of 2/27/05, pp. 3-6. The patent owner distinguished the sibling patent from Petal primarily on the basis of the use of an NLLBP in the sibling patent, whereas Petal used the UDP/IP protocol as its transport for its block command set over a general-purpose ATM network that was not designed specifically for storage access. See id. Another prominent example is “File Server Scaling with Network-Attached Secure Disks” by Gibson et al., in the Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), 1997, and various follow-on works (“NASD”). NASD established the concepts of direct attachment of storage devices to a network to avoid the performance overhead of translation through a server, and a mapping of virtual storage “objects” on network-attached drives.
(24.) To the extent that the use of a “Native Low Level Block Protocol” distinguishes the subject matter of the ‘147 patent from the large body of prior art, it is my opinion that use of a particular high-speed network technology that was commercially available and used for the purpose for which it was designed was an engineering decision that would have been obvious to one of skill in the art, and was not inventive.

(25.) In my own research in the relevant timeframe summarized above, we developed support for network storage access using a low-level block protocol on Myrinet, a commercially available high-speed network. The motivation was similar to that of the ‘147 patent: to obtain better performance and scaling for network storage access by using a faster network. Specifically, in 1996 and 1997 I was collaborating with a team at the University of Washington to develop a system called Global Memory Service (GMS), which used the memories of other hosts attached to the network as a high-speed network storage device. Since memory was much faster than the disk technologies available at the time, the performance of the GMS storage system depended largely on the performance of the network block access protocol. We programmed the Myrinet network devices to implement a block transfer protocol that reduced block read latencies to a few hundred microseconds, much less than the cost of a disk access. We published a paper relating to the combined system in 1997, and three more in 1998 and 1999. It was obvious to use a faster networking technology that was commercially available: the research focus was on how to program the network devices for low-latency block access. Our approach was awarded U.S. Patent No. 6,308,228 (“System and method of adaptive message pipelining”).
VI. Claim Interpretation

(26.) In proceedings before the USPTO, I understand that the claims of an unexpired patent are to be given their broadest reasonable interpretation in view of the specification from the perspective of one skilled in the art at the time of the earliest priority date of the patent. I have been informed that the ‘147 patent has not expired. In comparing the claims of the ‘147 patent to the known prior art, I have carefully considered the ‘147 patent and the ‘147 patent file history based upon my experience and knowledge in the relevant field. In my opinions, as stated in this declaration, I have applied the broadest reasonable interpretation of the claims in view of the specification from the perspective of one skilled in the art at the time of the patent. In my opinion, under their broadest reasonable interpretation, the majority of the claim terms of the ‘147 patent may be interpreted as being used in their ordinary and customary sense as one skilled in the relevant field would understand them.

(27.) In addition, a number of the terms of the ‘147 patent are not well defined within the relevant field, and thus a meaning must be derived through review of the ‘147 patent itself. In conducting my analysis, upon careful review and consideration of the broadest reasonable understanding of the claim terms, I have utilized the following claim construction:

Native Low-Level Block Protocol = a protocol that enables the exchange of information without the overhead of high-level protocols and file systems typically required by network servers, such as SCSI commands and Fibre Channel encapsulated SCSI commands.
My analysis is limited to systems and combinations that use Fibre Channel (FC/FCP) as the host device interface, since it is not in dispute that storage access over Fibre Channel involves use of a “Native Low-Level Block Protocol.” I have been provided and have reviewed the infringement contentions submitted by Patent Owner in district court litigation against Petitioners. See Exh. 1009. I have been informed that the standard for claim interpretation used in district court litigation differs from the standard applied in proceedings before the USPTO. I therefore have not been asked to, and do not, take any position regarding the meaning or scope of the ’147 patent claims for purposes of litigation or regarding Patent Owner’s contentions regarding the scope of the ’147 patent claims. I note only that, for the reasons discussed below, it is my opinion that each of the prior art systems discussed below meets the limitations of the ’147 patent claims, at least as far as the claims appear to have been interpreted by Patent Owner in its infringement contentions.
VII. Discussion of Relevant Patents and Articles

(28.) I have reviewed the relevant patents and articles listed in Appendix B, and based on this review, contributed to developing the discussion within the Inter Partes Review Request comparing each element of claims 14 through 39 of the ‘147 patent to the appropriate disclosures from the prior art references in the petition from the perspective of one skilled in the art.

a. CRD-5500 References
(29.) The CRD-5500 SCSI RAID Controller User’s Manual (“CRD-5500 Manual”) describes a modular RAID controller for interfacing a number of host computing devices with a number of storage units, such as a SCSI disk array storage device. See Exh. 1003 at 1-1, 2-1. The CRD-5500 includes a number of I/O module slots configured to receive both host channel interface modules (configured to interface with the host computing devices) and storage device interface modules (configured to interface with RAID storage devices). See id. at 2-1.
(30.) The CRD-5500 Controller supports a variety of RAID levels, including striping and/or mirroring services. See id. at 1-5. In general, RAID services encompass a number of service levels. The most basic service level, RAID 0, allows “data striping,” which means that a data storage region can be allocated across multiple physical disks. This level provides the opportunity for faster data access, as a single logical partition can be accessed in parallel as long as the data blocks being accessed simultaneously reside on physically separate disks. Furthermore, RAID level 1 and level 0+1 provide data mirroring, which means, in essence, that identical copies of data are maintained in separate partitions. This feature presents two benefits: faster access (because there are two separate disks containing the same information, each of which could be accessed if not busy) and data loss prevention (because, in the event of disk failure, there is a backup copy).
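The address arithmetic behind these RAID levels can be illustrated with a brief sketch; the stripe size and disk counts below are hypothetical and are not taken from the CRD-5500 references. Striping places consecutive runs of logical blocks on different member disks, while mirroring keeps a copy of every block on two disks.

```python
# RAID 0 (striping): consecutive stripes of logical blocks rotate across the
# member disks, so requests that land on different disks can proceed in parallel.
def raid0_locate(logical_block: int, disks: int, stripe_blocks: int = 64):
    stripe = logical_block // stripe_blocks
    disk = stripe % disks
    block_on_disk = (stripe // disks) * stripe_blocks + logical_block % stripe_blocks
    return disk, block_on_disk

# RAID 1 (mirroring): every logical block is kept on two disks; a read may be
# served from either copy, and the data survive the failure of one disk.
def raid1_locate(logical_block: int):
    return [(0, logical_block), (1, logical_block)]

print(raid0_locate(130, disks=4))   # stripe 2 of block 130 lands on disk 2
print(raid1_locate(130))            # both mirrored copies of block 130
```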
(31.) To control access to the RAID partitions configured upon the attached storage devices, the CRD-5500 controller includes a host logical unit (LUN) mapping utility for assigning particular address regions (e.g., host LUN numbers) to particular RAID redundancy groups. See id. at 1-1, 4-5. As illustrated in the table below, host Channel 0 includes a particular LUN mapping in which the host connected to Channel 0 (e.g., a particular I/O slot of the CRD-5500 controller) is assigned to redundancy groups 0, 1, 5, and 6 through 31, each redundancy group being accessible using a different LUN. See id. at 4-5. An administrator can allocate a particular disk device as a redundancy group, such that a host LUN maps to a single physical disk or a partition thereof. See id. at 2-3, 2-4, 3-3, 3-4.
[Host LUN mapping table for Channel 0, reproduced from the CRD-5500 Manual at 4-5]
(32.) Although the majority of LUNs are mapped to a redundancy group of the same number, this is not a requirement. See id. As can be seen in the table above, redundancy group 5, for example, is mapped to host LUN 4. See id. Furthermore, a different host, such as host channel 1, may be mapped to the same redundancy group via a different LUN. See id. at 4-10. For example, although host channel 0 accesses redundancy group 1 via LUN 1, host channel 1 may access redundancy group 1 via LUN 5. See id.

(33.) Additionally, as illustrated in the table above, host LUNs 2, 3, and 5 have no corresponding mapping. See id. If the host at Channel 0 issued a command to access an address at host LUN 2, the CRD-5500 would deny access. See id. at 4-10. Because redundancy groups 2, 3, and 4 are not mapped to host channel 0, the host corresponding to host channel 0 has no visibility or access to the data stored at redundancy groups 2, 3, and 4. See id.
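This behavior can be summarized in a short sketch that mirrors the Channel 0 mapping discussed above (the code itself is only an illustration and does not appear in the CRD-5500 references): mapped LUNs resolve to their redundancy groups, while a command addressed to an unmapped LUN is refused.

```python
# Host LUN -> redundancy group for Channel 0, per the mapping discussed above:
# LUNs 0 and 1 map to groups 0 and 1, LUN 4 maps to group 5, LUNs 6 through 31
# map to the groups of the same number, and LUNs 2, 3, and 5 are left unmapped.
CHANNEL_0_MAP = {0: 0, 1: 1, 4: 5, **{lun: lun for lun in range(6, 32)}}

def access(channel_map: dict, lun: int) -> int:
    """Return the redundancy group behind a mapped LUN, or deny the request."""
    if lun not in channel_map:
        raise PermissionError(f"access denied: LUN {lun} is not mapped for this channel")
    return channel_map[lun]

print(access(CHANNEL_0_MAP, 4))      # 5: redundancy group 5 reached via LUN 4
try:
    access(CHANNEL_0_MAP, 2)         # LUN 2 is unmapped for Channel 0
except PermissionError as err:
    print(err)
```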
(34.) As described by the CRD-5500 Data Sheet (“CRD-5500 DS”), although not yet described in the user’s manual, the CRD-5500 controller was designed for compatibility with storage devices and hosts connected to high-speed serial interfaces, such as Fibre Channel, by accepting FC-enabled host module cards and/or FC-enabled storage device module cards without modification to the CRD-5500 controller hardware. See Exh. 1004 at 1. In essence, in a particular example, the CRD-5500 controller, as initially implemented and released, was capable of handling the access speed demanded by a FC host using a FC interface module having on-board functionality for de-encapsulating FC-encapsulated SCSI commands for use by the CRD-5500 controller and for encapsulating SCSI responses in a FCP wrapper prior to forwarding the responses to the FC-enabled host.
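The bridging function of such an interface module can be sketched as a pair of wrap/unwrap steps; the frame fields below are hypothetical and highly simplified, and real FCP frames carry considerably more structure. The module strips the FC transport wrapper from an inbound SCSI command before handing it to the controller, and wraps the controller’s SCSI response before returning it to the FC host.

```python
from dataclasses import dataclass

@dataclass
class ScsiCommand:
    opcode: str      # e.g. "READ(10)"
    lun: int
    lba: int
    blocks: int

@dataclass
class FcFrame:
    source_id: int   # FC address of the sending port (simplified field set)
    dest_id: int     # FC address of the receiving port
    payload: object  # the encapsulated SCSI command or response

def de_encapsulate(frame: FcFrame) -> ScsiCommand:
    """Strip the FC wrapper; hand the bare SCSI command to the controller."""
    return frame.payload

def encapsulate(response: bytes, host_id: int, module_id: int) -> FcFrame:
    """Wrap the controller's SCSI response for transmission back to the FC host."""
    return FcFrame(source_id=module_id, dest_id=host_id, payload=response)

inbound = FcFrame(source_id=0x010200, dest_id=0x010300,
                  payload=ScsiCommand("READ(10)", lun=1, lba=4096, blocks=16))
command = de_encapsulate(inbound)                 # SCSI command for the controller
reply = encapsulate(b"<data blocks>", inbound.source_id, inbound.dest_id)
print(command)
print(reply.dest_id == 0x010200)                  # response addressed back to the host
```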
(35.) Further, in support of the goal of “allow[ing] you to easily add new interfaces or more powerful modules as they become available,” it was obvious to the design team of the CRD-5500 controller that they could add Fibre Channel host interface module cards capable of bridging between FC hosts and SCSI storage devices and vice versa. See id. at 2. Through the use of such FC-enabled host interface module cards and FC-enabled storage device interface module cards, the CRD-5500 controller would be configured to function as described in the independent claims of the ‘147 Patent, providing bridging of native low-level block protocol (NLLBP) commands between FC workstations and FC storage devices for virtual, access-controlled remote storage regions on a RAID partition.

b. Tachyon: A Gigabit Fibre Channel Protocol Chip to Smith et al. (“Smith”)
(36.) The Tachyon chip described by Smith was developed as a Fibre Channel (FC) controller chip for use in FC adapter modules for hosts and switches. See Exh. 1005 at 3. In a host interface module, the Tachyon chip includes support for de-encapsulation of Fibre Channel Protocol (FCP) encapsulated SCSI commands: the common NLLBP used for host-side access in the ‘147 patent. The Tachyon chip was anticipated for use in mass storage applications. See id. at 4. In fact, in the preferred embodiment, the ‘147 patent describes using the Tachyon controller chip for enabling the bridging capability of the storage router. See Exh. 1001 at 6:15-30.
(37.) The Tachyon chip receives a FCP packet containing a SCSI write command from a host device and buffers incoming data, discarding the Fibre Channel envelope. See Exh. 1005 at 6. The header structure resulting from the buffering operation in this direction would be a SCSI header (e.g., unpacked from the FC wrapper). Upon completion of receipt of the data, the Tachyon chip issues a notification to the host. See id.
(38.) In the event of a read command, the Tachyon chip generates a Tachyon header structure including FC-specific information. See id. at 5. Data read from a memory device, for example, would be received by the Tachyon, where the data is encapsulated in a FC header structure for transmission to a host device. See id. at 5-6.
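The two directions described in paragraphs (37) and (38) can be summarized in a brief sketch; the function and field names are hypothetical, and the actual chip operates on hardware queues and DMA, which are not modeled here. On a write, the chip discards the FC envelope, buffers the payload, and notifies the host when receipt is complete; on a read completion, it builds an FC header and returns the data wrapped in that header.

```python
def handle_fcp_write(frame: dict) -> str:
    """Inbound direction: a SCSI write arrives wrapped in an FC envelope."""
    payload = frame["payload"]       # discard the FC envelope, keep the data
    buffer = bytearray(payload)      # buffer the incoming data for the controller
    return f"notify host: {len(buffer)} bytes received"

def handle_read_completion(data: bytes, host_id: int) -> dict:
    """Outbound direction: data read from storage is wrapped for the FC host."""
    fc_header = {"dest_id": host_id, "type": "FCP_DATA"}   # FC-specific header
    return {"header": fc_header, "payload": data}

print(handle_fcp_write({"payload": b"block contents"}))
print(handle_read_completion(b"block contents", host_id=0x010200)["header"])
```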
c. The CRD-5500 References and Smith Combined

(39.) I have been asked to discuss whether a person of ordinary skill in the art would have combined the teachings of the CRD-5500 references and the Smith paper. A storage engineer at the time of the ‘147 Patent would have had strong motivation to combine the CRD-5500 controller and the Tachyon chip functionality when designing FC host support into a FC host interface module to install in a slot of the CRD-5500 controller, and FC storage device support into a FC storage device interface module to install in a slot of the CRD-5500 controller. At the time of the ‘147 patent priority date, FC was a hot and promising technology, increasingly used due to the expanded distances achieved. Legacy storage system deployments, on the other hand, were overwhelmingly SCSI based at the time. For example, to build new FC-based technology into the pre-existing storage, a storage engineer would consider designing a FC host interface module, as suggested by the CRD-5500 Data Sheet. See Exh. 1004 at 1. In this manner, the storage network could be extended to FC hosts and/or FC storage devices, allowing the latest access medium to be used to access an established storage device system. For example, as discussed in the Compaq Technology Brief: Strategic Direction for Compaq Fibre Channel-Attached Storage, October 1997, “Fibre Channel [was] a key technology for high-speed storage interconnect (that is, processor-to-storage and storage-to-storage communications),” in part because it may be used “to support multiple physical interface alternatives” as well as “to provide a practical and inexpensive means for high-speed transfer of large amounts of data.” See Compaq Technology Brief at 3-4. In the technology brief, Compaq asserts that “Fibre Channel is overall the best serial interconnect technology for high-performance, high-availability external storage.” Id. at 8. Before the earliest priority date of the ‘041 Patent, FC hard disk drives existed on the market, and more models were being developed to support market demand. See Compaq Technology Brief at 10; Internet Archive capture of http://www.storagepath.com/fibre.htm dated January 14, 1997. In a particular example, at least as of January 1997, Storagepath carried the SP-8BFC Series Fibre Channel Tower Mass Storage System with hot-pluggable FC-AL magnetic drives. See Internet Archive capture of http://www.storagepath.com/fibre.htm at 1. An illustration of the combined system, based on Figure 1-2 of the CRD-5500 User Manual, is provided below. See CRD-5500 User Manual at 1-3. Because FC as used in the ‘147 patent simply encapsulates the SCSI command set, LUN-based mapping could have been translated to a FC host system.

[Illustration of the combined system, based on Figure 1-2 of the CRD-5500 User Manual at 1-3]
(40.) Because of the increasing demand and the marketing value derived from support of the faster, longer-distance transport medium, a skilled artisan would be encouraged to build the functionality into the CRD-5500. As discussed by Smith, the FC technology could lend support to both networking and mass storage systems, pointing towards widespread adoption throughout telecommunications and storage networks. See Exh. 1005 at 3. For example, Smith suggests that Tachyon support should be built into edge connections (e.g., edge switches and/or routers) across various styles of networks (e.g., WAN, LAN, etc.) as well as into end points, such as storage systems, servers, and client devices. See id.

(41.) The Tachyon encapsulation and de-encapsulation features would be used to communicate between the CRD-5500 controller and multiple FC hosts and FC storage devices. For example, as discussed in the CRD-5500 User’s Manual, the CRD-5500 includes up to four slots designated as potential host channels. See Exh. 1003 at 1-1.
(42.) Functionally, the combined CRD-5500 controller supports communications between the host devices and the storage devices in the following manner. A read request may be initiated by a host device on a FC channel. Because the host is transmitting the command via FC, a FC controller within the host encapsulates
