`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`____________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________
`
`ORACLE CORPORATION, NETAPP INC. and
`HUAWEI TECHNOLOGIES CO., LTD.,
`Petitioners,
`
`v.
`
`CROSSROADS SYSTEMS, INC.
`Patent Owner.
`
`____________
`
`Case IPR2014-01209
`Case IPR2014-01207
`Patent No. 7,051,147
`____________
`
`
`
`DECLARATION OF DR. JOHN LEVY, PH.D.
`
`
`1 of 153
`
` CROSSROADS EXHIBIT 2053
`Oracle Corp., et al v. Crossroads Systems, Inc.
IPR2014-01207 and IPR2014-01209
`
`
`I, John Levy, make the following declaration based on my personal
`
`knowledge and, if called to testify before the Patent Trial and Appeal Board, could
`
`and would testify as follows:
`
I. INTRODUCTION

1. I have been retained in connection with inter partes review
`
`proceedings IPR2014-01207 and IPR2014-01209, which concern United States
`
`Patent No. 7,051,147 (the “’147 Patent”). This declaration contains my expert
`
`opinions concerning the ’147 Patent, the petitions in these proceedings (the “1207
`
`Petition” and “1209 Petition”, respectively), the prior art identified therein, and the
`
`facts alleged to support these Petitions. I have been asked to evaluate and render
`
`an opinion concerning the grounds of unpatentability on which the present inter
`
partes reviews have been instituted.
`
2. It is my understanding that the Patent Trial and Appeal Board (the
`
`“Board”) instituted the present inter partes review on the following alleged
`
`grounds of unpatentability:
`
`A. Claims 1, 2, 4, 10, 11, and 13 under 35 U.S.C. §
`103(a) for obviousness over Bergsten and Hirai
`
`B. Claim 5 under 35 U.S.C. § 103(a) for obviousness
`over Bergsten, Hirai, and Smith
`
C. Claims 1, 2, 4, 10, 11, and 13 under 35 U.S.C. § 103(a)
`for obviousness over Kikuchi and Bergsten
`
`
`D. Claim 5 under 35 U.S.C. § 103(a) for obviousness
`over Kikuchi, Bergsten, and Smith
`
`E. Claims 14-39 under 35 U.S.C. § 103(a) for
`obviousness over CRD-5500 User’s Manual, CRD-
`5500 Data Sheet, and Smith
`
`F. Claims 14-39 under 35 U.S.C. § 103(a) for
`obviousness over Kikuchi and Smith
`
`G. Claims 14-39 under 35 U.S.C. § 103(a) for
`obviousness over Bergsten and Hirai
`
`II. QUALIFICATIONS AND COMPENSATION
`A. Background and Experience
`
3. I am the sole proprietor of John Levy Consulting, a consulting firm that specializes in managing the development of high-tech products,
`
`including computers and software. I have a Bachelor of Engineering Physics
`
`degree from Cornell University, a Master of Science degree in Electrical
`
`Engineering from California Institute of Technology, and a Ph.D. in Computer
`
`Science from Stanford University.
`
4. From 1965 to 1966, at Caltech, my field of study was information
`
`processing systems. My coursework included systems programming, including the
`
`construction of compilers and assemblers. From 1966 to 1972, during my graduate
`
`study at Stanford, my field of study was computer architecture and operating
`
`systems. My coursework included computer systems design, programming and
`
`
`operating systems. During my employment at Stanford Linear Accelerator Center
`
`while I was a graduate student at Stanford University, I was a programmer and I
`
`participated in the design and implementation of a real-time operating system for
`
`use in data acquisition, storage and display. My Ph.D. thesis research related to
`
`computer systems organization and programming of multi-processor computers. I
`
`developed and measured the performance of several parallel programs on a
`
`simulated 16-processor system. I also studied file systems, disk and tape storage
`
`subsystems, and input/output.
`
5. I have been an employee and a consultant for over thirty years in the
`
`computer systems, software and storage industry. After earning my doctorate from
`
`Stanford University in Computer Science, I worked as an engineer at a number of
`
`leading companies in the computer industry, including Digital Equipment
`
`Corporation, Tandem Computer, Inc., Apple Computer, Inc., and Quantum
`
`Corporation.
`
6. From 1972 to 1974, at Digital Equipment Corporation, I supervised
`
`the development of an input/output channel for high-speed mass storage (disk,
`
`drum and tape), and its implementation for 7 different peripheral units and 3
`
`different computer systems. From 1974 to 1975 I was project engineer leading the
`
`development of a new computer system. From 1975 to 1976, I supervised an
`
`operating system development group. During this time, I reviewed design changes
`
`
`and bug reports and fixes for two operating systems. While working for Digital
`
`Equipment Corporation, I wrote a long-term strategic plan for input/output buses
`
`and controllers and operating systems, including the conversion of most I/O buses
`
`to serial implementations. I am the author of a chapter on computer bus design in
`
`the book Computer Engineering, published in 1978 by Digital Press.
`
7. From 1977 to 1979, I was employed at Tandem Computer, Inc., where
`
`I worked on design of future multiprocessor systems. I also worked on problems
`
`related to distributed (networked) systems including rollback and recovery of
`
`distributed databases.
`
8. From 1979 to 1982, I was employed at Apple Computer, Inc., where I
`
`worked on the design of a new computer system, the Lisa, which was a precursor
`
`to the Macintosh. I also supervised hardware and software engineers in the
`
`development of a new local area network.
`
9. In 1980-81, I taught an upper-division course at San Francisco State
`
`University titled “Input/Output Architecture” which dealt with design of I/O
`
`channels, controllers, storage devices and their associated software.
`
`10. From 1982 to 1992, I consulted for a variety of client companies,
`
`including Apple Computer, Quantum Corporation and Ricoh Co., Ltd., on project
`
`management and product development. Consulting work for Quantum included
`
`working as a temporary supervisor of a firmware development team for a new hard
`
`
`disk drive. During this time I co-authored a paper, cited in my attached CV, on the
`
`design of a file system for write-once optical disk drives, related to work I did for
`
`client Ricoh.
`
`11. From 1993 to 1998, I was employed at Quantum Corporation, a
`
`manufacturer of hard disk drives, where I formed and managed a new group called
`
`Systems Engineering. While in this role I managed, among others, software and
`
`systems engineers who developed hard disk input/output drivers for personal
`
`computers and disk drive performance analysis and simulation software. While at
`
`Quantum, I also led the definition and implementation of high-speed improvements
`
`of the ATA disk interface standard, called Ultra-ATA/33 and /66, which also led to
`
`improvements in the SCSI interface standard. I was also involved in the design of
`
`file systems for hard disks, data compression schemes for disk data, and Ethernet-
`
`connected disk drives. I was Quantum’s representative to the Audio/Video
`
`Working Group of the 1394 (FireWire) Trade Association, a Consumer Electronics
`
`industry standards group, and participated in Quantum’s work in designing disks
`
`that could record and play back video and audio streams without needing an
`
`intervening computer system.
`
`12. My qualifications for forming the opinions set forth in this report are
`
`listed in this section and in Appendix A attached, which is my curriculum vitae.
`
`Appendix A also includes a list of my publications.
`
`
13. I am a named inventor on seven United States patents, including
`
`several related to input/output buses and storage subsystems. I have been disclosed
`
`as an expert in over 50 cases and have testified at trial and in depositions. A list of
`
`my testimony is attached hereto as Appendix B. I also have served as a technical
`
`advisor to two United States District Court Judges.
`
14. I regularly teach courses such as “Computers – the Inside Story” and
`
`“The Digital Revolution in the Home” at the Fromm Institute for Lifelong
`
`Learning at the University of San Francisco.
`
`B. Compensation
`
15. I base my opinions below on my professional training and experience
`
`and my review of documents and materials produced in this litigation. My
`
`compensation for this assignment is $575 per hour. My compensation is not
`
`dependent on the substance of my opinions or my testimony or the outcome of the
`
`above-captioned case.
`
III. INFORMATION CONSIDERED IN FORMING OPINION

16. I have read the ’147 Patent and its prosecution history. I have read
`
`the Petition, its exhibits, the Patent Owner’s Preliminary Response to the Petition,
`
`and its exhibits. I have also relied on my own knowledge and experience as well
`
`as published documents in forming my opinions. A list of materials considered in
`
`forming my opinions is attached as Appendix C.
`
`
`IV. ORDINARY SKILL IN THE ART
17. It is my understanding that a person of ordinary skill in the art at the
`
`time of the invention is presumed to know the relevant art and would be capable of
`
`understanding the scientific and engineering principles applicable to the pertinent
`
`art. This person of ordinary skill in the art can perform routine tasks in the relevant
`
`field with a reasonable likelihood of success.
`
`18. Based on my experience, a person of ordinary skill in the art in the
`
`context of the patent under review would have a B.S. in electrical engineering or
`
`equivalent and at least three years of experience in design of computer data storage
`
`and networks, or an M.S. or Ph.D. in electrical engineering or the equivalent, and
`
`at least one year of experience in the design of computer data storage and
`
`networks. Unless otherwise stated, my testimony below refers to the knowledge of
`
`one of ordinary skill in the art.
`
`V. TECHNICAL BACKGROUND
A. Small Computer System Interface (SCSI)
`
`19. SCSI, which stands for Small Computer System Interface, is a
`
`standard input/output bus for interconnecting computers and peripheral devices.
`
`Although the term “SCSI” is often used without qualification, there are three
`
`versions of SCSI: SCSI-1, SCSI-2, and SCSI-3. Ex. 2047, SCSI-3 Architecture
`
`Model at 20 (identifying the three different SCSI standards). According to the
`
`
`SCSI-2 standard, “The SCSI protocol is designed to provide an efficient peer-to-
`
`peer I/O bus with up to 16 devices, including one or more hosts.” Ex. 2037, SCSI-
`
`2 standard, at 3. It also describes SCSI as a “local I/O bus.” Id. at 34. SCSI
`
`communication takes place between “initiators” that request performance of
`
`operations by “targets.” Id. at 31, 33. For example, a host computer might act as
`
`SCSI initiator that requests data from a target SCSI disk device on the same SCSI
`
`bus. The SCSI interface protocol provides for the connection of multiple initiators
`
`and multiple targets on a single bus. Id. at 34.
`
`20. While in practice it was unusual to have multiple SCSI initiators on a
`
`bus, it was certainly possible to do so. “Communication on the SCSI bus is allowed
`
`between only two SCSI devices at any given time. . . . There can be any
`
`combination of initiators and targets provided there is at least one of each.” Ex.
`
`2037 at 58.
`
`
`Id. at 59. “When two SCSI devices communicate on the SCSI bus, one acts as an
`
`initiator and the other acts as a target. The initiator originates an operation and the
`
`target performs the operation.” Id. at 58.
`
`21. Each device on a SCSI bus (targets and initiators) has a unique SCSI
`
`ID on that particular bus. If a device is attached to multiple buses, it can have the
`
`same or different SCSI IDs on each of those buses. See, e.g., CRD-5500 Manual,
`
`Ex. 1003 at 4-3.
`
`
`
22. “Each target has one or more logical units, beginning with logical
`
`unit zero.” Ex. 2037 at 112. A logical unit is a virtual or physical peripheral
`
`device individually addressable at the target. Id. at 32. A logical unit number
`
`(LUN) is used to identify a logical unit at a target. Id.
`
23. In SCSI-2, an initiator identifies the devices on the SCSI bus “by
`
`querying each SCSI ID and LUN” using the INQUIRY command. Id. at 112. If
`
`both the initiator and the target support the SCSI-3 command REPORT LUNS, the
`
`initiator can determine the supported LUNs at the target by sending a REPORT
`
`LUNS command to the target. See Ex. 2063, SCSI-3 Primary Commands, at 83.
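For illustration, the discovery procedure described in this paragraph can be modeled in a short sketch. The `inquiry` callback below is hypothetical (standing in for the SCSI INQUIRY command and not part of any cited standard); the loop bounds reflect an 8-bit SCSI bus with SCSI IDs 0-7 and LUNs 0-7.

```python
def discover_devices(inquiry, max_id=7, max_lun=7):
    """Model of SCSI-2 discovery: query every SCSI ID and LUN with INQUIRY.

    `inquiry` is a hypothetical callback standing in for the INQUIRY
    command; it returns device data, or None if no logical unit responds.
    """
    found = {}
    for scsi_id in range(max_id + 1):      # 8-bit SCSI bus: IDs 0 through 7
        for lun in range(max_lun + 1):     # LUNs 0 through 7 at each target
            data = inquiry(scsi_id, lun)
            if data is not None:
                found[(scsi_id, lun)] = data
    return found

# Usage: a bus with one disk at SCSI ID 3, LUN 0
devices = discover_devices(lambda i, l: "disk" if (i, l) == (3, 0) else None)
```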
`
`24. Communication on a SCSI bus involves several phases and the “SCSI
`
`bus can never be in more than one phase at any given time.” Ex. 2037 at 67. The
`
`initial phase is called the BUS FREE phase, which indicates that no device is using
`
`the SCSI bus at that moment. Id. The next phase is called ARBITRATION during
`
which it is determined which SCSI device will next gain control of the SCSI
`
`bus. Id. at 68. “Arbitration is defined to permit multiple initiators and to permit
`
`concurrent I/O operations.” Id. at 34. In the event that multiple devices attempt to
`
`control the SCSI bus at the same time, the device with the highest priority SCSI
`
`ID—with SCSI ID 7 having the highest priority—wins the arbitration. Id.
`
`25. After an initiator has gained control of the SCSI bus, the SELECTION
`
`Phase begins, during which an initiator selects a target for communication. Id.
`
`
`The SELECTION phase is generally followed by the information transfer phases
`
`(COMMAND, DATA, STATUS, and MESSAGE). Id. at 69. The information
`
`transfer phases perform the actual communication between the initiator and target.
`
`Once the SELECTION phase has completed, communication will occur only
`
`between the initiator and the selected target until the next BUS FREE phase.
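The priority rule described above (the highest SCSI ID wins arbitration) reduces to a one-line comparison; the sketch below models only the selection logic on an 8-bit bus, not the bus signaling itself:

```python
def arbitrate(requesting_ids):
    """Model of the SCSI ARBITRATION phase outcome: among the devices
    asserting their IDs, the one with the highest-priority SCSI ID wins.
    On an 8-bit SCSI bus, ID 7 has the highest priority and ID 0 the
    lowest, so the winner is simply the maximum requesting ID."""
    if not requesting_ids:
        return None  # no device arbitrating; the bus stays in BUS FREE
    return max(requesting_ids)
```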
`
B. Fibre Channel (FC)
`
`26. Fibre Channel (FC) is a standardized interface used for network and
`
`peripheral communications. FC uses a layered protocol that can be used to
`
`transport arbitrary types of data. Ex. 2064, Fibre Channel Physical and Signaling
`
`Interface (FC-PH), at 33. As one example, Fibre Channel can be used to transport
`
`network file system protocols. See Ex. 1030 at 8 (illustrating a protocol stack for a
`
`Network File System (NFS) using Fibre Channel).
`
`27. The Fibre Channel Standard protocol is organized in 5 layers,
`
`designated FC-0 to FC-4. Ex. 2064, FC-PH, at 33. “FC-0 defines the physical
`
`portions of the Fibre Channel including the fibre, connectors, and optical and
`
`electrical parameters for a variety of data rates and physical media.” Id. “FC-1
`
`defines the transmission protocol which includes the serial encoding, decoding,
`
`and error control.” Id. “FC-2 defines the signaling protocol which includes the
`
`frame structure and byte sequences.” Id. “FC-3 defines a set of services which are
`
`common across multiple ports of a node.” Id. FC-4 is the highest level in the
`
`
`Fibre Channel standards set and defines a mapping between Fibre Channel and the
`
`protocols it encapsulates. Id.
`
`28. Data communication using the Fibre Channel Standard is organized
`
into exchanges. Ex. 2064, FC-PH, at 59-60. An exchange takes place between
`
`an originator and a responder, and comprises one or more related sequences. Id.
`
Each sequence comprises one or more related data frames. A Fibre Channel

frame is the basic unit of data transfer and consists of bytes. Id. at 124.
`
`FC-2 layer handles exchanges, sequences, and frames. Id. at 52.
`
`29. Below is an illustration of a Fibre Channel frame.
`
`
`
`Ex. 2064, FC-PH, at 124. A Fibre Channel frame begins with a Start_of_Frame
`
`(SOF) delimiter. Id. Immediately following the SOF is the Fibre Channel Frame
`
`Header (depicted below). Id. at 125; see also Ex. 2062, FCP, at 26-27. In the
`
`absence of an optional header, the Frame Header is followed by the Payload, which
`
`represents the data that is being transmitted within the Fibre Channel frame. The
`
`
`Payload is followed by a Cyclic Redundancy Check (CRC), which is used to detect
`
transmission errors. Id. at 125. The CRC is followed by the End_of_Frame (EOF)
`
`delimiter, which marks the end of the frame. Id. at 125-26.
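The frame layout just described can be sketched as a simple data structure. Field widths follow the description above (4-byte SOF and EOF delimiters, a 24-byte Frame Header, and a 4-byte CRC); the class models layout only, not transmission encoding, and omits the optional header:

```python
from dataclasses import dataclass

@dataclass
class FibreChannelFrame:
    """Sketch of the Fibre Channel frame layout: SOF delimiter, frame
    header, payload, CRC, and EOF delimiter, in that order."""
    sof: bytes      # Start_of_Frame delimiter (4 bytes)
    header: bytes   # Frame Header (24 bytes)
    payload: bytes  # data carried within the frame
    crc: bytes      # Cyclic Redundancy Check (4 bytes)
    eof: bytes      # End_of_Frame delimiter (4 bytes)

    def serialize(self) -> bytes:
        """Concatenate the fields in on-the-wire order."""
        return self.sof + self.header + self.payload + self.crc + self.eof
```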
`
C. Fibre Channel Protocol for SCSI (FCP)
`
`30. Fibre Channel Protocol for SCSI (FCP) defines a Fibre Channel
`
`mapping layer (FC-4) to transmit SCSI command, data, and status information
`
`using standard Fibre Channel frames. Ex. 2062 at 9. FCP maps SCSI commands
`
`into Fibre Channel information units. An Information Unit is an “organized
`
`collection of data specified by FC-4 to be transferred as a single Sequence by FC-
`
`2.” Ex. 2064, FC-PH, at 40, 432.
`
`31. One category of FCP Information Unit is named FCP_CMND1 and is
`
`used to transport SCSI Commands in the payload of a Fibre Channel frame. The
`
`format of an FCP_CMND payload of a Fibre Channel frame is illustrated below.
`
`See ¶ 29, above (describing a Fibre Channel frame).
`
`
`1 The ’147 Patent refers to FCP_CMD instead of FCP_CMND. Ex. 1001, 8:1-2.
`
`A person of ordinary skill in the art would understand that this was a typographical
`
`error.
`
`
`
`
`Ex. 2062 at 38. “The FCP logical unit number is the address of the desired logical
`
`unit in the attached subsystem.” Id. The FCP_CNTL field contains various control
`
`flags. Id. at 39-43. The FCP_CDB field contains the command descriptor block
`
`(CDB) for a SCSI command. Id. at 43 (“The FCP_CDB field contains the actual
`
`CDB to be interpreted by the addressed logical unit.”). An example of a SCSI
`
`command is illustrated below.
`
Ex. 2061 at 56 (illustrating the SCSI-3 READ(10) command). In the READ(10)
`
`CDB, the logical block address (LBA) refers to the starting block that the initiator
`
is requesting and the transfer length (TL) refers to the number of blocks to be transferred.
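As an illustration of the READ(10) CDB just described, the sketch below packs the operation code (0x28), the four-byte logical block address, and the two-byte transfer length into a ten-byte CDB; the flag, reserved, and control bytes are simply zeroed here:

```python
import struct

def build_read10_cdb(lba: int, transfer_length: int) -> bytes:
    """Build a 10-byte SCSI READ(10) command descriptor block (CDB).

    Byte 0 is the READ(10) operation code (0x28); bytes 2-5 carry the
    32-bit logical block address (LBA) and bytes 7-8 the 16-bit transfer
    length (TL), both big-endian. Flag, reserved, and control bytes are
    left zero in this sketch.
    """
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, transfer_length, 0x00)
```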
`
`
`
`D. Data Storage and Retrieval
`
`32. Typically, the storage medium on a disk storage device is divided into
`
`a number of storage units called blocks. The location of a block of storage is often
`
`referenced by a block number (e.g., the logical block address discussed above).
`
`The size of a disk storage block may vary across systems or disk storage devices.
`
The size of a block is usually chosen based on physical parameters of the storage
`
`medium or based on overhead concerns of the underlying system. For example,
`
`the smaller the block size, the greater the number of addresses that need to be
`
`tracked. On the other hand, larger block sizes result in wasted space whenever the
`
`amount of data to be stored is less than the size of a single storage block.
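This trade-off can be illustrated with simple arithmetic; the file and block sizes below are arbitrary examples:

```python
def storage_overhead(file_size: int, block_size: int):
    """Illustrate the block-size trade-off: a file occupies a whole number
    of blocks, so part of the last block is wasted, while the number of
    block addresses to track grows as blocks shrink."""
    blocks_needed = -(-file_size // block_size)   # ceiling division
    wasted = blocks_needed * block_size - file_size
    return blocks_needed, wasted

# A 5,000-byte file in 512-byte blocks needs 10 block addresses and
# wastes 120 bytes; in 4,096-byte blocks it needs only 2 addresses but
# wastes 3,192 bytes.
```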
`
`33. Files are named collections of data. Files are an abstraction
`
`implemented by an operating system. The data of a file is generally stored on a
`
`disk in one or more storage blocks. The data is retrieved when the data is needed
`
`by a user of the computer. The way in which files are stored on a disk is managed
`
`by an operating system component called a file system. Information about files,
`
`called metadata, is also stored in storage blocks on a disk. Usually, the file system
`
`hides from the user the details about how and where files are stored on a disk. Ex.
`
`2057, Tanenbaum, Modern Operating Systems (1st ed. 1992) Chapters 4-5, at 14;
`
`Ex. 2056, Tanenbaum, Modern Operating Systems (3rd ed. 2009), at 254.
`
`
34. “[T]he most important issue in implementing file storage is keeping
`
`track of which disk blocks go with which file.” Id. at 272. One method of tracking
`
`which blocks contain the data for a file uses a special “data structure called an i-
`
node (index-node), which lists the attributes and disk addresses of the file’s blocks.”
`
`Id. at 277. I-nodes are typically associated with the Unix operating system and its
`
`derivatives, such as Linux. Other operating systems have equivalent structures for
`
`storing metadata associated with a file. An example i-node is shown below.
`
`
`
`Id. at 277. In the above Figure 4-13, “Address of disk block 0” refers to the logical
`
`block address on disk of the first storage block in which user data is stored for that
`
`file. File attributes for the file are stored in the first portion of the i-node (shown as
`
`“File Attributes” in Figure 4-13) and the remaining portions of the i-node are used
`
to locate the other blocks containing the file’s data. The addresses of disk
`
`
`blocks in the above Figure 4-13 may reference any storage location on the storage
`
`device. The disk blocks for a file may or may not be near each other and their
`
`addresses in the i-node may not necessarily be in the order that they occur on the
`
`storage device. Each file block has its own address because each is in a different
`
`location on the storage device.
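A minimal model of the i-node structure described above, assuming fixed-size blocks and ignoring the indirect-block levels shown at the bottom of Figure 4-13:

```python
from dataclasses import dataclass, field

@dataclass
class INode:
    """Sketch of an i-node: file attributes plus an ordered list of disk
    block addresses. The addresses need not be contiguous or in on-disk
    order; entry k locates the file's k-th block."""
    attributes: dict = field(default_factory=dict)       # owner, size, times, ...
    block_addresses: list = field(default_factory=list)  # disk block addresses

    def block_for_offset(self, offset: int, block_size: int) -> int:
        """Map a byte offset within the file to the disk block holding it."""
        return self.block_addresses[offset // block_size]
```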
`
`35. A user is not generally concerned with the details of locating a file’s
`
`blocks, only with finding the file and then using the file’s data. Organization of a
`
`user’s files is typically managed through the use of directories. A directory is
`
`typically implemented as a file, but such directory files are special because they
`
`belong to the file system and contain information that the file system uses to locate
`
`a requested file. A directory contains names of files and possibly names of
`
`directories as well. Depending on the implementation, a directory file may also
`
`include attributes of files listed in the directory. In the example above with i-
`
`nodes, each directory entry need only include the name of a file and a pointer to the
`
`file’s i-node. Attributes of the file other than its name are stored in its i-node. The
`
`figure below illustrates this concept.
`
`
`
`
`See Ex. 2056 at 278. In this figure, the directory entry for the “games” file (or
`
`directory) includes a pointer to an i-node, which contains the file’s metadata. Id. at
`
`281 (Fig. 4-16).
`
`Before a file can be read or written, it must first be opened. When the operating
`
`system opens a file, it locates the blocks corresponding to a file’s data by
`
`referencing an i-node. Users generally find a file by finding the name of the file in
`
`a directory, which as explained above is a special type of file. When a file is
`
`created, its name is added to a directory. Id. at 278.
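The i-node-based directory scheme described above can be sketched as a simple mapping from file names to i-node numbers; the names and numbers below are arbitrary examples:

```python
# A directory maps file names to i-node numbers; all other metadata
# (attributes, block addresses) lives in the i-node itself.
directory = {
    "games": 16,   # name -> i-node number (illustrative values)
    "mail": 81,
    "news": 40,
}

def lookup(directory, name):
    """Resolve a file name to its i-node number, as the file system does
    when a file is opened; raises KeyError if the name is absent."""
    return directory[name]
```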
`
`36. As explained in the ’147 Patent, file operations require translation into
`
`low-level block protocol operations on a disk in order to actually move data to and
`
`from the disk. In typical operating systems, when accessing a file stored on local
`
`storage, the file system invokes a low-level disk driver (a software component in
`
`the operating system) to accomplish the actual data movement. For example,
`
`receiving a file-level Read request from a user program, the file system would refer
`
`to the file’s i-node to find locations on disk of the file’s data blocks, and then
`
`
`translate that file Read request into block-level (e.g., SCSI) Read commands for
`
`the identified blocks. The commands are then transmitted to the disk over the SCSI
`
`interface by a low-level (e.g., SCSI) disk driver module. See, e.g., Ex. 2056 at 346
`
`(I/O Software Layers).
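The translation described in this paragraph can be sketched as follows. The function is a simplified model that maps a file-level Read (a byte offset and count) to the disk block addresses listed in the file's i-node, each of which would become a block-level Read command; partial-block bookkeeping is omitted:

```python
def file_read_to_block_reads(inode_blocks, offset, length, block_size):
    """Translate a file-level Read into the disk block addresses that the
    low-level driver must read, using the i-node's ordered list of disk
    block addresses."""
    first = offset // block_size                     # first file block touched
    last = (offset + length - 1) // block_size       # last file block touched
    return [inode_blocks[i] for i in range(first, last + 1)]

# A 1,000-byte read starting at byte 600 of a file whose i-node lists
# disk blocks [2, 3, 4, 5, 9] (512-byte blocks) touches file blocks 1-3.
```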
`
`37. The file system writes a file’s new data to disk in three phases: (a)
`
`selecting a free storage block, (b) writing data to that block, and then (c) adding the
`
`address of the block to the file’s i-node. File systems maintain data describing the
`
`location of free data blocks. Ex. 2057 at 171, 307-08. As blocks are used by a file,
`
`the addresses of free blocks are taken off the list of free blocks and added to the list
`
`of addresses in the file’s i-node. As files are deleted or truncated, references to
`
`storage blocks that are no longer in use are removed from a file’s i-node and added
`
`to the list of free blocks. The availability of a file system block is dynamic in
`
`nature. Therefore, a file may not be stored in an ordered and contiguous collection
`
`of blocks within a file system. In the following example, File A uses Blocks 2, 3,
`
`4, 5 and 9 in that order. File B uses disk blocks 6, 7, 1, 11 and 12 in that order.
`
`Blocks 8 and 15 are used by files other than File A and File B. Blocks 0, 13, and
`
`14 are unused blocks (i.e., free).
`
`
`
`
`Figure A
`In this example, it is possible that at the moment File A grew to the point of
`
`requiring a 5th block, file system blocks 6, 7, and 8 were unavailable. Therefore,
`
`File A’s 5th block was written to file system block 9.
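The Figure A scenario can be reproduced with a minimal free-list allocator; the ordering of the free list below is an illustrative assumption:

```python
def allocate_block(free_list):
    """Take the next free block off the free list, as described above."""
    return free_list.pop(0)

# Reproducing the Figure A scenario: when File A needs its 5th block,
# blocks 6, 7, and 8 are already in use, so the allocator hands out
# block 9 and File A's blocks end up non-contiguous.
free_list = [9, 0, 13, 14]       # illustrative ordering of free blocks
file_a_blocks = [2, 3, 4, 5]     # File A's first four blocks
file_a_blocks.append(allocate_block(free_list))
```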
`
`
`VI. THE ’147 PATENT
`A. Background of the Invention
`
`38. As described in the background of the ’147 Patent, computers access
`
`local storage devices using a native low level block protocol (“NLLBP”). The
`
`Patents-In-Suit describe an NLLBP:
`
`
`
`Ex. 1001, 1:49-54. Thus, NLLBPs allow for simple and direct access to local
`
`storage in a fast and efficient manner.
`
`39.
`
`In the 1997 time frame, NLLBP requests were typically sent to local
`
`storage over parallel buses like SCSI, for example. Such parallel buses could not
`
carry information very far and could not provide large
`
`distance separations (e.g., in excess of 10 kilometers as described in the patent
`
`under review). Ex. 1001, 2:40-42.
`
`40. At the time of the ’147 Patent, modern computer systems needed
`
`networks that connected multiple computers to multiple remote storage devices
`
`over larger distances that parallel buses could not support. To solve this problem,
`
`modern networks use a serial network transport medium (e.g., a Fibre Channel
`
`transport medium) which can carry information over much longer distances than
`
parallel buses.
`
`41. Before Crossroads’ invention of the ’147 Patent, a network server
`
`(also referred to as a network file server) was the basic way networked computers
`
`achieved remote storage. A network file server connects to multiple computers by
`
`a serial network transport medium and to storage devices using a storage transport
`
`medium, such as a SCSI bus. Figure 1 of the patent under review, reproduced
`
`below, illustrates one example of such a system:
`
`
`42. The network file server “provides file system structure, access control,
`
`and other miscellaneous capabilities that include the network interface.” Ex. 1001,
`
`1:54-58. The ’147 Patent describes that the workstations use a “network protocol”
`
`to access data through the network server:
`
`
`
`
`Ex. 1001, 1:56-63. Thus, the workstation would translate its file system protocols
`
`into a “network protocol” which the network server must then translate into low
`
`level requests to the storage device.
`
`43. The ’147 Patent describes what is translated by the network server to
`
`provide network access by workstations.
`
`
`
`Ex. 1001, 3:30-36. A person of ordinary skill in the art at the time of filing would
`
`understand that a workstation with access to server storage must translate its file
`
`system requests into a “network protocol” that includes a high level file system
`
`protocol. And it is the requests in high level file system protocols received from the
`
`workstations that are translated into low level block protocol requests to access
`
`data on storage devices.
`
`44. One example of a protocol that provided remote file access was called
`
`Network File System (NFS). Ex. 2048 at 5. Using Figure 1 of the ’147 Patent as
`
`an example, when a program in a workstation 12 wants to access remote files (files
`
`whose data is stored on remote storage devices), a file system program in the
`
`
`workstation 12 translates its local file system requests to, for example, NFS
`
`commands to be sent to the network server 14 to request a file action. Id. at 20.
`
`These NFS commands invoke software procedures in the network server 14 using
`
`Remote Procedure Calls (RPCs), which are encapsulated and transported over
`
`network transport medium 16. The network server 14 de-encapsulates the RPCs,
`
`then invokes software procedures associated with the received NFS commands to
`
`perform file access on behalf of the workstation’s users. The network server 14
`
`maintains one or more file systems on storage devices 20, which it uses to access
`
`the requested files. The network server 14 accesses the storage devices 20 using
`
`NLLBP.
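The two translation steps described in this paragraph can be sketched schematically. The message formats below are hypothetical stand-ins (not actual NFS or RPC encodings) meant only to show that the workstation first produces a network file protocol request and the server then produces NLLBP block commands:

```python
def workstation_file_request(filename, offset, length):
    """Step 1 (workstation): translate a local file system request into a
    network file protocol message (an NFS-style read, schematically)."""
    return {"op": "READ", "file": filename, "offset": offset, "count": length}

def server_translate(nfs_request, file_block_map, block_size=512):
    """Step 2 (network server): translate the network protocol message
    into NLLBP block commands using the file's list of disk blocks."""
    first = nfs_request["offset"] // block_size
    last = (nfs_request["offset"] + nfs_request["count"] - 1) // block_size
    blocks = file_block_map[nfs_request["file"]][first:last + 1]
    return [("READ_BLOCK", addr) for addr in blocks]
```

Each step costs time, which is the bottleneck the following paragraph describes.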
`
`45. A person of ordinary skill in the art would thus understand that the
`
`network server system described in Background of the Invention and Figure 1 of
`
`the Patents-In-Suit describes a system in which a workstation 12 translates its file
`
`system requests into high level file system protocols, such as NFS commands, sent
`
`to server 14; and it is the high level file system protocols (e.g., the high level
`
`network file system requests in NFS/RPC form) from workstation 12 that are
`
`translated by network server 14 into low level block protocol requests to access
`
`storage.
`
`46. Because it takes the computer time to create a high level network
`
`protocol containing a file system request, and it takes the network server time to
`
`
`construct an NLLBP from that network protocol (which the server needs to do in
`
`order to communicate with the storage device), the network server created a
`
`bottleneck, slowing down access to remote storage. Ex. 1001, 1:61-67, 3:29-38.
`
B. The Claims
`
`47. The claimed inventions are drawn to a device called a “storage router”
`
`and methods for providing virtual local storage on remote storage devices, while
`
`providing access controls according to a map and allowing access using NLLBP.
`
`The term “storage router” did not exist in the industry until the Crossroads
`
`invention.
`
48. It is my understanding that, in an inter partes review proceeding,
`
`claim terms are given their broadest reasonable interpretation consistent with the
`
`specification, and that claim language should be read in light of the specification as
`
`it would be interpreted by one of ordinary skill in the art.
`
49. It is my understanding that the Board did not adopt any constructions
`
`in making its decision to institute the present proceedings. Table 1 below includes
`
`related claim constructions proposed by Petitioners and Patent Owner in the co-
`
`pending Litigation.
`
`
Table 1

Remote
Patent Owner: Indirectly connected through at least one serial network transport medium.
Petitioner: Indirectly connected through a storage router to enable network connections from [devices/Fibre Channel initiator devices/workstations] to storage devices at a distance greater than allowed by a conventional parallel interconnect.

Map/Mapping
Patent Owner: To create a path from a device on one side of the storage router to a device on the other side of the router. A “map” contains a representation of devices on each side of the storage router, so that when a device on one side of the storage router wants to communicate with a device on the other side of the storage router, the storage router can connect the devices.
Petitioner: To create a known path for block-addressed data and commands from a particular device on one side of the storage router to a particular remote physical storage device on the other side of the router. A “map” contains a representation of the particular devices on each side of the storage router, so that before a particular workstation/device on one side of the storage router tries to communicate with a particular remote physical storage device on the other side of the storage router, the storage router already has stored a path from the particular workstation/device to the particular remote physical storage device over which the storage router will route block-addressed requests and data between the devices.

Access Controls
Patent Owner: Controls which limit a [device/Fibre Channel Initiator device/workstation]’s access to a specific subset of storage devices or sections of a single storage device according to a map.
Petitioner: Permit access using the native low level, block protocol of the virtual local storage without involving a translation from high level network protocols to

Native Low Level Block Protocol (NLLBP)
Patent Owner: A set of rules or standards that enable computers to exchange information and do not involve the overhead of high level protocols and file systems typically required by network servers.