`____________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________
`
`CISCO SYSTEMS, INC. AND QUANTUM CORPORATION
`Petitioner,
`
`v.
`
`CROSSROADS SYSTEMS, INC.
`Patent Owner.
`
`____________
`
`Case IPR2014-01463
`Patent No. 7,934,041
`____________
`
`
`
`DECLARATION OF DR. JOHN LEVY, PH.D.
`
`
`
`
`
`
`CROSSROADS SUBSTITUTE EXHIBIT _
`Cisco Systems et al. v. Crossroads Systems, Inc.
`IPR2014-01463
`
`2027
`
`1
`
`
`
`I, John Levy, make the following declaration based on my personal
`
`knowledge and, if called to testify before the Patent Trial and Appeal Board, could
`
`and would testify as follows:
`
I. INTRODUCTION
`1.
`
`I have been retained in connection with inter partes review
`
`proceeding, IPR2014-01463, which concerns United States Patent No. 7,934,041
`
`(the “’041 Patent”). This declaration contains my expert opinions concerning
`
`the ’041 Patent, the petition in this proceeding (the “Petition”), the prior art
`
`identified therein, and the facts alleged to support the Petition. I have been asked
`
`to evaluate and render an opinion concerning the grounds of unpatentability on
`
`which the present IPR has been instituted.
`
`2.
`
`It is my understanding that the Patent Trial and Appeal Board (the
`
`“Board”) instituted the present inter partes review on the following alleged
`
`grounds of unpatentability:
`
A. Claims 1-14, 16-33, 35-50, and 53 under 35 U.S.C. § 103(a) for obviousness over CRD-5500 Manual (Ex. 1004) and HP Journal (Ex. 1006); and
`
`B. Claims 15, 34, 51, and 52 under 35 U.S.C. § 103(a) for
`obviousness over CRD-5500 Manual (Ex. 1004), HP Journal (Ex.
`1006), and Fibre Channel Standard (Ex. 1007).
`
`2
`
`
`
`II. QUALIFICATIONS AND COMPENSATION
`A. Background and Experience
`3.
`
`I am the sole proprietor of John Levy Consulting, a consulting firm
`
`that specializes in consulting on managing development of high tech products,
`
`including computers and software. I have a Bachelor of Engineering Physics
`
`degree from Cornell University, a Master of Science degree in Electrical
`
`Engineering from California Institute of Technology, and a Ph.D. in Computer
`
`Science from Stanford University.
`
`4.
`
`From 1965 to 1966, at Caltech, my field of study was information
`
`processing systems. My coursework included systems programming, including the
`
`construction of compilers and assemblers. From 1966 to 1972, during my graduate
`
`study at Stanford, my field of study was computer architecture and operating
`
`systems. My coursework included computer systems design, programming and
`
`operating systems. During my employment at Stanford Linear Accelerator Center
`
`while I was a graduate student at Stanford University, I was a programmer and I
`
`participated in the design and implementation of a real-time operating system for
`
`use in data acquisition, storage and display. My Ph.D. thesis research related to
`
`computer systems organization and programming of multi-processor computers. I
`
`developed and measured the performance of several parallel programs on a
`
`3
`
`
`
`simulated 16-processor system. I also studied file systems, disk and tape storage
`
`subsystems, and input/output.
`
`5.
`
`I have been an employee and a consultant for over thirty years in the
`
`computer systems, software and storage industry. After earning my doctorate from
`
`Stanford University in Computer Science, I worked as an engineer at a number of
`
`leading companies in the computer industry, including Digital Equipment
`
`Corporation, Tandem Computer, Inc., Apple Computer, Inc., and Quantum
`
`Corporation.
`
`6.
`
`From 1972 to 1974, at Digital Equipment Corporation, I supervised
`
`the development of an input/output channel for high-speed mass storage (disk,
`
drum and tape), and its implementation for seven different peripheral units and three different computer systems. From 1974 to 1975, I was the project engineer leading the
`
`development of a new computer system. From 1975 to 1976, I supervised an
`
`operating system development group. During this time, I reviewed design changes
`
`and bug reports and fixes for two operating systems. While working for Digital
`
`Equipment Corporation, I wrote a long-term strategic plan for input/output buses
`
`and controllers and operating systems, including the conversion of most I/O buses
`
`to serial implementations. I am the author of a chapter on computer bus design in
`
`the book Computer Engineering, published in 1978 by Digital Press.
`
`4
`
`
`
`7.
`
`From 1977 to 1979, I was employed at Tandem Computer, Inc., where
`
`I worked on design of future multiprocessor systems. I also worked on problems
`
`related to distributed (networked) systems including rollback and recovery of
`
`distributed databases.
`
`8.
`
`From 1979 to 1982, I was employed at Apple Computer, Inc., where I
`
`worked on the design of a new computer system, the Lisa, which was a precursor
`
`to the Macintosh. I also supervised hardware and software engineers in the
`
`development of a new local area network.
`
`9.
`
`In 1980-81, I taught an upper-division course at San Francisco State
`
`University titled “Input/Output Architecture” which dealt with design of I/O
`
`channels, controllers, storage devices and their associated software.
`
`10. From 1982 to 1992, I consulted for a variety of client companies,
`
`including Apple Computer, Quantum Corporation and Ricoh Co., Ltd., on project
`
`management and product development. Consulting work for Quantum included
`
`working as a temporary supervisor of a firmware development team for a new hard
`
`disk drive. During this time I co-authored a paper, cited in my attached CV, on the
`
`design of a file system for write-once optical disk drives, related to work I did for
`
`client Ricoh.
`
`11. From 1993 to 1998, I was employed at Quantum Corporation, a
`
`manufacturer of hard disk drives, where I formed and managed a new group called
`
`5
`
`
`
`Systems Engineering. While in this role I managed, among others, software and
`
`systems engineers who developed hard disk input/output drivers for personal
`
`computers and disk drive performance analysis and simulation software. While at
`
`Quantum, I also led the definition and implementation of high-speed improvements
`
`of the ATA disk interface standard, called Ultra-ATA/33 and /66, which also led to
`
`improvements in the SCSI interface standard. I was also involved in the design of
`
`file systems for hard disks, data compression schemes for disk data, and Ethernet-
`
`connected disk drives. I was Quantum’s representative to the Audio/Video
`
`Working Group of the 1394 (FireWire) Trade Association, a Consumer Electronics
`
`industry standards group, and participated in Quantum’s work in designing disks
`
`that could record and play back video and audio streams without needing an
`
`intervening computer system.
`
`12. My qualifications for forming the opinions set forth in this report are
`
`listed in this section and in Appendix A attached, which is my curriculum vitae.
`
`Appendix A also includes a list of my publications.
`
`13.
`
`I am a named inventor on seven United States patents, including
`
`several related to input/output buses and storage subsystems. I have been disclosed
`
`as an expert in over 50 cases and have testified at trial and in depositions. A list of
`
`my testimony is attached hereto as Appendix B. I also have served as a technical
`
`advisor to two United States District Court Judges.
`
`6
`
`
`
`14.
`
`I regularly teach courses such as “Computers – the Inside Story” and
`
`“The Digital Revolution in the Home” at the Fromm Institute for Lifelong
`
`Learning at the University of San Francisco.
`
`B. Compensation
`15.
`
`I base my opinions below on my professional training and experience
`
`and my review of documents and materials produced in this litigation. My
`
`compensation for this assignment is $575 per hour. My compensation is not
`
`dependent on the substance of my opinions or my testimony or the outcome of the
`
`above-captioned case.
`
III. INFORMATION CONSIDERED IN FORMING OPINION
16. I have read the ’041 Patent and its prosecution history. I have read
`
`the Petition, its exhibits, the Patent Owner’s Preliminary Response to the Petition,
`
`and its exhibits. I have also relied on my own knowledge and experience as well
`
`as published documents in forming my opinions. A list of materials I considered in
`
`forming my opinions is attached as Appendix C.
`
`IV. ORDINARY SKILL IN THE ART
`17.
`
`It is my understanding that a person of ordinary skill in the art at the
`
`time of the invention is presumed to know the relevant art and would be capable of
`
`understanding the scientific and engineering principles applicable to the pertinent
`
`7
`
`
`
`art. This person of ordinary skill in the art can perform routine tasks in the relevant
`
`field with a reasonable likelihood of success.
`
`18. Based on my experience, a person of ordinary skill in the art in the
`
`context of the patent under review would have a B.S. in electrical engineering or
`
`equivalent and at least three years of experience in the design of computer data
`
`storage and networks, or an M.S. or Ph.D. in electrical engineering or the
`
`equivalent, and at least one year of experience in design of computer data storage
`
`and networks. Unless otherwise stated, my testimony below refers to the
`
`knowledge of one of ordinary skill in the art.
`
`V. TECHNICAL BACKGROUND
19. SCSI, which stands for “Small Computer System Interface,” is a
`
`standard input/output bus for interconnecting computer and peripheral devices.
`
`Although the term “SCSI” is often used without qualification, there are three
`
`versions of SCSI: SCSI-1, SCSI-2, and SCSI-3. Ex. 2047, SCSI-3 Architecture
`
`Model, at 20 (identifying the three different SCSI standards). According to the
`
`SCSI-2 standard, “The SCSI protocol is designed to provide an efficient peer-to-
`
`peer I/O bus with up to 16 devices, including one or more hosts.” Ex. 2037,
`
`(SCSI-2 standard) at 3. It also describes SCSI as a “local I/O bus.” Id. at 34.
`
`SCSI communication takes place between “initiators” that request performance of
`
`operations by “targets.” Id. at 31, 33. For example, a host computer might act as a
`
`8
`
`
`
`SCSI initiator that requests data from a target SCSI disk device on the same SCSI
`
`bus. The SCSI interface standard provides for the connection of multiple initiators
`
`and multiple targets on a single bus. Id. at 34; see also id. at 59 (Fig. 14), below.
`
`20. While in practice it was unusual to have multiple SCSI initiators on a
`
`bus, it was certainly possible to do so. “Communication on the SCSI bus is allowed
`
`between only two SCSI devices at any given time. . . . There can be any
`
`combination of initiators and targets provided there is at least one of each.” Ex.
`
`2037 (SCSI-2 Standard) at 58.
`
`9
`
`
`
`
`
`Id. at 59. “When two SCSI devices communicate on the SCSI bus, one acts as an
`
`initiator and the other acts as a target. The initiator originates an operation and the
`
`target performs the operation.” Id. at 58.
`
`21. Each device on a SCSI bus (targets and initiators) has a unique SCSI
`
`ID on that particular bus. If a device is attached to multiple buses, it can have the
`
`same or different SCSI IDs on each of those buses. See, e.g., CRD-5500 Manual,
`
`Ex. 1004 at 4-3. “Each target has one or more logical units, beginning with logical
`
`10
`
`
`
`unit zero.” Ex. 2037 at 112. A logical unit is “a physical or virtual peripheral
`
`device addressable through [the] target.” Ex. 2037 (SCSI-2 Standard) at 32. A
`
`logical unit number (LUN) is used to identify the logical unit at the target. Id. In
`
`SCSI-2, an initiator discovers the devices on the SCSI bus “by querying each SCSI
`
`ID and LUN” using the INQUIRY command. Id. at 112. A SCSI-3 device can
`
`discover LUNs in a similar fashion or, if both the initiator and the target support
`
`the SCSI-3 command REPORT LUNS, the initiator can determine the supported
`
`LUNs at the target by sending a REPORT LUNS command to the target. See Ex.
`
`2063, SCSI-3 Primary Commands, at 83.
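The two discovery mechanisms described above can be sketched in Python. This is my own illustration, not material from the record: the `inquiry` and `report_luns` callables stand in for the actual bus transport, and the ID/LUN limits are the simplified SCSI-2 values.

```python
# Illustrative sketch of SCSI device discovery (hypothetical transport
# functions).  A SCSI-2 initiator probes every SCSI ID / LUN pair with
# an INQUIRY command; a SCSI-3 initiator that supports REPORT LUNS can
# instead ask a target directly for its supported LUNs.

MAX_SCSI_ID = 16   # SCSI-2 allows up to 16 devices on one bus
MAX_LUN = 8        # LUNs probed per target in this sketch

def discover_scsi2(inquiry):
    """Probe each (SCSI ID, LUN) pair; keep the pairs that respond."""
    found = []
    for scsi_id in range(MAX_SCSI_ID):
        for lun in range(MAX_LUN):
            if inquiry(scsi_id, lun):      # stand-in for an INQUIRY command
                found.append((scsi_id, lun))
    return found

def discover_scsi3(report_luns, scsi_id):
    """Ask one target for its LUN list via a REPORT LUNS command."""
    return [(scsi_id, lun) for lun in report_luns(scsi_id)]
```

The SCSI-2 path scales with the number of addressable ID/LUN pairs, while REPORT LUNS requires only one round trip per target, which is why SCSI-3 added it.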
`
`VI. THE ’041 PATENT
`A. Background of the Invention
`22. As described in the background of the ’041 Patent, computers access
`
`local storage devices using a native low level block protocol (“NLLBP”). The
`
`Patents-In-Suit describe an NLLBP:
`
`Ex. 1001, 2:2-7. Thus, NLLBPs allow for simple and direct access to local storage
`
`in a fast and efficient manner.
`
`
`
`11
`
`
`
`23.
`
`In the 1997 time frame, NLLBP requests were typically sent to local
`
`storage over parallel buses like SCSI, for example. Such parallel buses could not
`
carry information very far and could not provide large distance separations (e.g., in excess of 10 kilometers, as described in the patent
`
`under review). Ex. 1001, 2:58-61.
`
`24. Before Crossroads’ invention of the ’041 Patent, modern computer
`
`systems needed networks that connect multiple computers to multiple remote
`
storage devices over larger distances that parallel buses could not support. To solve this problem, modern networks use a serial network transport medium (e.g., a Fibre Channel transport medium), which can carry information over much longer distances than parallel buses.
`
`25. Before Crossroads’ invention of the ’041 Patent, a network server
`
`(also referred to as a network file server) was the basic way networked computers
`
`achieved remote storage. A network file server connects to multiple computers by
`
`a serial network transport medium and to storage devices using a storage transport
`
`medium, such as a SCSI bus. Figure 1 of the patent under review, reproduced
`
`below, illustrates one example of such a system:
`
`12
`
`
`
`
`26. The network file server “provides a file system structure, access
`
`control and other miscellaneous capabilities that include the network interface.”
`
`Ex. 1001, 2:7-12. The ’041 Patent describes that the workstations use a “network
`
`protocol” to access data through the network server:
`
`
`
`Ex. 1001, 2:10-17. Thus, the workstation would translate its file system protocols
`
`into a “network protocol” which the network server must then translate into low
`
`level requests to the storage device.
`
`27. The ’041 Patent describes what is translated by the network server to
`
`provide network access by workstations.
`
`13
`
`
`
`
`
`Ex. 1001, 3:46-52. A person of ordinary skill in the art at the time of filing would
`
`understand that a workstation with access to server storage must translate its file
`
system requests into a “network protocol” that includes a high level file system protocol, and it is the requests in high level file system protocols received from the
`
`workstations that are translated into low level block requests to access data on
`
`storage devices.
`
`28. One example of a protocol that provided remote file access was called
`
`Network File System (NFS). Ex. 2048 at 5. Using Figure 1 of the ’041 Patent as
`
`an example, when a program in a workstation 12 wants to access remote files (files
`
`whose data is stored on remote storage devices), a file system program in the
`
`workstation 12 translates its local file system requests to, for example, NFS
`
`commands to be sent to the network server 14 to request a file action. Id. at 20.
`
`29. These NFS commands invoke software procedures in the network
`
`server 14 using Remote Procedure Calls (RPCs), which are encapsulated and
`
`transported over network transport medium 16. The network server 14 de-
`
`encapsulates the RPCs, then invokes software procedures associated with the
`
`14
`
`
`
`received NFS commands to perform file access on behalf of the workstation’s
`
`users. The network server 14 maintains one or more file systems on storage
`
`devices 20, which it uses to access the requested files. The network server 14
`
`accesses the storage devices 20 using NLLBP.
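The prior-art path described in ¶¶ 28-29 can be sketched as two cooperating functions. This is my own simplified illustration (the dictionary-based RPC and the function names are hypothetical): the workstation wraps a file-level request in a network protocol, and the server must translate that high level request into NLLBP block accesses before any data moves.

```python
# Illustrative sketch of the prior-art network file server path
# (hypothetical names).  The workstation encapsulates a file request
# (e.g., an NFS command carried by RPC); the server de-encapsulates it,
# performs the file system lookup, and only then issues NLLBP reads --
# the translation step that creates the bottleneck described in ¶ 31.

def workstation_read(filename, send_rpc):
    """Workstation side: encapsulate a file-level request for the server."""
    return send_rpc({"proc": "NFS_READ", "file": filename})

def server_handle(rpc, file_table, read_blocks):
    """Server side: translate the high level request into NLLBP reads."""
    blocks = file_table[rpc["file"]]   # file system lookup (high level)
    return read_blocks(blocks)         # NLLBP access to the storage device
```

Every file access thus passes through two protocol layers on the server before reaching storage, which is the overhead the claimed storage router avoids.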
`
`30. A person of ordinary skill in the art would thus understand that the
`
`network server system described in Background of the Invention and Figure 1 of
`
`the Patents-In-Suit describes a system in which a workstation 12 translates its file
`
`system requests into high level file system protocols, such as NFS commands, sent
`
`to server 14; and it is the high level file system protocols (e.g., the high level
`
`network file system requests in NFS/RPC form) from workstation 12 that are
`
`translated by network server 14 into low level block protocol requests to access
`
`storage.
`
`31. Because it takes the computer time to create a high level network
`
`protocol containing a file system request, and it takes the network server time to
`
`construct an NLLBP from that network protocol (which the server needs to do in
`
`order to communicate with the storage device), the network server created a
`
`bottleneck, slowing down access to remote storage. Ex. 1001, 2:14-20, 3:45-54.
`
B. The Claims of the ’041 Patent
`32. The claimed inventions are drawn to a device called a “storage router”
`
`and methods for providing virtual local storage on remote storage devices while
`
`15
`
`
`
`providing access controls according to a map and allowing access using NLLBP.
`
`The term “storage router” did not exist in the industry until the Crossroads
`
`invention.
`
`33.
`
`It is my understanding that, in an inter partes review proceeding,
`
`claim terms are given their broadest reasonable interpretation consistent with the
`
`specification, and that claim language should be read in light of the specification as
`
`it would be interpreted by one of ordinary skill in the art.
`
`34. Table 1 below includes the constructions provisionally adopted by the
`
`Board in making the decision to institute the present proceeding. Table 2 includes
`
`related claim constructions proposed by Petitioner and Patent Owner in the co-
`
`pending Litigation.
`
Table 1

Term: Remote
Petition: Indirectly connected through a storage router to enable connections to storage devices at a distance greater than allowed by a conventional parallel network interconnect (Pet. at 13-14)

Term: Native Low Level Block Protocol (NLLBP)
Petition: A protocol in which storage space is accessed at the block level, such as the SCSI protocol (Pet. at 11-12)

Table 2

Term: Remote
Patent Owner: Indirectly connected through at least one serial network transport medium
Petitioner: Indirectly connected through a storage router to enable network connections from [devices/Fibre Channel initiator devices/workstations] to storage devices at a distance greater than allowed by a conventional parallel interconnect.

Term: Map/Mapping
Patent Owner: To create a path from a device on one side of the storage router to a device on the other side of the router. A “map” contains a representation of devices on each side of the storage router, so that when a device on one side of the storage router wants to communicate with a device on the other side of the storage router, the storage router can connect the devices.
Petitioner: To create a known path for block-addressed data and commands from a particular device on one side of the storage router to a particular remote physical storage device on the other side of the router. A “map” contains a representation of the particular devices on each side of the storage router, so that before a particular workstation/device on one side of the storage router tries to communicate with a particular remote physical storage device on the other side of the storage router, the storage router already has stored a path from the particular workstation/device to the particular remote physical storage device over which the storage router will route block-addressed requests and data between the devices.

Term: Access Controls
Patent Owner: Controls which limit a [device/Fibre Channel Initiator device/workstation]’s access to a specific subset of storage devices or sections of a single storage device according to a map.
Petitioner: Controls which limit a [device/Fibre Channel initiator device/workstation]’s access to a specific subset of storage devices or sections of a single storage device according to a map for the [device/Fibre Channel initiator device/workstation].

Term: Native Low Level Block Protocol (NLLBP)
Patent Owner: A set of rules or standards that enable computers to exchange information and do not involve the overhead of high level protocols and file systems typically required by network servers.
Petitioner: A set of rules or standards that enable computers to exchange information and do not involve the overhead of high level protocols and file systems typically required by network servers.

Term: Allowing Access . . . Using NLLBP
Patent Owner: Permit access using the native low level, block protocol of the virtual local storage without involving a translation from high level network protocols to a native low level block protocol request.
Petitioner: To allow native low level block protocol requests to be routed from the [devices/Fibre Channel initiator devices/workstations] to the remote storage devices.
`
`C. Mapping Limitations
`35. Each of the independent claims under review recites a map limitation:
`
Claim 1: “a processing device . . . configured to[] maintain a map to allocate storage space on the remote storage devices to devices connected to the first transport medium by associating representations of the devices connected to the first transport medium with representations of storage space on the remote storage devices”

Claim 20: “a storage router . . . configured to[] maintain a map to allocate storage space on the remote storage devices to devices connected to the first transport medium by associating representations of the devices connected to the first transport medium with representations of storage space on the remote storage devices”

Claim 37: “maintaining a map at the storage router to allocate storage space on the remote storage devices to devices connected to the first transport medium by associating representations of the devices connected to the first transport medium with representations of storage space on the remote storage devices”
`
`36. A person of ordinary skill in the art would understand that the
`
`mapping associates representations of particular hosts on one side of the storage
`
`router with representations of storage on the other side to allocate storage to
`
`specific devices. That is, the map identifies with specificity the particular host that
`
`has access to storage represented in the map.
`
`37. As described in the specification of the ’041 Patent, in order to
`
`provide access controls, the storage router of the ’041 Patent uses a map that
`
`associates representations of hosts on one side of the storage router with
`
`representations of storage on the other side of the storage router, to define what
`
`19
`
`
`
`storage is available to each particular host. See, e.g., Ex. 1001, 4:41-44, 4:50-53
`
`(describing “storage allocated to each attached workstation” through “mapping
`
`tables or other mapping techniques” so that allocated storage “can only be accessed
`
`by the associated workstation”) (emphasis added); see also id. at 9:21-27 (“The
`
`storage router can use tables to map, for each initiator, what storage access is
`
`available” so that “[i]n this manner, the storage space . . . can be allocated to [each
`
`initiator].”) (emphasis added).
`
`38. Thus, the map will include representations of the devices (e.g., the
`
`workstations) connected to the first transport medium and representations of the
`
`storage to associate the devices with storage (i.e., “to map, for each initiator, what
`
`storage access is available” for that initiator). Id. at 9:21-27.
`
`39. Figure 3 (reproduced below) shows an example of mapping in which
`
`a storage router 56 maps workstations 58 to storage. Storage router 56 uses
`
`“mapping tables or other mapping techniques” (i.e., a map) to associate each of
`
`Workstations A-D with a subset of storage 66, 68, 70 and 72, so that each subset
`
`“is allocated to one of the workstations 58” and “can only be accessed by the
`
`associated workstation 58.” Ex. 1001, 4:41-53 (emphasis added). Similarly,
`
`workstation E can be associated with whole storage device 64. Id. at 4:53-55.
`
`20
`
`
`
`
`
`In Figure 3, workstations 58 are interconnected with storage router 56 by the same
`
`interconnect, i.e., “a common [Fibre Channel high speed serial transport].”
`
`Ex. 1001, 4:28-32. The storage router must identify the particular workstations on
`
`the first transport medium in order to allocate storage to the particular
`
`workstations. Id. at 9:21-27 (“The storage router can use tables to map, for each
`
`initiator, what storage access is available” so that “[i]n this manner, the storage
`
`space . . . can be allocated to [each initiator].”) (emphasis added). In other words,
`
`the map identifies with specificity the particular host that has access to storage
`
`represented in the map to “allocate(s) storage on storage devices to devices on the
`
`first transport medium.”
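The per-initiator map described in ¶¶ 37-39 can be sketched as a simple table. This is my own illustration, not the patent’s implementation; the entries follow the Figure 3 example, in which Workstations A-D are each allocated a subset of storage device 62 and Workstation E is allocated whole storage device 64.

```python
# Minimal sketch (my illustration) of a mapping table of the kind the
# specification describes: representations of particular workstations
# associated with representations of storage, so that storage is
# allocated per host and "can only be accessed by the associated
# workstation."  Entries mirror Figure 3 of the '041 Patent.

STORAGE_MAP = {
    "Workstation A": [("storage device 62", "subset 66")],
    "Workstation B": [("storage device 62", "subset 68")],
    "Workstation C": [("storage device 62", "subset 70")],
    "Workstation D": [("storage device 62", "subset 72")],
    "Workstation E": [("storage device 64", "whole device")],
}

def allocated_storage(host):
    """Return the storage allocated to this particular host, per the map."""
    return STORAGE_MAP.get(host, [])
```

The point of the sketch is that the map is keyed by the particular host: a lookup for Workstation A returns only subset 66, and a host absent from the map is allocated nothing.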
`
`40.
`
`In the example of Figure 3, the map is used to allocate partitions to
`
`workstations: “each partition is allocated to one of the workstations 58” and
`
`21
`
`
`
`“appear[s] to the associated workstation 58 as local storage.” Ex. 1001, 4:48-53.
`
`Storage allocated in the map is expressly described as a logical storage definition:
`
`[T]he storage space considered by the workstation 58 to
`be its local storage is actually a partition (i.e., logical
`storage definition) of a physically remote storage device
`60, 62 or 64 connected through storage router 56.
`
`Ex. 1001, 5:12-15 (emphasis added).
`
`
`41. The specification goes on to distinguish the logical storage allocated
`
`in the map from physical storage:
`
`In the latter case, management station 76 can be a
`workstation or other computing device with special rights
`such that storage router 56 allows access to mapping
`tables and shows storage devices 60, 62 and 64 as they
`exist physically rather than as they have been allocated.
`
`Id. at 4:65-5:3 (emphasis added). One of ordinary skill in the art would understand
`
`that, in the map, the representations of storage allocated to the workstations are
`
`logical storage definitions.
`
`D. The “Control[ling] Access” Limitations
`42. Each of the independent claims also recites a “control[ling] access”
`
`limitation:
`
Claim 1: “control access from the devices connected to the first transport medium to the storage space . . . in accordance with the map”

Claim 20: “control access from the devices connected to the first transport medium to the storage space . . . in accordance with the map”

Claim 37: “controlling access from the devices connected to the first transport medium to the storage space . . . in accordance with the map”
`
`43. Controlling access as recited in the claims of the ’041 Patent is device
`
`specific and controls a particular device’s access to a specific subset of storage
`
`according to the map. As described in the specification, the storage router controls
`
`access according to the map so that the allocated storage can only be accessed by
`
`the host(s) associated with that storage in the map. See, e.g., id. at 4:50-51 (“These
`
subsets 66, 68, 70 and 72 can only be accessed by the associated workstation . . . .”) (emphasis added); 4:57-59 (“[E]ach workstation 58 has
`
`controlled access to only the specified partition of storage device 62 which forms
`
`virtual local storage for the workstation 58.”) (emphasis added). It can also be
`
`noted that Figure 3 illustrates controlling access both to sections of storage devices
`
`(e.g., subsets 66, 68, 70 and 72 of storage device 62) and to whole storage devices
`
`(e.g., whole storage device 64). Id. at 4:48-55.
`
`44. More particularly, the ’041 Patent provides access controls by
`
`controlling what virtual local storage each host sees. Id. at 5:8-15 (“Storage router
`
`56 provides centralized control of what each workstation 58 sees as its local
`
`23
`
`
`
`drive . . . . Consequently, the storage space considered by the workstation 58 to be
`
`its local storage is actually a partition (i.e., logical storage definition) of a
`
`physically remote storage device 60, 62 or 64 connected through storage router
`
`56.”).
`
`45.
`
`In Figure 3, each workstation 58 on common Fibre Channel
`
`interconnect 52 sees different storage. As illustrated below, for example, because
`
`Workstation A is mapped to storage subset 66, Workstation A is “shown” storage
`
`subset 66 by the storage router.
`
`
`
`Workstation A does not see, and therefore cannot access, Workstation B Storage
`
`68, Workstation C Storage 70, Workstation D Storage 72 or Workstation E
`
`Storage 74. In the example of Figure 3, storage router 56 shows available storage
`
`to each host as FCP logical units. Ex. 1001, 6:7-10. In some instances, the same
`
`logical unit number may be used to represent different storage for different
`
`workstations on FC interconnect 52. Ex. 1001, 9:19-21.
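The per-host view described in ¶ 45 can be sketched as follows. This is my own illustrative structure (the `VIEWS` table and function are hypothetical): each initiator is shown only the logical units its map entry defines, and, as the specification notes, the same LUN may resolve to different storage for different workstations.

```python
# Illustrative sketch (hypothetical structure) of map-based access
# control: the router resolves a (initiator, LUN) pair only through
# that initiator's own view, so Workstation A cannot see or reach the
# storage allocated to Workstation B -- even at the same LUN.

VIEWS = {
    "Workstation A": {0: "subset 66"},   # A's LUN 0 is its own partition
    "Workstation B": {0: "subset 68"},   # same LUN 0, different storage
}

def resolve(initiator, lun):
    """Allow access only if the map associates this initiator with the LUN."""
    view = VIEWS.get(initiator, {})
    if lun not in view:
        raise PermissionError("access denied by map")
    return view[lun]
```

Because resolution always passes through the initiator-specific view, storage absent from a host’s view is simply not visible to that host, which is the “centralized control of what each workstation 58 sees” described in the specification.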
`
`24
`
`
`
`E. Allow Access . . . Using NLLBP
`46.
`
`“Allowing Access . . . Using NLLBP” means that the storage router
`
`permits access using the native low level, block protocol of the virtual local storage
`
`without involving a translation from high level network protocols (network
`
`protocols containing file system commands) to a native low level block protocol
`
`request.
`
`47. As described in the specification, the storage router controls the
`
`virtual local storage that each workstation “sees” as its local drive. Id. at 5:8-15.
`
`Crossroads’ invention presents the remote storage to the computers as “virtual
`
`local storage” so that the storage appears to the host computer to be locally
`
`connected to the host computer, even though the storage is actually remotely
`
`located. Id. at 4:39-41, 4:50-53 (partitions “appear to the associated workstation
`
`58 as local storage accessed using [NLLBPs]”). Because the virtual local storage
`
`appears as local storage, the workstation accesses the virtual local storage using the
`
`NLLBP of the virtual local storage. Id. at 4:50-53. That is, just as the workstation
`
`sends an NLLBP request in order to access its local storage, when using the storage
`
`router of present invention, the workstation similarly sends an NLLBP request to
`
`the storage router. This concept is illustrated below:
`
`25
`
`
`
`
`
`Storage router 56 presents the storage available to Workstation A as virtual local
`
`storage. Accordingly, Workstation A sends requests for access to the virtual local
`
`storage using the NLLBP associated with the virtual local storage.
`
`48. The ’041 Patent contrasts allowing storage access using NLLBPs with
`
`the manner in which storage access was done using prior art network servers. The
`
`mechanism for actually accessing the storage devices in the prior art of Figure 1
`
`(i.e., network file server) and in the invention is the same – namely, by way of an
`
`NLLBP request. As described in the ’041 Patent, however, the storage access
`
`allowed by prior art network servers required the server to translate high level file
`
`system protocols received from the host into NLLBPs. Ex. 1001, 2:12-17, 3:46-50.
`
`This process is discussed in more detail above in ¶¶ 26-31. In contrast, host
`
computers send NLLBP requests to the storage router of the ’041 Patent, as discussed above in ¶ 47, in order to be allowed access to storage. Accordingly, as described in
`
`the ’041 Patent, “[s]torage access involves native low level, block protocols and
`
`26
`
`
`
`does not involve the overhead of high level protocols and file system required by
`
`network servers.” Ex. 1001, 5:30-33.
`
`F. Remote
`49.
`
`I note that Petitioners proposed and the Board provisionally adopted a
`
`construction that “remote” means “indirectly connected through a storage router to
`
`enable connections to storage devices at a distance greater than allowed by a
`
`conventional parallel network interconnect.” I disagree with this construction.
`
`50. The ’041 Patent distinguishes accessing local storage from accessing
`
`storage through “network interconnects” that can be used to provide access to data
`
`on remote devices. Ex. 1001, 2:7-10. At the time of filing, a SCSI bus or similar
`
parallel bus was typically used to connect to local storage. These parallel buses could only support a limited number of devices over a short distance. Id. at 1:51-56. The ’041 Patent contrasts the use of the SCSI bus and similar parallel buses
`
`with high-speed serial network interconnects that provide the capability to attach a
`
`large number of devices to a common storage transport over larger distances. Id. at
`
`1:56-59.
`
`51. Typically, serial interconnects use less power, eliminate clock skew