IN THE UNITED STATES DISTRICT COURT
FOR THE WESTERN DISTRICT OF TEXAS
AUSTIN DIVISION

CROSSROADS SYSTEMS, INC. v. DOT HILL SYSTEMS CORP., C.A. NO. 1:13-cv-00800-SS

CROSSROADS SYSTEMS, INC. v. ORACLE CORPORATION, C.A. NO. 1:13-cv-00895-SS

CROSSROADS SYSTEMS, INC. v. HUAWEI TECHS. CO., LTD., ET AL., C.A. NO. 1:13-cv-01025-SS

CROSSROADS SYSTEMS, INC. v. CISCO SYSTEMS, INC., C.A. NO. 1:14-cv-00148-SS

CROSSROADS SYSTEMS, INC. v. NETAPP, INC., C.A. NO. 1:14-cv-00149-SS

CROSSROADS SYSTEMS, INC. v. QUANTUM CORPORATION, C.A. NO. 1:14-cv-00150-SS

DECLARATION OF RANDY KATZ REGARDING
CLAIM CONSTRUCTION OF U.S. PATENT NOS.
6,425,035, 7,051,147, 7,934,041, AND 7,987,311
`
`
`
` CROSSROADS EXHIBIT 2033
` Cisco Systems et al v Crossroads Systems, Inc.
` IPR2014-01544
`
`1 of 253
`
`
`
`Case 1:14-cv-00148-SS Document 53-2 Filed 09/08/14 Page 3 of 281
`
Table of Contents

I. Qualifications
II. Technology Background
    A. Storage Systems
    B. Storage Interconnects and Controllers
III. The Patents-in-Suit
IV. Summary of Opinions
    A. Person of Ordinary Skill in the Art
    B. “Map” / “Mapping” (’035, ’147, ’041, and ’311 Patents)
    C. “Remote” (’035, ’147, ’041, and ’311 Patents)
    D. “Storage Router” (’035, ’147, ’041, and ’311 Patents)
    E. “Supervisor Unit” (’035 and ’147 Patents)
    F. “Interface Between” / “Interface With [A First Transport Medium]” / “Interface With [A Second Transport Medium]” (’035, ’147, and ’041 Patents)
V. Concluding Remarks
`
`I, Randy H. Katz, declare as follows:
`
1. I have been retained by Defendants Dot Hill Systems Corp., Oracle Corporation,
`
`Huawei Technologies Co., Ltd., Huawei Enterprise USA, Inc., Huawei Technologies USA, Inc.,
`
`Cisco Systems, Inc., NetApp, Inc., and Quantum Corporation (collectively “Defendants”) to
`
`offer opinions regarding the meanings that certain claim terms in U.S. Patent Nos. 6,425,035 (the
`
`“’035 patent”), 7,051,147 (the “’147 patent”), 7,934,041 (the “’041 patent”), and 7,987,311 (the
`
`“’311 patent”) (collectively, the “Patents-in-Suit”) would have had to a person of ordinary skill
`
`in the art at the time of the alleged invention in those patents. This declaration summarizes my
`
`opinions relating to the issues addressed below.
`
`I.
`
`Qualifications
`
2. I have studied, taught, and practiced computer science and engineering for over
`
forty years. I earned an A.B. in Computer Science from Cornell University in 1976, and an M.S.
`
`and Ph.D. in Computer Science from University of California at Berkeley in 1978 and 1980,
`
`respectively. I worked in private industry as a computer scientist from 1980-81, and served as an
`
`assistant professor of computer science at the University of Wisconsin-Madison from 1981-83. I
`
`served as the Program Manager and Deputy Director of the Computer Systems Technology
`
`Office for the Advanced Research Projects Agency of the U.S. Department of Defense from
`
`1993-94.
`
3. I joined the faculty of the Computer Science Division of the Electrical
`
`Engineering and Computer Sciences Department (EECS) of the University of California at
`
Berkeley in 1983, where I have remained ever since. I became a full professor in 1989, and served as
`
`the Chairman of the EECS Department from 1996-99. Since 1996, I have been the United
`
`Microelectronics Corporation Distinguished Professor in Electrical Engineering and Computer
`
`Science. My research interests have included high performance multiprocessor architectures and
`
`protocols, storage architectures, transport protocols spanning heterogeneous networks, and
`
`network and service architectures. I have taught and continue to teach courses that cover topics
`
`relevant to storage systems and protocols, including advanced graduate seminars as well as
`
`courses in undergraduate and graduate computer architecture and computer communications
`
`networks.
`
4. Beginning in the late 1980s, with colleagues at Berkeley, I developed the essential
`
`framework for describing the tradeoff between reliability and performance in storage systems.
`
That work led to the creation and widespread adoption of Redundant Arrays of Inexpensive

Disks (RAID), which is still widely used today. In 1999, I, along with two of my colleagues at
`
`Berkeley won the IEEE Reynolds Johnson Storage System Award, the highest professional
`
`recognition in the storage systems field, for our foundational work in and development of RAID.
`
`I have also earned other honors and awards and have been recognized for my work in the field of
`
`computer science and engineering. I am a Fellow of the Association for Computing Machinery
`
(ACM), the Institute of Electrical and Electronics Engineers (IEEE), the American Association for

the Advancement of Science (AAAS), and the American Society for Engineering Education
`
`(ASEE). I am a member of the National Academy of Engineering (NAE), the highest
`
`recognition that can be bestowed on an engineer in the United States, and the American
`
`Academy of Arts and Sciences.
`
5. I have published over 250 technical papers, book chapters, and books in the field
`
`of computer science and engineering, including in the field of storage systems, in particular. I
`
authored the textbook Contemporary Logic Design, which is used at over 200 colleges and
`
`universities. I have presented at numerous conferences on computer systems and networking,
`
`including the keynote addresses of the IEEE International Conference on Distributed Computing
`
`Systems and the International Conference on Networking Protocols. I serve and have served on
`
`several government and university advisory boards and the technical advisory board of several
`
`companies in the computer and storage field. I serve and have served as editor or referee for
`
`several academic journals, such as ACM Transactions on Computer Systems and the NSF
`
`Computer Engineering Section. I am also a named inventor on three U.S. storage-related
`
patents, Nos. 5,195,100, 5,475,697, and 5,758,054 (“Non-volatile memory storage of write
`
`operation identifier in data storage device”).
`
6. Attached hereto as Exhibit 1 is a true copy of my curriculum vitae, which
`
`provides a more complete description of my educational background, experience, publications
`
`and other qualifications in the area of computer science, engineering, and storage systems.
`
II. Technology Background
`
A. Storage Systems

7. Generally speaking, storage allows information to persist on computer systems.
`
`A storage system is based on technologies—such as magnetic or optical recording manifested in
`
`terms of disks and tapes—that allow information to be retained for the long term.
`
8. A disk drive does not usually interface with a host computer directly. At the very
`
`least, an intermediary hardware component, called a disk controller, sits between a host computer
`
`and the storage device, offloading from the former the details of managing the sequencing of
`
`input/output operations. The host specifies an operation to “read” or “write” a sequence of
`
`characters at a specified offset into a “logical” (virtual) device, i.e., the logical input/output
`
`request. The disk controller is responsible for converting the logical request into detailed “seek”
`
`and “transfer” operations at a “physical” level. Generally, given the mechanical overheads
`
`associated with the operation of storage devices, they are not efficient at transferring a single
`
`character (or byte) at a time. Rather, the device and the operations it supports operate in a natural
`
`primitive unit of transfer to and from the device called a block. This is typically 512 bytes or
`
`(small) multiples of this amount. Software layers within the operational software between an
`
`application and the block-oriented interface to the storage device implement a device-
`
`independent character-oriented access interface. The disk controller may also manage an
`
`internal semiconductor cache, giving it additional freedom in deciding how to implement the
`
`requests from the host.
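The logical-to-block conversion described in this paragraph can be sketched in a few lines of Python. This is my own hypothetical illustration, not code from any party; it uses the typical 512-byte block size noted above.

```python
BLOCK_SIZE = 512  # typical primitive unit of transfer, as noted above

def logical_to_block(byte_offset: int, length: int) -> list:
    """Translate a character-oriented (byte-level) request into the
    block-oriented operations a disk controller actually performs.
    Returns (block_number, offset_within_block, byte_count) tuples."""
    ops = []
    pos, remaining = byte_offset, length
    while remaining > 0:
        block, within = divmod(pos, BLOCK_SIZE)
        chunk = min(BLOCK_SIZE - within, remaining)
        ops.append((block, within, chunk))
        pos += chunk
        remaining -= chunk
    return ops

# A 100-byte request at byte offset 1000 spans two 512-byte blocks:
print(logical_to_block(1000, 100))  # [(1, 488, 24), (2, 0, 76)]
```

A software layer of this kind is what presents a character-oriented interface on top of a block-oriented device.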
`
9. By the mid-1980s, the most prevalent high-performance disk interface for
`
`workstations and personal computers was Small Computer System Interface (SCSI, pronounced
`
`“scuzzy”). By the early 1990s, SCSI had become the dominant storage interface. In the SCSI
`
`specification, the disk controller initiates operations targeted for specific disks. The SCSI
`
`interface is a message-based protocol, which allows individual disk drives to perform their own
`
`seeks and transfers into and out of semiconductor memory buffers integrated with the disk.
`
`Multiple disk drives can share a common signal pathway to the disk controller, called a SCSI
`
`bus. It is called a bus because many devices can share it.
`
10. Using 8- or 16-bit wide data paths, SCSI provides a high-level operational
`
`interface to storage devices. Devices are viewed as logical streams of bytes, within which an
`
operation can be positioned for the purposes of reading or writing. Under the bus protocol, the

initiator starts an operation and then disconnects from the bus; when the target device has

completed the requested operation, it reacquires the bus and transfers the data between target

and initiator. These operations are
`
`sufficiently generic that the same basic set can be used for magnetic disks, optical disks,
`
CD-ROMs, DVDs, or magnetic tape devices. For example, on a SCSI read, the controller/initiator
`
`sends a command to the disk/target with a request for a certain number of bytes from a specified
`
`offset into the disk. The target executes this as a seek operation followed by a transfer into its
`
`local memory. It then acquires the SCSI bus to transmit the read data back to the individual disk
`
`controller.
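By way of illustration only (this sketch is mine, not drawn from the Patents-in-Suit), the message-based request described above can be seen in the standard layout of a SCSI READ(10) command descriptor block, which carries an opcode, a starting block address, and a transfer length in blocks:

```python
import struct

def read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) command descriptor block (CDB):
    byte 0 is the opcode (0x28), bytes 2-5 the big-endian logical block
    address, bytes 7-8 the big-endian transfer length in blocks.  The
    flag, group-number, and control bytes are left zero for simplicity."""
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, num_blocks, 0x00)

cdb = read10_cdb(lba=2048, num_blocks=8)
assert len(cdb) == 10 and cdb[0] == 0x28
```

The target would execute such a command as a seek followed by a transfer into its local memory, exactly as described above.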
`
11. The SCSI protocol is sufficiently generic that it can also be used as a general
`
`interface to a storage subsystem. As an alternative to placing the storage controller into the host
`
`backplane, it is possible to use SCSI as the interface between the host and a storage subsystem.
`
`In this configuration, a relatively simple SCSI controller is placed in the host backplane, and a
`
`wider/faster version of SCSI is used than that normally deployed between a disk controller and
`
`individual storage devices. SCSI as a method of storage device interconnection was well known
`
`by the late 1980s. SCSI as a method to connect storage subsystems to multiple hosts, that is, to
`
`connect aggregations of storage devices and controllers, was also well known by the early 1990s.
`
B. Storage Interconnects and Controllers

12. The various components of a computer system, including the storage system,
`
`must be able to communicate with each other as well as with other computer systems. Hardware
`
`mechanisms such as buses permit such communication across a short distance, such as within a
`
`printed circuit board or within a rack of hardware. A bus is a single electrical pathway,
`
`potentially made up of several parallel conductors, that is shared among several system hardware
`
`components. Buses typically are associated with interconnection distances in the range of a few
`
`meters. To communicate across a distance of tens or hundreds of meters, such as within a
`
`computer machine room or within an office building, other interconnection technologies are
`
`used, such as channels or local-area networks (LAN). A channel describes the signal carrying
`
`pathway and operational protocols for interconnecting storage (and other input/output devices) to
`
`a mainframe computer. LAN is a more general technology, in particular in the nature of the
`
`protocols used, for interconnecting computers at a distance. Beyond the distance of a small
`
`number of kilometers, wide-area network (WAN) technology is used. The distinction between
`
`LAN and WAN is mainly in the different switching/signaling mechanisms and signal carrying
`
`technologies used; the protocols remain largely the same.
`
13. The computer systems found in the enterprise computing environment fall into
`
`three broad categories: the personal computers (PC) or workstations to be found on the desks of
`
`individual workers; the file servers managing documents and other electronic files shared by the
`
`community of users; and the mainframe computers upon which major enterprise applications
`
`(such as order entry or general ledger applications built on top of sophisticated database systems)
`
`are executing. Enterprise computing exhibits a mixture of all three kinds of machines. The
`
`software systems and methods of storage system attachment in computer systems may be
`
`radically different in these machine classes.
`
14. By the 1990s, systems that provided a host computer access to storage across a
`
`computer network were well known. The concept behind the file server model—a network-
`
`connected subsystem allowing stored data to be shared among many client machines—had long
`
`been embraced by storage system architects. In the early 1990s, storage system architectures
`
`arose that made use of standard storage system interconnects, such as SCSI, to attach multiple
`
`host computers to a shared storage system through a single storage controller. The controller in
`
`turn made use of such interconnections to interface storage devices directly to the controller.
`
15. Typically, there are two alternative ways to interconnect computers to shared
`
`storage across a network: through the storage system via a storage area network (SAN); or
`
`through a more general LAN or WAN. SAN emerged as a storage architecture based on a
`
`switched serial interconnection between hosts and the storage controller. SAN is an architecture
`
`that allows a storage subsystem to be shared among multiple mainframe host computers by
`
`providing multiple channel interfaces. The storage subsystem provides the illusion of a
`
`collection of logical units to the hosts to which it is attached. It is responsible for mapping
`
`block-level input-output operations presented by the host on logical devices into input-output
`
`operations against the actual physical devices attached to the storage subsystem.
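The logical-to-physical mapping performed by such a subsystem can be sketched as follows. This is a hypothetical illustration; the device names and extent sizes are invented for the example.

```python
# Each logical unit is assembled from extents (slices) of physical disks.
# Entries are (physical_disk, starting_physical_block, length_in_blocks).
logical_units = {
    0: [("disk_a", 0, 1000), ("disk_b", 0, 1000)],  # LUN 0 spans two disks
    1: [("disk_c", 500, 2000)],                     # LUN 1 is part of one disk
}

def resolve(lun: int, logical_block: int):
    """Map a block-level operation on a logical device to the physical
    device and physical block that actually hold the data."""
    for disk, start, length in logical_units[lun]:
        if logical_block < length:
            return disk, start + logical_block
        logical_block -= length
    raise ValueError("block beyond end of logical unit")

# The host addresses only the logical device; the subsystem finds the disk:
assert resolve(0, 1500) == ("disk_b", 500)
assert resolve(1, 10) == ("disk_c", 510)
```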
`
16. SAN provides direct and local area interconnection to a large-scale storage
`
`controller using interconnection technologies evolved from mainframe channel technology. The
`
`mainframe may use either proprietary or open system interconnections. An example of the
`
`former is Enterprise Storage Connection (ESCON), developed for IBM mainframes. Fibre
`
`Channel (FC) is an example of the latter, intended for vendor-independent interconnections
`
`among a broader category of machines and subsystems. The technology of FC was developed in
`
`the late 1980s and standardized by the early 1990s as a higher bandwidth and longer distance
`
`method of interconnection. A SAN can be constructed from FC links and switches that
`
`interconnect hosts to a shared storage controller.
`
17. By the 1990s, there were storage controllers that served as an intermediary
`
`between potentially many host machines (i.e., workstations/PCs) and potentially many disk
`
`drives. Modern storage controllers support multiple host interfaces, each based on optical
`
`interconnections, allowing the storage resource to be shared among many host machines in a
`
`SAN. Such storage controllers export to the attached hosts a logical image of the disk drives
`
`under the storage controller’s control. The logical images are often called logical units, and may
`
`be assigned a Logical Unit Number (LUN). The controller assigns these logical images onto the
`
`attached physical drives that are divided into physical extents (i.e., a contiguous region of
`
`physical disk) and physical blocks. The host “sees” a logical drive divided into logical extents
`
`each consisting of a sequence of logical blocks. A SAN-attached subsystem looks like a
`
`collection of logical devices to the hosts to which it is attached; the host machines are
`
`responsible for constructing a file system on top of these logical devices.
`
18. Controllers can include buffer memories, providing a mechanism for operational
`
`speed matching and asynchronous decoupling between host and storage. For example, if writes
`
`come in a short burst from the host at speeds faster than the storage devices can handle them,
`
`they could be placed in the buffer memory, waiting to be staged to disk. Likewise, if data comes
`
`back from disk at a faster rate than the host can manage, the data can be held in the buffer until
`
`the host is ready to accept it.
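The speed matching and asynchronous decoupling described in this paragraph can be sketched as a simple bounded buffer. This is a hypothetical illustration of the concept, not an implementation from any party's product.

```python
from collections import deque

class ControllerBuffer:
    """Buffer memory that decouples host and storage: a burst of host
    writes is accepted at host speed, then staged to disk at disk speed."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pending = deque()

    def host_write(self, block_no: int, data: bytes) -> bool:
        """Accept the write immediately if space remains, so the host
        need not wait for the slower disk; False means host must throttle."""
        if len(self.pending) >= self.capacity:
            return False
        self.pending.append((block_no, data))
        return True

    def stage_to_disk(self, disk_write) -> int:
        """Drain buffered writes to the disk at the disk's own pace."""
        count = 0
        while self.pending:
            disk_write(*self.pending.popleft())
            count += 1
        return count

buf = ControllerBuffer(capacity=4)
burst = [buf.host_write(n, b"\x00" * 512) for n in range(5)]
assert burst == [True, True, True, True, False]  # fifth write exceeds buffer
written = []
assert buf.stage_to_disk(lambda n, d: written.append(n)) == 4
assert written == [0, 1, 2, 3]
```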
`
`III. The Patents-in-Suit
`
19. The four Patents-in-Suit share the same title (“Storage Router and Method for
`
`Providing Virtual Local Storage”), named inventors, and specification. The ’035 patent was
`
`filed September 27, 2001 and issued July 23, 2002. The ’147 patent was filed September 9, 2003
`
`and issued May 23, 2006. The ’041 patent was filed January 20, 2010 and issued April 26, 2011.
`
`The ’311 patent was filed October 22, 2010 and issued July 26, 2011. All four of the Patents-in-
`
Suit are continuations of the application that issued as U.S. Patent No. 5,941,972 (the “’972
`
`patent,” which was filed December 31, 1997).
`
20. The common specification of the Patents-in-Suit purports to describe the
`
`invention of a “storage router” that “is a bridge device that connects a Fiber Channel link directly
`
`to a SCSI bus and enables the exchange of SCSI command set information between application
`
`clients on SCSI bus devices and the Fiber Channel links.” E.g., the ’035 patent, 5:34-38. As
`
`depicted in Figure 3 of the specification, “a Fiber Channel high speed serial interconnect 52 and
`
`a SCSI bus 54 [are] bridged by a storage router 56. Storage router 56 of FIG. 3 provides for a
`
`large number of workstations 58 to be interconnected on a common storage transport and to
`
`access common storage devices 60, 62 and 64 through native low level, block protocols.” E.g.,
`
`id. at 3:64-4:6. The storage router implements “controls and routing such that each workstation
`
`58 can have access to a specific subset of the overall data stored in storage devices 60, 62 and 64.
`
`This specific subset of data has the appearance and characteristics of local storage and is referred
`
`to herein as virtual local storage.” E.g., id. at 4:7-13. The storage router “combines access
`
`control with routing such that each workstation 58 has controlled access to only the specified
`
`partition of storage device 62 which forms virtual local storage for the workstation 58.” E.g., id.
`
`at 4:7-13.
`
21. Further, according to a disclosed embodiment, the storage router maintains a
`
`“configuration” that “maps between” workstations connected to a Fiber Channel and SCSI
`
`storage devices connected to a SCSI bus and that “implements access controls for storage space”
`
`on the SCSI storage devices. E.g., id. at 2:19-22. The storage router then allows access from
`
`Fiber Channel initiator devices to SCSI storage devices using native low level, block protocol “in
`
`accordance with” the configuration or mapping. E.g., id. at 2:12-14, 2:19-26. “This can be
`
`implemented to allow all generic FCP [Fibre Channel Protocol] and SCSI commands to pass
`
`through the storage router to address attached devices.” E.g., id. at 7:19-21. The specification
`
`claims that the benefit of this is to “centralize local storage for networked workstations without
`
`any cost of speed or overhead,” where the storage devices “can be located in a significantly
`
`remote position.” E.g., id. at 2:27-33.
`
IV. Summary of Opinions
`
A. Person of Ordinary Skill in the Art

22. I have been asked to consider the level of ordinary skill in the art at the time of the
`
Patents-in-Suit. I understand that a person of ordinary skill is a hypothetical person ordinarily (as
`
`opposed to expertly) skilled in the art relating to the subject matter of the invention and who is
`
`presumed to be aware of all pertinent prior art.
`
23. As described above, the subject matter of the Patents-in-Suit involves storage
`
`system architectures. It is my opinion therefore that someone of ordinary skill in the field at the
`
`time of the Patents-in-Suit likely would be an engineer with experience equivalent to an
`
`undergraduate degree in electrical engineering, computer science, or computer engineering, with
`
`two years of industrial experience designing and implementing hardware and software for
`
`storage subsystems.
`
`24. My opinion as to the level of ordinary skill in the art is based upon my personal
`
`knowledge and experience, and my consideration of such things as the level of education and
`
`experience of persons of skill working in the field, the sophistication of the technology, and the
`
`rapidity with which innovations are made in this field.
`
25. I have been informed that the plaintiff alleges that the asserted claims of the
`
`Patents-in-Suit were conceived as early as May 1997. I have considered the level of ordinary
`
`skill in the art at the time in and around that date as well as the filing dates of the Patents-in-Suit
`
`and the applications to which the Patents-in-Suit allegedly claim priority, and my opinions are
`
`the same regardless of which date is the appropriate priority date. Moreover, regardless of which
`
`date is chosen, I personally had at least the level of ordinary skill in the art at that time.
`
B. “Map” / “Mapping” (’035, ’147, ’041, and ’311 Patents)

26. It is my opinion that one of ordinary skill in the art would understand the term
`
`“map” / “mapping” as used in all of the claims of the Patents-in-Suit to refer to creating a
`
`designated path for block-level communications from a device on one side of the storage router
`
`to a remote storage device on the other side of the router. I also agree that a “map,” as used in
`
`the claims of the Patents-in-Suit, contains a representation of devices on each side of the storage
`
`router, so that when a device on one side of the storage router wants to communicate via block-
`
`level communications with a device on the other side of the storage router, the storage router can
`
`designate a path to connect the devices by routing requests and data between the devices.
`
`Specifically, as discussed below, it is my opinion that “map” / “mapping” as claimed in the
`
`Patents-in-Suit requires at least three aspects: (1) a designated path, i.e., a path that is established
`
`and known by the storage router prior to controlling access requests; (2) the path is specified for
`
`block-level communications, i.e., data and commands are addressed to the block-level location
`
`on the target device; and (3) the path enables the storage router to route requests and data
`
`between the devices, i.e., pass requests and data from a device on one side of the storage router
`
`through to a device on the other side of the storage router.
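The three aspects identified above can be illustrated with a hypothetical mapping table (my own sketch; the host and device names are invented): the paths are designated before any request arrives, requests are addressed at the block level, and the router both controls and routes access in accordance with the map.

```python
# Designated paths, established before any access request is received:
# host on the first transport medium -> (storage device, partition start,
# partition length) on the second transport medium.
storage_map = {
    "workstation_1": ("scsi_disk_0", 0, 4096),
    "workstation_2": ("scsi_disk_0", 4096, 4096),  # a different partition
}

def route_block_request(host: str, block: int):
    """Allow and route a block-level request only in accordance with the
    pre-established map; anything outside the map is denied."""
    if host not in storage_map:
        raise PermissionError("no designated path for this host")
    device, start, length = storage_map[host]
    if not 0 <= block < length:
        raise PermissionError("block outside the host's allocated partition")
    return device, start + block  # physical target of the routed request

# Each workstation sees only its own partition, as if it were local storage:
assert route_block_request("workstation_1", 10) == ("scsi_disk_0", 10)
assert route_block_request("workstation_2", 10) == ("scsi_disk_0", 4106)
```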
`
27. First, it is my opinion that the “map” referred to in the claims of the Patents-in-
`
`Suit comprises a set of designated path(s) between each host device on one side of the storage
`
`router and, on the other side of the storage router, the particular remote storage device(s) to
`
`which the host device has access. The paths in the map must be designated so as to enable the
`
`storage router to route and control access from host devices to storage devices in accordance
`
`with the map. Put differently, one of ordinary skill in the art at the time of the Patents-in-Suit
`
`would not understand the term “map” / “mapping” in the claims to refer to a path that is not
`
`designated, is only transiently designated for each access request, or is designated only after each
`
`access request is received by the storage router, because one of ordinary skill in the art would
`
`understand that the storage router cannot control access from hosts to remote storage devices in
`
`accordance with the map, as required by the claims, unless the path from host to remote storage
`
`device in the map is designated prior to the time that the access is requested.
`
`28. My opinion is based in part on the claim language itself. For example, the
`
`apparatus claims of the ’035 patent all require a storage router “operable to map between”
`
`“devices” or “workstations” “connected to” a “first transport medium,” on the one hand, and
`
`“remote” “storage devices” “connected to” a “second transport medium,” on the other hand.
`
`E.g., ’035 patent, claims 1 & 7. The plain language of the claims and the context of the other
`
claim language make clear that this “map” is intimately linked to the other requirements of the
`
`claims, namely to allow the storage router “to implement access controls for storage space on the
`
`storage devices” and to “allow access from devices [or workstations] connected to the first
`
`transport medium to the storage devices” on the “second transport medium” “in accordance with
`
`the mapping.” E.g., id. Similarly, the method claims of the ’035 patent require performing a
`
`step of “mapping between devices connected to the first transport medium,” on the one hand, to
`
`“storage devices connected to another transport medium,” on the other hand. E.g., ’035 patent,
`
`claim 11. The context of the other claim language further establishes that “mapping” is a
`
`prerequisite for “implementing access controls for storage space on the storage devices” and
`
`“allowing access from devices connected to the first transport medium to the storage devices,” as
`
`required by the claims. E.g., id. The same is true of all of the other claims of the Patents-in-Suit.
`
`See, e.g., ’147 patent, claims 1, 6, 10 & 34 (requiring a “configuration for remote storage devices
`
`connected to [a] second Fibre Channel transport medium” “that maps between” “Fibre Channel
`
`initiator devices” and the remote storage devices and “that implements access controls,” in order
`
`“to allow access from” the initiator devices to the remote storage devices “in accordance with the
`
`configuration”); id., claims 21 & 28 (requiring an “access control device operable to” “map
`
`between” “at least one device connected to [a] first transport medium” and “storage space on”
`
“at least one storage device connected [to a] second transport medium,” in order to “control access
`
`from the at least one device to the at least one storage device…in accordance with the map”);
`
`’041 patent, claims 1, 20 & 37 (requiring “maintain[ing] a map to allocate storage space on []
`
`remote storage devices to devices connected to [a] first transport medium by associating
`
`representations of the devices connected to the first transport medium with representations of
`
`storage space on the remote storage devices,” in order to “control access from the devices
`
`connected to the first transport medium to the storage space on the remote storage devices in
`
`accordance with the map”).
`
29. In sum, the plain language and context of the claims confirm that the storage
`
`router must “maintain” a “map” that contains a specific “association” between host devices on
`
`one side of the storage router and particular remote storage devices on the other side of the
`
`storage router, so that the storage router can “allow” and “control” “access” between the devices
`
`“in accordance with” the “map.” One of ordinary skill in the art at the time of the Patents-in-Suit
`
`therefore would understand from such claim language and context that to “map” requires
`
`specifying a designated path between the host devices and remote storage devices. One of
`
`ordinary skill in the art at the time furthermore would understand that, in order for the storage
`
`router to “allow” and “control” access “in accordance with” such a map, such paths must be
`
`fixed and known by the storage router prior to performing any such access control.
`
`30. My opinion is also based on the shared specification of the Patents-in-Suit. The
`
`specification describes the “map” as designated paths between the host devices and remote
`
`storage devices that permit the storage router to control access “in accordance with” the map.
`
`See, e.g., ’035 patent, 2:8-13 (“The storage router maps between the workstations and the SCSI
`
`storage devices and implements access controls for storage space on the SCSI storage devices.
`
`The storage router then allows access from the workstations to the SCSI storage devices…in
`
`accordance with the mapping.”); id. at 2:19-26 (“A configuration is maintained for SCSI storage
`
`devices connected to the SCSI bus transport medium. The configuration maps between Fiber
`
`Channel devices and the SCSI storage devices….Access is then allowed from Fiber Channel
`
`initiator devices to SCSI storage devices…in accordance with the configuration.”). The
`
`specification confirms that, to do so, the map must designate the path from each host to the
`
`particular storage device, or portion thereof, to which that host has access. See, e.g., ’035 patent,
`
`8:67-9:3 (“The storage router can use tables to map, for each initiator, what storage access is
`
`available and what partition is being addressed by a particular request.”).
`
31. The specification states that the purpose of providing the map is to allow the
`
`centralized control and administration of storage space using the storage router. See, e.g., ’035
`
`patent, 2:34-35 (A “technical advantage of the present invention is the ability to centrally control
`
`and administer storage space.”); id. at 4:13-16 (“Storage router 56 allows the configuration and
`
`modification of the storage allocated to each attached workstation 58 through the use of mapping
`
`tables or other mapping techniques.”); id. at 4:48-51 (“Storage router 56 provides centralized
`
`control of what each workstation 58 sees as its local drive, as well as what data it sees as global
`
`data accessible by other workstations 58.”). As discussed further below, a person of ordinary
`
`skill in the art