`
`
`
`
`iSCSI
`The Universal
`Storage Connection
`
John L. Hufferd

Addison-Wesley
`Boston • San Francisco • New York • Toronto • Montreal
`London • Munich • Paris • Madrid • Capetown
`Sydney • Tokyo • Singapore • Mexico City
`
`
`
Many of the designations used by manufacturers and sellers to distinguish their
products are claimed as trademarks. Where those designations appear in this book, and
Addison-Wesley was aware of a trademark claim, the designations have been printed
with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make
no expressed or implied warranty of any kind and assume no responsibility for errors
or omissions. No liability is assumed for incidental or consequential damages in
connection with or arising out of the use of the information or programs contained
herein.
`
The publisher offers discounts on this book when ordered in quantity for bulk
purchases and special sales. For more information, please contact:

U.S. Corporate and Government Sales
(800) 382-3419
corpsales@pearsontechgroup.com

For sales outside of the U.S., please contact:

International Sales
(317) 581-3793
international@pearsontechgroup.com

Visit Addison-Wesley on the Web: www.awprofessional.com
`
Library of Congress Cataloging-in-Publication Data
Hufferd, John L.
iSCSI : the universal storage connection / John L. Hufferd.
p. cm.
Includes bibliographical references and index.
ISBN 0-201-78419-X (pbk. : alk. paper)
1. iSCSI (Computer network protocol) 2. Computer networks--Management.
3. Computer storage devices. I. Title.
TK5105.5677 .H82 2003
004.6'068-dc21

2002026086
`
Copyright © 2003 by Pearson Education, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted, in any form, or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the prior consent of the publisher.
Printed in the United States of America. Published simultaneously in Canada.

For information on obtaining permission for use of material from this work, please
submit a written request to:

Pearson Education, Inc.
Rights and Contracts Department
75 Arlington Street, Suite 300
Boston, MA 02116
Fax: (617) 848-7047

ISBN 0-201-78419-X
Text printed on recycled paper.
1 2 3 4 5 6 7 8 9 10-CRS-0605040302
First printing, October 2002
`
`
`
`
`Contents
`
Credits and Disclaimer  xiii
`
`Preface xvii
`
Chapter 1  The Background of SCSI  1
  SCSI Bus Interconnect  1
  Fibre Channel Interconnect  5
  iSCSI Interconnect  7
  File Servers and NAS  11
  Chapter Summary  11
`
Chapter 2  The Value and Position of iSCSI  13
  To the Reader  15
  The Home Office  15
  The Home Office and Serial ATA Drives  17
  The Small Office  18
  The Midrange  23
  The High End  27
  The Campus  27
  The Satellite  30
  The At-Distance Site  31
  The Central Site  33
  FC and iSCSI  36
  Chapter Summary  36
`
`
`
`
`
`
Chapter 3  The History of iSCSI  41
  To the Reader  41
  SCSI over TCP/IP  41
  Measurements  43
  Cisco and IBM's Joint Effort  45
  iSCSI and IETF  45
  The End of the Story  46
  Chapter Summary  47
`
Chapter 4  An Overview of iSCSI  49
  To the Reader  49
  TCP/IP  50
  TCP/IP Summary  52
  iSCSI-Related Protocol Layers  52
  Protocol Summary
  Sessions  56
  Session Summary  57
  Protocol Data Unit (PDU) Structure  58
  PDU Structure Summary  60
  iSCSI and TOE Integration on a Chip or HBA  60
  TOE Integration Summary  61
  Checksums and CRC (Digests)  61
  Checksum and CRC Digest Summary  63
  Naming and Addressing  63
  Details of Naming and Addressing  64
  Naming and Addressing Summary  67
  Chapter Summary  68
`
Chapter 5  Session Establishment  71
  To the Reader  71
  Introduction to the Login Process  71
  Login and Session Establishment  73
  Login PDUs  74
  The Login Request PDU  75
  The Login Response PDU  77
  iSCSI Sessions  80
  Authentication Routines  81
  Login Keywords  81
  Keywords and the Login Process  82
`
`
`
`
  Discovery Session  88
  Chapter Summary  90
`
Chapter 6  Text Commands and Keyword Processing  93
  To the Reader  93
  Text Requests and Responses  93
  PDU Fields  94
  Text Keywords and Responses  95
  Rules for Key=Value Pairs  96
  Rules for Keyword Value Negotiation  97
  Rules for Negotiation Flow  99
  Rules for Negotiation Failure  101
  Chapter Summary  102
`
Chapter 7  Session Management  103
  To the Reader  103
  Initiator Session ID  103
  Connection Establishment  106
  Data Travel Direction  108
  Sequencing  108
  Resending Data or Status  112
  Recap  116
  Chapter Summary  117
`
Chapter 8  Command and Data Ordering and Flow  119
  To the Reader  119
  Command Ordering  119
  Command Windowing  122
  Initiator Task Tag  124
  Design Example: Direct Host Memory Placement  124
  Data Ordering  126
  Target Transfer Tag  132
  Data Placement (A Form of RDMA)  134
  Chapter Summary  135
`
Chapter 9  Structure of iSCSI and Relationship to SCSI  139
  To the Reader  139
  iSCSI Structure and SCSI Relationship  139
`
`
`
`
`
  SCSI Nexus  147
  Chapter Summary  149
`
Chapter 10  Task Management  151
  To the Reader  151
  Tagged and Untagged Tasks  151
  Chapter Summary  156
`
Chapter 11  Error Handling  157
  To the Reader  157
  Error Recovery Levels  158
  Error Recovery Level 0  159
  Error Recovery Level 1  160
  Header Digest Recovery at the Initiator Side  161
  Header Digest Recovery at the Target Side  163
  Data Digest Recovery  165
  Error Recovery Level 2  166
  Chapter Summary  168
`
Chapter 12  Companion Processes  171
  To the Reader  171
  Boot Process  171
  Discovery Process  172
  Discovery Using Administrative Specifications  173
  Discovery Using SendTargets  173
  Discovery Using the Service Location Protocol  176
  Discovery Using iSNS  179
  Security Process  182
  To the Reader  184
  IPsec Features  184
  Access Control Lists  187
  MIB and SNMP  187
  Chapter Summary  189
`
Chapter 13  Synchronization and Steering  193
  To the Reader  193
  Main Memory Replacement  194
  Errors and Congestion  194
  Missing TCP Segments and Marking  195
  Fixed-Interval Markers  196
  FIM Pointers  196
  Marker Implementation  197
  FIM Synchronization Scheme  197
  TCP Upper-Level-Protocol Framing (TUF)  199
  The TUF Scheme  199
  The TUF Header  200
  Advantages and Disadvantages  200
  TUF/FIM  201
  Chapter Summary  202
`
Chapter 14  iSCSI Summary and Conclusions  205
  To the Reader  205
  Summary  205
  iSCSI Development History  210
  Conclusions  211
  iSCSI Network Management  212
  Ease of Administration  212
  Backup and Disaster Preparation  213
  Performance  215
  The Future  216
  Summary of Conclusions  216
`
Appendix A  iSCSI Function PDUs  219
  Serial Number Arithmetic  219
  Asynchronous Message PDU  220
  Login Request PDU  224
  ISID, TSIH, and CID Values  230
  Login Response PDU  231
  Logout Request PDU  237
  Notes on the Logout Request PDU  238
  Implicit Termination of Tasks  240
  Logout Response PDU  241
  NOP-In PDU  244
  NOP-Out PDU  247
  Ready to Transfer (R2T) PDU  250
  Notes on the R2T PDU  252
  Reject PDU  253
  Notes on the Reject PDU  255
`
`
`
`
`
  SCSI (Command) Request PDU  257
  SCSI (Command) Response PDU  262
  SCSI Data-In PDU  268
  SCSI Data-Out PDU  273
  SNACK Request PDU  276
  Resegmentation  278
  Notes on the SNACK Request PDU  279
  Task Management Function Request PDU  281
  Notes on the Task Management Function Request PDU  283
  Task Management Function Response PDU  285
  Notes on the Task Management Function Response PDU  287
  Text Request PDU  289
  Text Response PDU  293
`
Appendix B  Keys and Values  297
  AuthMethod  298
  AuthMethod Keys  298
  DataDigest  299
  DataPDUInOrder  299
  DataSequenceInOrder  299
  DefaultTime2Retain  299
  DefaultTime2Wait  299
  ErrorRecoveryLevel  300
  FirstBurstLength  300
  HeaderDigest  300
  IFMarker  300
  IFMarkInt  300
  ImmediateData  301
  InitialR2T  301
  InitiatorAlias  301
  InitiatorName  301
  MaxBurstLength  301
  MaxConnections  302
  MaxOutstandingR2T  302
  MaxRecvDataSegmentLength  302
  OFMarker  302
  OFMarkInt  302
  SendTargets  303
  SessionType  303
`
`
`
`
`
  TargetAddress  303
  TargetAlias  303
  TargetName  303
  TargetPortalGroupTag  304
  X-<VendorSpecificKey>  304
  X#<IANA-registered-string>  304
`
Appendix C  SCSI Architecture Model  305
  SCSI-iSCSI Mappings  305
  Consequences of the Model  306
  I-T Nexus State  307
  SCSI Mode Pages  308
`
Appendix D  Numbers, Characters, and Bit Encodings  309
  Text Format  309
`
`Appendix E Definitions 313
`
`Appendix F Acronyms 317
`
Appendix G  References and Web Pointers  323
  Basic References for iSCSI  323
  References for SCSI-Related Items  325
  References for iSCSI Security and IPsec/IKE  325
  References That Indirectly Affect iSCSI  326
`
`Index 329
`
`
`
`
`
`The Background of SCSI
`
WHEN IT COMES TO ATTACHING storage to computers, ATA (AT
attachment) is the most prevalent method. The characters AT, which stand
for "advanced technology," come from the name of the first 80286-based
IBM PC. ATA drives are found mostly on desktop systems and laptops.
Higher-end systems, often called "servers," utilize a connection technique
called SCSI (Small Computer System Interface) parallel bus architecture.
These systems may have several such SCSI buses attached to them. The more
SCSI buses that can be effectively connected to a system, the higher the data
input/output (I/O) capabilities of that system.
`
`SCSI Bus Interconnect
`
`A SCSI bus permits hard disks, tape drives, tape libraries, printers, scanners,
`CD-ROMs, DVDs, and the like to be connected to server systems. It can be
`considered a general interconnection technique that permits devices of many
`different types to interoperate with computer systems. (See Figure 1-1.)
`The protocol used on the SCSI bus is the SCSI Protocol. It defines how
`the SCSI device can be addressed, commanded to perform some operation,
and give or take data to or from the (host) computing system. The operational
commands are defined by a data structure called a command description
block (CDB). For example, a read command would have a CDB that
`contained an "opcode" defined by the protocol to mean, "read." It would
`also contain information about where to get the data (e.g., the block location
`on the disk) and miscellaneous flags to further define the operation.
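To make the CDB idea concrete, here is a small C sketch (my own illustration based on the standard SCSI READ(10) layout, not an example from this book): byte 0 carries the opcode that the protocol defines to mean "read," bytes 2 through 5 carry the big-endian block location, and bytes 7 and 8 carry the transfer length, with the remaining bytes holding the miscellaneous flag and control fields mentioned above.

```c
/*
 * Illustrative sketch only (standard READ(10) layout, not code from this book).
 * A 10-byte CDB: opcode, big-endian logical block address, transfer length,
 * plus flag/control bytes.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SCSI_OP_READ10 0x28  /* opcode the protocol defines to mean "read" */

static void build_read10_cdb(uint8_t cdb[10], uint32_t lba, uint16_t blocks)
{
    memset(cdb, 0, 10);
    cdb[0] = SCSI_OP_READ10;         /* operation code                        */
    cdb[2] = (uint8_t)(lba >> 24);   /* block location on the disk (LBA),     */
    cdb[3] = (uint8_t)(lba >> 16);   /* stored big-endian in bytes 2-5        */
    cdb[4] = (uint8_t)(lba >> 8);
    cdb[5] = (uint8_t)lba;
    cdb[7] = (uint8_t)(blocks >> 8); /* transfer length in blocks, bytes 7-8  */
    cdb[8] = (uint8_t)blocks;
    /* bytes 1, 6, and 9 carry miscellaneous flags, group number, and control */
}

int main(void)
{
    uint8_t cdb[10];
    build_read10_cdb(cdb, 0x1234, 8);  /* read 8 blocks starting at LBA 0x1234 */
    for (int i = 0; i < 10; i++)
        printf("%02x ", cdb[i]);
    printf("\n");
    return 0;
}
```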
`The protocol that defines how a SCSI bus is operated also defines how to
address the various units to which the CDB will be delivered. Generally, the
addressing is performed by presenting the address on the hardware lines of
the SCSI bus. This addressing technique calls out a particular SCSI device, which
`
`
`
`
`
Figure 1-1 Small computer system interface (SCSI).
`
may then be subdivided into one or more logical units (LUs). An LU is an
abstract concept that can represent various real objects such as tapes, printers,
and scanners.
Each LU is given an address. This is a simple number called the logical
unit number (LUN). Thus, the SCSI protocol handles the addressing of both
the SCSI device and the LU. (Note: "LUN," though technically incorrect,
will often be used when "LU" is meant.) Servers may connect to many SCSI
buses; in turn the SCSI buses can each connect to a number of SCSI devices,
and each SCSI device can contain a number of LUs (8, 16, 32, etc.). Therefore,
the total number of SCSI entities (LUs) attached to a system can be very
large. (See Figure 1-2.)
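As a rough illustration of how these addresses compose (the structure and the counts below are hypothetical, chosen only to show the arithmetic), a host reaches an LU through a bus number, a target (SCSI device) ID on that bus, and a LUN within that target:

```c
/* Illustrative only: the counts below are hypothetical, not SCSI limits. */
#include <stdio.h>

struct scsi_address {
    unsigned bus;     /* which SCSI bus on the host            */
    unsigned target;  /* which SCSI device (target) on the bus */
    unsigned lun;     /* which logical unit within that device */
};

int main(void)
{
    struct scsi_address example = { 2, 5, 3 };   /* bus 2, target 5, LUN 3 */
    printf("example address: bus %u, target %u, LUN %u\n",
           example.bus, example.target, example.lun);

    /* How the LU count multiplies out for one hypothetical server. */
    unsigned buses = 4, targets_per_bus = 15, luns_per_target = 8;
    printf("total addressable LUs: %u\n",
           buses * targets_per_bus * luns_per_target);
    return 0;
}
```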
`The next thing to consider is what happens when many computers are in
`the same location. If there are numerous disks (LUs) for each system, this
configuration creates a very large grouping of storage units. Many installations
group their servers and storage separately and put appropriately trained
personnel in each area. These people are usually skilled in handling issues
`with either the computer system or the storage.
`One of the most prevalent issues for the storage specialist is supplying the
`proper amount of storage to the appropriate systems. As systems are actually
`used, the amount of storage originally planned for them can vary-either too
`
`
`
`
`
Figure 1-2 Host processors can have many SCSI buses.
`
much or too little. Taking storage from one system's SCSI bus and moving it
to another system's SCSI bus can be a major disruptive problem, often requiring
rebooting of the various systems. Users want a pool of storage, which can
`be assigned in a nondisruptive manner to the servers as need requires.
`Another issue with the SCSI bus is that it has distance limitations varying
`from 1.5 to 25 meters, depending on the bus type (yes, there are multiple
`types). The bus type has to be matched with the requirements of the host and
`the SCSI (storage) devices (often called storage controllers), which seriously
`limits the amount of pooling a SCSI bus can provide.
`Further, many SCSI bus storage devices can have no more than one bus
connected to them, and unless high-end storage devices are used, one generally
has at most two SCSI bus connections per storage device. In that case the
`storage devices have at most two different host systems that might share the
`various LUs within the SCSI devices. (See Figure 1-3.)
`Often the critical host systems want a primary and a secondary connection
`to the storage devices so that they have an alternate path in case of connection
or bus failure. This results in additional problems for systems that want alternate
paths to the storage and, at the same time, share the storage controllers
`with other hosts (which might be part of a failover-capable cluster).
`Often an installation requires a cluster made up of more than two hosts,
`and it uses a process called file sharing via a shared file system (e.g., Veritas
`Clustered File System) or a shared database system (e.g., Oracle Cluster
`
`
`
`
`
Figure 1-3 Two hosts sharing one storage control unit.
`
`Database). Often this is not possible without the expense of a mainframe/
`enterprise-class storage controller, which usually permits many SCSI bus
`connections but brings the installation into a whole new price range. (See
Figure 1-4.)
`
Figure 1-4 Pooled storage via SCSI connections.
`
`
`
`
`
Fibre Channel Interconnect
`
`Understanding the problems with SCSI led a number of vendors to create a
`new interconnection type known as Fibre Channel. In this technology the
`SCSI CDBs are created in the host system, as they were in SCSI bus systems;
`however, the SCSI bus is replaced with a physical "fibre channel" connection
and a logical connection to the target storage controller.
The term "logical connection" is used because Fibre Channel (FC)
components can be interconnected via hubs and switches. These interconnections
make up a network and thus have many of the characteristics
found in any network. The FC network is referred to as an FC storage area
network (SAN). However, unlike in an Internet Protocol (IP) network, basic
management capability is missing in Fibre Channel. This is being rectified,
but the administrator of an IP network cannot now, and probably never
will be able to, use the same network management tools on an FC network
that are used on an IP network. This requires duplicate training costs for the
FC network administrator and the IP network administrator. These costs are
in addition to the costs associated with the actual storage management duties
`of the storage administrator.
I have had many storage customers request that storage be set up on an
IP network, for which they have trained personnel. (See Figure 1-5.) This
request comes from the fact that FC networking has not been taught in colleges
and universities.* People with FC skills are generally taught by the vendor
or by specialty schools, which are paid for by their company. This is a
very expensive burden that must be borne by the customer of FC equipment.
The more storage shipped that is FC connected, the more ruthless the
demand for trained personnel. Without universities providing trained graduates,
companies will keep hiring people away from each other.
Some people minimize this point and then go further and state that storage
has different needs from other products located on a general IP network.
This is true; however, those needs are in addition to the management of the
actual network fabric. Fibre Channel needed to invent general FC network
fabric management as well as storage management. It is the fabric management
that people have been wishing were the same for both storage and the
general IP network.
`
`*There is at least one important exception: the University of New Hampshire, which has
become an important center for interoperability testing for Fibre Channel (and recently for
`iSCSI).
`
`
`
`
`
Figure 1-5 Have versus want.
`
`Universities have not been training students because of a combination of
`factors:
`
`1. Fibre Channel does not yet replace any other curriculum item.
`
`2. Storage interconnect is seen as a specialty area.
`
`3. Few instructors have expertise in storage and storage interconnects.
`
`4. Many university servers are not FC connected.
`
5. The processors used by professors are not likely to be FC
connected.
`
That the main university servers are not Fibre Channel connected is a problem
currently being addressed. However, the professors' local systems, which
have significant budget issues, will probably be the last to be updated.
`There is another solution to the problem of training, and that is the
`hiring of service companies that plan and install the FC networks. These
`
`
`
`
`
`companies also train customers to take over the day-to-day operations, but
`remain on call whenever needed to do fault isolation or to expand the network.
`Service companies such as IBM Global Services (IGS) and Electronic Data
`Systems (EDS) are also offering ongoing operation services.
`The total cost of ownership (TCO) with Fibre Channel is very high
compared to that with IP networks. This applies not only to the price of
FC components, which are significantly more expensive than corresponding
IP components, but also to operation and maintenance. The cost of training
personnel internally or hiring a service company to operate and maintain the
FC network is a significant addition to the TCO.
It is important to understand that storage networks have management
needs that are not present in direct-attach SCSI. The fact that Fibre Channel
has suffered through the creation of many of these new storage management
functions (e.g., host LUN masking, shared tape drives and libraries) means
that IP storage networks can exploit these same management functions without
having to create them from scratch.
`
`iSCSI Interconnect
`
The iSCSI (Internet SCSI) protocol was created in order to reduce the TCO
of shared storage solutions by reducing the initial outlay for networking,
training, and fabric management software. To this end a working group
within the IETF (Internet Engineering Task Force) Standards Group was
established.
iSCSI has the capability to tie together a company's systems and storage,
which may be spread across a campus-wide environment, using the company's
interconnected local area networks (LANs), also known as intranets.
This applies not only to the company's collection of servers but also to their
desktop and laptop systems.
Desktops and laptops can operate with iSCSI on a normal 100-megabit-per-second
(Mb/s) Ethernet link in a manner that is often better than "sawing
across"* their own single-disk systems. Additionally, many desktop systems
can exploit new "gigabit copper" connections such as the 10/100/1000BaseT
Ethernet links. The existing wiring infrastructure that most companies have is
Category 5 (Cat. 5) Ethernet cable. The new 1000BaseT network interface
`
`*"Sawing" is a term used to describe the action of the voice coil on a disk drive that moves
`the recording heads back and forth across the sector of the disk. The resultant noise often
`sounds like sawing.
`
`
`
`
`
`cards (NICs) are able to support gigabit speeds on the existing Cat. 5 cables. It
`is expected that the customer will, over time, replace or upgrade his desktop
`system so that it has 1000BaseT NICs. In this environment, if the desktops can
`operate effectively at even 300 Mb/s, the customer will generally see better
`response than is possible today with normal desktop ATA drives-without
having to operate at full gigabit speeds.
Data suggest that 500MHz Pentium systems can operate the normal host
TCP/IP (Transmission Control Protocol over Internet Protocol) stacks at
100 Mb/s using less than 10% of CPU resources. These resources will hardly
be missed if the I/O arrives in a timely manner. Likewise we can expect the
desktop systems shipping in the coming year and beyond to be on the order
of 1.5 to 3 GHz. This means that, for 30 megabyte-per-second (MB/s) I/O
requirements (approximately 300 Mb/s), desktop systems will use about the
same, or less, processor time as they previously consumed on 500MHz desktop
systems using 100 Mb/s links (less than 10%). Most users would be very
happy if their desktops could sustain an I/O rate of 30 MB/s. (Currently
desktops average less than 10 MB/s.)
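The scaling argument can be made concrete with a back-of-the-envelope calculation (my own sketch, assuming TCP/IP overhead grows linearly with link rate and shrinks linearly with CPU clock rate, which is only a first-order approximation):

```c
/* Back-of-the-envelope sketch; assumes TCP/IP cost scales linearly with
 * link rate and inversely with CPU clock rate (a first-order model only). */
#include <stdio.h>

static double tcpip_cpu_load(double mbps, double cpu_mhz)
{
    /* Reference point from the text: roughly 10% of a 500 MHz CPU at 100 Mb/s. */
    const double ref_load = 0.10, ref_mbps = 100.0, ref_mhz = 500.0;
    return ref_load * (mbps / ref_mbps) * (ref_mhz / cpu_mhz);
}

int main(void)
{
    printf("100 Mb/s on a 500 MHz CPU: %.1f%%\n", 100.0 * tcpip_cpu_load(100.0, 500.0));
    printf("300 Mb/s on a 1.5 GHz CPU: %.1f%%\n", 100.0 * tcpip_cpu_load(300.0, 1500.0));
    printf("300 Mb/s on a 3 GHz CPU:   %.1f%%\n", 100.0 * tcpip_cpu_load(300.0, 3000.0));
    return 0;
}
```

With these assumptions, 300 Mb/s on a 1.5 GHz processor lands at about the same 10% that 100 Mb/s cost on a 500 MHz processor, and about half that on a 3 GHz processor.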
`The important point here is that iSCSI for desktops and laptops makes
`sense even if no special hardware is dedicated to its use. This is a significant
`plus for iSCSI versus Fibre Channel, since Fibre Channel requires special
hardware and is therefore unlikely to be deployed on desktop and laptop systems.
(See Figure 1-6.)
`The real competition between Fibre Channel and iSCSI will occur on
server-class systems. These systems are able to move data (read and write) at
`
Figure 1-6 iSAN for desktops and laptops.
`
`
`
`
`
up to 2Gb/s speeds. These FC connections require special FC chips and host
bus adapters (HBAs). As a rule, these HBAs are very expensive (compared to
NICs), but they permit servers to send their SCSI CDBs to SCSI target devices
and LUs at very high speed and at very low processor overhead. Therefore, if
iSCSI is to be competitive in the server environment, it too will need specially
built chips and HBAs. Moreover, these chips and HBAs will need to have
TCP/IP offload engines (TOEs) along with the iSCSI function. The iSCSI
function can be located in the device driver, the HBA, or the chip, and, in
one way or another, it will need to interface directly with the TOE and
thereby perform all the TCP/IP processing on the chip or HBA, not on the
host system.
Some people believe that the price of FC networks will fall to match that
of IP networks. I believe that will not occur for quite a while, since most FC
sales are at the very high end of the market, where they are very entrenched.
It therefore seems foolish for them to sacrifice their current profit margins,
fighting for customers in the middle to low end of the market (against iSCSI),
where there are no trained personnel anyway. I believe that FC prices will go
down significantly when iSCSI becomes a threat at the market high end,
`which won't happen for some time.
`Studies conducted by IBM and a number of other vendors have concluded
`that iSCSI can perform at gigabit line speed, with overheads as low as those of
Fibre Channel, as long as it has iSCSI and TCP/IP hardware assist in HBAs or
chips. It is expected that the price of gigabit-speed iSCSI HBAs will be significantly
lower than that of FC HBAs. It is also felt that two 1Gb iSCSI HBAs
`will have a significantly lower combined price than current 2Gb FC HBAs.
`Even though iSCSI HBAs and chips will be able to operate at link speed,
it is expected that their latency will be slightly higher than that of Fibre
Channel. This difference is considered to be less than 10 microseconds,
which, when compared to the time for I/O processing, is negligible. iSCSI's
greater latency is caused by the greater amount of processing to be done
within the iSCSI chip to support TCP. Thus, there is some impact from the
additional work needed, even if supported by a chip. A key future vendor
value-add will be how well a chip is able to parallelize its processes and thus
reduce the latency. This is not to say that the latency of iSCSI chips will be
unacceptable. In fact, it is believed that it will be small enough not to be noticeable
in most normal operations.
Another important capability of iSCSI is that it will be able to send I/O
commands across the Internet or a customer's dedicated wide area networks
(WANs). This will be significant for applications that require tape.
`
`
`
`
`
An odd thing about tape is that almost everyone wants to be able to use it
(usually for backup) but almost no one wants the tape library nearby. iSCSI
provides interconnection to tape libraries at a great distance from the host
that is writing data to them. This permits customers to place their tape libraries in
`secure backup centers, such as "Iron Mountain." A number of people have
`said that this "at distance" tape backup will be iSCSI's killer app.
At the bottom line, iSCSI is all about giving the customer the type of interconnect
to storage that they have been requesting: a network-connected storage
configuration made up of components that the customer can buy from
many different places, whose purchase price is low, and whose operation is
familiar to many people (especially computer science graduates). They also
get a network they can configure and operate via standard network management
tools, thereby keeping the TCO low. Customers do not have to invest
in a totally new wiring installation, and they appreciate the fact that they can
use Cat. 5 cable, which is already installed. They like the way that iSCSI can
`seamlessly operate, not only from server to local storage devices but also
`across campuses as well as remotely via WANs.
These customers can use iSCSI to interconnect remote sites, which permits
mirrored backup and recovery capability, as well as a remote connection
to their tape libraries. (See Figure 1-7.) On top of all that, iSCSI will be
`operating on low-end systems and on high-end systems with performance as
`good as what FC networks can provide. If that is not enough, it also comes
`
Figure 1-7 Remote iSAN, WAN, and tape.
`
`
`
`
`
with built-in Internet Protocol security (IPsec), which the customer can enable
whenever using unsecured networks.
It is no wonder that customers, consultants, and vendors are singing the
praises of iSCSI; it is a very compelling technology.
`
`File Servers and NAS
`
`For over a decade now, there has been the concept of file serving. It begins with
`the idea that a host can obtain its file storage remotely from the host system.
Sun defined a protocol called Network File System (NFS) that was designed to
`operate on the IP network. IBM and Microsoft together defined a protocol
`based on something they called Server Message Block (SMB). Microsoft called
`its version LAN Manager; IBM called its version LAN Server.
`The original SMB protocol ran only on small local networks. It was
unable to operate seamlessly with the Internet and hence was generally limited
to small LANs. Microsoft updated SMB to make it capable of operating
`on IP networks. It is now called the Common Internet File System (CIFS).
`It should be noted that Novell created a file server protocol to compete
`with IBM and Microsoft.
`A file server protocol places a file system "stub" on each host, which acts
`as a client of the target file server. Like a normal file system, the file system
stub is given control by the OS; however, it simply forwards the host's file
`system request to the remote file server for handling. The actual storage is at
`the file server.
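The forwarding idea can be sketched in a few lines of C (purely illustrative; the request layout and function names are invented here and are not the NFS or CIFS wire formats): the same read interface is implemented once against local storage and once as a stub that only packages the call for a remote file server.

```c
/* Conceptual sketch only: the request layout and names are invented for
 * illustration; real protocols such as NFS and CIFS are far richer. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct file_read_request {    /* what the stub would ship to the file server */
    char     path[64];
    uint64_t offset;
    uint32_t length;
};

/* A local file system would service the read itself. */
static int local_read(const char *path, uint64_t off, uint32_t len)
{
    printf("local fs: read %u bytes at offset %llu from %s\n",
           (unsigned)len, (unsigned long long)off, path);
    return 0;
}

/* The file-server "stub" does no disk I/O of its own: it just packages the
 * request and would forward it to the remote file server over the network. */
static int remote_stub_read(const char *path, uint64_t off, uint32_t len)
{
    struct file_read_request req;
    memset(&req, 0, sizeof req);
    strncpy(req.path, path, sizeof req.path - 1);
    req.offset = off;
    req.length = len;
    printf("stub: forwarding read of %s to the file server\n", req.path);
    /* send(sock, &req, sizeof req, 0); then wait for the server's reply */
    return 0;
}

int main(void)
{
    local_read("/data/report.txt", 0, 4096);        /* direct-attach case   */
    remote_stub_read("/data/report.txt", 0, 4096);  /* NAS/file-server case */
    return 0;
}
```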
`File serving began as a means to share files between peer computer systems,
but users soon started dedicating systems to file serving only. This was the
beginning of what we now call a network attached storage (NAS) appliance.
Various vendors started specializing in NAS appliances, and today this is a very
hot market. These appliances generally support NFS protocols, CIFS protocols,
Novell protocols, or some combination. Since NASs operate on IP networks,
many people see them as an alternative to iSCSI (or vice versa). In some
`ways they are, but they are significantly different, which makes one better than
`the other in various environments. We will cover these areas later in this book.
`
Chapter Summary
`
In this chapter we discussed the various types of hard drives and the types of
interconnect they have with the host systems. We also discussed their applicable
environments and their limitations. This information is highlighted below.
`
`
`
`
`
• There are two main hard drive types available today:
`
`> ATA (used in desktop and laptop systems)
`
> SCSI (used in server-class systems)
`
`• SCSI drives are connected to a host via a SCSI bus and use the SCSI
`protocol.
`
• The SCSI command description block (CDB) is a key element of the
SCSI protocol.
`
`• The real or logical disk drive that the host talks to is a logical unit (LU).
`
`• The SCSI protocol gives each addressable LU a number, or LUN.
`
`• SCSI bus distance limitations vary from 1.5 to 25 meters depending on
`the type of cable needed by the host or drive.
`
• Non-enterprise storage controllers usually have only one or two SCSI
`bus connections.
`
`• Enterprise storage controllers usually have more than two SCSI bus
`connections.
`
`• Clustering servers without using enterprise-class storage systems is often
difficult (especially if each host wants to have more than one connection
`to a storage controller).
`
`• Fibre Channel (FC) connections solve many interconnection problems,
`but bring their own management problems.
`
`• Fibre Channel requires its own fabric management software and cannot
`use the standard IP network management tools.
`
`• Fibre Channel needs software to manage the shared storage pool to
`prevent systems from stepping on each other.
`
`• FC networks are generally considered to have a high TCO (total cost of
`ownership).
`
`• FC HBAs, chips, and switches are generally considered to be expensive
`(especially when compared to IP network NICs and switches).
`
`• Personnel trained in Fibre Channel are scarce, and companies are
`pirating employees from each other.
`
`
`
`
`
`• Universities in general are not teaching Fibre Cha