US007707263B1

(12) United States Patent
     Cramer et al.

(10) Patent No.: US 7,707,263 B1
(45) Date of Patent: Apr. 27, 2010

(54) SYSTEM AND METHOD FOR ASSOCIATING A NETWORK ADDRESS WITH A STORAGE DEVICE

(75) Inventors: Samuel M. Cramer, Sunnyvale, CA (US); Joydeep Sen Sarma, Mountain View, CA (US); Richard O. Larson, Cupertino, CA (US)

(73) Assignee: NetApp, Inc., Sunnyvale, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 764 days.

(21) Appl. No.: 10/138,918

(22) Filed: May 3, 2002

(51) Int. Cl.
     G06F 15/16 (2006.01)

(52) U.S. Cl.: 709/212; 709/203; 709/201; 709/223; 709/220; 709/204

(58) Field of Classification Search: 709/203, 201, 220, 223, 204, 212
     See application file for complete search history.

(56) References Cited

     U.S. PATENT DOCUMENTS

     5,163,131 A    11/1992  Row et al.
     5,355,453 A    10/1994  Row et al.
     5,485,579 A     1/1996  Hitz et al.
     5,802,366 A     9/1998  Row et al.
     5,819,292 A    10/1998  Hitz et al.
     5,854,901 A *  12/1998  Cole et al. .............. 709/245
     5,931,918 A     8/1999  Row et al.
     5,941,972 A     8/1999  Hoese et al.
     5,963,962 A    10/1999  Hitz et al.
     6,065,037 A     5/2000  Hitz et al.
     6,243,759 B1 *  6/2001  Boden et al. ............. 709/238
     6,289,356 B1    9/2001  Hitz et al.
     6,425,035 B2    7/2002  Hoese et al.
     6,574,667 B1 *  6/2003  Blumenau et al. .......... 709/229
     6,581,102 B1 *  6/2003  Amini et al. ............. 709/231
     6,591,306 B1 *  7/2003  Redlich .................. 709/245
     6,621,820 B1 *  9/2003  Williams et al. ....... 370/395.31
     6,636,499 B1 * 10/2003  Dowling .................. 370/338
     6,718,383 B1 *  4/2004  Hebert ................... 709/224
     6,810,396 B1 * 10/2004  Blumenau et al. ............ 707/5
     6,870,852 B1 *  3/2005  Lawitzke ................. 370/401
     6,920,580 B1    7/2005  Cramer et al.
     7,079,499 B1 *  7/2006  Akhtar et al. ............ 370/310

     (Continued)

     OTHER PUBLICATIONS

Request for Comments (RFC) 826: An Ethernet Address Resolution Protocol, Internet Engineering Task Force (IETF), Nov. 1982.

     (Continued)

Primary Examiner: Thu Nguyen
Assistant Examiner: Lan-Dai T. Truong
(74) Attorney, Agent, or Firm: Cesari and McKenna, LLP

(57) ABSTRACT

A system and method for associating a network address with a volume, individual disk or collection of disks in a network storage system. Identifying information is stored on each volume or disk so that a file server can map a network address to a MAC address associated with a particular network interface controller of a file server. Input/output operations directed to the network address associated with a particular volume or disk are directed to the file server that is currently managing that volume or disk. In a system utilizing these address-associated volumes, the name and address of data does not change as different file servers manage a particular volume or disk.

13 Claims, 11 Drawing Sheets

[Cover figure, FIG. 4: flowchart 400 for configuring IP volumes: identify the next disk to configure; if it is an IP volume (404), obtain an IP address for the storage device (408), map the IP address to a NIC (410), and advertise the IP address over the appropriate NICs (412); if more devices remain to configure (414), repeat; otherwise complete (416).]
US 7,707,263 B1
Page 2

     U.S. PATENT DOCUMENTS

        7,296,068 B1    11/2007  Sarma et al.
     2002/0023150 A1 *   2/2002  Osafune et al. ........... 709/221
     2002/0120706 A1 *   8/2002  Murphy ................... 709/208
     2002/0138628 A1 *   9/2002  Tingley et al. ........... 709/227
     2002/0143946 A1 *  10/2002  Crosson .................. 709/226
     2002/0147774 A1 *  10/2002  Lisiecki et al. .......... 709/203
     2002/0165906 A1 *  11/2002  Ricart et al. ............ 709/203
     2003/0023784 A1 *   1/2003  Matsunami et al. .......... 710/36
     2003/0088700 A1 *   5/2003  Aiken .................... 709/245
     2003/0101109 A1 *   5/2003  Kaneda et al. ............. 705/28
     2003/0115324 A1 *   6/2003  Blumenau et al. .......... 709/225
     2003/0120743 A1      6/2003  Coatney et al.
     2003/0126118 A1 *   7/2003  Burton et al. .............. 707/3
     2003/0131207 A1 *   7/2003  Arakawa et al. ........... 711/162
     2003/0177206 A1 *   9/2003  Whitlow .................. 709/220
     2006/0265529 A1 *  11/2006  Kuik et al. ............... 710/62
     2007/0012931 A1 *   1/2007
     2007/0112931 A1 *   5/2007  Kuik et al. .............. 709/216

     OTHER PUBLICATIONS

Common Internet File System (CIFS) Version: CIFS-Spec 0.9, Storage Networking Industry Association (SNIA), Draft SNIA CIFS Documentation Work Group Work-in-Progress, Revision Date: Mar. 26, 2001.

* cited by examiner
[Drawing Sheet 1 of 11, FIG. 1: block diagram of exemplary network environment 100: LAN 102 interconnecting PCs 104, servers 106, a network cache 107, a switch/router 108 providing a gateway to the Internet 109, and a management console; filers Filer 1 (110) and Filer 2 (112) are attached through a switching network 120 to storage volumes 122.]
[Drawing Sheet 2 of 11, FIG. 2: schematic block diagram of exemplary file server 110: processor 222, memory 224, NVRAM 260, network adapter 226 (to/from LAN 102) and storage adapter 228 (to/from switching network 120), interconnected by system bus 225.]
[Drawing Sheet 3 of 11, FIG. 3: schematic block diagram of the exemplary storage operating system; the individual labels are not legible in this copy.]
[Drawing Sheet 4 of 11, FIG. 4: flowchart of the process to configure volumes as IP volumes: identify the next disk to configure (402); if it is an IP volume (404), obtain an IP address for the storage device (408), map the IP address to a NIC (410), and advertise the IP address over the appropriate NICs (412); repeat while more devices remain to configure (414).]
[Drawing Sheet 5 of 11, FIG. 5: exemplary network environment 500 with Filer 1 (110) and Filer 2 (112), network interfaces E0, E1, E2, E4A and E4B, the ENG and MKT networks (502, 504), and a volume 510 carrying an on-volume table 515.]
[Drawing Sheet 6 of 11, FIG. 6 (elements 502, 504, 602) and FIG. 7 (elements 702, 704): logical interface mapping tables listing the ENG-NICS and MKT-NICS interface groups and interfaces such as E4A and E4B.]
[Drawing Sheet 7 of 11, FIG. 8: exemplary on-volume table 515 with fields for volume name (805), network address (810), and logical interfaces to be advertised on (815).]
[Drawing Sheet 8 of 11, FIG. 9: exemplary network environment 900 showing clients, the ENG-NICS and MKT-NICS interface groups (interfaces E1 and E2), an IP volume 910, and a switching network.]
[Drawing Sheet 9 of 11, FIG. 10: exemplary logical interface mapping table 1000 listing the ENG-NICS group (1005) with interface E0.]
[Drawing Sheet 10 of 11, FIG. 11: exemplary logical interface mapping table listing the ENG-NICS group (1105) with interface E1 and the MKT-NICS group (1110) with interface E2.]
[Drawing Sheet 11 of 11, FIG. 12: exemplary network address to logical name mapping table entry 1205 associating volume VOL1 with network address 1.2.3.4 and the ENG-NICS interface group.]
SYSTEM AND METHOD FOR ASSOCIATING A NETWORK ADDRESS WITH A STORAGE DEVICE

FIELD OF THE INVENTION

The present invention relates to networked storage systems, and more particularly to accessing files in a networked storage system.
BACKGROUND OF THE INVENTION

A file server is a computer that provides file service relating to the organization of information on storage devices, such as disks. The file server or filer includes a storage operating system that implements a file system to logically organize the information as a hierarchical structure of directories and files on the disks. Each "on-disk" file may be implemented as a set of data structures, e.g., disk blocks, configured to store information. A directory, on the other hand, may be implemented as a specially formatted file in which information about other files and directories is stored.

A filer may be further configured to operate according to a client/server model of information delivery to thereby allow many clients to access files stored on a server, e.g., the filer. In this model, the client may comprise an application, such as a database application, executing on a computer that "connects" to the filer over a computer network, such as a point-to-point link, shared local area network (LAN), wide area network (WAN), or virtual private network (VPN) implemented over a public network such as the Internet. Each client may request the services of the file system on the filer by issuing file system protocol messages (in the form of packets) to the filer over the network.

A common type of file system is a "write in-place" file system, an example of which is the conventional Berkeley fast file system. In a write in-place file system, the locations of the data structures, such as inodes and data blocks, on disk are typically fixed. An inode is a data structure used to store information, such as meta-data, about a file, whereas the data blocks are structures used to store the actual data for the file. The information contained in an inode may include, e.g., ownership of the file, access permission for the file, size of the file, file type and references to locations on disk of the data blocks for the file. The references to the locations of the file data are provided by pointers, which may further reference indirect blocks that, in turn, reference the data blocks, depending upon the quantity of data in the file. Changes to the inodes and data blocks are made "in-place" in accordance with the write in-place file system. If an update to a file extends the quantity of data for the file, an additional data block is allocated and the appropriate inode is updated to reference that data block.
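By way of illustration, a minimal Python sketch of such an inode structure might look like the following; the field names and helper functions are illustrative assumptions for explanation only, not any actual on-disk format:

    # Simplified, illustrative model of a write in-place inode.
    from dataclasses import dataclass, field
    from typing import List, Optional

    BLOCK_SIZE = 4096  # bytes per data block (illustrative)

    @dataclass
    class Inode:
        owner: str                     # ownership of the file
        mode: int                      # access permissions
        size: int                      # size of the file in bytes
        ftype: str                     # file type, e.g. "regular" or "directory"
        direct_blocks: List[int] = field(default_factory=list)  # on-disk block numbers
        indirect_block: Optional[int] = None                    # block holding further pointers

    def blocks_needed(file_size: int) -> int:
        # Number of fixed-size data blocks required to hold the file.
        return (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE

    def extend_file(inode: Inode, new_size: int, allocate_block) -> None:
        # Write in-place growth: allocate additional blocks and update the inode
        # to reference them; existing block locations remain fixed.
        while len(inode.direct_blocks) < blocks_needed(new_size):
            inode.direct_blocks.append(allocate_block())
        inode.size = new_size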
Another type of file system is a write-anywhere file system that does not overwrite data on disks. If a data block on disk is retrieved (read) from disk into memory and "dirtied" with new data, the data is stored (written) to a new location on disk to thereby optimize write performance. A write-anywhere file system may initially assume an optimal layout such that the data is substantially contiguously arranged on disks. The optimal disk layout results in efficient access operations, particularly for sequential read operations, directed to the disks. A particular example of a write-anywhere file system that is configured to operate on a filer is the Write Anywhere File Layout (WAFL™) file system available from Network Appliance, Inc. of Sunnyvale, Calif. The WAFL file system is implemented within a microkernel as part of the overall protocol stack of the filer and associated disk storage. This microkernel is supplied as part of Network Appliance's Data ONTAP™ software, residing on the filer, that processes file service requests from network-attached clients.
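By way of illustration, the minimal sketch below (assumed structures, not WAFL's actual internals) contrasts the two update styles: an in-place update rewrites the same block location, while a write-anywhere update allocates a new block and returns the new location for the file's pointers to reference:

    # Conceptual contrast between write in-place and write-anywhere updates.
    class Disk:
        def __init__(self):
            self.blocks = {}        # block number -> data bytes
            self.next_free = 0

        def allocate(self) -> int:
            n = self.next_free
            self.next_free += 1
            return n

    def update_in_place(disk, block_no, new_data):
        # Write in-place: the block's on-disk location never changes.
        disk.blocks[block_no] = new_data
        return block_no

    def update_write_anywhere(disk, block_no, new_data):
        # Write-anywhere: dirty data goes to a newly allocated location; the
        # caller (e.g. an inode or indirect block) must point at the new block.
        new_block_no = disk.allocate()
        disk.blocks[new_block_no] = new_data
        return new_block_no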
As used herein, the term "storage operating system" generally refers to the computer-executable code operable on a storage system that manages data access, and, in the case of filers, implements file system semantics (such as the above-referenced WAFL). In this sense, ONTAP software is an example of such a storage operating system implemented as a microkernel. The storage operating system can also be implemented as an application program operating over a general-purpose operating system, such as UNIX or Windows NT®, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.
Disk storage is typically implemented as one or more storage "volumes" that comprise physical storage disks, defining an overall logical arrangement of storage space. Currently available filer implementations can serve a large number of discrete volumes (150 or more, for example). Each volume is associated with its own file system and, for purposes hereof, volume and file system shall generally be used synonymously. The disks within a volume are typically organized as one or more groups of Redundant Array of Independent (or Inexpensive) Disks (RAID). RAID implementations enhance the reliability/integrity of data storage through the redundant writing of data "stripes" across a given number of physical disks in the RAID group, and the appropriate caching of parity information with respect to the striped data. In the example of a WAFL file system, a RAID 4 implementation is advantageously employed. This implementation specifically entails the striping of data across a group of disks, and separate parity caching within a selected disk of the RAID group. As described herein, a volume typically comprises at least one data disk and one associated parity disk (or possibly data/parity partitions in a single disk) arranged according to a RAID 4, or equivalent high-reliability, implementation.
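As a concrete illustration of the parity arrangement, the following minimal sketch (illustrative only, not any product's RAID code) computes a stripe's parity block as the bytewise XOR of its data blocks, so that a single lost block can be rebuilt from the survivors; in a RAID 4 layout all such parity blocks reside on one dedicated parity disk:

    # Minimal RAID 4 parity illustration: parity = XOR of the data blocks in a stripe.
    from functools import reduce

    def xor_blocks(blocks):
        # Bytewise XOR of equal-length blocks.
        return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

    def parity_for_stripe(data_blocks):
        return xor_blocks(data_blocks)

    def rebuild_missing(surviving_blocks, parity_block):
        # A single missing data block is the XOR of the parity with the survivors.
        return xor_blocks(surviving_blocks + [parity_block])

    # Example: three data disks plus one parity disk.
    d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
    p = parity_for_stripe([d0, d1, d2])
    assert rebuild_missing([d0, d2], p) == d1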
More than one filer can reside on a single network (LAN, WAN, etc.), for access by network-connected clients and servers. Where multiple filers are present on the network, each filer may be assigned responsibility for a certain set of volumes. The filers may be connected in a cluster using a separate physical interconnect or linking communication protocol that passes over the network (e.g. the LAN, etc.). In the event of a failure or shutdown of a given filer, its volume set can be reassigned to another filer in the cluster to maintain continuity of service. In the case of a shutdown, various failover techniques are employed to preserve and restore file service, as described generally in commonly owned U.S. patent application Ser. No. 09/933,883 entitled OPERATOR INITIATED GRACEFUL TAKEOVER IN A NODE CLUSTER by Naveen Bali et al. and U.S. patent application Ser. No. 09/625,234 entitled NEGOTIATING TAKEOVER IN HIGH AVAILABILITY CLUSTER by Samuel M. Cramer et al., the teachings of which are expressly incorporated herein by reference. Such techniques involve (a) the planned and unplanned takeover of a filer's volumes by a cluster partner filer upon filer shutdown; and (b) the giveback of the taken-over volumes by relinquishing control by the cluster partner filer. A management station can also reside on the network, as a specialized client that includes storage management software used by a system administrator to manipulate and control the storage-handling by networked filers.
A filer can be made more reliable and stable in the event of a system shutdown or other unforeseen problem by employing a backup memory consisting of a non-volatile random access memory (NVRAM). An NVRAM is typically a large-volume solid-state memory array (RAM) having either a backup battery, or other built-in last-state-retention capabilities (e.g. a FLASH memory), that holds the last state of the memory in the event of any power loss to the storage system.
A filer is typically made more reliable and stable in the event of a system shutdown or unforeseen problem by employing a backup memory consisting of a non-volatile random access memory (NVRAM). An NVRAM is typically a large-volume, solid-state memory array (RAM) having either a backup battery, or other built-in last-state-retention capabilities (e.g. a FLASH memory), that holds the last state of the memory in the event of any power loss to the storage system. In a known implementation, each client transaction request processed by the storage operating system is logged to the NVRAM as a journal entry. The NVRAM is loaded with requests until such time as a consistency point (CP) is reached. CPs occur at fixed time intervals, or when other key events arise. In the event of a fault, power loss, or other interruption in the normal flow of information among the client, storage operating system, and the disks, the NVRAM log is replayed to re-perform any requests logged therein for its own filer (and an associated cluster partner filer, if any) between the last CP and an interruption in storage handling. In addition, the log is replayed during reboot. Each time a CP occurs, the requests logged in the NVRAM are subsequently overwritten or otherwise cleared, once the results of the requests are written from the filer's conventional RAM buffer cache to disk. Immediately thereafter, the NVRAM is available for the logging of new requests.
In the event of a shutdown, power failure or other system problem, which interrupts the normal flow of information among the client, storage operating system, and the disks, the NVRAM can be used to recover information logged since the last CP prior to the interruption event.
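The journaling cycle described above can be illustrated with a minimal sketch; the class and method names below are assumptions made for explanation, not Data ONTAP internals:

    # Conceptual sketch of NVRAM request journaling with consistency points (CPs).
    class NvramJournal:
        def __init__(self):
            self.entries = []            # requests logged since the last CP

        def log(self, request):
            # Each client transaction request is journaled before it is acknowledged.
            self.entries.append(request)

        def consistency_point(self, flush_to_disk):
            # At a CP the buffered results are written to disk, after which the
            # journal entries can be cleared and the NVRAM reused for new requests.
            flush_to_disk(list(self.entries))
            self.entries.clear()

        def replay(self, apply_request):
            # After a fault or reboot, requests logged since the last CP are
            # re-performed to bring the file system up to date.
            for request in self.entries:
                apply_request(request)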
Computer networks are typically composed of computers that communicate with each other through specific protocols. Each computer or device attached to a network is assigned at least one network-unique name, i.e. a network address. Any device in the network can manage files that are stored on devices which are locally accessible to that device. Additionally, a network device could require access to files that are managed by other network devices within the network.
In known file system implementations, local files, those files which are stored on devices locally accessible to a network device, are identified by a device name and a file name. For example, in a Microsoft Windows® compatible device, a file may be identified by "C:/foo/bar/file.doc." In this example the device is called C:, which is the name of a disk connected to the Microsoft Windows machine. The file name identifies the directory structure and individual file to be accessed. In this example, the file "file.doc" is stored in the "bar" subdirectory of the "foo" directory. This combination produces a unique file identifier for those files which are locally accessible. Files which are accessed via the network, i.e. non-local files, are identified by the device name and file name as local files are, but are further identified by the address of the computer or network device that manages that file. This combination of network address, device name, and file name produces a network-unique identifier for a particular file.
As part of a network storage system, clients request the services of a filer's file system, i.e. making a data access request, by issuing file system protocol messages (in the form of packets) to the filer over the network. The filer fulfills a data access request by locating and packaging the data, and then sending the data through the network back to the requesting client. In known implementations, the data access request identifies the data through an identifier that contains the network address of the filer, for example "1.2.3.4:C:/foo/bar/file.doc." In this example, the file named "file.doc" is located in the "/foo/bar" directory on drive C: of the filer whose network address is 1.2.3.4. The data that the client has requested could be physically located anywhere in the network, on any of the possible storage devices accessible through any file server.
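By way of illustration, the following sketch pulls such an identifier apart into its network address, device name and file path; the helper name and returned fields are assumptions made for explanation only:

    # Illustrative parsing of a network-qualified file identifier of the form
    # "<network address>:<device>:<path>", e.g. "1.2.3.4:C:/foo/bar/file.doc".
    def parse_identifier(identifier: str):
        address, _, remainder = identifier.partition(":")
        device, _, path = remainder.partition(":")
        return {"network_address": address, "device": device + ":", "path": path}

    print(parse_identifier("1.2.3.4:C:/foo/bar/file.doc"))
    # {'network_address': '1.2.3.4', 'device': 'C:', 'path': '/foo/bar/file.doc'}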
A noted disadvantage of prior implementations arises for a client when a filer is taken out of the network, and its network address becomes unavailable to other computers or network devices connected to the network. It is possible to reconnect the storage devices that a particular file server manages to another filer, but in doing so, the data identification must change to the network address of the new filer in order to properly locate the data managed by the filer which has been taken out of the network. This change of the data identification can be inconvenient and time-consuming, especially considering that a filer can be taken off of the network for many reasons, including filer upgrade, scheduled maintenance, or unexpected system failure. The ability to transparently access network data before, during, and after storage device reconfiguration to a new filer could vastly increase configuration flexibility and data accessibility, and could decrease downtime resulting from filer or other server failure.
In addition, with the growing demand for electronic file availability comes the challenge of backing up and restoring storage devices that can have extremely large storage capacities. To reduce the downtime associated with the back-up/restore operation, administrators often resort to imposing limits on storage and enforcing quotas. Faster back-up/restore capabilities and more flexibility in terms of adding storage devices can both reduce downtime and decrease the number of restrictions imposed on users of the system. Accordingly, the potential inaccessibility of data described above is another disadvantage that impedes these advantageous goals.

Accordingly, it is an object of the invention to provide a system and method for associating network addresses with volumes so that clients can seamlessly and transparently address I/O operations to the specified network address for a given volume without concern for the network address of the file server managing the particular volume.
SUMMARY OF THE INVENTION

This invention overcomes the disadvantages of the prior art by providing a system and method for identification of a storage device by an IP address, or similar network-based address. Thus, each storage device is network accessible by all devices connected to the network without regard to the identity of the storage device's file management system. The storage device is associated with one or more IP addresses. According to an aspect of the invention, the IP address and other configuration information is stored at a predetermined location on the storage device itself for access by filers, thereby permitting the storage device to be portable among various filers. Each filer or other network device in a network obtains the IP address of an IP volume. I/O operations to that volume are then directed to the IP address of the particular volume. This IP addressing permits the filer that is managing a particular volume to be changed so that the transfer is transparent to a client accessing data on a volume.
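By way of illustration, a minimal sketch of reading such identifying information from a predetermined location on the volume might look like the following; the offset, record layout and field names are assumptions, not the actual on-disk format:

    # Illustrative on-volume configuration record stored at a predetermined
    # location, so any filer that attaches the volume can discover its network
    # identity. The JSON layout and offset are assumptions for this sketch.
    import json

    CONFIG_OFFSET = 0          # assumed predetermined location on the volume
    CONFIG_LENGTH = 4096       # assumed fixed-size configuration region

    def read_volume_config(device_path: str) -> dict:
        with open(device_path, "rb") as dev:
            dev.seek(CONFIG_OFFSET)
            raw = dev.read(CONFIG_LENGTH).rstrip(b"\x00")
        # Example record: {"volume_name": "vol1", "ip_address": "1.2.3.4",
        #                  "advertise_on": ["ENG-NICS"]}
        return json.loads(raw)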
In accordance with one embodiment of the invention, the network address of a volume is mapped to the media access control (MAC) address of a network interface controller (NIC) of the file server that is currently managing the volume. Thus, I/O requests that are directed to the network address of the volume are redirected to the appropriate file server for processing.
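A minimal sketch of that mapping, under assumed data structures rather than the patent's implementation, is shown below: the filer currently managing a volume binds the volume's network address to the MAC address of one of its own NICs, so that I/O requests addressed to the volume reach whichever filer is serving it:

    # Illustrative mapping of a volume's network address to the MAC address of
    # a NIC on the filer currently managing it. Data structures are assumptions.
    class Filer:
        def __init__(self, name, nics):
            self.name = name
            self.nics = nics                  # e.g. {"e1": "00:0c:29:aa:bb:02"}
            self.address_map = {}             # volume IP -> (nic, MAC)

        def take_over_volume(self, volume_ip, nic):
            # Bind the volume's IP address to one of this filer's NICs. In a real
            # system the new binding would also be advertised to the network (for
            # example with a broadcast ARP announcement) so that clients' I/O
            # requests are redirected here transparently.
            self.address_map[volume_ip] = (nic, self.nics[nic])

    filer2 = Filer("Filer2", {"e1": "00:0c:29:aa:bb:02"})
    filer2.take_over_volume("1.2.3.4", "e1")  # I/O to 1.2.3.4 now reaches Filer2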
BRIEF DESCRIPTION OF THE DRAWINGS

The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements:

FIG. 1 is a block diagram of an exemplary network environment;

FIG. 2 is a schematic block diagram of an exemplary file server;

FIG. 3 is a schematic block diagram of an exemplary storage operating system;

FIG. 4 is a flowchart of the process to configure volumes as IP volumes in accordance with the invention;

FIG. 5 is a schematic block diagram of an exemplary network environment in which the principles of the present invention are performed;

FIG. 6 is a block diagram of a logical interface mapping table in accordance with the present invention;

FIG. 7 is a block diagram of a logical interface mapping table in accordance with the present invention;

FIG. 8 is a block diagram of an exemplary on-volume table;

FIG. 9 is a block diagram of an exemplary network environment showing an IP volume connected to a switching network;

FIG. 10 is a block diagram of an exemplary logical interface mapping table in accordance with the present invention;

FIG. 11 is a block diagram of an exemplary logical interface mapping table in accordance with the present invention; and

FIG. 12 is a block diagram of an exemplary network address to logical name mapping table in accordance with the present invention.
DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT

A. Networks and File Servers
FIG. 1 is a schematic block diagram of an exemplary network environment 100 in which the principles of the present invention are implemented. The network 100 is based around a local area network (LAN) interconnection 102. However, a wide area network (WAN), virtual private network (VPN) implementation (utilizing communication links over the Internet, for example), or a combination of LAN, WAN and VPN implementations can be established. For the purposes of this description, the term "LAN" should be taken broadly to include any acceptable networking architecture. The LAN 102 interconnects various clients based upon personal computers (PCs) 104, servers 106 and a network cache 107. Other network-based devices such as X-terminals, workstations and the like can also be connected to the LAN. Also interconnected to the LAN may be a switch/router 108 that provides a gateway to the well-known Internet 109, thereby enabling various networked devices to transmit and receive internet-based information, including e-mail, web content, and the like.
In addition, exemplary file servers (or "filers") 110 and 112 (Filer 1 and Filer 2, respectively) are connected to the LAN. These filers (described further below) are configured to control storage of, and access to, data in a set of interconnected storage volumes 122.
Each of the devices attached to the LAN includes an appropriate, conventional network interface arrangement (not shown) for communicating over the LAN using desired communication protocols, such as the well-known Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP) or Simple Network Management Protocol (SNMP).
While the invention is described herein in reference to a filer or a cluster of filers, the teachings of this invention can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client/host computer. The term "storage system" should therefore be taken broadly to include such arrangements. It is expressly contemplated that the various processes, architectures and procedures described herein can be implemented in hardware, firmware or software, consisting of a computer-readable medium including program instructions that perform a series of steps. Moreover, while the invention is described herein in reference to file-based storage, it can be employed in other data storage schemas, such as block-based storage systems. Additionally, it should be noted that the term "volume" can also be defined to include a logical unit number (LUN) that designates one or more disks.
Note that each filer can also be provided with a graphical user interface/console 150 so that instructions from an operator can be entered directly to the filer while generally bypassing the LAN or other network.
An exemplary file server, or filer, architecture is now described in further detail. FIG. 2 is a more-detailed schematic block diagram of the exemplary file server 110 (Filer 1) implemented as a network storage appliance, such as the NetApp® filer available from Network Appliance, that can execute the above-described Data ONTAP™ software and is advantageously used with the present invention. Other filers can have similar construction (including exemplary Filer 2 (112)). By way of background, a network storage appliance is a special-purpose computer that provides file service relating to the organization of information on storage devices, such as disks. However, it will be understood by those skilled in the art that the inventive concepts described herein may apply to any type of filer, whether implemented as a special-purpose or general-purpose computer, including a standalone computer.
The filer 110 comprises a processor 222, a memory 224, a network adapter 226 and a storage adapter 228 interconnected by a system bus 225. The filer 110 also includes a storage operating system 230 that implements a file system to logically organize the information as a hierarchical structure of directories and files on the disks.
In the illustrative embodiment, the memory 224 may have storage locations that are addressable by the processor and adapters for storing software program code and data structures associated with the present invention. The processor and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The storage operating system 230, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the filer 110 by, inter alia, invoking storage operations in support of a file service implemented by the filer. It will be apparent to those skilled in the art that other processing and memory means, including various computer-readable media, may be used for storing and executing program instructions pertaining to the inventive technique described herein.
The network adapter 226 comprises the mechanical, electrical and signaling circuitry needed to connect the filer 110 to a client 104 (including, for example, but not limited to, management station 140) (see FIG. 1) over the computer network (LAN 102), which, as described generally above, can comprise a point-to-point connection or a shared medium, such as a local area network. A client (104, 140) can be a general-purpose computer configured to execute applications including file system protocols, such as the Common Internet File System (CIFS) protocol. Moreover, the client can interact with the filer 110 in accordance with a client/server model of information delivery. That is, the client may request the services of the filer, and the filer may return the results of the services requested by the client, by exchanging packets that conform to, e.g., the CIFS protocol format over the network 102. The format of the CIFS protocol packet exchanged over the network is well-known and described in Common Internet File System (CIFS) Version: CIFS-Spec 0.9, Storage Networking Industry Association (SNIA), Draft SNIA CIFS Documentation Work Group Work-in-Progress, Revision Date: Mar. 26, 2001 (hereinafter "CIFS specification"), which is hereby incorporated by reference as though fully set forth herein.
The storage adapter 228 cooperates with the storage operating system 230 executing on the filer to access information requested by the client, which information may be stored on a number of storage volumes 122. The storage adapter 228 includes input/output (I/O) interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a conventional high-performance, Fibre Channel serial link topology. The information is retrieved by the storage adapter 228 and, if necessary, processed by the processor 222 (or the adapter 228 itself) prior to being forwarded over the system bus 225 to the network adapter 226, where the information is formatted into a packet and returned to the client 104.
Notably, the exemplary filer 110 includes an NVRAM 260 that provides fault-tolerant backup of data, enabling the integrity of filer transactions to survive a service interruption based upon a power failure, or other fault. The NVRAM 260 is typically made sufficiently large to log a certain time-based chunk of transactions (for example, several seconds' worth). The NVRAM entry may be constructed in parallel with execution of the corresponding request, once it is determined that a request will be successfully performed, but it must be completed (as must any copying to mirror NVRAM of the partner in a cluster configuration) before the result of the request is returned to the requesting client.
