`Banga et al.
`
US006895429B2

(10) Patent No.: US 6,895,429 B2
(45) Date of Patent: May 17, 2005
`
`(54) TECHNIQUE FOR ENABLING MULTIPLE
`VIRTUAL FILERS ON A SINGLE FILER TO
`PARTICIPATE IN MULTIPLE ADDRESS
`SPACES WITH OVERLAPPING NETWORK
`ADDRESSES
`
(75) Inventors: Gaurav Banga, Sunnyvale, CA (US);
Mark Smith, Cupertino, CA (US);
Mark Muhlestein, Tucson, AZ (US)
`
(73) Assignee: Network Appliance, Inc., Sunnyvale,
CA (US)
`
(*) Notice: Subject to any disclaimer, the term of this
patent is extended or adjusted under 35
U.S.C. 154(b) by 439 days.
`
`(21) Appl. No.: 10/035,666
`
(22) Filed: Dec. 28, 2001
`
(65) Prior Publication Data
US 2003/0135578 A1 Jul. 17, 2003

(51) Int. Cl.7 .............................................. G06F 15/167
(52) U.S. Cl. ....................... 709/215; 711/212; 711/213;
711/214; 711/216; 711/217; 711/218
(58) Field of Search ................................. 709/212-218,
709/220

(56) References Cited
`
U.S. PATENT DOCUMENTS

5,163,131 A      11/1992  Row et al.
5,355,453 A      10/1994  Row et al.
5,432,907 A *     7/1995  Picazo et al. ............ 709/249
5,485,579 A       1/1996  Hitz et al.
5,802,366 A *     9/1998  Row et al. .............. 709/250
5,819,292 A      10/1998  Hitz et al.
5,835,720 A *    11/1998  Nelson et al. ........... 709/224
5,931,918 A       8/1999  Row et al.
5,941,972 A       8/1999  Hoese et al.
5,963,962 A      10/1999  Hitz et al.
6,038,570 A       3/2000  Hitz et al.
6,065,037 A       5/2000  Hitz et al.
6,167,446 A *    12/2000  Lister et al. ........... 709/223
6,425,035 B2      7/2002  Hoese et al.
2001/0011304 A1   8/2001  Wesinger, Jr. et al.
`
OTHER PUBLICATIONS

U.S. Appl. No. 10/635,664, Muhlestein et al.
Rekhter, Y., Moskowitz, B., Karrenberg, D., de Groot, G. J.
and Lear, E., Request for Comments: 1918, Network
Working Group, Feb. 1996.
U.S. Appl. No. 10/035,664, Muhlestein et al.
David Hitz et al., TR3002 File System Design for an NFS
File Server Appliance, published by Network Appliance, Inc.
Common Internet File System (CIFS) Version: CIFS-Spec
0.9, Storage Networking Industry Association (SNIA), Draft
SNIA CIFS Documentation Work Group Work-in-Progress,
Revision Date: Mar. 26, 2001.
Fielding et al. (1999) Request for Comments (RFC) 2616,
HTTP/1.1.
`
`(Continued)
`
Primary Examiner-T. Nguyen
(74) Attorney, Agent, or Firm-Cesari and McKenna, LLP;
A. Sidney Johnston
`
`(57)
`
`ABSTRACT
`
A technique enables a server, such as a filer, configured with
a plurality of virtual servers, such as virtual filers (vfilers),
to participate in a plurality of private network address spaces
having potentially overlapping network addresses. The
technique also enables selection of an appropriate vfiler to
service requests within a private address space in a manner
that is secure and distinct from other private address spaces
supported by the filer. An IPspace refers to each distinct
address space in which the filer and its storage operating
system participate. An IPspace identifier is applied to
translation procedures that enable the selection of a correct vfiler
for processing an incoming request and an appropriate
routing table for processing an outgoing request.
`
`43 Claims, 7 Drawing Sheets
`
`
`LENOVO ET AL. EXHIBIT 1008
`Page 1 of 20
`
`
`
OTHER PUBLICATIONS

European Search Report for European Patent Application
No. EP 02 25 8871.9, Filed: Dec. 23, 2002, all pages.
Glaude, David: "Virtual Server with Separate/Multiple
Default Gateway/Routing", Mailing List 'LARTC',
'Online', Nov. 9, 2001, pp. 1-3, XP002256475, Retrieved
from the Internet: http://mailman.ds9a.nl/pipermail/lartc/
2001q4/001626.html, retrieved on Oct. 2, 2003.
IEEE-SA Standards Board: "Virtual Bridged Local Area
Networks-IEEE Std. 802.1Q-1998", IEEE Standards for
Local and Metropolitan Area Networks, Mar. 8, 1999, pp.
1-199, XP002256476, New York, NY, USA.
United States Patent Application Publication, Pub. No.: US
2001/0011304 A1, Pub. Date: Aug. 2, 2001, by Wesinger, Jr.
et al., all pages.
* cited by examiner
`
`
`
[FIG. 1 (Sheet 1 of 7): Schematic of computer network 100: clients 110 coupled over communication links 140 and a network cloud to an intermediate network node (switch) 150, which connects over physical links 180, organized as virtual interface 190, to the server (filer) 200; packets 120 are exchanged between clients and server.]
`
`
`
[FIG. 2 (Sheet 2 of 7): Schematic of filer 200: processor 202, memory 204 containing storage operating system 300 and buffer pool 220 of buffers 222 (header 224, data 226), storage adapter 206 coupled to disks 216, and network adapters (NICs) 208 with interfaces 218 coupled to/from links 140 and 180.]
`
`
`
[FIG. 3 (Sheet 3 of 7): Schematic of storage operating system 300: media access layer 310 (network drivers), IP layer 312, TCP 314, NBT 315, UDP 316, CIFS 318, NFS 320, HTTP 322, storage (RAID) layer 324, disk driver (SCSI) layer 326 and file system layer 330, with file access request data flowing to/from links 140, 180 and disks 216.]
`
`
`
[FIG. 4 (Sheet 4 of 7): Schematic of filer 400 embodying vfilers VF1 and VF2: each vfiler has an interface 408a/408b tagged with an IPspace ID 406a/406b and a routing table 404a/404b, and is coupled over a path 426a/426b to a client 410a/410b environment 420a/420b through a NAT device 422a/422b and a firewall 424a/424b to the Internet.]
`
`
`
[FIG. 5 (Sheet 5 of 7): Schematic of the IPspace database: ifnet structure 510 (MAC 512, type 514, IP list 516, pointer 518, IPspace ID 520, ifnet pointer 522), ifaddr structure 540 (AF 542, address 544, ifaddr list 546, back link pointer 550), vfnet structure 560 (vfnetid 562, ifaddr pointer 564, vfnet pointer 566), vfiler context 570 (IPspace ID 572, pointer 574, vfnet list 576, routing table pointer 578, CIFS 580, NFS 582, NIS 584, DNS 586) and proc structure 590 with current vfiler context pointer 592.]
`
`
`
[FIG. 6 (Sheet 6 of 7): Flowchart 600 of the incoming path translation procedure: (602) receive incoming request at driver associated with network interface; (606) drop request on failure; (608) record IPspace ID of interface and pass request to IP layer; (610) examine addresses listed in ifnet structure; (614) if no match, search IP list of other ifnet structures until a match is found; (616) obtain back link pointer from matching ifaddr structure; (618) find vfiler context associated with destination address; (620) compare IPspace ID of interface with IPspace ID of vfiler context; (624, 626) if they differ, drop request; (628) set pointer in proc structure to invoke vfiler context; (630) pass request up protocol stack; (632) use vfiler context to qualify request for subsequent processing.]
`
`
`
[FIG. 7 (Sheet 7 of 7): Flowchart 700 of the outgoing path translation procedure: (702) vfiler issues outgoing request; (706) if no route calculation is required, use fast path technique; (708) use IPspace ID of vfiler context to select routing table; (710) perform lookup operation into selected routing table to determine output interface for request; (712) forward request to output interface and over network; (714) end.]
`
`
`
`TECHNIQUE FOR ENABLING MULTIPLE
`VIRTUAL FILERS ON A SINGLE FILER TO
`PARTICIPATE IN MULTIPLE ADDRESS
`SPACES WITH OVERLAPPING NETWORK
`ADDRESSES
`
`FIELD OF THE INVENTION
`
`The present invention relates to storage systems, such as
`filers, and, more specifically, to a filer having multiple
`virtual filers configured to participate in multiple private
`address spaces having potentially overlapping network
`addresses.
`
`BACKGROUND OF THE INVENTION
`
A file server is a computer that provides file service
relating to the organization of information on writeable
persistent storage devices, such as memories, tapes or disks.
The file server or filer may be embodied as a storage system
including a storage operating system that implements a file
system to logically organize the information as a hierarchical
structure of directories and files on, e.g., the disks. Each
"on-disk" file may be implemented as a set of data structures,
e.g., disk blocks, configured to store information, such as the
actual data for the file. A directory, on the other hand, may
be implemented as a specially formatted file in which
information about other files and directories is stored.
`A storage system may be further configured to operate
`according to a client/server model of information delivery to
`thereby allow many clients to access an application service
`executed by a server, such as a file server. In this model, the
`client may comprise an application executing on a computer
`that "connects" to the file server over a computer network,
`such as a point-to-point link, shared local area network, wide
`area network or virtual private network implemented over a
`public network, such as the Internet. Each client may request
`the services of the file system on the file server by issuing file
`system protocol messages (in the form of packets) to the
`server over the network. It should be noted, however, that
`the file server may alternatively be configured to operate as
`an assembly of storage devices that is directly-attached to a
`(e.g., client or "host") computer. Here, a user may request
`the services of the file system to access (i.e., read and/or
`write) data from/to the storage devices.
One type of file system is a write-anywhere file system
that does not overwrite data on disks. If a data block on disk
is retrieved (read) from disk into memory and "dirtied" with
new data, the data block is stored (written) to a new location
on disk to thereby optimize write performance. A
write-anywhere file system may initially assume an optimal layout
such that the data is substantially contiguously arranged on
disks. The optimal disk layout results in efficient access
operations, particularly for sequential read operations,
directed to the disks. An example of a write-anywhere file
system that is configured to operate on a storage system,
such as a filer, is the Write Anywhere File Layout (WAFL™)
file system available from Network Appliance, Inc.,
Sunnyvale, Calif. The WAFL file system is implemented as
a microkernel within an overall protocol stack of the filer
and associated disk storage.
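The write-anywhere behavior described above can be sketched in a few lines. This is an illustrative model only; the `Volume` class, its free-block list and `write_block` are invented for the sketch and are not WAFL internals:

```python
# Illustrative write-anywhere (copy-on-write) update: a dirtied block
# is never overwritten in place; it is written to a newly allocated
# location and the file's block pointer is updated to the new location.

class Volume:
    def __init__(self, nblocks):
        self.disk = [None] * nblocks        # simulated disk blocks
        self.free = list(range(nblocks))    # free-block list
        self.block_map = {}                 # (file_id, offset) -> block no.

    def write_block(self, file_id, offset, data):
        new_loc = self.free.pop(0)          # always allocate a fresh block
        self.disk[new_loc] = data
        old = self.block_map.get((file_id, offset))
        if old is not None:
            self.free.append(old)           # old block becomes reusable
        self.block_map[(file_id, offset)] = new_loc
        return new_loc

    def read_block(self, file_id, offset):
        return self.disk[self.block_map[(file_id, offset)]]

vol = Volume(8)
first = vol.write_block("f", 0, b"v1")
second = vol.write_block("f", 0, b"v2")     # rewrite lands at a new location
```

Because the rewrite goes to a fresh block, the old data remains untouched on disk until its block is reused, which is what makes writes sequential-friendly.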
The disk storage is typically implemented as one or more
storage "volumes" that comprise a cluster of physical storage
devices (disks), defining an overall logical arrangement
of disk space. Each volume is generally associated with its
own file system. A filer typically includes a large amount of
storage (e.g., 6 terabytes) with the ability to support many
`
(thousands) of users. This type of storage system is generally
too large and expensive for many applications or
"purposes". Even a typical minimum storage size of a volume (or
file system) is approximately 150 gigabytes (GB), which is
still generally too large for most purposes.
Rather than utilizing a single filer, a user may purchase a
plurality of smaller servers, wherein each server is directed
to accommodating a particular purpose of the user. However,
the acquisition and (usually more importantly) maintenance
of many smaller servers may be more costly than the
purchase of a single filer. Therefore, it would be desirable to
consolidate many servers within a single filer platform in a
manner that logically embodies those servers. Server
consolidation is thus defined as the ability to provide many
logical or virtual servers within a single physical server
platform. Some prior server consolidation solutions are
configured to run multiple instances of a process, such as an
application service. Other server consolidation solutions
provide many independent servers that are essentially
"racked together" within a single platform. Examples of
virtual servers embodied within a single platform are web
servers, database servers, mail servers and name servers.
Server consolidation is particularly useful in the case of a
storage server provider (SSP). An SSP serves ("hosts") data
storage applications for multiple users or clients within a
single, physical platform or "data center". The data center is
centrally maintained by the SSP to provide safe, reliable
storage service to the clients. In a typical configuration, the
data center may be coupled to a plurality of different client
environments, each having an independent private internal
network ("intranet"). Each intranet may be associated with
a different client or division of a client and, thus, the data
traffic must be separately maintained within the physical
platform.
Request for Comments (RFC) 1918 defines portions of the
32-bit Internet protocol version 4 (IPv4) address space that
may be used on any private intranet without requiring
explicit authority from a third party, such as the Internet
Assigned Numbers Authority (IANA). To communicate
with an external host computer "outside" of a private
intranet, e.g., over the Internet, an internal host computer on
the private intranet sends a packet (request) having a
destination IP address of the external host. The request further
includes a source IP address that represents a private intranet
IP address of the internal host. That private IP address may
be translated to a globally agreed-upon IP address using a
network address translation (NAT) device.
Specifically, the NAT device dynamically translates
private intranet IP addresses (and transport port numbers) to
non-private globally unique IP addresses (and port numbers)
for all packets leaving and entering the private intranet. The
NAT device uses a pool of globally unique IP addresses,
which have been properly assigned by the IANA. Since it is
expected that most data packet traffic will take place within
the intranet, a small pool of such "global" addresses suffices
to provide Internet connectivity to a large number of external
hosts. This arrangement is generally necessary in most
current networks because unassigned globally unique IP
addresses are scarce and cannot be used for all hosts in a
large network. A vast majority of computers currently
connected to the Internet use NAT techniques to communicate
with Internet servers.
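A minimal sketch of the NAT mapping just described: private (address, port) pairs are dynamically bound to entries drawn from a small pool of globally unique addresses, with a reverse map for return traffic. The `Nat` class and its method names are invented for illustration; real NAT devices also track connection state and timeouts:

```python
# Dynamic translation of private (IP, port) pairs to globally unique
# (IP, port) pairs, as performed at the edge of a private intranet.

class Nat:
    def __init__(self, global_pool):
        self.pool = list(global_pool)
        self.out_map = {}      # (private_ip, port) -> (global_ip, port)
        self.in_map = {}       # reverse mapping for return traffic
        self.next_port = 40000

    def translate_out(self, private_ip, port):
        key = (private_ip, port)
        if key not in self.out_map:
            # Many private hosts can share one global address by
            # distinguishing them on the translated port number.
            g = (self.pool[0], self.next_port)
            self.next_port += 1
            self.out_map[key] = g
            self.in_map[g] = key
        return self.out_map[key]

    def translate_in(self, global_ip, port):
        return self.in_map[(global_ip, port)]

nat = Nat(["198.51.100.7"])                  # one-address "global" pool
g1 = nat.translate_out("10.0.0.5", 1234)
g2 = nat.translate_out("10.0.0.6", 1234)     # same port, different host
```

This shows why a small global pool suffices: both internal hosts share one global address, disambiguated by port.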
Each client environment served by the SSP may correspond
to a different virtual server (or sets of virtual servers)
of the data center that performs storage operations using the
client's unified view, i.e., "namespace", of network
`
resources, such as an RFC 1918 compliant private IP address
space. The private intranet of each environment is typically
coupled to the Internet through an intermediate network
device, such as a "firewall". Clients generally do not like to
connect their storage resources (served by the data center) to
their internal networks through firewalls, primarily because
those devices adversely impact performance and restrict
functionality. Therefore, the intranets are typically
connected directly to the data center, bypassing the firewalls. A
common arrangement for such an SSP configuration
provides a dedicated network path (or paths) that begins at a
client's RFC 1918 compliant intranet (where all IP addresses
are private IP addresses) and ends at the data center. This
allows each client to utilize IP addresses of its private
address space when accessing the storage resources on the
data center.
Although each private intranet guarantees unique IP
addresses within its own private IP address namespace, there
is no such guarantee across private namespaces. Since each
client environment is directly connected to the SSP over its
private intranet, the "hosting" data center may participate in
several distinct client IP address spaces having IP addresses
that overlap and, thus, conflict. Yet the data center is
expected to service requests directed to these conflicting IP
addresses, while maintaining the integrity and security of all
data request traffic within each client's intranet and internal
address space.
Moreover, the SSP must advertise its services and thus
issue broadcast packets. Broadcast packets are also issued in
the process of actually providing services, especially during
the process of name resolution. In this case, the data center
must ensure a broadcast packet generated in the context of
one virtual server is only forwarded over the private intranet
of the client associated with that server. In addition, a
broadcast packet received over an intranet at the data center
must be capable of identifying both the source and
destination of that packet among the various virtual servers
embodied within the data center platform.
`
SUMMARY OF THE INVENTION
The present invention comprises a technique that
enables a server, such as a filer, configured with a plurality
of virtual servers, such as virtual filers (vfilers), to
participate in a plurality of private network address spaces having
potentially overlapping network addresses. To that end, the
invention further enables selection of an appropriate vfiler to
service requests within a private address space in a manner
that is secure and distinct from other private address spaces
supported by the filer. A vfiler is a logical partitioning of
network and storage resources of the filer to establish an
instance of a multiprotocol server. The network resources
include network addresses, such as Internet protocol (IP)
addresses, assigned to network interfaces.
As described herein, an IPspace refers to each distinct IP
address space in which the filer and its storage operating
system participate. Each vfiler is associated with an IP
address space and, thus, belongs to one IPspace. The IPspace
is preferably implemented on the "front end", i.e., the
network interfaces, of the filer. In this context, the term
network interface refers to an IP addressable interface. Each
network interface is tagged with an IPspace identifier (ID)
that binds the interface to an IPspace. Each interface has one
or more IP addresses that are unique within the interface's
IPspace. Moreover, an IPspace is further associated with a
routing domain and the use of one or more routing tables.
Each vfiler is provided with a routing table from its IPspace
that controls routing operations for all requests processed by
the vfiler.
`
According to an aspect of the invention, configuration
information used to select an appropriate vfiler to service a
request is stored in various data structures that cooperate to
provide a statically assigned array or "database" of IPspace
data structures. These data structures include, among others,
an interface network (ifnet) structure associated with the
network interfaces, an interface address (ifaddr) structure
representing the IP address of an interface, a vfiler context
data structure and process block (proc) data structure. The
ifnet data structure includes configuration information such
as a list of IP addresses assigned to the interface, pointers
referencing ifaddr structures for those addresses and an
IPspace ID of the interface. The ifaddr data structure
includes a back link pointer that references a vfiler context
associated with the IP address, whereas the vfiler context
structure contains configuration information needed to
establish an instance of a multiprotocol server. Each vfiler
context is tagged with an IPspace ID and a pointer to one of
the routing tables of the IPspace of the vfiler. The proc data
structure represents the context of a process thread executing
on the filer and contains a pointer referencing the current
vfiler context.
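The structures named above can be modeled roughly as follows. The field names follow the text, but the classes themselves are a hypothetical sketch, not the patented implementation:

```python
# Simplified models of the IPspace "database" structures: ifnet
# (per network interface), ifaddr (per IP address, with a back link
# to its owning vfiler), the vfiler context, and proc (the context
# of a process thread, pointing at the current vfiler).

from dataclasses import dataclass, field

@dataclass
class VfilerContext:
    name: str
    ipspace_id: int
    routing_table: dict = field(default_factory=dict)  # dest -> interface

@dataclass
class Ifaddr:
    address: str
    vfiler: VfilerContext      # "back link pointer" to the owning vfiler

@dataclass
class Ifnet:
    ipspace_id: int            # IPspace ID tagged onto the interface
    addrs: list                # list of Ifaddr assigned to the interface

@dataclass
class Proc:
    current_vfiler: VfilerContext = None   # set when a request is dispatched

vf1 = VfilerContext("vf1", ipspace_id=1)
iface = Ifnet(ipspace_id=1, addrs=[Ifaddr("10.0.0.1", vf1)])
```

Following an interface's address list to an `Ifaddr` and then its back link yields the vfiler context, which is the lookup the incoming path performs.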
In another aspect of the invention, the IPspace IDs are
applied to translation procedures that enable the selection of
a current vfiler for processing an incoming request and an
appropriate routing table for processing an outgoing request.
Logic of the network interface receiving the incoming
request uses a destination IP address of that request along
with the IPspace ID of the interface and other configuration
information to select a current vfiler context for processing
the request. If a routing operation is required for the outgoing
request, the configuration information, including an
IPspace ID associated with the current vfiler context, is used
to select an appropriate routing table. The translation
procedures are preferably employed on two critical paths of the
vfiler: an incoming path of a request received at the vfiler
and an outgoing path of a request issued by the vfiler.
`Broadly stated, an incoming request received at a network
`interface of the filer is directed to the proper vfiler based on
`the destination IP address of the request and the IPspace ID
`of the interface. On the incoming path, the ifnet structure
`associated with the network interface is examined, searching
`the list of IP addresses for an IP address that matches the
`destination IP address. Upon finding a match, the pointer to
`the matching ifaddr structure is followed to obtain the back
`link pointer needed to locate the vfiler context data structure
`for the request. The correct vfiler context is then selected
`after comparing the IPspace ID stored in the ifnet structure
`with the IPspace ID of the vfiler context.
`Thereafter, the vfiler context is set by configuring the
`pointer of the proc data structure to reference the current
`vfiler context. The correct vfiler context is set to qualify the
`request for subsequent processing in the filer, e.g., searches
`or boundary checks needed to verify that the vfiler is allowed
to access requested storage resources. Notably,
authentication information contained in each vfiler context prevents
the request from being processed by users from other vfilers.
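The incoming-path selection described above can be sketched as follows, with simplified stand-ins for the ifnet/ifaddr/vfiler records (all class and function names here are illustrative). The key point is that the same destination IP can select different vfilers on interfaces belonging to different IPspaces:

```python
# Incoming path: match the destination IP against the interface's
# address list, follow the back link to the vfiler context, and accept
# only if the interface's IPspace ID matches the vfiler's IPspace ID.

class Vfiler:
    def __init__(self, name, ipspace_id):
        self.name, self.ipspace_id = name, ipspace_id

class Interface:
    def __init__(self, ipspace_id, addr_to_vfiler):
        self.ipspace_id = ipspace_id
        self.addr_to_vfiler = addr_to_vfiler   # dest IP -> owning vfiler

def select_vfiler(interface, dest_ip):
    vfiler = interface.addr_to_vfiler.get(dest_ip)
    if vfiler is None:                         # no matching ifaddr: drop
        return None
    if vfiler.ipspace_id != interface.ipspace_id:
        return None                            # IPspace mismatch: drop
    return vfiler                              # becomes the current context

vf1 = Vfiler("vf1", ipspace_id=1)
vf2 = Vfiler("vf2", ipspace_id=2)
# Two interfaces in different IPspaces carry the same (overlapping) IP.
if_a = Interface(1, {"10.0.0.5": vf1})
if_b = Interface(2, {"10.0.0.5": vf2})
```

Tagging each interface with an IPspace ID is what disambiguates the overlapping address: the same packet arriving on `if_a` versus `if_b` is qualified against a different vfiler context.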
If the vfiler issues an outgoing request that does not
require route calculation, a fast path technique described
herein is invoked to increase performance. However, if a
routing calculation is required, a lookup operation is
performed to the routing table to determine over which output
interface the request should be issued. On the outgoing path,
the current vfiler context is used to choose the correct
routing table; this is guaranteed to be a routing table of the
vfiler's IPspace. The request is then forwarded to the output
interface.
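Correspondingly, the outgoing path can be sketched as a per-IPspace routing-table lookup with an optional fast path. The table contents and names below are invented for illustration; the point is that overlapping destination prefixes resolve differently per IPspace:

```python
# Outgoing path: the current vfiler's IPspace ID selects its routing
# table, and a lookup in that table yields the output interface. When
# no route calculation is needed, a cached "fast path" interface is
# used and the lookup is skipped entirely.

routing_tables = {                 # one routing table per IPspace ID
    1: {"10.1.0.0/16": "e0", "default": "e1"},
    2: {"10.1.0.0/16": "e5", "default": "e6"},   # overlapping prefixes
}

def route_outgoing(vfiler_ipspace_id, prefix, cached_interface=None):
    if cached_interface is not None:     # fast path: skip the lookup
        return cached_interface
    table = routing_tables[vfiler_ipspace_id]
    return table.get(prefix, table["default"])
```

Because the table is chosen by the vfiler's own IPspace ID, the result is guaranteed to be a route within that vfiler's address space, even when another IPspace holds an identical prefix.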
`
`
`
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`The above and further advantages of the invention may be
`better understood by referring to the following description in
`conjunction with the accompanying drawings in which like
`reference numerals indicate identical or functionally similar
`elements:
FIG. 1 is a schematic block diagram of a computer
network including a plurality of clients and a server that may
be advantageously used with the present invention;
FIG. 2 is a schematic block diagram of a server, such as
a network storage appliance or filer, that may be
advantageously used with the present invention;
`FIG. 3 is a schematic block diagram of a storage operating
`system that may be advantageously used with the present
`invention;
`FIG. 4 is a schematic diagram of an embodiment of a filer
`that may be advantageously used with the present invention;
`FIG. 5 is a schematic block diagram illustrating an
`IPspace database that may be advantageously used with the
`present invention;
`FIG. 6 is a flowchart illustrating the sequence of steps
`involved with an incoming path translation procedure in
`accordance with the present invention; and
`FIG. 7 is a flowchart illustrating the sequence of steps
`involved with an outgoing path translation procedure in
`accordance with the present invention.
`
`DETAILED DESCRIPTION OF THE
`ILLUSTRATIVE EMBODIMENTS
FIG. 1 is a schematic block diagram of a computer
network 100 including a plurality of clients 110 and a file
server, such as a network storage appliance, that may be
advantageously used with the present invention. The file
server or filer 200 is a computer that provides file service
relating to the organization of information on storage
devices, such as disks. The clients 110 may be
general-purpose computers configured to execute applications
including file system protocols, such as the conventional
Common Internet File System (CIFS) protocol. Moreover,
the clients 110 may interact with the filer 200 in accordance
with a client/server model of information delivery. That is,
each client may request the services of the filer, and the filer
may return the results of the services requested by the client,
by exchanging packets 120 encapsulating, e.g., the CIFS
protocol format over the network 100. It will be understood
by those skilled in the art that the inventive technique
described herein may apply to any server capable of
providing a service to any client in accordance with various
applications executing on the client.
The filer 200 may be coupled to an intermediate network
node, such as a router or switch 150, over a plurality of
physical links 180, each of which may comprise, e.g., a
gigabit Ethernet link, a 100 base T Ethernet link, a 10 base
T Ethernet link or any similar link. The switch 150 is further
coupled to the clients 110 over network clouds 130
configured as, e.g., local area networks (LANs) or virtual LANs
(VLANs). Alternatively, the filer may be connected directly
to one or more clients over a communications link 140
comprising a point-to-point connection or a shared medium,
such as a LAN.
FIG. 2 is a schematic block diagram of the filer 200
comprising a processor 202, a memory 204, a storage
adapter 206 and one or more network adapters 208
interconnected by a system bus 210, which is preferably a
conventional peripheral computer interconnect (PCI) bus
210. The filer also includes a storage operating system 300
that implements a file system to logically organize the
information as a hierarchical structure of directories and files
on disks 216 coupled to the storage adapter 206. In the
illustrative embodiment described herein, the operating
system 300 is preferably the NetApp® Data ONTAP™
operating system available from Network Appliance, Inc. that
implements a Write Anywhere File Layout (WAFL) file
system.
The memory 204 may be apportioned into various
sections, one of which is a buffer pool 220 organized as a
plurality of data buffers 222 for use by network drivers of the
operating system 300. Each network driver is assigned a list
of buffers 222 that are used to load incoming data requests
received at interfaces 218 of the network adapter 208, as
described herein. Other sections of the memory may be
organized as storage locations that are addressable by the
processor and adapters for storing software program code
and data structures, including routing tables, associated with
the present invention. The processor and adapters may, in
turn, comprise processing elements and/or logic circuitry
configured to execute the software code and access the data
structures. The storage operating system 300, portions of
which are typically resident in memory and executed by the
processing elements, functionally organizes the filer by, inter
alia, invoking storage and network operations in support of
the services implemented by the filer 200. It will be apparent
to those skilled in the art that other processing and memory
means, including various computer readable media, may be
used for storing and executing program instructions
pertaining to the inventive technique described herein.
The network adapter 208 may comprise a network
interface card (NIC) having the mechanical, electrical and
signaling interface circuitry needed to connect the filer 200
directly to the communications link 140 or to the switch 150
over the physical links 180. In one embodiment, the physical
links and interfaces may be organized as an aggregate or
virtual interface (VIF) 190. Each NIC may include a single
interface 218 such that, for a 4-link VIF, the filer includes 4
NICs 208. Alternatively, each NIC 208 may include 4 "quad
port" interfaces 218, each of which is connected to a link 180
of the VIF 190. In another embodiment, the physical links
and interfaces may be arranged as a de-aggregate or VLAN.
Each interface 218 may be assigned one or more Internet
Protocol (IP) addresses along with one media access control
(MAC) address. However, when the physical interfaces 218
and their associated links 180 are aggregated as a single
virtual interface 190, all of the physical interfaces respond to
only one MAC address. That is, the physical interfaces 218
are organized into one virtual "pipe" having one logical
interface that is assigned a common MAC address.
The storage adapter 206 cooperates with the storage
operating system 300 executing on the filer to access
information requested by the client, which information may be
stored on any writeable media, such as the disks 216. The
storage adapter includes input/output (I/O) interface
circuitry that couples to the disks over an I/O interconnect
arrangement, such as a conventional high-performance,
Fibre Channel serial link topology. The information is
retrieved by the storage adapter and, if necessary, processed
by the processor 202 (or the adapter 206 itself) prior to being
forwarded over the system bus 210 to the network adapter
208, where the information is formatted into a packet 120
and returned to the client 110.
Storage of information on the filer is preferably
implemented as one or more storage "volumes" that comprise a
cluster of physical storage disks 216, defining an overall
`
`
`
`
logical arrangement of disk space. Each volume is generally
associated with its own file system. To facilitate access to the
disks 216, the storage operating system 300 implements a
file system that logically organizes the information as a
hierarchical structure of directories and files on the disks.
Each "on-disk" file may be implemented as a set of disk
blocks configured to store information, such as data,
whereas the directory may be implemented as a specially
formatted file in which information about other files a