PATENT LAW, 5727-1967
APPLICATION FOR PATENT

C: 17980

(Name and address of applicant, and in case of body corporate - place of incorporation)

AUSPEX SYSTEMS, INC.,
2952 Bunker Hill Lane,
Santa Clara, CA 95054, U.S.A.
(Incorporated in the State of California, USA)

of an invention the title of which is

PARALLEL I/O NETWORK FILE SERVER ARCHITECTURE

hereby apply for a patent to be granted to me in respect thereof.

Application of Division / Application for Patent Addition: from Application No. [illegible], dated 21.5.1990

Priority claim: No. 07/404,959, dated 8.9.89, filed in U.S.A.

P.O.A.: general/individual - attached/to be filed later

Address for Service in Israel:
Sanford T. Colb & Co.
P.O.B. 2273
Rehovot 76122

Signature of Applicant:
For the Applicant,
Sanford T. Colb & Co.
C: 17980

Dated 1993 [day and month illegible]

[Hebrew form text and office-use markings of the original are not reproduced.]


PARALLEL I/O NETWORK FILE SERVER ARCHITECTURE

AUSPEX SYSTEMS, INC.

C: 17980
PARALLEL I/O NETWORK FILE SERVER ARCHITECTURE

INVENTORS:
EDWARD JOHN ROW, LAURENCE B. BOUCHER,
WILLIAM M. PITTS, STEPHEN E. BLIGHTMAN

CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is related to the following U.S. Patent Applications, all filed concurrently herewith:

1. MULTIPLE FACILITY OPERATING SYSTEM ARCHITECTURE, invented by David Hitz, Allan Schwartz, James Lau and Guy Harris;

2. ENHANCED VMEBUS PROTOCOL UTILIZING PSEUDOSYNCHRONOUS HANDSHAKING AND BLOCK MODE DATA TRANSFER, invented by Daryl Starr; and

3. BUS LOCKING FIFO MULTI-PROCESSOR COMMUNICATIONS SYSTEM UTILIZING PSEUDOSYNCHRONOUS HANDSHAKING AND BLOCK MODE DATA TRANSFER, invented by Daryl D. Starr, William Pitts and Stephen Blightman.

The above applications are all assigned to the assignee of the present invention and are all expressly incorporated herein by reference.

BACKGROUND OF THE INVENTION

Field of the Invention

The invention relates to computer data networks, and more particularly, to network file server architectures for computer networks.
Description of the Related Art

Over the past ten years, remarkable increases in hardware price/performance ratios have caused a startling shift in both technical and office computing environments. Distributed workstation-server networks are displacing the once pervasive dumb terminal attached to mainframe or minicomputer. To date, however, network I/O limitations have constrained the potential performance available to workstation users. This situation has developed in part because dramatic jumps in microprocessor performance have exceeded increases in network I/O performance.

In a computer network, individual user workstations are referred to as clients, and shared resources for filing, printing, data storage and wide-area communications are referred to as servers. Clients and servers are all considered nodes of a network. Client nodes use standard communications protocols to exchange service requests and responses with server nodes.
Present-day network clients and servers usually run the DOS, Macintosh OS, OS/2, or Unix operating systems. Local networks are usually Ethernet or Token Ring at the high end, Arcnet in the midrange, or LocalTalk or StarLAN at the low end. The client-server communication protocols are fairly strictly dictated by the operating system environment -- usually one of several proprietary schemes for PCs (NetWare, 3Plus, Vines, LANManager, LANServer); AppleTalk for Macintoshes; and TCP/IP with NFS or RFS for Unix. These protocols are all well-known in the industry.
Unix client nodes typically feature a 16- or 32-bit microprocessor with 1-8 MB of primary memory, a 640 x 1024 pixel display, and a built-in network interface. A 40-100 MB local disk is often optional. Low-end examples are 80286-based PCs or 68000-based Macintosh I's; mid-range machines include 80386 PCs, Macintosh II's, and 680X0-based Unix workstations; high-end machines include RISC-based DEC, HP, and Sun Unix workstations. Servers are typically nothing more than repackaged client nodes, configured in 19-inch racks rather than desk sideboxes. The extra space of a 19-inch rack is used for additional backplane slots, disk or tape drives, and power supplies.
Driven by RISC and CISC microprocessor developments, client workstation performance has increased by more than a factor of ten in the last few years. Concurrently, these extremely fast clients have also gained an appetite for data that remote servers are unable to satisfy. Because the I/O shortfall is most dramatic in the Unix environment, the description of the preferred embodiment of the present invention will focus on Unix file servers. The architectural principles that solve the Unix server I/O problem, however, extend easily to server performance bottlenecks in other operating system environments as well. Similarly, the description of the preferred embodiment will focus on Ethernet implementations, though the principles extend easily to other types of networks.
In most Unix environments, clients and servers exchange file data using the Network File System ("NFS"), a standard promulgated by Sun Microsystems and now widely adopted by the Unix community. NFS is defined in a document entitled, "NFS: Network File System Protocol Specification," Request For Comments (RFC) 1094, by Sun Microsystems, Inc. (March 1989). This document is incorporated herein by reference in its entirety.
While simple and reliable, NFS is not optimal. Clients using NFS place considerable demands upon both networks and NFS servers supplying clients with NFS data. This demand is particularly acute for so-called diskless clients that have no local disks and therefore depend on a file server for application binaries and virtual memory paging as well as data. For these Unix client-server configurations, the ten-to-one increase in client power has not been matched by a ten-to-one increase in Ethernet capacity, in disk speed, or in server disk-to-network I/O throughput.
The result is that the number of diskless clients that a single modern high-end server can adequately support has dropped to between 5-10, depending on client power and application workload. For clients containing small local disks for applications and paging, referred to as dataless clients, the client-to-server ratio is about twice this, or between 10-20.
Such low client/server ratios cause piecewise network configurations in which each local Ethernet contains isolated traffic for its own 5-10 (diskless) clients and dedicated server. For overall connectivity, these local networks are usually joined together with an Ethernet backbone or, in the future, with an FDDI backbone. These backbones are typically connected to the local networks either by IP routers or MAC-level bridges, coupling the local networks together directly, or by a second server functioning as a network interface, coupling servers for all the local networks together.
In addition to performance considerations, the low client-to-server ratio creates computing problems in several additional ways:
1. Sharing. Development groups of more than 5-10 people cannot share the same server, and thus cannot easily share files without file replication and manual, multi-server updates. Bridges or routers are a partial solution but inflict a performance penalty due to more network hops.

2. Administration. System administrators must maintain many limited-capacity servers rather than a few more substantial servers. This burden includes network administration, hardware maintenance, and user account administration.

3. File System Backup. System administrators or operators must conduct multiple file system backups, which can be onerously time consuming tasks. It is also expensive to duplicate backup peripherals on each server (or every few servers if slower network backup is used).
4. Price Per Seat. With only 5-10 clients per server, the cost of the server must be shared by only a small number of users. The real cost of an entry-level Unix workstation is therefore significantly greater, often as much as 140% greater, than the cost of the workstation alone.
The widening I/O gap, as well as administrative and economic considerations, demonstrates a need for higher-performance, larger-capacity Unix file servers.
Conversion of a display-less workstation into a server may address disk capacity issues, but does nothing to address fundamental I/O limitations. As an NFS server, the one-time workstation must sustain 5-10 or more times the network, disk, backplane, and file system throughput than it was designed to support as a client. Adding larger disks, more network adaptors, extra primary memory, or even a faster processor does not resolve basic architectural I/O constraints; I/O throughput does not increase sufficiently.
Other prior art computer architectures, while not specifically designed as file servers, may potentially be used as such. In one such well-known architecture, a CPU, a memory unit, and two I/O processors are connected to a single bus. One of the I/O processors operates a set of disk drives, and if the architecture is to be used as a server, the other I/O processor would be connected to a network. This architecture is not optimal as a file server, however, at least because the two I/O processors cannot handle network file requests without involving the CPU. All network file requests that are received by the network I/O processor are first transmitted to the CPU, which makes appropriate requests to the disk-I/O processor for satisfaction of the network request.
In another such computer architecture, a disk controller CPU manages access to disk drives, and several other CPUs, three for example, may be clustered around the disk controller CPU. Each of the other CPUs can be connected to its own network. The network CPUs are each connected to the disk controller CPU as well as to each other for interprocessor communication. One of the disadvantages of this computer architecture is that each CPU in the system runs its own complete operating system. Thus, network file server requests must be handled by an operating system which is also heavily loaded with facilities and processes for performing a large number of other, non-file-server tasks. Additionally, the interprocessor communication is not optimized for file server type requests.
In yet another computer architecture, a plurality of CPUs, each having its own cache memory for data and instruction storage, are connected to a common bus with a system memory and a disk controller. The disk controller and each of the CPUs have direct memory access to the system memory, and one or more of the CPUs can be connected to a network. This architecture is disadvantageous as a file server because, among other things, both file data and the instructions for the CPUs reside in the same system memory. There will be instances, therefore, in which the CPUs must stop running while they wait for large blocks of file data to be transferred between system memory and the network CPU. Additionally, as with both of the previously described computer architectures, the entire operating system runs on each of the CPUs, including the network CPU.
In yet another type of computer architecture, a large number of CPUs are connected together in a hypercube topology. One or more of these CPUs can be connected to networks, while another can be connected to disk drives. This architecture is also disadvantageous as a file server because, among other things, each processor runs the entire operating system. Interprocessor communication is also not optimal for file server applications.
SUMMARY OF THE INVENTION

The present invention involves a new, server-specific I/O architecture that is optimized for a Unix file server's most common actions -- file operations. Roughly stated, the invention involves a file server architecture comprising one or more network controllers, one or more file controllers, one or more storage processors, and a system or buffer memory, all connected over a message passing bus and operating in parallel with the Unix host processor. The network controllers each connect to one or more networks, and provide all protocol processing between the network layer data format and an internal file server format for communicating client requests to other processors in the server. Only those data packets which cannot be interpreted by the network controllers, for example client requests to run a client-defined program on the server, are transmitted to the Unix host for processing. Thus the network controllers, file controllers and storage processors contain only small parts of an overall operating system, and each is optimized for the particular type of work to which it is dedicated.
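
By way of a purely hypothetical illustration, the routing decision described above can be sketched in C as follows. The patent does not define the network controller's internal interface, so every name here is invented; only the NFS program number (100003) and the read procedure number (6) come from the published NFS/RPC standards.

    #include <stdio.h>

    enum dest { TO_FILE_CONTROLLER, TO_HOST };

    struct client_request {
        unsigned prog;            /* RPC program number decoded from the packet */
        unsigned proc;            /* RPC procedure number */
    };

    #define NFS_PROGRAM 100003u   /* NFS program number per RFC 1057/1094 */

    /* Requests the network controller can interpret (NFS) go to a file
     * controller over the message passing bus; all others go to the host. */
    static enum dest route_request(const struct client_request *req)
    {
        return req->prog == NFS_PROGRAM ? TO_FILE_CONTROLLER : TO_HOST;
    }

    int main(void)
    {
        struct client_request nfs_read = { NFS_PROGRAM, 6 };  /* NFSPROC_READ */
        struct client_request user_rpc = { 200042u, 1 };      /* client-defined */

        printf("NFS read -> %s\n", route_request(&nfs_read) == TO_HOST
                                       ? "Unix host" : "file controller");
        printf("user RPC -> %s\n", route_request(&user_rpc) == TO_HOST
                                       ? "Unix host" : "file controller");
        return 0;
    }

The point of the sketch is only the division of labor: in the common case a request never touches the host's general-purpose operating system.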
Client requests for file operations are transmitted to one of the file controllers which, independently of the Unix host, manages the virtual file system of a mass storage device which is coupled to the storage processors. The file controllers may also control data buffering between the storage processors and the network controllers, through the system memory. The file controllers preferably each include a local buffer memory for caching file control information, separate from the system memory for caching file data. Additionally, the network controllers, file processors and storage processors are all designed to avoid any instruction fetches from the system memory, instead keeping all instruction memory separate and local. This arrangement eliminates contention on the backplane between microprocessor instruction fetches and transmissions of message and file data.
BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described with respect to particular embodiments thereof, and reference will be made to the drawings, in which:

Fig. 1 is a block diagram of a prior art file server architecture;

Fig. 2 is a block diagram of a file server architecture according to the invention;

Fig. 3 is a block diagram of one of the network controllers shown in Fig. 2;

Fig. 4 is a block diagram of one of the file controllers shown in Fig. 2;

Fig. 5 is a block diagram of one of the storage processors shown in Fig. 2;

Fig. 6 is a block diagram of one of the system memory cards shown in Fig. 2;

Figs. 7A-C are a flowchart illustrating the operation of a fast transfer protocol BLOCK WRITE cycle; and

Figs. 8A-C are a flowchart illustrating the operation of a fast transfer protocol BLOCK READ cycle.
DETAILED DESCRIPTION

For comparison purposes and background, an illustrative prior-art file server architecture will first be described with respect to Fig. 1. Fig. 1 is an overall block diagram of a conventional prior-art Unix-based file server for Ethernet networks. It consists of a host CPU card 10 with a single microprocessor on board. The host CPU card 10 connects to an Ethernet #1 12, and it connects via a memory management unit (MMU) 11 to a large memory array 16. The host CPU card 10 also drives a keyboard, a video display, and two RS232 ports (not shown). It also connects via the MMU 11 and a standard 32-bit VME bus 20 to various peripheral devices, including an SMD disk controller 22 controlling one or two disk drives 24, a SCSI host adaptor 26 connected to a SCSI bus 28, a tape controller 30 connected to a quarter-inch tape drive 32, and possibly a network #2 controller 34 connected to a second Ethernet 36. The SMD disk controller 22 can communicate with memory array 16 by direct memory access via bus 20 and MMU 11, with either the disk controller or the MMU acting as a bus master. This configuration is illustrative; many variations are available.
The system communicates over the Ethernets using industry standard TCP/IP and NFS protocol stacks. A description of protocol stacks in general can be found in Tanenbaum, "Computer Networks" (Second Edition, Prentice Hall: 1988). File server protocol stacks are described at pages 535-546. The Tanenbaum reference is incorporated herein by reference.

Basically, the following protocol layers are implemented in the apparatus of Fig. 1:

Network Layer. The network layer converts data packets between a format specific to Ethernets and a format which is independent of the particular type of network used. The Ethernet-specific format which is used in the apparatus of Fig. 1 is described in Hornig, "A Standard For The Transmission of IP Datagrams Over Ethernet Networks," RFC 894 (April 1984), which is incorporated herein by reference.
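
The framing this layer performs on the transmit side can be sketched as follows; the helper name and buffer discipline are assumptions made for the example, while the 14-byte header layout and the type code 0x0800 for IP datagrams are taken from RFC 894.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define ETHER_HDR_LEN 14

    /* Wrap an IP datagram in an RFC 894 Ethernet frame; returns the frame
     * length. The caller supplies a buffer of ETHER_HDR_LEN + ip_len bytes. */
    static size_t frame_ip_datagram(uint8_t *frame,
                                    const uint8_t dst[6], const uint8_t src[6],
                                    const uint8_t *ip, size_t ip_len)
    {
        memcpy(frame, dst, 6);        /* destination Ethernet address */
        memcpy(frame + 6, src, 6);    /* source Ethernet address */
        frame[12] = 0x08;             /* type 0x0800 = IP, high byte first */
        frame[13] = 0x00;             /*   (network byte order on the wire) */
        memcpy(frame + ETHER_HDR_LEN, ip, ip_len);
        return ETHER_HDR_LEN + ip_len;
    }

On the receive side the same layer inspects the type field and strips the header, handing a network-independent datagram upward.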
The Internet Protocol (IP) Layer. This layer provides the functions necessary to deliver a package of bits (an internet datagram) from a source to a destination over an interconnected system of networks. For messages to be sent from the file server to a client, a higher level in the server calls the IP module, providing the internet address of the destination client and the message to transmit. The IP module performs any required fragmentation of the message to accommodate packet size limitations of any intervening gateway, adds internet headers to each fragment, and calls on the network layer to transmit the resulting internet datagrams. The internet header includes a local network destination address (translated from the internet address) as well as other parameters.
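
The fragmentation step can be made concrete with a small sketch, assuming a 20-byte optionless IP header; RFC 791 expresses the fragment offset in 8-byte units, so every fragment except the last must carry a multiple of 8 payload bytes.

    #include <stdio.h>

    #define IP_HDR_LEN 20                 /* assume no IP options */

    static void fragment(unsigned msg_len, unsigned mtu)
    {
        unsigned max_payload = (mtu - IP_HDR_LEN) & ~7u;  /* round to 8-byte units */
        unsigned offset = 0;

        while (offset < msg_len) {
            unsigned payload = msg_len - offset;
            int more = 0;
            if (payload > max_payload) {
                payload = max_payload;
                more = 1;                 /* "more fragments" flag: not the last */
            }
            printf("fragment: offset=%5u bytes, len=%4u, MF=%d\n",
                   offset, payload, more);
            offset += payload;
        }
    }

    int main(void)
    {
        fragment(8192, 1500);   /* an 8 KB message crossing an Ethernet-sized link */
        return 0;
    }

Run on this example, the sketch emits five 1480-byte fragments followed by one 792-byte fragment with the more-fragments flag clear.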
For messages received by the IP layer from the network layer, the IP module determines from the internet address whether the datagram is to be forwarded to another host on another network, for example on a second Ethernet such as 36 in Fig. 1, or whether it is intended for the server itself. If it is intended for another host on the second network, the IP module determines a local net address for the destination and calls on the local network layer for that network to send the datagram. If the datagram is intended for an application program within the server, the IP layer strips off the header and passes the remaining portion of the message to the appropriate next higher layer. The internet protocol standard used in the illustrative apparatus of Fig. 1 is specified in Information Sciences Institute, "Internet Protocol, DARPA Internet Program Protocol Specification," RFC 791 (September 1981), which is incorporated herein by reference.
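
That receive-side decision reduces to a three-way branch, sketched below. The function names are invented and the routing test (a fixed network/host split of the address) is deliberately simplified; a real IP module consults a routing table.

    #include <stdint.h>
    #include <stdio.h>

    /* Network part of an address; a fixed /24-style split is assumed
     * purely to keep the illustration short. */
    static uint32_t net_of(uint32_t addr) { return addr & 0xFFFFFF00u; }

    static void ip_input(uint32_t dst, uint32_t self, uint32_t second_net)
    {
        if (dst == self)
            printf("local: strip header, pass payload to the next layer up\n");
        else if (net_of(dst) == second_net)
            printf("forward: translate to a local net address, send on Ethernet #2\n");
        else
            printf("forward toward a gateway (or discard)\n");
    }

    int main(void)
    {
        uint32_t self = 0xC0000201u;        /* 192.0.2.1: the server itself */
        uint32_t net2 = 0xC0000300u;        /* 192.0.3.0: the second Ethernet */

        ip_input(0xC0000201u, self, net2);  /* delivered locally */
        ip_input(0xC0000342u, self, net2);  /* forwarded to the second net */
        return 0;
    }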
TCP/UDP Layer. This layer is a datagram service with more elaborate packaging and addressing options than the IP layer. For example, whereas an IP datagram can hold about 1,500 bytes and be addressed to hosts, UDP datagrams can hold about 64KB and be addressed to a particular port within a host. TCP and UDP are alternative protocols at this layer; applications requiring ordered reliable delivery of streams of data may use TCP, whereas applications (such as NFS) which do not require ordered and reliable delivery may use UDP.
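
For concreteness, addressing a datagram to a particular port within a host looks like this with the standard BSD sockets API. The code is illustrative rather than taken from the patent; port 2049 is the port NFS conventionally uses over UDP, and the IP address is a documentation placeholder.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP: one datagram per send */
        struct sockaddr_in dst;
        const char msg[] = "request bytes";

        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(2049);                      /* the port within the host */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* the host itself */

        /* Each sendto() emits one UDP datagram; the IP layer below may
         * fragment it to fit the network's packet size. */
        sendto(fd, msg, sizeof msg - 1, 0,
               (struct sockaddr *)&dst, sizeof dst);
        close(fd);
        return 0;
    }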
The prior art file server of Fig. 1 uses both TCP and UDP. It uses UDP for file server-related services, and uses TCP for certain other services which the server provides to network clients. The UDP is specified in Postel, "User Datagram Protocol," RFC 768 (August 28, 1980), which is incorporated herein by reference. TCP is specified in Postel, "Transmission Control Protocol," RFC 761 (January 1980) and RFC 793 (September 1981), which are also incorporated herein by reference.
XDR/RPC Layer. This layer provides functions callable from higher level programs to run a designated procedure on a remote machine. It also provides the decoding necessary to permit a client machine to execute a procedure on the server. For example, a caller process in a client node may send a call message to the server of Fig. 1. The call message includes a specification of the desired procedure, and its parameters. The message is passed up the stack to the RPC layer, which calls the appropriate procedure within the server. When the procedure is complete, a reply message is generated and RPC passes it back down the stack and over the network to the caller client. RPC is described in Sun Microsystems, Inc., "RPC: Remote Procedure Call Protocol Specification, Version 2," RFC 1057 (June 1988), which is incorporated herein by reference.

RPC uses the XDR external data representation standard to represent information passed to and from the underlying UDP layer. XDR is merely a data encoding standard, useful for transferring data between different computer architectures. Thus, on the network side of the XDR/RPC layer, information is machine-independent; on the host application side, it may not be. XDR is described in Sun Microsystems, Inc., "XDR: External Data Representation Standard," RFC 1014 (June 1987), which is incorporated herein by reference.
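
The essence of XDR as summarized above is that every encoded item occupies a whole number of 4-byte units, with integers in big-endian order, so the byte stream reads the same on any machine. A minimal sketch of an encoder (function names invented for the example):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Append a 32-bit unsigned integer in XDR form: 4 bytes, big-endian. */
    static size_t xdr_put_u32(uint8_t *buf, size_t off, uint32_t v)
    {
        buf[off]     = (uint8_t)(v >> 24);
        buf[off + 1] = (uint8_t)(v >> 16);
        buf[off + 2] = (uint8_t)(v >> 8);
        buf[off + 3] = (uint8_t)v;
        return off + 4;
    }

    /* Append variable-length data: a length word, the bytes themselves,
     * then zero padding so the next item starts on a 4-byte boundary. */
    static size_t xdr_put_opaque(uint8_t *buf, size_t off,
                                 const void *data, uint32_t len)
    {
        off = xdr_put_u32(buf, off, len);
        memcpy(buf + off, data, len);
        off += len;
        while (off % 4 != 0)
            buf[off++] = 0;
        return off;
    }

    int main(void)
    {
        uint8_t buf[64];
        size_t n = 0;

        n = xdr_put_u32(buf, n, 2);              /* e.g. an RPC version number */
        n = xdr_put_opaque(buf, n, "abcde", 5);  /* 4 + 5 + 3 bytes of padding */
        printf("encoded %zu bytes\n", n);        /* prints: encoded 16 bytes */
        return 0;
    }

An RPC call message is itself a sequence of such words: a transaction identifier, the message type, the RPC version, the program, version and procedure numbers, authentication fields, and then the XDR-encoded parameters.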
NFS Layer. The NFS ("network file system") layer is one of the programs available on the server which an RPC request can call. The combination of host address, program number, and procedure number in an RPC request can specify one remote NFS procedure to be called.

Remote procedure calls to NFS on the file server of Fig. 1 provide transparent, stateless, remote access to shared files on the disks 24. NFS assumes a file system that is hierarchical, with directories as all but the bottom level of files. Client hosts can call any of about 20 NFS procedures including such procedures as reading a specified number of bytes from a specified file; writing a specified number of bytes to a specified file; creating, renaming and removing specified files; parsing directory trees; creating and removing directories; and reading and setting file attributes. The location on disk to which and from which data is stored and retrieved is always specified in logical terms, such as by a file handle or Inode designation and a byte offset. The details of the actual data storage are hidden from the client. The NFS procedures, together with possible higher level modules such as Unix VFS and UFS, perform all conversion of logical data addresses to physical data addresses such as drive, head, track and sector identification. NFS is specified in Sun Microsystems, Inc., "NFS: Network File System Protocol Specification," RFC 1094 (March 1989), incorporated herein by reference.
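
The logical addressing described above is visible in the call arguments themselves. The following declarations are patterned on the NFS version 2 read call of RFC 1094 (procedure 6): the client names a file by an opaque 32-byte handle plus a byte offset and count, never by drive, head, track or sector.

    #include <stdint.h>

    #define NFS_FHSIZE 32              /* size of an NFS v2 file handle */

    struct nfs_fhandle {
        uint8_t data[NFS_FHSIZE];      /* opaque to the client; only the
                                          server can interpret it */
    };

    /* Arguments of NFSPROC_READ in NFS version 2 (RFC 1094). */
    struct nfs_readargs {
        struct nfs_fhandle file;       /* which file, in logical terms */
        uint32_t offset;               /* byte offset within the file */
        uint32_t count;                /* number of bytes to read */
        uint32_t totalcount;           /* present in the protocol but unused */
    };

Everything below this interface, from inode lookup to sector addressing, is the server's private business.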
With the possible exception of the network layer, all the protocol processing described above is done in software, by a single processor in the host CPU card 10. That is, when an Ethernet packet arrives on Ethernet 12, the host CPU 10 performs all the protocol processing in the NFS stack, as well as the protocol processing for any other application which may be running on the host 10. NFS procedures are run on the host CPU 10, with access to memory 16 for both data and program code being provided via MMU 11. Logically specified data addresses are converted to a much more physically specified form and communicated to the SMD disk controller 22 or the SCSI bus 28, via the VME bus 20, and all disk caching is done by the host CPU 10 through the memory 16. The host CPU card 10 also runs procedures for performing various other functions of the file server, communicating with tape controller 30 via the VME bus 20. Among these are client-defined remote procedures requested by client workstations.

If the server serves a second Ethernet 36, packets from that Ethernet are transmitted to the host CPU 10 over the same VME bus 20 in the form of IP datagrams. Again, all protocol processing except for the network layer is performed by software processes running on the host CPU 10. In addition, the protocol processing for any message that is to be sent from the server out on either of the Ethernets 12 or 36 is also done by processes running on the host CPU 10.

It can be seen that the host CPU 10 performs an enormous amount of processing of data, especially if 5-10 clients on each of the two Ethernets are making file server requests and need to be sent responses on a frequent basis. The host CPU 10 runs a multitasking Unix operating system, so each incoming request need not wait for the previous request to be completely processed and returned before being processed. Multiple processes are activated on the host CPU 10 for performing different stages of the processing of different requests, so many requests may be in process at the same time. But there is only one CPU on the card 10, so the processing of these requests is not accomplished in a truly parallel manner. The processes are instead merely time-sliced. The CPU 10 therefore represents a major bottleneck in the processing of file server requests.
Another bottleneck occurs in MMU 11, which must transmit both instructions and data between the CPU card 10 and the memory 16. All data flowing between the disk drives and the network passes through this interface at least twice.

Yet another bottleneck can occur on the VME bus 20, which must transmit data among the SMD disk controller 22, the SCSI host adaptor 26, the host CPU card 10, and possibly the network #2 controller 34.
PREFERRED EMBODIMENT - OVERALL HARDWARE ARCHITECTURE

In Fig. 2 there is shown a block diagram of a network file server 100 according to the invention. It can include multiple network controller (NC) boards, one or more file controller (FC) boards, one or more storage processor (SP) boards, multiple system memory boards, and one or more host processors. The particular embodiment shown in Fig. 2 includes four network controller boards 110a-110d, two file controller boards 112a-112b, two storage processors 114a-114b, four system memory cards 116a-116d for a total of 192MB of memory, and one local host processor 118. The boards 110, 112, 114, 116 and 118 are connected together over a VME bus 120 on which an enhanced block transfer mode as described in the ENHANCED VMEBUS PROTOCOL application identified above may be used. Each of the four network controllers 110 shown in Fig. 2 can be connected to up to two Ethernets 122, for a total capacity of 8 Ethernets 122a-122h. Each of the storage processors 114 operates ten parallel SCSI busses, nine of which can each support up to three SCSI disk drives each. The tenth SCSI channel on each of the storage processors 114 is used for tape drives and other SCSI peripherals.
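
The configuration limits just described work out as simple arithmetic; the following program only tallies numbers stated in the text.

    #include <stdio.h>

    int main(void)
    {
        int nc_boards = 4, ethernets_per_nc = 2;
        int sp_boards = 2, scsi_buses_per_sp = 10;
        int disk_buses_per_sp = 9, drives_per_bus = 3;

        printf("Ethernets:   %d\n", nc_boards * ethernets_per_nc);       /* 8 */
        printf("Disk drives: %d\n",
               sp_boards * disk_buses_per_sp * drives_per_bus);          /* 54 */
        printf("Tape/peripheral channels: %d\n",
               sp_boards * (scsi_buses_per_sp - disk_buses_per_sp));     /* 2 */
        return 0;
    }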
The host 118 is essentially a standard SunOS Unix processor, providing all the standard Sun Open Network Computing (ONC) services except NFS and IP routing. Importantly, all network requests to run a user-defined procedure are passed to the host for execution. Each of the NC boards 110, the FC boards 112 and the SP boards 114 includes its own independent 32-bit microprocessor. These boards essentially off-load from the host processor 118 virtually all of the NFS and disk processing. Since the vast majority of messages to and from clients over the Ethernets 122 involve NFS requests and responses, the processing of these requests in parallel by the NC, FC and SP processors, with minimal involvement by the local host 118, vastly improves file s
