US005678042A

United States Patent [19]
Pisello et al.

[11] Patent Number: 5,678,042
[45] Date of Patent: Oct. 14, 1997

[54] NETWORK MANAGEMENT SYSTEM HAVING HISTORICAL VIRTUAL CATALOG
     SNAPSHOTS FOR OVERVIEW OF HISTORICAL CHANGES TO FILES
     DISTRIBUTIVELY STORED ACROSS NETWORK DOMAIN

[75] Inventors: Thomas Pisello, De Bary; David Crossmier, Casselberry;
     Paul Ashton, Oviedo, all of Fla.

[73] Assignee: Seagate Technology, Inc., Scotts Valley, Calif.

[21] Appl. No.: 590,528

[22] Filed: Jan. 24, 1996

Related U.S. Application Data

[62] Division of Ser. No. 153,011, Nov. 15, 1993, Pat. No. 5,495,607.

[51] Int. Cl.6: G06F 7/00; G06F 7/06; G06F 12/00; G06F 17/30

[52] U.S. Cl.: 395/610; 395/607; 395/200.03; 395/200.01; 395/800;
     395/650; 395/280

[58] Field of Search: 395/600, 200.03, 200.11, 200.13, 800, 440, 444,
     200.06, 200.12, 200.17, 200.18, 182.02, 182.04, 182.05, 182.06,
     610, 607, 650

[56] References Cited

U.S. PATENT DOCUMENTS

4,141,006   2/1979  Braxton et al. .......... 340/505
4,710,870  12/1987  Blackwell et al. ........ 364/200
4,805,134   2/1989  Calo et al. ............. 395/600
4,897,841   1/1990  Gang, Jr. ............... 370/85.13
4,914,571   4/1990  Baratz et al. ........... 395/600
4,987,531   1/1991  Nishikado et al. ........ 395/600
5,001,628   3/1991  Johnson et al. .......... 395/600
5,077,658  12/1991  Bendert et al. .......... 395/600
5,133,075   7/1992  Risch ................... 395/800
5,163,131  11/1992  Row et al. .............. 395/200
5,175,852  12/1992  Johnson et al. .......... 395/600
5,216,591   6/1993  Nemirovsky et al. ....... 395/200
5,220,562   6/1993  Takada et al. ........... 370/85.13
5,247,670   9/1993  Matsunaga ............... 395/650
5,271,007  12/1993  Kurahashi et al. ........ 395/600
5,287,453   2/1994  Roberts ................. 395/200
5,287,461   2/1994  Moore ................... 395/275
5,295,244   3/1994  Dev et al. .............. 395/200
5,325,527   6/1994  Cwikowski et al. ........ 395/650
5,403,639   4/1995  Belsan et al. ........... 395/600
5,448,727   9/1995  Annevelink .............. 395/600
5,506,986   4/1996  Healy ................... 395/600
5,537,585   7/1996  Blickenstaff et al. ..... 395/600

Primary Examiner: Jack B. Harvey
Assistant Examiner: Raymond N. Phan
Attorney, Agent, or Firm: Fliesler, Dubb, Meyer & Lovejoy

[57] ABSTRACT

A network management system includes a domain administrating server (DAS) that stores a virtual catalog representing an overview of all files distributively stored across a network domain currently or in the past. The current and historical file information is used for assisting in auditing or locating files located anywhere in the domain. The current file information is used for assisting in transferring files across the domain. The domain administrating server (DAS) also includes a rule-base driven artificial administrator for monitoring and reacting to domain-wide alert reports and for detecting problematic trends in domain-wide performance based on information collected from the network domain.

4 Claims, 4 Drawing Sheets

[Representative drawing: FIG. 1, domain 190]

Oracle Exhibit 1001, page 1
[Sheet 1 of 4: FIG. 1, a block diagram of the domain management system. Shown are the domain administrating server (DAS) 150 with the domain-wide virtual catalog, previous snapshots 150.00-150.02, domain admin data/rule base 150.1, and domain status/control 150.2; a DAS-managed file-server (computer 110') with exchange agents, local catalogs, a data integrity module, a local infrastructure exec (UPS 181, temperature 182, component security 183), a local backup exec 117, and local HSM; the network-linking backbone 105 with comm gateways 102, 104 and 106 and pager 107; an admin workstation 160 with admin GUI 165; a user workstation 170; and domains 190, 190', 190".]
U.S. Patent    Oct. 14, 1997    Sheet 2 of 4    5,678,042

[FIG. 2: a multi-dimensional viewing window with axes for TIME, FileName (or other attribute), and LOCATION IN DOMAIN, spanning DRIVE-A and DRIVE-B storage.]

[FIGS. 3A-3B: trend graphs of MAX versus CURRENT used space over TIME.]

[FIGS. 4A-4B: pie charts of DRIVE-A used space and DRIVE-B used space.]
[Sheet 3 of 4: FIG. 5, backbone traffic chart 500, plotting job number against time (t1 through t5) with begin and projected-end marks for: HSM transfer Server-A to Server-H (501), backup transfer Server-A to Server-K (502), and backup transfer Server-B to Server-K (503).]
[Sheet 4 of 4: FIG. 6 (600), a logical flow map. An admin workstation 160a runs ADM GUI 165 with a virtual file manager 165.1, reports-and-view generator, remote infrastructure manager 165.3, rule-base driven security filter 165.4, help 165.5, permissions 165.7, keyboard, screen, and comm gate. The DAS 150 holds domain catalog snapshots 150.00-150.0N, infrastructure status snaps 150.11, backbone traffic patterns 150.13, a snapshot collector 150.21, user groups, a task scheduler 150.22, an (SNMP) monitor 150.23, and an artificial administrator with policy enforcers 150.26-150.28. A local server computer 110' runs a local HSM agent 119a, local backup agent 119b, local scan agent 119c, and local configuration agent 119d coupled to units 116-118.]
NETWORK MANAGEMENT SYSTEM HAVING HISTORICAL VIRTUAL CATALOG SNAPSHOTS FOR OVERVIEW OF HISTORICAL CHANGES TO FILES DISTRIBUTIVELY STORED ACROSS NETWORK DOMAIN

This application is a division of Ser. No. 08/153,011, filed Nov. 15, 1993, now U.S. Pat. No. 5,495,607.
BACKGROUND

1. Field of the Invention

The invention relates generally to the field of computerized networks. The invention relates more specifically to the problem of managing a system having a variety of file storage and file serving units interconnected by a network.

2. Cross Reference to Related Applications

The following copending U.S. patent application(s) is/are assigned to the assignee of the present application, is/are related to the present application and its/their disclosures is/are incorporated herein by reference:

(A) Ser. No. 08/151,525, filed Nov. 12, 1993 by Guy A. Carbonneau et al. and entitled SCSI-COUPLED MODULE FOR MONITORING AND CONTROLLING SCSI-COUPLED RAID BANK AND BANK ENVIRONMENT, and issued Dec. 17, 1996 as U.S. Pat. No. 5,586,250.
3. Description of the Related Art

Not too long ago, mainframe computers were the primary means used for maintaining large databases. More recently, database storage strategies have begun to shift away from having one large mainframe computer coupled to an array of a few, large disk units or a few, bulk tape units, and have instead shifted in favor of having many desktop or mini- or micro-computers intercoupled by a network to one another and to many small, inexpensive and modularly interchangeable data storage devices (e.g., to an array of small, inexpensive, magnetic storage disk and tape drives).

One of the reasons behind this trend is a growing desire in the industry to maintain at least partial system functionality even in the event of a failure in a particular system component. If one of the numerous mini/micro-computers fails, the others can continue to function. If one of the numerous data storage devices fails, the others can continue to provide data access. Also, increases in data storage capacity can be economically provided in small increments as the need for increased capacity develops.
A common configuration includes a so-called "client/server computer" that is provided at a local network site and has one end coupled to a local area network (LAN) or a wide area network (WAN) and a second end coupled to a local bank of data storage devices (e.g., magnetic or optical disk or tape drives). Local and remote users (clients) send requests over the network (LAN/WAN) to the client/server computer for read and/or write access to various data files contained in the local bank of storage devices. The client/server computer services each request on a time-shared basis.
In addition to performing its client servicing tasks, the client/server computer also typically attends to mundane storage-management tasks such as keeping track of the amount of memory space that is used or free in each of its local storage devices, maintaining a local directory in each local storage device that allows quick access to the files stored in that local storage device, minimizing file fragmentation across various tracks of local disk drives in order to minimize seek time, monitoring the operational status of each local storage device, and taking corrective action, or at least activating an alarm, when a problem develops at its local network site.
Networked storage systems tend to grow like wild vines, spreading their tentacles from site to site as opportunities present themselves. After a while, a complex mesh develops, with all sorts of different configurations of client/server computers and local data storage banks evolving at each network site. The administration of such a complex mesh becomes a problem.
In the early years of network management, a human administrator was appointed for each site to oversee the local configuration of the on-site client/server computer or computers and of the on-site data storage devices.

In particular, the human administrator was responsible for developing directory view-and-search software for viewing the directory or catalog of each on-site data storage device and for assisting users in searches for data contained in on-site files.
The human administrator was also responsible for maintaining backup copies of each user's files and of system-shared files on a day-to-day basis.

Also, as primary storage capacity filled up with old files, the human administrator was asked to review file utilization history and to migrate files that had not been accessed for some time (e.g., in the last 3 months) to secondary storage. Typically, this meant moving files that had not been accessed for some time from a set of relatively-costly high-speed magnetic disk drives to a set of less-costly slower-speed disk drives, or to even slower, but more cost-efficient, sequential-access tape drives. Very old files that lay unused for very long time periods (e.g., more than a year) on a "mounted" tape (which tape is one that is currently installed in a tape drive) were transferred to unmounted tapes or floppy disks, and these were held nearby for remounting only when actually needed.
When physical on-site space filled to capacity for demounted tapes and disks, the lesser-used ones of these were "archived" by moving them to more distant physical storage sites. The human administrator was responsible for keeping track of where in the migration path each file was located. Time to access the data of a particular file depended on how well organized the human administrator was in keeping track of the location of each file and how far down the chain from primary storage to archived storage each file had moved.
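The age-based migration path described above can be sketched in code. The following Python fragment is illustrative only and is not part of the original disclosure; the tier names, the `FileRecord` structure, and the exact thresholds are hypothetical stand-ins for the "3 months" and "more than a year" figures mentioned in the text.

```python
from dataclasses import dataclass

# Hypothetical storage tiers, ordered from fastest/costliest to slowest/cheapest.
TIERS = ["primary", "secondary", "mounted_tape", "unmounted_tape", "archive"]

@dataclass
class FileRecord:
    name: str
    tier: str          # current position in the migration path
    idle_days: int     # days since last access

def next_tier(rec: FileRecord) -> str:
    """Pick the tier a file should occupy based on how long it has sat idle.

    The 90-day and 365-day thresholds are illustrative stand-ins for the
    "3 months" and "more than a year" periods described in the text.
    """
    if rec.idle_days > 365:
        target = "unmounted_tape"
    elif rec.idle_days > 90:
        target = "secondary"
    else:
        target = "primary"
    # Files only move down the hierarchy; never promote automatically.
    if TIERS.index(target) > TIERS.index(rec.tier):
        return target
    return rec.tier

rec = FileRecord("q3_report.doc", tier="primary", idle_days=120)
print(next_tier(rec))  # prints "secondary": a 120-day-idle file moves down
```

A real administrator (or the migration software that replaced one) would run such a policy periodically over every catalog entry and record each file's new location, which is exactly the bookkeeping burden the text describes.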
The human administrator at each network site was also responsible for maintaining the physical infrastructure and integrity of the system. This task included: making sure power supplies were operating properly, equipment rooms were properly ventilated, cables were tightly connected, and so forth.
The human administrator was additionally responsible for local asset management. This task included: keeping track of the numbers and performance capabilities of each client/server computer and its corresponding set of data storage devices, keeping track of how full each data storage device was, adding more primary, secondary or backup/archive storage capacity to the local site as warranted by system needs, keeping track of problems developing in each device, and fixing or replacing problematic equipment before problems became too severe.
With time, many of the manual tasks performed by each on-site human administrator came to be replaced, one at a time on a task-specific basis, by on-site software programs. A first set of one or more on-site software programs would
take care of directory view-and-search problems for files stored in the local primary storage. A second, independent set of one or more on-site software programs would take care of directory view-and-search problems for files stored in the local secondary or backup storage. Another set of one or more on-site software programs would take care of making routine backup copies and/or routinely migrating older files down the local storage migration hierarchy (from primary storage down to archived storage). Yet another set of on-site software programs would assist in locating files that have been archived. Still another set of independent, on-site software programs would oversee the task of maintaining the physical infrastructure and integrity of the on-site system. And a further set of independent, on-site software programs would oversee the task of local asset management.

The term "task-segregation" is used herein to refer to the way in which each of the manual tasks described above has been replaced, one at a time, by a task-specific software program.
At the same time that manual tasks were being replaced with task-segregated software programs, another trend evolved in the industry where the burden of system administration was slowly shifted from a loose scattering of many local-site human administrators, one for each site, to a more centralized form where one or a few human administrators oversee a large portion if not the entirety of the network from a remote site.

This evolutionary movement from local to centralized administration, and from task-segregated manual operation to task-segregated automated operation, is disadvantageous when viewed from the vantage point of network-wide administration. The term "network-wide administration" is used here to refer to administrative tasks which a human administrator located at a central control site may wish to carry out for one or more client/server data storage systems located at remote sites of a large network.
A first major problem arises from the inconsistency among user interfaces that develops across the network. In the past, each local-site administrator had a tendency to develop a unique style for carrying out man-to-machine interactions. As a result, one site might have its administrative programs set up to run through a graphical-user interface based on, for example, the Microsoft Windows™ operating environment, while another site might have its administrative programs running through a command-line style interface based on, for example, the Microsoft DOS 6.0™ operating system or the AT&T UNIX™ operating system. A network-wide administrator has to become familiar with the user interface at each site and has to remember which is being used at each particular site in order to be able to effectively communicate with the local system administrating software programs. Inconsistencies among the interfaces of multiple network sites make this a difficult task.
Another problem comes about from the task-segregated manner in which local administrative programs have developed over the years. A remote human administrator (or other user) has to become familiar with the local topology of each network site when searching for desired files. In other words, he or she has to know what kinds of primary, secondary, backup and archive storage mechanisms are used at each site, how they are connected, how data files migrate through them, and which "file manager" program is to be used to view the files of each type of storage mechanism.

More specifically, if a file cannot be found in the directory of a primary storage device located at a particular network site, the administrator has to switch from the primary-storage viewing program to a separate, migration-tracking program
to see if perhaps the missing file has been migrated to secondary or archive storage at that site. The administrator may have to switch to a separate, backup-tracking program to see if a file that is missing from primary and secondary storage might be salvaged out of backup storage at the same or perhaps a different site. Sometimes, the administrator may wish to see a historical profile of a file in which revisions have been made to the file over a specified time period. A separate file-history tracking program at the site might have to be consulted, if it exists at all, to view such a historical profile.

If a file cannot be found at a first site, then perhaps a copy might be stored at another site. To find out if this is the case, the administrator has to log out of the first site, log in to the system at a next site, and repeat the above process until the sought-after data is located or the search is terminated.

Each switch from one site to the next, and from one independent file-managing program to another, disadvantageously consumes time and also introduces the problem of inconsistent user interfaces.
A similar set of problems is encountered in the overseeing of lower-level infrastructure support operations of a networked data storage system. Included in this category are the scheduling and initiation of routine file backup and file migration operations at each site, the tracking of problems at each site, and so forth.

A method and system for integrating all the various facets of system administration on a network-wide basis is needed.
SUMMARY OF THE INVENTION

The invention overcomes the above-mentioned problems by providing a network management system having a virtual catalog overview function for viewing of files distributively stored across a network domain.

A network management system in accordance with the invention comprises: (a) a domain administrating server (DAS) coupled to a network-linking backbone of a network domain for scanning the network domain to retrieve or broadcast domain-wide information, where the domain administrating server (DAS) has means for storing and maintaining a domain-wide virtual catalog and for overseeing other domain-wide activities, and where the domain-wide virtual catalog contains file identifying information for plural files distributively stored in two or more file servers of the network domain; and (b) one or more workstations, coupled by way of the network-linking backbone to the domain administrating server, for accessing the domain-wide information retrieved by the domain administrating server.

A method in accordance with the invention comprises the steps of: (a) interrogating the local catalog of each data storage device in a network composed of plural data storage devices linked to one another by a network-linking backbone; (b) retrieving from each interrogated local catalog, file identifying information identifying a name, a storage location and/or other attributes of each file stored in the interrogated device; and (c) integrating the retrieved file identifying information collected from each local catalog into a domain-wide virtual catalog so that each file stored on the network can be identified by name, location and/or another attribute by consulting the domain-wide virtual catalog.
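Steps (a) through (c) of this method can be illustrated with a short sketch. This Python fragment is not the patented implementation; the `local_catalogs` mapping is a hypothetical stand-in for the per-device catalogs a real DAS would interrogate over the backbone.

```python
def build_virtual_catalog(local_catalogs):
    """Merge per-device catalogs into one domain-wide virtual catalog.

    local_catalogs maps a storage-location name (server/device) to a list
    of file-identifying entries, standing in for steps (a) and (b); the
    merge loop itself is step (c).
    """
    virtual = {}
    for location, entries in local_catalogs.items():  # (a) interrogate each device
        for entry in entries:                         # (b) retrieve identifying info
            record = dict(entry, location=location)   # tag each file with where it lives
            virtual.setdefault(entry["name"], []).append(record)  # (c) integrate
    return virtual

# Once merged, a file can be found by name no matter which server holds it.
catalogs = {
    "server_A/drive_1": [{"name": "budget.xls", "size": 4096}],
    "server_B/drive_2": [{"name": "budget.xls", "size": 4096},
                         {"name": "memo.txt", "size": 512}],
}
vc = build_virtual_catalog(catalogs)
print([r["location"] for r in vc["budget.xls"]])  # both locations holding the file
```

Keeping one record per (name, location) pair, rather than collapsing duplicates, mirrors the claim that a file is identified by name, location and/or another attribute.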
BRIEF DESCRIPTION OF THE DRAWINGS

The below detailed description makes reference to the accompanying drawings, in which:

FIG. 1 is a block diagram showing a centralized domain management system in accordance with the invention;
FIG. 2 is a perspective view of a multi-dimensional viewing window for visualizing domain-wide activities spatially, temporally and by file attributes;

FIGS. 3A-3B show a set of trend analysis graphs that may be developed from the domain-wide virtual catalog snapshots obtained by the system of FIG. 1;

FIGS. 4A-4B show side-by-side examples of pie charts showing used-versus-free storage space on respective storage drives DRIVE-A and DRIVE-B within the domain of FIG. 1;

FIG. 5 is a job scheduling chart for minimizing traffic congestion on the network-linking backbone; and

FIG. 6 shows a logical flow map between various data and control mechanisms distributed amongst the domain administrating server (DAS), an administrative workstation, and a given server computer.
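The trend graphs of FIGS. 3A-3B suggest projecting when a drive will reach its maximum from successive catalog snapshots. The patent does not specify an extrapolation method; the simple linear projection below is a hedged sketch with made-up sample values.

```python
def project_full(snapshots, capacity):
    """Linearly extrapolate used space from periodic snapshots and report
    how many more snapshot periods remain until the drive hits capacity.

    snapshots: used-space totals, one per snapshot period, oldest first.
    Returns None when usage is flat or shrinking (no fill date to project).
    """
    if len(snapshots) < 2:
        return None
    rate = (snapshots[-1] - snapshots[0]) / (len(snapshots) - 1)  # growth per period
    if rate <= 0:
        return None
    return (capacity - snapshots[-1]) / rate

# A drive that grew 10 GB per snapshot with 40 GB of headroom left:
periods_left = project_full([100, 110, 120], capacity=160)
print(periods_left)  # 4.0
```

An artificial administrator of the kind the abstract describes could flag a drive whenever this projection drops below some alert threshold, which is one plausible reading of "detecting problematic trends."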
DETAILED DESCRIPTION

FIG. 1 is a block diagram of a networked enterprise system 100 in accordance with the invention.

Major components of the networked enterprise system 100 include: a network-linking backbone 105; a plurality of DAS-managed file-servers 110, 120, ..., 140, operatively coupled to the backbone 105; and a domain administrating server (DAS) 150, also operatively coupled to the backbone 105.
The network-linking backbone 105 can be of any standard type used for forming local-area or wide-area digital data networks (or even metropolitan-wide networks). Examples of standard backbones include Ethernet coaxial or twisted-pair cables and token ring systems.
One or more communication gateways 104, 106 can link the illustrated backbone 105 to additional backbones 105', 105". The communications gateways 104, 106 may be of the wired type (e.g., high-speed digital telephone lines) or a wireless type (e.g., microwave or satellite links). As such, the overall communications network, 105"-104-105-106-105'-etc., can extend over long distances and pass through many geographic sites. Examples include communication networks which interlink different offices of a large building complex, or those which interlink multiple buildings of a campus, or those which interlink campuses of different cities, or those that interlink transcontinental or global sites.
For purposes of administration, it is convenient to call the overall communications network, 105"-104-105-106-105'-etc., and the resources connected to it, an "enterprise". It is convenient to subdivide the enterprise into a plurality of nonoverlapping "domains". The domains are logical subdivisions but may follow physical subdivisions. Examples of such subdivisions include but are not limited to: (a) subdividing a building-wide enterprise into floor-wide domains, one for each floor; (b) subdividing a corporate-wide enterprise into department-wide domains, one for each department of the corporate structure (e.g., accounting, marketing, engineering, etc.); (c) subdividing a multi-city enterprise according to the different cities it services; and so forth.
A block diagram of a first domain 190 within an enterprise system 100 in accordance with the invention is shown in FIG. 1. The enterprise system 100 can be composed of the one illustrated domain 190 or may have a plurality of like-structured or differently-structured domains connected to the illustrated first domain 190.

The aforementioned network-linking backbone 105 and plural file servers 110, 120, ..., 140 are included within the first domain 190. The domain administrating server (DAS)
150 is also included within the first domain 190, as are a plurality of administrative workstations 160, 161, etc., and a plurality of user workstations 170, 171 (not shown), etc., which also connect to the network-linking backbone 105.

Although not shown, it is to be understood that numerous other data input and/or output devices can be connected to the network-linking backbone 105, including but not limited to: so-called "dumb" terminals which do not have a nonvolatile mass storage means of their own, printers, label-makers, graphical plotters, modems, data acquisition equipment (analog-to-digital converters), digital voice and/or image processing equipment, and so forth. File-servers 110, 120, ..., 140 may be used for storing or outputting the data created or used by these other data input and/or output devices.
Each file server 110, 120, ..., 140 has associated with it: (1) a respective local server computer 110', 120', ..., 140'; (2) a set of one or more nonvolatile data storage devices (e.g., 111-114); and (3) a respective infrastructure 180, 180', ..., 180" for supporting operations of the local server computer (e.g., 110') and its associated data storage devices (e.g., 111-114).
It is to be understood that communications gateway 106 can be used to link the first domain 190 to a variety of other structures, including a subsequent and like-structured second domain 190'. Similarly, communications gateway 104 can be used to link the first domain 190 to a variety of other structures, including a preceding and like-structured third domain 190". Data can be transferred from one domain to the next via the communications gateways 104, 106.

In addition to being able to communicate with other domains, each communications gateway 104, 106 can link via telephone modem or by way of a radio link to remote devices such as an administrator's home computer or an administrator's wireless pager (beeper) 107 and send or receive messages by that pathway.
The internal structure of the first of the DAS-managed file servers, 110, is now described as exemplary of the internal structures of the other DAS-managed file servers, 120, ..., 140. The term "DAS-managed" indicates, as should be apparent by now, that each of file servers 110, 120, ..., 140 is somehow overseen or managed by the Domain Administrating Server (DAS) 150. Details of the oversight and/or management operations are given below.
The first DAS-managed file server 110 includes a client/server type of computer 110', which is represented by box 110' and referred to herein as the "local server computer 110'". Server computer 110' is understood to include a CPU (central processing unit) that is operatively coupled to internal RAM (random access memory) and/or ROM (read-only memory). Examples of client/server type computers that form the foundation for server computer 110' include off-the-shelf tower-style computers that are based on the Intel 80486™ microprocessor and come bundled with appropriate client/server supporting hardware and software.
The local server computer 110' of the first DAS-managed file-server 110 has a network interface port 110a that operatively couples the server computer 110' to the network-linking backbone 105, and a mass-storage port 110b that operatively couples the server computer 110' to one or more of: a primary mass storage means 111, a slower secondary storage means 112, a backup storage means 113, and an archived-data storage and retrieval means 114.

The primary storage means 111 can be a high-speed Winchester-type magnetic disk drive or the like, but can also include battery-backed RAM disk and/or non-volatile flash-
EEPROM disk or other forms of high-performance, non-volatile mass storage.

The secondary storage means 112, if present, can include a slower WORM-style optical storage drive (Write Once, Read Many times) or a "floptical" storage drive or other secondary storage devices, as the term will be understood by those skilled in the art. (Secondary storage is generally understood to cover mass storage devices that have somewhat slower access times than the associated primary storage but provide a savings in terms of the cost per stored bit.)

The backup storage means 113 can include magnetic disk drives but more preferably comprises DAT (Digital Audio Tape) drives or other forms of tape drives or other cost-efficient backup storage devices. A backup copy of each file held in primary or secondary storage (111, 112) is preferably made on a periodic basis (e.g., nightly or every weekend) so that a relatively recent copy of a given file can be retrieved even in the case where the corresponding primary or secondary storage means (111, 112) suffers catastrophic failure, e.g., a head crash or destruction.

The archived-data storage and retrieval means 114 typically comes in the form of an archive create/retrieve drive and an associated set of removable tapes or removable disk cartridges. Most if not all of the associated set of removable archive tapes and/or removable archive disk cartridges are not physically mounted to the archive create/retrieve drive (as indicated by the dashed connection line) and are thus not immediately accessible to the server computer 110'. They can be mounted when requested and thereafter accessed.

Note: The above description is intended to be generic of the types of nonvolatile mass storage means 111-114 that might be connected to the mass-storage port 110b of the server computer 110'. In theory, each server computer can have all of the primary (P), secondary (S), backup (B) and archive (A) storage means (111-114) connected to its mass-storage port 110b. Due to cost and performance considerations, however, a typical set-up will instead have one or more "groups" of server computers to which primary but not secondary storage means is connected. Each such server computer will be referred to as a primary file server. A

by reference. As such these will not be detailed here. In brief, each file is distributively stored across two or more storage drives so that failure of a single drive will not interfere with the accessibility or integrity of a stored file. The dashed symbol 115 for a RAID bank indicates the possibility of file distribution across redundant drives.

The above-cited application also details the intricacies involved in maintaining an infrastructure 180 for supporting various operations of the data storage devices 111-113 of a given server computer, and as such these will not be detailed here either. In brief, the infrastructure 180 of the server computer 110' preferably includes an uninterruptible power supply means (UPS) 181 for supplying operational power to the local data storage devices 111-113 and to the local server computer 110'. A local temperature control means 182 (e.g., cooling fans) may be included in the infrastructure 180 for controlling the temperatures of the local devices 110', 111-113. A local component security means 183 (e.g., a locked, alarmed cabinet) may be provided for assuring physical security of one or more of the local components 110', 111-113 (and also, if desired, of the archived-data storage means and tapes 114). A local data path integrity checking module 184 may be further included within the local infrastructure 180 for assuring proper interconnections by cable or otherwise between units 110' and 111-113 so that data is properly transferred from one to the other.

A local infrastructure support program 116 is preferably loaded into the local server computer 110' for monitoring and managing one or more of the local infrastructure components 181-184 coupled to it and its associated data storage units 111-114.

A local backup execution program 117 is also preferably installed in the local server computer 110' for routinely making, or at least requesting, backups of various data files held in the local primary and secondary storage means 111-112. (Aside: As will be understood from the below discussion, a disadvantageous traffic congestion condition may develop on the network-linking backbone 105 as a result of many primary file servers all trying to backup their files at one time to a shared backup server. To avoid this, backup making is preferably controlled on a domain-wide
`sec
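The aside above describes avoiding backbone congestion by not letting every primary file server back up to the shared backup server at the same moment. One way such domain-wide coordination might be realized is to stagger each server's start time across the nightly backup window. The patent discloses no code; the following Python sketch is purely illustrative, and the function name, window parameters, and scheduling policy are assumptions, not the patent's method:

```python
def stagger_backup_starts(servers, window_start_hr=22, window_hrs=6):
    """Assign each primary file server its own start time inside the
    nightly backup window, so the servers do not all flood the shared
    backup server (and the network-linking backbone) at once.

    Hypothetical sketch: evenly spaced slots, one per server.
    Returns {server_name: (hour, minute)} in 24-hour time.
    """
    servers = sorted(servers)                              # deterministic order
    slot_min = (window_hrs * 60) // max(len(servers), 1)   # minutes per server
    schedule = {}
    for i, name in enumerate(servers):
        offset = i * slot_min                              # minutes past window start
        hour = (window_start_hr + offset // 60) % 24       # wrap past midnight
        minute = offset % 60
        schedule[name] = (hour, minute)
    return schedule
```

For example, three primary file servers sharing a 22:00-04:00 window would be assigned start times of 22:00, 00:00 and 02:00, spreading their traffic on the backbone across the whole window.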