Exhibit 1007
(12) United States Patent
Inohara et al.

US006256747B1
(10) Patent No.: US 6,256,747 B1
(45) Date of Patent: Jul. 3, 2001

(54) METHOD OF MANAGING DISTRIBUTED SERVERS AND DISTRIBUTED INFORMATION PROCESSING SYSTEM USING THE METHOD

(75) Inventors: Shigekazu Inohara, Kokubunji; Yoshimasa Masuoka, Kodaira; Jinghua Min, Kodaira; Fumio Noda, Kodaira, all of (JP)

(73) Assignee: Hitachi, Ltd., Tokyo (JP)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/159,784
(22) Filed: Sep. 24, 1998
(30) Foreign Application Priority Data
Sep. 25, 1997 (JP) ................. 9-259591

(51) Int. Cl.7 .......................... G06F 11/00
(52) U.S. Cl. ........................... 714/4; 709/201
(58) Field of Search ................... 714/2, 15, 16, 25, 27, 38, 39, 42, 43; 709/201, 203, 205, 207, 220, 221, 229, 249
(56) References Cited

U.S. PATENT DOCUMENTS

4,887,204    12/1989  Johnson et al. .............. 364/200
5,572,724    11/1996  Watanabe et al. ............. 395/616
5,619,656     4/1997  Graf ........................ 395/200.11
6,061,722 *   5/2000  Lipa et al. ................. 709/224
6,141,686 *  10/2000  Jackowski et al. ............ 709/224

OTHER PUBLICATIONS

Menges et al., Method and apparatus for managing computer processes, EPAB, Pub. No. EP000737922A1, Oct. 1995.*
Ando et al., Terminal identification number imparting method and server device, JPAB, Pub. No. JP410271117A, Oct. 1998.*
Matsushita Denki Sangyo, Video disk recorder configured with random access device in a video server, Derwent Week, Oct. 1996.*

* Cited by examiner
`
Primary Examiner—Nadeem Iqbal
(74) Attorney, Agent, or Firm—Antonelli, Terry, Stout & Kraus, LLP

(57) ABSTRACT

In order to effectively make the grasp of operating conditions of a plurality of servers and a cache management in an information system without increasing a time/labor taken by an administrator, the plurality of servers forms a multi-cast hierarchy dynamically reconstructed by virtue of mutual support, and the communication of server status, cache directory and validation is performed on the hierarchy. The administrator has no need of management for cooperation between servers excepting the designation of some other servers for startup thereof. A cache between servers is shared through the exchange of a cache directory, and a validation time is reduced, thereby shortening the response time for users.

34 Claims, 9 Drawing Sheets
`
[Front-page drawing (FIG. 1): block diagram of a server 10 on a network 12 serving a client 11, comprising a client request management section, a cache management section (with cache value evaluation section 104 and cache movement/revocation part 105), a server management section 102, a validation management section 103 and a communication section 106, together with a cache table 107, a cache directory 108, a server status table and a group table.]
`
`Petitioner Ex. 1007 Page 1
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 1 of 9 — US 6,256,747 B1

[FIG. 1: block diagram of the internal construction. A server 10 connected via a communication section 106 to a network 12 comprises a client request management section 100, a cache management section 101 (with cache value evaluation section 104 and cache movement/revocation part 105), a server management section 102 and a validation management section 103, together with a cache table 107, a cache directory 108, a server status table 109 and a group table 110. Clients 11, other servers 10', 10'', . . . and external servers 13, 13', . . . communicate with the server over the network.]
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 2 of 9 — US 6,256,747 B1

[FIG. 2: data structures. The cache table 107 holds cache table entries 200, each with a URL 201, SIZE 202, DATE 203, number of reference times 204 and cache content 205. The cache directory 108 holds cache directory entries 210, each pairing a URL with a server ID. The server status table 109 holds server status table entries 220, each with a server ID 221, throughput 222 and latency 223. The group table 110 holds a leader server ID 230, an upper leader server ID 231 and pairs of server ID 232, 232', 232'' and upper server ID 234, 234', 234''.]
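The tables of FIG. 2 can be summarized as plain records. The following Python dataclasses are a minimal illustrative sketch: the field names mirror the figure's labels, but the concrete types are assumptions, since the patent does not specify representations.

```python
from dataclasses import dataclass, field

@dataclass
class CacheTableEntry:          # cache table 107, entry 200
    url: str                    # URL 201
    size: int                   # SIZE 202
    date: float                 # DATE 203 (assumed: a timestamp)
    reference_times: int        # number of reference times 204
    content: bytes              # cache content 205

@dataclass
class CacheDirectoryEntry:      # cache directory 108, entry 210
    url: str
    server_id: int              # ID of a server holding the URL in its cache

@dataclass
class ServerStatusEntry:        # server status table 109, entry 220
    server_id: int              # server ID 221
    throughput: float           # throughput 222
    latency: float              # latency 223

@dataclass
class GroupTable:               # group table 110
    leader_server_id: int       # leader server ID 230
    upper_leader_server_id: int # upper leader server ID 231
    members: list = field(default_factory=list)  # (server ID 232, upper server ID 234) pairs
```

A server would keep one `GroupTable` plus lists of the other entry types, keyed as convenient; the figure itself only fixes the fields, not the container.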
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 3 of 9 — US 6,256,747 B1

[FIG. 3: message formats. A group participation table 300 lists new server IDs 301, 301', 301''. A hierarchy maintenance message 310 carries a transmitter server ID 311, a number of nests 313 and server IDs 312, 312', 312''. A group update message 320 carries a new leader server ID 321. A validation request message 330 is a list of entries 331, each with a server ID 332, a URL 333 and a DATE field. A cache value message 340 is a list of cache value message entries 341, each pairing a server ID with a cache value. A movement advance notice message 350 is a list of entries 351, each with a server ID 352 and a URL 353. A revocation advance notice message 360 is a list of entries 361, each with a server ID and a URL. A host URL message 370 is a list of entries 371, each with a server ID, a URL 373 and a flag.]
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 4 of 9 — US 6,256,747 B1

[FIG. 4: flow chart of the hierarchy formation processing at server initialization. The server registers the ID of the server itself into the server status table (403), then repeatedly selects a server from the server status table, transmits a "group table transfer" request to that server while measuring the communication speed (406), receives the group table and completes the measurement of communication speed (407), and registers the result into the server status table (408). After N repetitions (409, 410) it transmits a "group participation" request to the proximate server (412), receives a "group update" message (413), transmits a "group table transfer" request to the leader (414), receives the group table from the leader (415) and registers it into its own group table (416).]
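The flow of FIG. 4 amounts to probing a handful of known servers, keeping the one with the best measured communication speed, and joining its group. Below is a rough Python sketch under stated assumptions: `measure_speed` here only simulates a latency, whereas the real protocol times an actual "group table transfer" exchange, and the final "group participation" / "group update" messages are reduced to a comment.

```python
import random

def measure_speed(server_id):
    # Stand-in for transmitting a "group table transfer" request and
    # timing the reply; returns a simulated latency in milliseconds.
    random.seed(server_id)          # deterministic, for illustration only
    return 10 + random.random() * 90

def form_hierarchy(own_id, known_servers, n=3):
    """Select up to n servers, measure communication speed to each,
    and pick the proximate (lowest-latency) one to participate with."""
    status_table = {own_id: 0.0}    # register the ID of the server itself
    for sid in known_servers[:n]:   # repeated N times
        status_table[sid] = measure_speed(sid)   # registration into table
    candidates = {s: lat for s, lat in status_table.items() if s != own_id}
    proximate = min(candidates, key=candidates.get)
    # A real server would now transmit "group participation" to `proximate`,
    # receive "group update", and fetch the leader's group table.
    return proximate, status_table

peer, table = form_hierarchy(own_id=1, known_servers=[2, 3, 4])
```

The point of the sketch is the selection rule: only a small, administrator-supplied seed list is probed, and proximity decides the group, which is what keeps the protocol broadcast-free.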
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 5 of 9 — US 6,256,747 B1

[FIG. 5: flow chart of the group participation request processing. A server that is not a leader transfers the request to its leader (503). A leader checks whether the number of old members plus the number of new members is below the maximum (505); if so, it updates the group table (512) and transmits a "group update (members)" message (518). Otherwise it forms a new group (509), transmits a "group update (leader)" message (513), updates the group table (514) and transmits a further "group update (leader)" message (515) before ending the processing (516).]
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 6 of 9 — US 6,256,747 B1

[FIG. 6: flow chart of the hierarchy maintenance message reception processing. Unless the message was transmitted by the server itself (602), the server adds the ID of the server itself to the message (604). A leader with a number of nests of 2 or more adds its members into the message (607), changes the transmitter to the server itself and decreases the number of nests by 1 (608), transmits the hierarchy maintenance message to each member (613) and receives hierarchy maintenance messages from members (614). Depending on whether the message was received from an upper member (615) or from a member (621), the hierarchy maintenance message is transmitted onward to the next upper member (619, 624) before the reception processing ends (625).]
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 7 of 9 — US 6,256,747 B1

[FIG. 7: flow chart of the hierarchy reconstruction processing. When the number of members falls below the minimum, an upper leader server exists and the upper leader server is not the server itself (702), the server transmits a "group participation" request to the upper server (704).]

[FIG. 8: flow chart of the cache value circulation processing. A leader calculates its cache value (803), transmits a cache value message (804), receives the circulated cache value message back (805, 806) and updates/revokes caches using it (807). A member receives the cache value message (809) and, if there is already an entry of the server itself (810), saves the message (811); otherwise it calculates its cache value (812), updates the cache value message (813) and transmits it onward (814).]
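The circulation of FIG. 8 passes a single message around the group: each member appends its own (server ID, cache value) entry, and circulation stops when the message returns to a server already listed in it. A minimal simulation, with the group modeled as a Python list standing in for the multi-cast ring and message passing reduced to iteration (both assumptions of the sketch, not of the patent):

```python
def circulate_cache_values(group):
    """Simulate FIG. 8: the leader starts a cache value message and each
    member in turn adds a (server_id, cache_value) entry.  Circulation
    ends when the message comes back to a server already listed in it."""
    leader = group[0]
    message = []                    # cache value message 340
    ring = group + [leader]         # the message eventually returns to the leader
    for server_id, cache_value in ring:
        if any(sid == server_id for sid, _ in message):
            break                   # entry of the server itself exists: stop
        message.append((server_id, cache_value))   # update cache value message
    return message

msg = circulate_cache_values([(1, 0.9), (2, 0.4), (3, 0.7)])
```

After one full circulation the leader holds every member's cache value and can decide movement and revocation without any central manager, which is the role this message plays in the wide-area cooperative cache management protocol.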
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 8 of 9 — US 6,256,747 B1

[FIG. 9: flow chart of the movement advance notice circulation processing. A leader generates a movement advance notice message (903), transmits it (904), receives the circulated movement advance notice message back (905) and performs movement processing (906). A member receives the movement advance notice message (908), updates the movement advance notice list (909) and the movement advance notice message (910), transmits the message onward (911) and performs movement processing.]

[FIG. 10: flow chart of the revocation advance notice circulation processing (steps 1001-1011), analogous to FIG. 9 with revocation advance notices in place of movement advance notices.]
`
`
`
U.S. Patent — Jul. 3, 2001 — Sheet 9 of 9 — US 6,256,747 B1

[FIG. 11: flow chart of the validation processing. A leader receives a validation request (1103), performs validation processing (1104) and transmits the validation result (1105). A non-leader generates a validation request (1107), transmits the validation request (1108), receives the validation result (1109) and updates its cache (1110).]
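FIG. 11 splits validation between roles: a member generates and transmits validation requests, while the leader performs the validation and returns results the member then folds back into its cache. A compact Python sketch with the leader's check stubbed out — `is_fresh` and the age threshold are assumptions of this sketch; a real implementation would issue HTTP conditional requests within the freshness rules the protocol allows:

```python
import time

def is_fresh(url, cached_date, max_age=3600):
    # Stub for the leader's validation processing: treat a cache entry
    # as fresh if it is younger than max_age seconds (assumed policy).
    return (time.time() - cached_date) < max_age

def leader_validate(requests):
    """Leader side: receive validation requests, validate each,
    and return the validation results."""
    return {url: is_fresh(url, date) for url, date in requests}

def member_validate(cache_table, leader):
    """Member side: generate requests for all cached URLs, transmit them
    to the leader, receive the results, and update the cache."""
    requests = [(url, entry["date"]) for url, entry in cache_table.items()]
    results = leader(requests)                  # transmit / receive
    for url, fresh in results.items():
        cache_table[url]["valid"] = fresh       # update cache
    return cache_table

cache = {"http://example/a": {"date": time.time(), "valid": False}}
cache = member_validate(cache, leader_validate)
```

Run anticipatorily — on a timer rather than on a user request, as section (3) of the summary describes — this keeps the validation cost out of the user-visible response time.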
`
`
`
US 6,256,747 B1

METHOD OF MANAGING DISTRIBUTED
SERVERS AND DISTRIBUTED
INFORMATION PROCESSING SYSTEM
USING THE METHOD

BACKGROUND OF THE INVENTION

The present invention relates to a computer processing system, particularly to a system managing method in structuring a system in which a plurality of computers connected by a network distribute, share and change information (or an information processing system), and more particularly to a distributed server managing method which is suitable for World-Wide Web (WWW) and a distributed information processing system which uses such a method.

The existing information system is operated in such a manner that a plurality of computers connected to a network provide or acquire information. Hardware and software of a computer for mainly providing information are called a server, and hardware and software of a computer for mainly acquiring information are called a client.

In the conventionally used information processing system, in the case where there are a plurality of servers, the servers are not in cooperation with each other at all (for example, even if there are first and second servers, any one of these servers does not know the existence of the other or does not depend on the other) or are in strong cooperation with each other (for example, first and second servers exchange or cache data).

In the case where during the operation of a first server a second server is newly established or started (generically called the addition of server), or in the case where the second server is temporarily stopped or has a trouble inclusive of both the trouble of the server itself and the trouble of a communication line to the server (generically called the deletion of server), the conventional requirement for cooperation of the first and second servers with each other is the proper setting of the first and second servers by an administrator (or administrative operator).

In the case where the number of servers is very small, the above requirement offers no special problem. However, in the case where the number of servers becomes large, a large load is imposed on the administrator when a server is to be newly added or to be deleted.

With the explosive growth of the Internet and the WWW in recent years, there has been generated the situation in which the number of servers is very large. Techniques used in the WWW will first be described.

In the WWW, a WWW server holds information in units called "home pages". Each home page has a name called URL (which is the abbreviation of Uniform Resource Locator). When the URL of a home page to be accessed is designated to a WWW client by the user of the WWW client, a first processing configuration in the WWW includes requesting a WWW server designated by the corresponding URL for the transmission of a home page corresponding to the URL. In response to this request, the designated WWW server provides the corresponding home page to the WWW client.

In a second processing configuration, the WWW client does not request the WWW server designated by the URL given by the user and instead requests another kind of server called "proxy server" (or simply called "proxy"). The proxy acquires a home page corresponding to the URL from the WWW server or transfers the URL to another proxy. In the case where the request to a proxy includes a plurality of steps, the proxies have a parent-child relationship. The proxies having the parent-child relationship have been disclosed by, for example, A. Chankhunthod et al, "A Hierarchical Internet Object Cache", 1996 USENIX Technical Conference, pp. 153 to 163, 1996 (hereinafter referred to as Reference 1). The WWW proxy can have a cache shared between a plurality of clients. In the recent Internet, therefore, the second processing configuration embracing many proxies is widely used. Which of the proxies each proxy transfers the request to is set by an administrator.

Some information processing systems as well as the WWW are widely used. In any system, however, the cooperation of a plurality of servers with each other is limited to a fixed relationship set by an administrator.

A network news system disclosed by, for example, B. Kantor et al, "Network News Transfer Protocol: A Proposed Standard for the Stream-Based Transmission of News", Network Working Group RFC-977 (hereinafter referred to as Reference 2) is another example of the information system and is constructed by a plurality of servers in a manner similar to that in the WWW.

The copies of "news articles" which clients subscribe for and contribute are distributed by the plurality of servers in the network news system to each other. The administrator of each server sets another server for which the transfer of news articles is to be made. In the case where a new server is added, it is necessary to manually change the setting of the existing servers.

Further, DNS domain name service disclosed by, for example, P. Mockapetris, "Domain Names - Implementation and Specification", Network Working Group RFC-1035, especially the Second Chapter (hereinafter referred to as Reference 3) is a further example of the information system and is constructed by a plurality of servers.

The DNS makes a host name of the Internet and accompanying information of that host name (IP address and mail server) correspond to each other. In order to hold this corresponding information, a plurality of DNS servers form a tree structure. A request from a client is processed in such a manner that it is transferred between the plurality of servers with the trace of the tree structure.

The administrator of each DNS server sets which other DNS server a request is to be transferred to. In the case where a certain DNS server is replaced, the setting of DNS servers adjacent thereto requires to be manually changed. Also, each node of the above-mentioned tree structure takes a redundant construction including two or more DNS servers. In the case where those DNS servers have troubles (or in the case where network paths to those DNS servers have troubles), it is also necessary to manually change the setting of adjacent DNS servers.

Also, in a distributed file system of a local area network (LAN), there is known a method called "cooperative caching" in which a space for caches is shared by a plurality of computers. For example, in Michael Dahlin et al, "Cooperative Caching: Using Remote Client Memory to Improve File System Performance", First USENIX Symposium on Operating Systems Design and Implementation, pp. 267-280, 1994 (hereinafter referred to as Reference 4), a server called "manager" holds information of which file block is stored in which file server. When the manager is requested by a client for the transfer of a file block, the manager responds to the client about a computer in which that file block is stored, or the manager transfers the client's request to the corresponding file server. Though there can exist a plurality of managers, the correspondence between
`
`
`
the managers and file blocks is set beforehand. In order to change this correspondence, it is necessary for an administrator to manually change the setting.

In the WWW, rapid growth is being continued. For example, certain statistics showed the increase of 2.8 times in the number of servers in a half-year period from June of 1996 to January of 1997. Therefore, the whole of WWW has an explosively increasing amount of information. Though the proxy acts as a cache shared by a plurality of clients to reduce the response time for users (or a time from the issuance of a request by a user for information to the arrival of that information at the user's hand), an effect obtained in the existing circumstances is low since the capacity of the cache is small as compared with the amount of information in the whole of WWW.

A first problem to be solved by the present invention is to reduce a time/labor taken by an administrator when a plurality of servers are caused to cooperate with each other as in the WWW proxies. Thus, there is solved the problem that a long time and a large number of persons are required for management in realizing a large-scale cache embracing many proxies. Particularly, the first problem is such that in the case where a first server is to be added or deleted, existing second servers are informed of the existence (or non-existence) of the first server without the setting of the existing second servers by the administrator, and the first server is informed of the existence (or non-existence) of the second servers.

Since the WWW operates on the Internet with an enormous number of machines connected and with a multiplicity of management organizations, a method using broadcast or a method using the centralization of all settings upon a single server is not practical for the solution of the first problem. Also, in the case where the number of second servers becomes large, even the cooperation of the first server with all the second servers becomes unpractical. Thus, in the case where the number of servers becomes very large, a contrivance for limiting the number of servers to be subjected to cooperation is required.

In the case where the first problem is solved, there is required a method in which, in order to operate caches existing distributed to a plurality of servers, these servers effectively exchange their cache lists therebetween so that information absent in the cache of one server is inquired of another server. The realization of this method is a second problem to be solved by the present invention.

The solution of the first and second problems will result in the realization of a large-scale cache which extends over a plurality of servers. However, it has been found out that though the communication protocol HTTP used in the WWW requires an operation of confirming whether or not a cache is holding sufficiently fresh information (hereinafter referred to as validation operation), this validation operation deteriorates the response time for users. Therefore, a third problem to be solved by the present invention is to prevent the validation operation from deteriorating the response time for users.

BRIEF SUMMARY OF THE INVENTION

An object of the present invention is to provide a distributed server managing method for solving the above first to third problems and a distributed information processing system using the method.

(1) In order to solve the first problem, a "server status propagation protocol" for distributed management of server operation information (or a list of operating servers) is provided. Each server starts from the names of a small number of other servers given from an administrator to dynamically structure a multi-cast hierarchy of server operation information by virtue of the mutual support of server groups. The multi-cast hierarchy includes groups of a tree structure formed by server groups. Each group is composed of several to several-tens servers. As a part of servers are stopped or restarted, the dynamic reconstruction of the multi-cast hierarchy is made.

In the present invention, it is only required that at the time of start of a certain server, the names of some other servers are given. Thereafter, the administrator is not required to change the setting of each server each time a part of plural servers are stopped or restarted. The server status propagation protocol of the present invention is fit for use on the Internet since broadcast is not required and multi-cast is made for only server groups under operation. When each server makes the exchange of server operation information, the communication is performed taking preference of proximate servers. Thereby, the system is operable even in the case where the number of servers becomes very large. Namely or thereby, the first problem is solved.

(2) In order to solve the second problem, a "wide-area cooperative cache management protocol" is provided. This protocol performs the propagation of a cache directory (a list of URL's and servers which hold the URL's in caches) using the multi-cast hierarchy formed by the above-mentioned server status propagation protocol and a cache control (which URL does which server hold in a cache, and when is which URL to be transferred from a certain server to another server) without needing a centrally managing server corresponding to the manager in Reference 4.

There is a case where a server moves certain information to another server. In this case, it is necessary to determine a server which is the destination of movement. For this purpose, a server circulates the minimum value for its own acceptable cache value (hereinafter referred to as acceptable cache value) through other servers. Also, in the case where a plurality of servers in one group are simultaneously intending to move information, there may be a possibility that the destination of movement is centralized onto one server. In order to solve this problem, each server circulates a list of information to be moved and intended destinations of movement. Thereby, each server can determine a server for destination of movement while avoiding a destination of movement which is selected by another server.

Further, in the case where a plurality of proximate servers hold a cache of the same information, it is necessary to manage so that all the copies of this cache are not revoked simultaneously. In order to make the approach to this state through an approximate method without using a central server, a list of information to be revoked (referred to as revocation slate) is circulated through servers. A member of a group gives up an intended revocation in the case where the cache of information the revocation of which is intended by the member himself or herself is the last copy held by members of the group. With the above, the second problem is solved.

(3) In order to solve the third problem, "anticipatory validation" is provided in which a validation operation is performed not at a point of time of request by a user for information but beforehand prior to the request from the user. The anticipatory validation is performed, for example, every fixed time or after the lapse of a fixed time from the reference to URL. This anticipatory validation is performed within a time zone allowed by the HTTP protocol. Thereby, a time consumed for validation operation and having hitherto occupied a considerable part of the response time for users can be removed from the response time for users. Thus, the third problem is solved.
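The revocation-slate rule above — a member gives up revoking a cached item when its copy is the last one held within the group — can be sketched directly. This is a hypothetical illustration: the circulated cache directory is modeled here as a mapping from URL to the set of group members holding a copy.

```python
def filter_revocations(intended, cache_directory, member_id):
    """Drop from `intended` every URL whose only remaining copy in the
    group is the one held by this member (the revocation-slate rule)."""
    kept = []
    for url in intended:
        holders = cache_directory.get(url, set())
        others = holders - {member_id}
        if others:                 # at least one other copy survives
            kept.append(url)
        # else: give up the revocation; ours is the last copy in the group
    return kept

slate = filter_revocations(
    ["a", "b"],
    {"a": {1, 2}, "b": {1}},       # URL -> members holding a copy
    member_id=1,
)
```

Because every member applies the same rule against the circulated directory, no central server is needed to guarantee that the group as a whole never revokes its last copy of an item.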
`
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the internal construction of the present invention;
FIG. 2 is diagrams showing data structures;
FIG. 3 is diagrams for explaining the formats of messages;
FIG. 4 is a flow chart of a hierarchy formation processing at the time of server initialization;
FIG. 5 is a flow chart of a group participation processing;
FIG. 6 is a flow chart of a hierarchy maintenance message reception processing;
FIG. 7 is a flow chart of a hierarchy reconstruction processing;
FIG. 8 is a flow chart of a cache value circulation processing;
FIG. 9 is a flow chart of a movement advance notice circulation processing;
FIG. 10 is a flow chart of a revocation advance notice circulation processing; and
FIG. 11 is a flow chart of a validation processing.
`
DETAILED DESCRIPTION OF THE INVENTION

The construction of an embodiment will be described using FIGS. 1 and 2.

An external server 13, 13', 13'', . . . is a computer which holds information provided by an information system and provides the information to a client 11 or a server 10, 10', 10'', . . . in accordance with a request. The server 10, 10', 10'', . . . is a server which holds a cache of the information system. The client 11 is a computer which is utilized by a user and with which information provided through a network 12 from the external server 13, 13', 13'', . . . or the server 10, 10', 10'', . . . is displayed for the user. The server 10, 10', 10'', . . . holds as a cache that information in the external servers 13, 13', 13'', . . . which has recently been utilized by the client 11 and that information in the external servers 13, 13', 13'', . . . for which the utilization by the client 11 in the future is expected. Though several purposes may be considered for this cache, a typical purpose is to shorten a time which the client 11 takes for acquiring information. With the construction in which the client 11 makes access to the server 10, 10', 10'', . . . in place of the external server 13, 13', 13'', . . . , it is possible for the client 11 to make access to information at a high speed even in the case where the external server 13, 13', 13'', . . . is far distant on the network 12.

The computer in this embodiment may be any computer which includes a so-called personal computer, a workstation, a parallel computer, a large size computer, and so forth. The client 11 may be various computer terminal equipments, portable communication terminals (so-called personal digital assistant PDA and hand-held personal computer HPC), network computers and so forth, so far as it has a function of communicating with the servers 10, 10', 10'', . . . and displaying information.

The external server 13, 13', 13'', . . . , the client 11 or the server 10, 10', 10'', . . . may be realized by not a computer but the combination of a computer and software. In particular, the present invention may be embodied without applying any change to a computer itself but by a program (or process) which operates on the computer.

The network 12 may be LAN which is well used by the whole or one department of a certain body (enterprise, school or similar body) or a part or the whole of WAN which couples a plurality of geographically distributed locations. Also, the network 12 may be a coupling network between computers or a coupling network between processor elements in a parallel computer.

The server 10 has processing sections which include a client request processing section 100, a cache management section 101, a server management section 102 and a communication section 106. The server 10 further has data structures which include a cache table 107, a cache directory 108, a server status table 109 and a group table 110. The cache management section 101 includes processing parts composed of a cache value evaluation part 104 and a cache movement/revocation part 105. The servers 10, 10', 10'', . . . have their server ID's (or simply ID's) which are respective unique numbers. As the server ID is used an ID which is capable of making the server ID correspond to a communication address of the server to perform communication. For example, an IP address may be used as the server ID.

The client request processing section 100 is a section which is responsive when the client 11 makes a request for the acquisition of information. A message from the client request processing section 100 is sent to the network 12 through the communication section 106 (127). Reversely, a message from the network 12 to the client request processing section 100 is sent through the communication section 106 to the client request processing section 100 (126). The client request processing section 100 performs the reading (146) for the cache table 107. On rare occasions, the client request processing section 100 performs the writing (147) for the cache table 107.

The cache management section 101 is a section for managing a cache which the server 10 holds. The cache management section 101 performs the reception and transmission (128 and 129) of a message for the network 12 through the communication section 106. Also, the cache management section 101 performs the writing and reading (134 and 135) for the cache table 107, the writing and reading (136 and 137) for the cache directory 108 and the writing and reading (138 and 139) for the server status table 109. The cache value evaluation part 104 in the cache management section 101 is a part which judges how useful is the holding of specified information. Also, the cache movement/revocation part 105 in the cache management section 101 is a part by which, in the case where a cache of certain information is not to be placed in the server 10, the cache is revoked or the cache is moved to one or more other servers 10', 10'', . . .

The server management section 102 is a section for making up a list of the other servers 10', 10'', . . . to manage the operating conditions of these servers. The server management section 102 performs the reception and transmission (130 and 131) of a message for the network 12 through the communication section 106. Also, the server management section 102 performs the writing and reading (140 and 141) for the server status table 109 and the writing (142) and reading (143) for the group table 110.

A validation management section 103 is a section for making a control of anticipatory validation. The validation management section 103 performs the reception and transmission (132 and 133) of a message for the network 12 through the communication section 106. The validation management section 103 performs the writing and reading (144 and 145) for the cache table 107.

The communication section 106 is a section for communicating with the client 11, the other servers 10', 10'', . . . and the external servers 13, 13', 13'', . . . The reception and transmission for the client 11 are performed by 120 and 121. The reception and transmission for the other servers 10', 10'',
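The internal construction described above — processing sections sharing data structures through a communication section — can be summarized structurally. The following skeletal Python sketch mirrors the names of FIG. 1; the method bodies are omitted or trivial, since only the composition, not the behavior, is being illustrated here.

```python
class Server:
    """Skeleton of server 10: processing sections plus the four data
    structures, wired together as in FIG. 1 (behavior omitted)."""

    def __init__(self, server_id):
        self.server_id = server_id     # unique ID, e.g. an IP address
        # data structures
        self.cache_table = {}          # cache table 107
        self.cache_directory = {}      # cache directory 108
        self.server_status_table = {}  # server status table 109
        self.group_table = {}          # group table 110

    # client request processing section 100: reads (and rarely writes)
    # the cache table in response to client requests
    def handle_client_request(self, url):
        return self.cache_table.get(url)

    # cache management section 101: cache value evaluation part 104 and
    # cache movement/revocation part 105 would live here
    def manage_cache(self):
        pass

    # server management section 102: maintains the server status table
    # and group table to track other servers' operating conditions
    def manage_servers(self):
        pass

    # validation management section 103: controls anticipatory validation
    # against the cache table
    def manage_validation(self):
        pass

s = Server(server_id="10.0.0.1")
```

Each section's reference-numbered read/write paths in the description (126-147) correspond to these methods touching the shared tables through `self`.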