United States Patent                                          US005796936A
Watabe et al.                          [11] Patent Number:        5,796,936
                                       [45] Date of Patent:   Aug. 18, 1998

[54] DISTRIBUTED CONTROL SYSTEM IN WHICH INDIVIDUAL CONTROLLERS EXECUTED
     BY SHARING LOADS

[75] Inventors: Mitsuru Watabe, Urizura-machi; Hiromasa Yamaoka, Hitachi,
     both of Japan

[73] Assignee: Hitachi, Ltd., Tokyo, Japan

[21] Appl. No.: 801,923

[22] Filed: Feb. 14, 1997

     Related U.S. Application Data

[63] Continuation of Ser. No. 203,295, Mar. 1, 1994, abandoned.

[30] Foreign Application Priority Data

     Mar. 1, 1993  [JP]  Japan ................................. 5-39643

[51] Int. Cl. ......................................... G06F 13/00
[52] U.S. Cl. ......................................... 395/182.09
[58] Field of Search ........... 395/182.09, 182.08, 182.02, 182.01, 183.01

[56] References Cited

     U.S. PATENT DOCUMENTS

     4,432,962  11/1984  Amano et al. ..................... 364/431.11
     5,031,089   7/1991  Liu et al. ....................... 364/200
     5,313,584   5/1994  Tickner et al. ................... 395/275

Primary Examiner-Robert W. Beausoliel, Jr.
Assistant Examiner-Norman M. Wright
Attorney, Agent, or Firm-Antonelli, Terry, Stout & Kraus, LLP

[57] ABSTRACT

A distributed control system includes a plurality of controllers, each
composed of a plurality of processors and coupled through a network. Each
controller includes a scheduler for measuring a load; internal backup means
for controlling which controller bears the load; backup request means for
requesting another controller to bear the load; and backup accept means for
answering a request from another controller for bearing the load, in
accordance with the load of the requesting controller. Thus, a load which
cannot be executed by one controller can be distributed to, and executed
by, other controllers in accordance with their loads.

21 Claims, 14 Drawing Sheets
`
[Representative drawing: flow chart of the autonomous backup requesting
procedure of FIG. 3 (steps 6605 to 6670)]
`
`
`
[Sheet 1 of 14, FIG. 1: functional construction of the self-distributed
control system (controllers with processing units, memory units and I/Os,
coupled through network 1000)]
`
`
`
[Sheet 2 of 14, FIG. 2: flow chart of the internal backup procedure
(steps 6510 to 6590)]
`
`
`
[Sheet 3 of 14, FIG. 3: flow chart of the autonomous backup requesting
procedure (steps 6605 to 6670)]
`
`
`
[Sheet 4 of 14, FIG. 4: flow chart of the autonomous backup accepting
procedure (steps 6710 to 6740)]
`
`
`
[Sheet 5 of 14, FIG. 5: schematic circuit diagram of one embodiment of the
self-distributed control system]
`
`
`
[Sheet 6 of 14, FIG. 6: block diagram of a microcomputer: processor, local
memory, neural engine, microcomputer bus, bus interface, LAN adaptor,
free-running timer, direct memory access control and fault control circuit]
`
`
`
[Sheet 7 of 14, FIG. 7: example of the execution sharing table
(microcomputer, task, load factor (%), internal load precedence); FIG. 8:
example of the acceptance precedence table (precedence, microcomputer)]
`
`
`
[Sheet 8 of 14, FIG. 9: example of the task load table (task, load factor
(%), attribute, domicile)]
`
`
`
[Sheet 9 of 14, FIG. 10: backup procedure of the self-distributed control
system]
`
`
`
[Sheet 10 of 14, FIG. 11: accepting task queue in a first internal backup;
FIG. 12: accepting task queue in a second internal backup; FIG. 13:
accepting task queue in a first external backup (task, load factor (%),
attribute, domicile, results of selection)]
`
`
`
[Sheet 11 of 14, FIG. 14: execution sharing table after an internal backup;
FIG. 15: acceptance precedence table after an internal backup (halftone
portions are updated)]
`
`
`
[Sheet 12 of 14, FIG. 16: use of the execution sharing table, task load
table and acceptance precedence table by the backup requesting controller
and the backup requested controller]
`
`
`
`5,796,936
`5,796,936
`
`Sheet 13 of 14
`Sheet 13 of 14
`
`
`
`LN3WZOVTESTONOISNSdSNS+LNdh]
`
`
`
`TOULNODNOTSNIdSAS
`
`NOTLTSOdAca
`
`
`
`GaidSJITRSA
`
`
`
`STONYONTS34LS
`
`
`
`FUNSSudSVDSINLNG
`
`U.S. Patent
`U.S. Patent
`
`Aug. 18, 1998
`Aug. 18, 1998
`
`
`
`YSCYOOSYJATYC
`
`
`SNOTLYHYOANTSNOTYVA©LAdNI
`NOISYOL4OSTONY=LNdNIQ33dSJIOIH3A‘*68)
`
`
`
`
`3NOYOLLSISSY=1NdiNO(CACYSNTOND
`
`
`
`SLVLS30SITLIINYNO©LMeNT
`
`SISONDVIGLiv
`
`ONINYVMJONOTLVOTON]
`
`ZLOld
`
`Zl 0/-/
`
`
`
`
`
`TOYLNODONINISILSY3KIdALWLSJOS3TLTLNVAD
`
`
`
`JONOTIVOTONT«indi
`
`SSHOLIMS+iNT
`
`
`
`
`
`TOMLNODJOVSYSINTANTHOVH-NVV
`
`
`JIONYONTH33LSJeNlVeId3iINTONS
`033dSJVQIH3AW3dYOLVY373090"
`
`
`WiedU3utLt1AdNI30FIONVYNOTSS3ud30
`
`
`
`
`
`
`TOeLKODSHVYSW'd/dINTONS
`
`
`
`
`
`SATS¥INVAG3LYINdINVA
`
`LiAVd40LIASSY¢ndino
`
`SISONOVIG
`
`
`
`uT¥30LNMOWY*iMdNi
`
`
`
`TOULNODANTONI
`
`
`
`
`
`JOEOAFHVYE§LNdLNOJUNLVYIdAALLSAIVIVO
`
`
`SIOVLSY¥I9JOY3ASWNN+LNdiNO
`
`NOTLISOdAGO
`
`
`Q43d$JTOIHSA+LNdNT40NOLLONYLSNI
`Wid“WYSNIONSNOILINOI
`
`
`
`
`
`
`VOXLNODNOTSSIWSNVYLNOTLOSCNIT9N4
`
`JONOTLONYLSNI+INdino
`
`FLVLS
`
`
`
`
`
`Qvo1TWOINWHOSHOYL9974
`
`AHM, Exh. 1008, p. 14
`
`AHM, Exh. 1008, p. 14
`
`
`
`
`U.S. Patent
`
`Aug. 18, 1998
`
`Sheet 14 of 14
`
`5,796,936
`
`e)
`
`28
`
`98
`
`g
`
`L
`
`Bg
`
`
`
` ONTYSSLSYSAMOdesg
`
`GLOld
`
`o
`
`ANTONI
`
`0001
`
`YATIOWINOD
`
`IVvua
`
`@a
`
`AHM, Exh. 1008, p. 15
`
`Y3T10¥LNOO
`YATIONLNOD
`Y3T1OULNOD
`i i i—™N
`NOTSSIWSNVYL
`NOISNAdSNS
`
`™N
`
`9
`
`oy
`
`AHM, Exh. 1008, p. 15
`
`
`
`
`5,796,936
`
`DISTRIBUTED CONTROL SYSTEM IN
`WHICH INDIVIDUAL CONTROLLERS
`EXECUTED BY SHARING LOADS
`
`10
`
`5
`
`25
`
`30
`
`35
`
CROSS-REFERENCE TO RELATED APPLICATION

This is a continuation of application Ser. No. 08/203,295 filed on Mar. 1,
1994, now abandoned.

The present invention relates to a distributed or decentralized control
system for controlling a plurality of control units (or controllers) in a
distributed or decentralized manner for a variety of controls, such as
process controls, plant controls or vehicle controls; and, more
particularly, to a distributed control system in which individual
controllers perform backup operations by sharing the loads without using
any backup controller.

In a conventional plant control system, there has been a control system
called a "distributed control system" for distributing control among a
plurality of controllers. In this distributed control system, the plurality
of controllers are connected through a network to control power plants,
steel rolling plants, elevators or automobiles. This control system is used
either in an important plant control system forming the basis of an
industry or for controlling a system affecting human lives. In this
distributed control system, therefore, it is essential to improve the
operating efficiency and the reliability of the system.

In a distributed control system of this type, it is customary that, in case
one controller becomes faulty (goes down), another controller takes charge
of (or backs up) the load of the faulty controller. In Japanese Patent
Laid-Open Nos. 177634/1987 and 118860/1988, for example, there is disclosed
a concept of preparing a dedicated controller for backing up a faulty
controller. In other examples, as in Japanese Patent Laid-Open Nos.
224169/1990 and 296851/1991, there is disclosed a concept of providing a
centralized supervising computer for sharing the load of a faulty computer
in accordance with the load situations of the individual computers, so that
the faulty computer may be backed up by a plurality of other computers in
accordance with an instruction issued by the centralized supervising
computer.

The former technology requires a dedicated backup controller, so that it is
uneconomical and disadvantageous for controlling an automobile, because
this type of control requires a downsizing of the system. According to the
latter technology, on the other hand, since the load distribution is under
centralized supervision, the computer system may be halted in its entirety
if the centralized supervising computer becomes faulty.

`SUMMARY OF THE INVENTION

An object of the present invention is to provide a distributed control
system for backing up a controller, which is considered faulty because it
has deviated from its normal operation, by decentralizing/distributing the
load of the faulty controller autonomously, but without providing a
centralized supervising controller or a dedicated backup controller.

In short, the present invention proposes to enhance the overall performance
of the distributed control system by making effective use of the excess
performance of the controllers which still operate normally. Another aspect
of the present invention is to provide a distributed control system for
backing up multiplexly faulty controllers in a coordinated manner by the
use of the individual controllers.
`
`45
`
`50
`
`55
`
`65
`
`2
According to one aspect of the present invention, there is provided a
distributed control system including a plurality of controllers, each
comprising: detect means for detecting a fault or overload condition in
another controller; memory means for detecting and storing the amount of
load or task of a controller to be backed up; and backup means for backing
up a controller, if a fault or overload is detected by the detect means, by
specifying the faulty or overloaded controller and by distributing and
assigning the load or task of the faulty or overloaded controller to the
controller performing the backup.

The aforementioned load memory means may desirably be provided with both an
execution sharing table, indicating the tasks and load situations being
executed by the individual controllers, and a task load table, indicating
the loads imposed when the individual tasks are executed. With these
provisions, the priority of a controller to perform the backup can be
determined on the basis of the task load table. If there is further
provided an acceptance precedence table, which stores the result of
determining the priority of the controller to perform the backup on the
basis of the task load table, a controller which has a margin and can
easily perform the backup can be promptly determined according to the load
situations of the individual controllers.
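
The three tables lend themselves to a simple illustration. The following
Python sketch is the editor's illustration only, not taken from the patent;
the class and field names are assumptions, and the acceptance precedence
table is derived by sorting the microcomputers from the lightest load to
the heaviest:

    from dataclasses import dataclass

    @dataclass
    class SharingEntry:
        """One row of the execution sharing table (cf. FIG. 7)."""
        microcomputer: str      # e.g. "DC1,MC1"
        tasks: list             # names of the tasks executed there
        load_factor: float      # total load in percent

    @dataclass
    class TaskLoadEntry:
        """One row of the task load table (cf. FIG. 9)."""
        task: str
        load_factor: float      # percent of one microcomputer's capacity
        attribute: str          # "FIXED", "INTERNALLY", "COMMUNICATIVE" or "SOMEWHERE"
        domicile: str           # microcomputer the task belongs to by nature

    def acceptance_precedence(sharing_table):
        """Derive the acceptance precedence table (cf. FIG. 8): the
        microcomputer with the lightest load is the first candidate for
        accepting a backup request."""
        return [e.microcomputer
                for e in sorted(sharing_table, key=lambda e: e.load_factor)]
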
The aforementioned backup means may include: backup request means for
requesting another controller to perform a backup; and backup accept means
for deciding whether or not the backup is possible in accordance with the
acceptance precedence table, for answering with the acceptance, if any, and
for instructing execution of the accepted task.
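
The request/accept exchange can be pictured with the following hedged
sketch. The message numbers 6600 and 6700 come from the figures; everything
else below, including the 90% limit load factor, is an illustrative
assumption:

    LIMIT_LOAD_FACTOR = 90.0    # assumed value of the "limit load factor"

    class AcceptingController:
        """Backup accept means, sketched: accept requested tasks only while
        the resulting load stays within the limit load factor, then answer
        with the accepted subset (the role of message 6700)."""

        def __init__(self, own_load):
            self.own_load = own_load

        def accept_backup(self, tasks):   # tasks: list of (name, load_factor)
            accepted = []
            for name, load in sorted(tasks, key=lambda t: t[1], reverse=True):
                if self.own_load + load <= LIMIT_LOAD_FACTOR:
                    accepted.append(name)
                    self.own_load += load
            return accepted

    def request_backup(requested, tasks):
        """Backup request means, sketched: send the task list (the role of
        message 6600) and split the answer into accepted and rejected."""
        accepted = set(requested.accept_backup(tasks))
        rejected = [t for t in tasks if t[0] not in accepted]
        return accepted, rejected
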
According to another embodiment of the present invention, moreover, each
controller includes a plurality of processors. In order to improve the
fault tolerance and reliability of each controller, the aforementioned
execution sharing table has a region indicating how each processor in a
controller is loaded and which processor has a margin to back up the load.
Thus, the backup means has internal backup means for determining a
processor to back up the load of a faulty processor in accordance with the
execution sharing table and the task load table.

Thanks to the execution sharing table, task load table and acceptance
precedence table belonging to each controller, according to the distributed
control system of the present invention, the task to be backed up and the
controller suited for the backup can be retrieved to determine which
controller is to have its backup request means used. The load upon the
faulty controller can be distributed by informing the backup accept means
of another controller of the determined result.

Moreover, the backup accept means of the controller requested to perform a
backup relies upon its own load, as stored in its own execution sharing
table, to decide by itself whether or not the target task to be backed up
can be executed by its own controller, and accepts target tasks
selectively. By broadcasting this result of selection as an acceptance
message, the requesting and other controllers correct their execution
sharing tables and acceptance precedence tables in a manner reflecting the
change in the load situation of the controller performing the backup.
Moreover, the requested controller reports to the requesting controller any
target task which was not accepted but which still has to be backed up.
Therefore, in accordance with the corrected acceptance precedence table
(which will no longer have the controller that just accepted a backup
positioned at its head, because its load factor has risen), the controller
registered next in the acceptance precedence table is requested to back up
the rejected tasks. Thus, according to the present invention, the
individual controllers can always grasp the executing situation of the
system precisely. Moreover, a controller requested to perform a backup can
reject the request, so that the target tasks to be backed up can be
processed autonomously in a decentralized/distributed manner.

Even if, furthermore, one of the plurality of processors contained in one
controller becomes faulty, the internal backup means selects a processor
having a light load in accordance with the execution sharing table, decides
whether or not the selected processor can back up all the loads of the
faulty processor, executes the backup in such a way that the processor is
not overloaded, and selects another processor, if any load is left, to
cause it to perform the backup. Thus, the internal backup means can
distribute the load of the faulty processor so that it is confined within
the load limit of each processor; the individual processors are not
overloaded, but can perform the backup sequentially.
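
That distribution loop can be condensed into a short sketch (illustrative
only; the data layout and the 90% limit are the editor's assumptions):

    def internal_backup(failed_tasks, processor_loads, limit=90.0):
        """Distribute the tasks of a faulty processor over the surviving
        processors, always trying the most lightly loaded one first and
        never pushing any processor past the load limit. `failed_tasks` is
        a list of (task, load_factor) pairs; `processor_loads` maps each
        surviving processor to its current load factor. Returns the
        assignment plus the leftover tasks, which would be handed over to
        the backup request means."""
        assignment, leftover = {}, []
        for task, load in sorted(failed_tasks, key=lambda t: t[1], reverse=True):
            target = min(processor_loads, key=processor_loads.get)
            if processor_loads[target] + load <= limit:
                assignment.setdefault(target, []).append(task)
                processor_loads[target] += load
            else:
                leftover.append((task, load))
        return assignment, leftover
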
BRIEF DESCRIPTION OF THE DRAWINGS

These and other objects, features and advantages of the present invention
will be understood more clearly from the following detailed description
with reference to the accompanying drawings, wherein:

FIG. 1 is a diagram showing a functional construction of a self-distributed
control system according to the present invention;

FIG. 2 is a flow chart showing one example of an internal backup procedure
of the present invention;

FIG. 3 is a flow chart showing one example of an autonomous backup
requesting procedure of the present invention;

FIG. 4 is a flow chart showing one example of an autonomous backup
accepting procedure of the present invention;

FIG. 5 is a schematic circuit diagram showing one embodiment of the
self-distributed control system according to the present invention;

FIG. 6 is a block diagram showing one embodiment of a microcomputer for the
self-distributed control system shown in FIG. 5;

FIG. 7 is a diagram presenting one example of the execution sharing table
shown in FIG. 1;

FIG. 8 is a diagram presenting one example of the acceptance precedence
table shown in FIG. 1;

FIG. 9 is a diagram presenting one example of the task load table shown in
FIG. 1;

FIG. 10 is a selective circuit diagram showing a backup procedure of the
self-distributed control system according to the present invention;

FIG. 11 is a diagram presenting one example of the state of the accepting
task queue shown in FIG. 1 in a first internal backup;

FIG. 12 is a diagram presenting one example of the state in a second
internal backup, as advanced from the state of FIG. 11;

FIG. 13 is a diagram presenting one example of the state of the accepting
task queue shown in FIG. 1 in a first external backup;

FIG. 14 is a diagram presenting one example of the state of the execution
sharing table after an internal backup according to the present invention;

FIG. 15 is a diagram presenting one example of the state of the acceptance
precedence table after an internal backup according to the present
invention;

FIG. 16 is a flow chart showing the procedures for using the execution
sharing table, task load table and acceptance precedence table by the
backup requesting controller and the backup requested controller according
to the present invention;

FIG. 17 is a diagram showing the control objects of an automobile; and

FIG. 18 is a block diagram showing an embodiment in which the distributed
control system of the present invention is applied to the control of an
automobile.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 1 is a diagram schematically showing a distributed control system of
the present invention. The distributed control system is constructed such
that, when multiple controllers become faulty, the other active controllers
distribute the load autonomously and back up the disabled controllers.

In FIG. 1, a plurality of controllers 1, 2, 3 and 4 are connected with the
sensors and control objects of a plant 100 to control the plant. These
controllers 1 to 4 are mutually connected through a network 1000 to
constitute a distributed control system. The controllers 1, 2, 3 and 4 are
individually constructed to include processing units 6100, 6200, 6300 and
6400, memory units 6160, 6260, 6360 and 6460, and I/Os 83a, 83b, 83c and
83d. Moreover, the controllers 1, 2, 3 and 4 individually execute
communications 6110, 6210, 6310 and 6410, backup requests 6120, 6220, 6320
and 6420, backup acceptances 6130, 6230, 6330 and 6430, schedulers 6140,
6240, 6340 and 6440, control task executions 6150, 6250, 6350 and 6450,
internal backup controls 6170, 6270, 6370 and 6470, and faulted portion
isolations 6180, 6280, 6380 and 6480. Since these individual controllers
have similar constructions and execute similar functions, the controller 1
will be taken as an example and described as representative of the others.
Incidentally, each controller is equipped with four microcomputers, as will
be described with reference to FIG. 5. Moreover, each microcomputer is
equipped with hardware sufficient for providing the individual functions
for the communications, backup requests, backup acceptances, internal
backup controls, faulted portion isolations, schedulers and control task
executions, and is given a processing ability sufficient for executing the
software (or tasks) providing the individual functions. Thus, the above
specified functions can be executed on any of the microcomputers. These
individual functions will be described in the following.

The communication 6110 uses the network 1000 and the memory unit 6160 to
execute such message or data exchanges with other controllers as are
necessary for the backup request 6120, the backup acceptance 6130, the
scheduler 6140, the control task execution 6150, the internal backup
control 6170 and the isolation 6180. The scheduler 6140 determines and
provides information concerning the sequence and time slot at which a
plurality of tasks registered therein are assigned to the microcomputers.
The control task execution 6150 controls the sensors and control objects of
the plant 100. The scheduler 6140 and the control task execution 6150 are
connected with the memory unit 6160 to supervise the scheduling and the
control of the plant 100 on the basis of the tasks, data and scheduling
information stored therein. On the other hand, the scheduler 6140 and the
control task execution 6150 are connected with the communication 6110 and
exchange messages with other controllers through the communication 6110
and the network 1000 so as to be in synchronization with the other
controllers.
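
By way of illustration only (the patent leaves the scheduling policy open;
the round-robin assignment and the 10 ms slot below are assumptions), the
scheduler's output can be pictured as a list of time-slot assignments:

    def build_schedule(tasks, n_microcomputers, slot_ms=10):
        """Toy picture of the scheduler 6140: give each registered task a
        microcomputer and a time slot, returning (microcomputer index,
        start time in ms, task) triples."""
        schedule = []
        cursor = [0] * n_microcomputers
        for i, task in enumerate(tasks):
            mc = i % n_microcomputers   # round-robin (a simplification)
            schedule.append((mc, cursor[mc], task))
            cursor[mc] += slot_ms
        return schedule
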
The functions which supervise the backup operations will be described in
the following. The faulted portion isolation 6180 is connected with the
internal backup control 6170 and the communication 6110. The isolation
function is to detect a down condition, such as a fault of the hardware or
a runaway of the software in a microcomputer, and to halt the operation of
the malfunctioning microcomputer. Similar operations are executed, too, in
case faults in other controllers, or in the controller itself, are
identified through the network 1000. The faulted portion isolation 6180 is
connected with the communication 6110 so that it can start the backup
operation in response to information on faults from other controllers.
Moreover, the faulted portion isolation 6180 informs the internal backup
control 6170, through the aforementioned connection, of which microcomputer
has gone down, whether it detects the faults by itself or is informed of
the faults.
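
One way to picture the isolation function (the heartbeat mechanism is an
assumption; the patent states only that hardware faults and software
runaways are detected and the faulty part is halted):

    import time

    def find_down_microcomputers(last_heartbeat, timeout_s=0.1, now=None):
        """Sketch of the faulted portion isolation 6180: any microcomputer
        whose last heartbeat is older than `timeout_s` seconds is treated
        as down and reported to the internal backup control 6170."""
        now = time.monotonic() if now is None else now
        return [mc for mc, t in last_heartbeat.items() if now - t > timeout_s]
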
The internal backup control 6170 is connected with the communication 6110,
the memory unit 6160, the faulted portion isolation 6180 and the backup
request 6120. In case a malfunction in the controller is reported by the
faulted portion isolation 6180, this function backs up the tasks which have
gone down and cannot be executed in the controller, in accordance with
their priority, while following the procedure of FIG. 2. In order to
prevent a partial malfunction from extending to a disabling of the entire
controller, the highest priority is given to a task which is indispensable
for supervising the controller. The next priority is given to a task
requiring a large amount of communication in the case of a backup by
another controller, so that a bottleneck of the communication through the
network 1000 is not created to cause frequent delays or malfunctions of the
communication. These backups are executed sequentially, from microcomputers
having lighter loads to microcomputers having heavier loads. So long as
there is a task to be backed up, the load is distributed so that the
divided portions are sequentially backed up by a plurality of
microcomputers. When no task to be backed up remains midway through, the
processing is ended.

If any task to be backed up is left even after backups by all the
microcomputers, this situation is indicated to the backup request 6120 to
start the procedure for backups between the controllers, as shown in FIG.
3, and the internal backup control 6170 is ended. The procedure of FIG. 2
is referred to as the "internal backup procedure"; it will be described in
detail in connection with a specific example, but is schematically
described in the following to explain the function of the internal backup
control 6170.
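
The priority ordering described above might be sketched as follows; the
attribute names are those of FIG. 9, while the numeric ranks and the
tie-break by descending load factor are illustrative assumptions:

    from collections import namedtuple

    Task = namedtuple("Task", "name load_factor attribute")

    # Supervising tasks first, then communication-heavy tasks, then the rest.
    PRIORITY = {"FIXED": 0, "INTERNALLY": 1, "COMMUNICATIVE": 2, "SOMEWHERE": 3}

    def order_for_backup(tasks):
        """Order a downed microcomputer's tasks for backup: by priority
        class first and, within a class, by descending load factor."""
        return sorted(tasks, key=lambda t: (PRIORITY[t.attribute], -t.load_factor))
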
First of all, at Steps 6510 and 6515, the internal backup control 6170
determines a microcomputer within the controller which will accept the
backup of a task that cannot be executed because of the malfunction. This
determination uses the information on the execution states of the
individual microcomputers stored in the memory unit 6160. Specifically, a
microcomputer having a low load, i.e., a high excess performance, is
selected so that it can back up many tasks. Incidentally, this information
is stored in a table which will be called the "execution sharing table", as
shown in FIG. 7. Moreover, of the tasks to be executed which have been
affected by the malfunction, a task to be backed up is determined. This
determination uses information concerning the attributes of the tasks and
excludes tasks which need not be backed up by other microcomputers. This
information on the task attributes is stored in a table which will be
called the "task load table", as shown in FIG. 9.
At the subsequent Steps 6520 to 6540, the function adds the plurality of
tasks which were being executed, before the backup, by the microcomputer in
charge of the backup to the tasks to be backed up. Of these, the tasks to
be executed after the backup by the backup microcomputer are selected in
accordance with priority. At this time, a task which is indispensable for
supervising the controller, such as a task providing the communication
6110, the backup request 6120 or the scheduler 6140, is selected with the
highest priority, so that a partial disabling does not disable the entire
controller. A task requiring a large amount of communication in the case of
a backup by another controller is selected with the next highest priority,
so that a bottleneck of the communication through the network 1000 is not
created to cause a communication fault. After this, the other tasks are
selected. Thus, the individual tasks are given priorities. Within each
priority, tasks are selected in the combination of the highest load such
that the load upon the backup microcomputer does not exceed a limit value.
This load limit is stored in the memory unit 6160 and will be called the
"limit load factor". The information used in this selection concerns the
load factor of each task stored in the memory unit 6160, the priority in
this selection, and the domicile of the task. The load factor of a task is
the processing time required per unit time when the task under
consideration is executed by the corresponding microcomputer, and is
measured when the execution is made by a microcomputer having a similar
processing ability or by the corresponding portion. A value determined in
advance by estimation is used in case an actual measurement is impossible
for some reason. The domicile of a task indicates either the controller to
which the task belongs by nature, or its portion. Incidentally, this
information is stored in a table called the "task load table", as shown in
FIG. 9. In this selection, moreover, in case the task has a low priority
even if it was executed before the backup by the micro