United States Patent [19]
Horton et al.

US005089958A

[11] Patent Number: 5,089,958
[45] Date of Patent: Feb. 18, 1992

[54] FAULT TOLERANT COMPUTER BACKUP SYSTEM

[75] Inventors: James A. Horton, Bethlehem; Stephen M. Getz, Pittsburgh, both of Pa.

[73] Assignee: Vortex Systems, Inc., Pittsburgh, Pa.

[21] Appl. No.: 304,469

[22] Filed: Jan. 23, 1989

[51] Int. Cl.5 .................... G06F 11/16
[52] U.S. Cl. .................... 395/575; 364/256.3; 364/268.5; 364/269.2; 364/285.1
[58] Field of Search .................... 364/200, 232.1, 256.3, 256.4, 256.6, 268.5, 269.2, 285.1; 371/10.1

[56] References Cited

U.S. PATENT DOCUMENTS

4,141,066  2/1979  (name illegible)
4,156,907  5/1979  Rawlings et al.
4,164,017  8/1979  Randell et al.
4,191,996  3/1980  Chesley
4,351,023  9/1982  Richer
4,378,588  3/1983  Katzman et al.
4,453,215  6/1984  Reid
4,459,658  7/1984  Gabbe et al.
4,479,214  10/1984  Ryan
4,483,001  11/1984  Ryan
4,484,275  11/1984  Katzman et al.
4,486,826  12/1984  Wolff et al.
4,521,847  6/1985  Ziehm et al.
4,581,701  4/1986  Hess et al.
4,607,365  8/1986  Greig et al.
4,608,688  8/1986  Hansen et al.
4,639,856  1/1987  Hrustich et al.
4,648,031  3/1987  Jenner
4,652,940  3/1987  Sumiyoshi
4,654,819  3/1987  Stiffler et al.
4,654,857  3/1987  Samson et al.
4,674,038  6/1987  Brelsford et al.
4,703,421  10/1987  Abrant et al.
4,703,481  10/1987  Fremont
4,713,811  12/1987  Frey
4,727,516  2/1988  Yoshida et al.
4,736,339  4/1988  Crabbe, Jr.
4,750,177  6/1988  Hendrie et al.
4,754,397  6/1988  Varaiya et al.
4,878,167  10/1989  Kapulka et al.

Primary Examiner—Charles E. Atkinson
Attorney, Agent, or Firm—Buchanan Ingersoll; Lynn J. Alstadt

[57] ABSTRACT

A method and apparatus for storing and retrieving any prior state of a computer system is provided. Upon installation of the system, a base image corresponding to the initial state of the computer system is established and stored on a storage device. The system monitors changes to the state of the host and subsequently stores at least some of these changes on the storage device. These changes are then mapped by location of the stored changes in an address table to enable subsequent retrieval of any selected changes. When the address table becomes full, at least one differential image of the state of the system is subsequently created by compiling all changes to the base image which occurred between creation of the base image and the selected time. A virtual image may thus be created of data stored on the storage device, as such data existed at a predetermined prior point in time, from which data and files may be retrieved and restored to the host disk.

19 Claims, 13 Drawing Sheets
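The storage scheme summarized in the abstract, a fixed base image plus an append-only log of block-level changes indexed by an address table, can be modeled in a few lines. This is a minimal in-memory sketch; the class and field names are illustrative, not the patent's.

```python
class ChangeLogBackup:
    """Minimal model: a base image, an append-only change log, and an
    address table mapping each block number to its log positions."""

    def __init__(self, base_image):
        self.base = list(base_image)   # block-addressed base image (never changes)
        self.log = []                  # append-only (block, data) records
        self.addr_table = {}           # block -> list of log indices

    def record_write(self, block, data):
        # Every write to the primary disk is also appended to the log,
        # and its location is noted in the address table.
        self.addr_table.setdefault(block, []).append(len(self.log))
        self.log.append((block, data))

    def view_at(self, n_changes):
        # Reconstruct the disk as it existed after the first `n_changes`
        # logged changes: the base image overlaid with those changes.
        image = list(self.base)
        for block, data in self.log[:n_changes]:
            image[block] = data
        return image


b = ChangeLogBackup(["a", "b", "c"])
b.record_write(1, "B1")
b.record_write(1, "B2")
b.record_write(2, "C1")
```

Because the base image is immutable and the log is append-only, any prior state is recoverable by replaying a prefix of the log, which is the property the patent builds on.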
[Cover figure: block diagram of the controller circuit board in the host computer, showing the host interface, data-bus and I/O read/write FIFOs, memory/DMA controller, SCSI ports 0 and 1, card edge connector, controller DASD and peripherals. Most labels did not survive OCR.]

APPLE 1034
IPR2016-01839
U.S. Patent          Feb. 18, 1992          Sheet 1 of 13          5,089,958

[FIG. 1, Sheet 1 of 13: diagrammatic view of the hardware. The figure text is rotated in the scan and did not survive OCR; legible fragments name the host, peripherals, primary DASD, controller DASD, SCSI ports, memory/DMA controller and archival queue FIFOs.]
FIG. 2, Sheet 2 of 13: initialization of the optical DASD.

BEGIN
DETERMINE PRIMARY DASD SIZE.
CALCULATE BLOCKS FOR BASE IMAGE, ABBREVIATED TRANSACTION LOG.
GENERATE OPTICAL DESCRIPTOR.
GENERATE CHECKSUM FOR DESCRIPTOR. WRITE DESCRIPTOR TO OPTICAL DASD.
COPY PRIMARY DASD USER AREA TO BASE IMAGE LOCATION ON OPTICAL DASD.
CREATE INITIAL INTERVAL MAP TABLE (IMT) ENTRY. FILL IN DATE/TIME. SET OTHER FIELDS TO ZERO.
GENERATE CHECKSUM FOR INTERVAL MAP TABLE (IMT) ENTRY. WRITE ENTRY TO FIRST LOCATION IN IMT.
CLEAR DIFFERENTIAL MAP AND INCREMENTAL TABLES.
INITIALIZE COMPOSITE DIFFERENTIAL TABLE WITH LOCATION OF EACH PRIMARY DASD CLUSTER.
SET HEAD AND TAIL POINTERS TO ZERO.
CLEAR DIFFERENTIAL IN PROGRESS FLAG.
COMPLETE.
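FIG. 2 writes both the optical descriptor and the first interval map table entry with a checksum so that validity can be verified later. A sketch of that pattern follows; the field names and the CRC-32 choice are assumptions for illustration, not the patent's layout.

```python
import json
import zlib


def make_imt_entry(date_time, fields=None):
    """Build an interval map table (IMT) entry as in FIG. 2: date/time
    filled in, other fields zeroed, and a checksum appended so a
    corrupted or half-written entry can be detected on re-read."""
    entry = {"datetime": date_time, "a_log": 0, "diff_map": 0, "inc_table": 0}
    if fields:
        entry.update(fields)
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["checksum"] = zlib.crc32(payload)
    return entry


def entry_valid(entry):
    # Recompute the checksum over everything except the stored checksum.
    body = {k: v for k, v in entry.items() if k != "checksum"}
    payload = json.dumps(body, sort_keys=True).encode()
    return zlib.crc32(payload) == entry["checksum"]


e = make_imt_entry("1989-01-23T00:00:00")
ok_before = entry_valid(e)
e["a_log"] = 42          # simulate corruption of a stored field
ok_after = entry_valid(e)
```

The checksum is what lets the recovery paths in FIGS. 5 through 7 decide which entries on the optical DASD are still trustworthy.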
`

`

FIG. 3, Sheet 3 of 13: initialization of the SCSI ports. [The figure text is rotated in the scan and did not survive OCR; legible fragments include "FOR EACH ...", "DEVICE NOT RESPONDING" and TRUE/FALSE branch labels.]
FIG. 4, Sheet 4 of 13: flushing the archival queue.

BEGIN
WHILE HEAD AND TAIL POINTERS ARE NOT EQUAL:
  READ QUEUE LOCATION EQUAL TO SIZE OF A TRANSACTION DESCRIPTOR PLUS SIZE OF PRIMARY DASD BLOCK.
  TRANSFER DATA TO LOCATION IN OPTICAL DASD INDICATED BY THE ARC-LOCATION FIELD.
  DOES THE ARC-LOCATION FIELD OF INCREMENTAL TABLE ENTRY MATCH THE TAIL POINTER?
    TRUE: CHANGE ARC-LOCATION FIELD OF INCREMENTAL TABLE ENTRY TO ARC-LOCATION FIELD OF DESCRIPTOR FROM ARCHIVAL QUEUE.
  INCREMENT TAIL POINTER BY SIZE OF DESCRIPTOR PLUS SIZE OF PRIMARY DASD BLOCK.
WHEN POINTERS ARE EQUAL: QUEUE FLUSHED.
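The drain loop of FIG. 4 can be sketched directly. The data structures here are simplified Python dicts and lists, not the patent's on-disk layouts, and the queue is keyed by tail position with a unit record size for brevity.

```python
def flush_archival_queue(queue, tail, head, record_size, inc_table, optical):
    """FIG. 4 sketch: drain the archival queue from tail to head.
    Each queue record is a (descriptor, data) pair of fixed total size."""
    while tail != head:
        descriptor, data = queue[tail]
        # Write the data to the optical DASD at the descriptor's location.
        optical[descriptor["o_location"]] = data
        entry = inc_table[descriptor["entry"]]
        # If the incremental table entry still points at the queue tail,
        # repoint it at the permanent optical location.
        if entry["arc_location"] == tail:
            entry["arc_location"] = descriptor["o_location"]
        tail += record_size
    return tail  # queue is flushed when head and tail are equal


optical = {}
inc_table = [{"arc_location": 0}]
queue = {0: ({"o_location": 100, "entry": 0}, b"data")}
tail = flush_archival_queue(queue, 0, 1, 1, inc_table, optical)
```

The repointing step is what keeps reads correct during the flush: a block is always reachable, either through its queue position or through its final optical address.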
`
`

`

FIG. 5, Sheet 5 of 13: abbreviated transaction log cleanup.

BEGIN
LOCATE LAST VALID ENTRY IN INTERVAL MAP.
LOCATE LAST VALID ABBREVIATED TRANSACTION IN ABBREVIATED TRANSACTION LOG.
USE O-LOCATION FIELD OF ABBREVIATED DESCRIPTOR PLUS SIZE OF DESCRIPTOR PLUS SIZE OF BLOCK OF PRIMARY DASD AS INITIAL VALUE.
WHILE DESCRIPTOR AT OPTICAL INDEX IS VALID:
  APPEND DESCRIPTOR AT OPTICAL INDEX TO END OF ABBREVIATED TRANSACTION LOG.
  ADD SIZE OF DESCRIPTOR PLUS SIZE OF BLOCK OF PRIMARY DASD TO OPTICAL INDEX.
WHEN NO LONGER VALID: ABBREVIATED TRANSACTION LOG CLEANUP IS COMPLETE.
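FIG. 5's cleanup walk, catching the abbreviated log up with descriptors that reached the optical DASD but were never logged, can be sketched as follows. The record layout is simplified: `optical_records` maps an optical index to a descriptor, standing in for "descriptor at optical index is valid."

```python
def rebuild_abbreviated_log(optical_records, log, descriptor_size, block_size):
    """FIG. 5 sketch: starting just past the last logged descriptor,
    append every valid descriptor found on the optical DASD to the
    abbreviated transaction log."""
    last = log[-1]
    index = last["o_location"] + descriptor_size + block_size
    while index in optical_records:          # descriptor at index is valid
        log.append(optical_records[index])
        index += descriptor_size + block_size
    return log


records = {10: {"o_location": 10}, 15: {"o_location": 15}}
log = [{"o_location": 5}]
rebuild_abbreviated_log(records, log, descriptor_size=2, block_size=3)
```

Because descriptor and data block have fixed sizes, the walk is pure pointer arithmetic: each valid record implies exactly where the next one must start.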
`
`

`

FIG. 6, Sheet 6 of 13: recovery of an interrupted differential image.

BEGIN
DIFFERENTIAL IN PROGRESS FLAG CLEARED?
  TRUE: (label illegible; recovery not required).
  FALSE:
    USE A-LOG FIELD OF LAST VALID INTERVAL TABLE ENTRY AS INITIAL VALUE.
    READ TRANSACTION FROM OPTICAL DASD FROM LOCATION.
    UNTIL LAST ABBREVIATED TRANSACTION IS REACHED:
      CALCULATE CLUSTER NUMBER.
      SET APPROPRIATE BIT IN DIFFERENTIAL MAP.
      ADD SIZE OF DESCRIPTOR TO OPTICAL INDEX.
      READ TRANSACTION AT OPTICAL INDEX.
    DIFFERENTIAL MAP COMPLETE.
    SET DIFFERENTIAL WRITE LOCATION EQUAL TO LOCATION SPECIFIED OF LAST VALID ENTRY IN INTERVAL MAP TABLE.
    SET CLUSTERS TO ZERO.
    FROM CLUSTER ZERO TO LAST DIFFERENTIAL MAP BIT:
      BIT INDEXED BY CLUSTER SET?
        TRUE: TRANSFER 32K PRIMARY DASD DATA TO OPTICAL DASD AT DIFFERENTIAL WRITE LOCATION PLUS 32K TIMES CLUSTERS WRITTEN. INCREMENT CLUSTERS.
    IMAGE COMPLETE.
    CLEAR DIFFERENTIAL IN PROGRESS FLAG.
FIG. 7, Sheet 7 of 13: creation of the incremental table and differential map.

BEGIN
CREATE ENTRY POINTS TABLE, INCREMENTAL TABLE AND DIFFERENTIAL MAP.
INITIALIZE ENTRY POINTS TABLE VALUES TO 0FFFFH.
INITIALIZE INCREMENTAL TABLE P-BLOCK, ARC-LOCATION TO ZERO; LINK FIELDS TO 0FFFFH.
CLEAR DIFFERENTIAL MAP.
SET OPTICAL INDEX EQUAL TO A-LOG LOCATION IN THE LAST VALID INTERVAL MAP ENTRY.
READ ABBREVIATED TRANSACTION AT OPTICAL INDEX.
UNTIL LAST VALID ABBREVIATED TRANSACTION:
  ADD ABBREVIATED TRANSACTION TO INCREMENTAL TABLE.
  CALCULATE CLUSTER NUMBER TRANSACTION REFERS TO AND SET CORRESPONDING BIT IN DIFFERENTIAL MAP.
  ADD SIZE OF DESCRIPTOR TO OPTICAL INDEX.
  READ NEXT TRANSACTION FROM OPTICAL DASD.
INCREMENTAL TABLE AND DIFFERENTIAL MAP HAVE BEEN CREATED.
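The rebuild in FIG. 7 is a single scan over the abbreviated transaction log. A sketch, with 0xFFFF playing the patent's 0FFFFH "no entry" role and a dict standing in for the fixed-size entry points table:

```python
def rebuild_tables(abbrev_log, cluster_size, n_clusters):
    """FIG. 7 sketch: scan the abbreviated transaction log and rebuild
    the entry points table, the incremental table, and the differential
    map (one bit per cluster of the primary DASD)."""
    entry_points = {}                    # first incremental entry per block
    incremental = []                     # one record per logged transaction
    diff_map = [False] * n_clusters
    for txn in abbrev_log:
        block = txn["p_block"]
        if block not in entry_points:    # 0xFFFF-equivalent: absent key
            entry_points[block] = len(incremental)
        incremental.append({"p_block": block, "link": 0xFFFF})
        diff_map[block // cluster_size] = True   # this cluster was altered
    return entry_points, incremental, diff_map


ep, inc, dmap = rebuild_tables(
    [{"p_block": 0}, {"p_block": 5}, {"p_block": 0}],
    cluster_size=4, n_clusters=2)
```

Everything needed to answer "which clusters changed, and where is the newest copy of each block" is thus recoverable from the log alone, which is why the tables can live in volatile memory.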
`

`

FIG. 8, Sheet 8 of 13: creation of the composite differential table.

BEGIN
CREATE COMPOSITE DIFFERENTIAL TABLE.
INITIALIZE TABLE WITH LOCATION OF EACH PRIMARY DASD CLUSTER WITHIN BASE IMAGE ON OPTICAL DASD.
READ FIRST INTERVAL MAP TABLE ENTRY.
UNTIL LAST ENTRY:
  READ DIFFERENTIAL MAP INDICATED IN INTERVAL MAP ENTRY.
  SET CLUSTER INDEX AND CLUSTER NUMBER EQUAL TO ZERO.
  WHILE CLUSTER INDEX LESS THAN NUMBER OF CLUSTERS:
    BIT INDEXED BY CLUSTER INDEX SET?
      TRUE: SET COMPOSITE DIFFERENTIAL TABLE ENTRY TO VALUE OF DIFFERENTIAL DATA START FIELD PLUS OFFSET VALUE OF 32K TIMES CLUSTER NUMBER. INCREMENT CLUSTER NUMBER.
    INCREMENT CLUSTER INDEX.
  MOVE TO NEXT INTERVAL MAP ENTRY.
MAP IS COMPLETE.
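FIG. 8's table build can be sketched compactly. Note the two counters: the cluster index walks every bit of the differential map, while the cluster number counts only the set bits, because a differential image stores altered clusters contiguously.

```python
def build_composite(base_locations, intervals, cluster_bytes=32 * 1024):
    """FIG. 8 sketch: start each cluster at its base-image location,
    then walk the interval map entries in order; whenever an interval's
    differential map shows a cluster was rewritten, repoint the
    composite table at that interval's copy of the cluster."""
    composite = list(base_locations)
    for interval in intervals:
        cluster_number = 0                   # counts set bits in this interval
        for cluster_index, bit in enumerate(interval["diff_map"]):
            if bit:
                composite[cluster_index] = (
                    interval["diff_data_start"]
                    + cluster_bytes * cluster_number)
                cluster_number += 1
    return composite


composite = build_composite(
    [0, 32768, 65536],
    [{"diff_map": [False, True, True], "diff_data_start": 10_000_000}])
```

Because later intervals are applied after earlier ones, each cluster's final entry points at its most recent copy, with the base image as the fallback.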
`

`

FIG. 9, Sheet 9 of 13: processing a write to the primary DASD.

BEGIN
INSERT TRANSACTION FOR PRIMARY DASD USING HEAD POINTER OF ARCHIVAL QUEUE AS ARC-LOCATION FIELD, AND SETTING MOST SIGNIFICANT BIT OF ARC-LOCATION FIELD.
CREATE A TRANSACTION DESCRIPTOR USING BLOCK OF PRIMARY DASD WRITTEN AS P-BLOCK FIELD, INCREMENTAL TABLE ENTRY NUMBER, AND NEXT FREE LOCATION IN OPTICAL DASD AS O-LOCATION FIELD.
APPEND DESCRIPTOR TO ABBREVIATED TRANSACTION LOG.
APPEND DESCRIPTOR AND PRIMARY DASD DATA TO ARCHIVAL QUEUE.
TRANSFER DATA TO PRIMARY DASD.
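The write path of FIG. 9 can be sketched end to end. The `state` dict and its keys are illustrative stand-ins for the patent's tables; the most significant bit of the arc-location marks "still in the archival queue, not yet on the optical DASD."

```python
MSB = 1 << 15  # "data still in the archival queue" flag


def host_write(block, data, state):
    """FIG. 9 sketch: log the write, queue it for archiving, then pass
    it through to the primary DASD."""
    descriptor = {
        "p_block": block,                       # primary block written
        "o_location": state["next_optical"],    # next free optical slot
        "arc_location": state["head"] | MSB,    # queue position, MSB set
    }
    state["abbrev_log"].append(descriptor)
    state["queue"].append((descriptor, data))
    state["head"] += 1
    state["next_optical"] += 1
    state["primary"][block] = data              # the actual disk write
    return descriptor


state = {"next_optical": 7, "head": 0,
         "abbrev_log": [], "queue": [], "primary": {}}
d = host_write(3, b"xyz", state)
```

The host sees an ordinary disk write; the descriptor and queued copy are what the background task of FIG. 11 later migrates to the optical DASD.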
`

`

FIG. 10, Sheet 10 of 13: reading from the primary DASD with fault recovery. (Several box labels are partly illegible in the scan.)

BEGIN
PRIMARY DASD FUNCTIONING?
  TRUE: ATTEMPT TO READ FROM PRIMARY DASD.
  FALSE: INDICATE PRIMARY DASD READ ERROR.
PRIMARY DASD READ WITHOUT ERROR?
  TRUE: DONE.
  FALSE:
    REFERENCE TO BLOCK IN INCREMENTAL TABLE?
      TRUE: MSB OF ARC-LOCATION SET?
        TRUE: READ DATA FROM ARCHIVAL QUEUE AT ARC-LOCATION PLUS SIZE OF INCREMENTAL DESCRIPTOR; IF READ ERROR, USE MIRRORED QUEUE.
        FALSE: READ DATA FROM OPTICAL DASD AT LOCATION IN INCREMENTAL TABLE ENTRY.
      FALSE: CALCULATE CLUSTER NUMBER. GET LOCATION OF CLUSTER FROM TABLE. READ CLUSTER.
    INSTRUCT PRIMARY DASD DRIVER TO REMAP DEFECTIVE BLOCK. REMAP FAIL?
      TRUE: INDICATE PRIMARY DASD NOT FUNCTIONING NORMALLY.
      FALSE: TRANSFER DATA READ TO REMAPPED BLOCK.
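The fallback order of FIG. 10 is the essential part: primary disk first, then the archival queue, then the optical incremental area, then the cluster image. A sketch of that resolution (simplified data structures; the remap step is omitted):

```python
MSB = 1 << 15  # arc-location flag: data is still in the archival queue


def fault_tolerant_read(block, primary, incremental, queue, optical,
                        composite, cluster_size):
    """FIG. 10 sketch: try the primary DASD first; on a read error fall
    back to the archival queue, the optical incremental area, or the
    composite-differential image of the cluster, in that order."""
    if block in primary:                 # normal case: primary read OK
        return primary[block]
    entry = incremental.get(block)
    if entry is not None:
        loc = entry["arc_location"]
        if loc & MSB:
            return queue[loc & ~MSB]     # data still in the archival queue
        return optical[loc]              # data in the incremental area
    # No incremental record: recover the block from its cluster image.
    cluster = composite[block // cluster_size]
    return cluster[block % cluster_size]


optical = {9: b"opt"}
queue = {2: b"queued"}
incremental = {5: {"arc_location": 9}, 6: {"arc_location": MSB | 2}}
composite = [[b"c0", b"c1"], [b"c2", b"c3"]]
```

Every block is therefore recoverable from at least one tier, which is what lets the system mask a failing primary disk instead of surfacing the error.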
`

`

FIG. 11, Sheet 11 of 13: the archival queue background task.

BEGIN
HEAD AND TAIL POINTERS EQUAL?
  TRUE: SLEEP FOR 100 ms.
  FALSE:
    READ DATA FROM TAIL POINTER LOCATION OF THE ARCHIVAL QUEUE, THE SIZE OF AN INCREMENTAL DESCRIPTOR PLUS SIZE OF A PRIMARY DASD BLOCK.
    DOES ARC-LOCATION FIELD OF INCREMENTAL TABLE ENTRY MATCH TAIL POINTER OF ARCHIVAL QUEUE?
      TRUE: FILL IN ARC-LOCATION FIELD OF INDEXED INCREMENTAL TABLE ENTRY WITH O-LOCATION FIELD OF DESCRIPTOR.
    WRITE DESCRIPTOR AND DATA TO LOCATION IN INCREMENTAL DATA AREA SPECIFIED BY O-LOCATION FIELD OF THE DESCRIPTOR.
    ADD SIZE OF DESCRIPTOR AND SIZE OF A BLOCK OF PRIMARY DASD TO TAIL POINTER.
FIG. 12, Sheet 12 of 13: creating an interval entry.

BEGIN
CREATE AN INTERVAL ENTRY DESCRIPTOR.
COPY CURRENT DIFFERENTIAL MAP TO LOCATION IN OPTICAL DASD SPECIFIED BY INTERVAL DESCRIPTOR.
COPY DIFFERENTIAL MAP AND INITIATE A DIFFERENTIAL IMAGE TO LOCATION IN OPTICAL DASD INDICATED BY INTERVAL DESCRIPTOR.
CLEAR ORIGINAL DIFFERENTIAL MAP TO INDICATE NO CLUSTERS HAVE BEEN ALTERED.
COPY CURRENT INCREMENTAL TABLE TO LOCATION IN OPTICAL DASD SPECIFIED BY INTERVAL DESCRIPTOR.
CLEAR INCREMENTAL TABLE AND ENTRY POINTS TABLE TO INDICATE NO INCREMENTAL TRANSACTIONS.
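The interval rollover of FIG. 12, snapshot the live tables to the optical DASD under a new interval descriptor, then clear them, can be sketched as follows. The list `optical` stands in for the optical DASD, and the descriptor fields are illustrative.

```python
def close_interval(state, optical):
    """FIG. 12 sketch: persist the current differential map and
    incremental table under a new interval descriptor, then reset the
    live tables for the next interval."""
    descriptor = {
        "seq": len(state["interval_map"]),
        "diff_map_loc": len(optical),        # where the map copy lands
        "inc_table_loc": len(optical) + 1,   # where the table copy lands
    }
    optical.append(list(state["diff_map"]))      # copy differential map
    optical.append(dict(state["incremental"]))   # copy incremental table
    state["interval_map"].append(descriptor)
    # Fresh interval: no clusters altered, no incremental transactions.
    state["diff_map"] = [False] * len(state["diff_map"])
    state["incremental"] = {}
    return descriptor


state = {"interval_map": [],
         "diff_map": [True, False],
         "incremental": {1: "x"}}
optical = []
close_interval(state, optical)
```

Clearing happens only after both copies are on the optical DASD, so a crash mid-rollover loses nothing; recovery (FIGS. 5 through 7) can rebuild the live tables from what was persisted.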
`

`

[FIG. 13, Sheet 13 of 13: table diagram interconnecting the locational and archival data. The figure text is rotated in the scan and did not survive OCR; legible fragments name the optical DASD, the incremental and differential data areas, the interval map, the abbreviated transaction log, the incremental table with P-BLOCK, ARC-LOCATION and LINK fields, the entry points table, cluster locations, the composite differential table, and the archival queue.]
`
`FAULT TOLERANT COMPUTER BACKUP
`SYSTEM
`
`
`BACKGROUND OF THE INVENTION
`1. Field of the Invention
The present invention relates to a system for maintaining a backup for a computer disk drive in the event of failure, for the prevention of data loss. More specifically, the invention relates to a method and apparatus for enabling the user to restore either a file or the entire contents of a disk, as they existed on that disk at any prior point in time, by creating a virtual copy of the disk at that point in time.
`2. Description of the Prior Art
Computers have become an integral part of both business and personal life. Each day, countless transactions are made either utilizing computers or being completely controlled by them. While today's computers are faster and more efficient than prior methods of recordkeeping and data transfer, older techniques employing paper records had one distinct advantage: backup in the case of lost data. Duplicate records could be stored in any one of a number of places, and in various forms. Similarly, if the equipment or personnel creating, reading or interpreting the data were to fail or be unable to complete a task, continuation of the task was easily accomplished by transferring the past work to substitute machines or people. This is not always possible with an electronic computing device.

One particular difficulty with computers is the storage medium. The vast majority of data utilized in conjunction with computers is stored on magnetic media. While great care is taken to insure the accuracy of this media, some flaws may be present. Additionally, continued use of such media leads to faults or other malfunctions related to the electromechanical methods of reading and writing data to these data storage devices.

A second difficulty, closely related to the first, is the inability to recover lost data, once a fault has occurred. When utilizing paper records, the older versions of a document, for example, could always be retrieved. This is not the case with magnetic media if: (1) the data has been over-written on a disk, or (2) the disk is inaccessible due to physical faults. Additionally, a general shortcoming of prior art backup systems is that backups are only kept for a short period of time. The physical space required to keep a constant stream of new backup disks or tapes would soon outgrow even the largest storage facility. This limitation thus dictates the re-use of backup media, and the continuous trail of past data is gone.
Prior inventors have proposed several solutions to this problem, both from a hardware and software vantage point. While many of the devices are concerned with failure of an electronic component, i.e., the processing device or the volatile memory, rather than the more permanent storage media, the techniques are analogous in most respects to permanent media. The solutions frequently posed in this area are the use of redundant components which may be utilized in the event of failure in a primary component. Transparent and immediate error detection and switchover means are therefore a primary adjunct focus in the utilization of these redundant devices. Redundant data backup is also well documented in the prior art, where the backup device contains a duplicate copy or "mirror" of the information utilized by or stored within the primary device. Frequently, however, no teaching is present for the restoration of data from the redundant device to the primary device once the primary device has come back "on line."
Redundant hardware components are disclosed in Grieg, et. al., U.S. Pat. No. 4,607,365, which teaches a system that will automatically select the secondary components as needed because of faults in the system. Similarly, Yoshida, et. al., U.S. Pat. No. 4,727,516, discloses a system having redundant memory arrays. If a defective memory cell is detected, a sister cell is utilized in its place. Redundant processors are disclosed in U.S. Pat. Nos. 4,484,275 and 4,378,588, issued to Katzman, et. al., which utilize the redundancy to insure that if a processor or data path is interrupted, secondary parallel processors are available to take the load.

Even a single chip may have redundant components to be utilized in the event of a failure of the primary systems as disclosed in Chesley, U.S. Pat. No. 4,191,996, which discloses such multiple processors and memory circuits.

Mirrored data, which is stored on an alternate device, is discussed in Varaiya, et. al., U.S. Pat. No. 4,754,397, which discloses a multiple disk drive apparatus which mirrors data storage operations to a disk, or "writes," to a number of disks. Should a disk fail, a redundant drive is selected. Richer, U.S. Pat. No. 4,351,023, discloses the use of redundant controllers in a similar manner. One controller serves in a primary role, the other as backup. In the event of a shutdown, the state of the first controller is transferred to the second, which has been "tracking" the functions of the first controller on a delayed basis. After restoration of the first controller, its primary status is resumed, and the data is transferred thereto. While the patent is not particularly explicit in reference to the data transfer from the second controller to the first, such a transfer appears necessary in light of the need for the state of the machine to be continued intact.
Hess, et. al., U.S. Pat. No. 4,581,701, discloses an apparatus which utilizes redundant hardware to provide a backup in the event of a failure. More importantly, however, the apparatus monitors and stores the status of the host through the use of a continuously updated memory buffer. Upon failure, the status, which is contained in the buffer, is transferred from one unit to another, permitting continued function without interruption.

Error detection to facilitate the automatic implementation of the secondary devices is also well developed. Hendrie, et. al., U.S. Pat. No. 4,750,177, Reid, U.S. Pat. No. 4,453,215, Samson, et. al., U.S. Pat. No. 4,654,857, and Wolff, et. al., U.S. Pat. No. 4,486,826, all disclose a method and apparatus for detecting errors in a system and selecting secondary devices for replacement.

The most pertinent art, however, relates to the ability to store and retrieve a prior state or status of the machine after a failure. Ziehm, et. al., U.S. Pat. No. 4,521,847, discloses a system which permits the restoration of the state and status of a processing device from a non-volatile memory source. The state and status of the processor and memory are continually updated in the non-volatile memory area. After a power shutdown or other malfunction, the operating state of the host machine may be restored to its last prior state, through transfer of stored information from the non-volatile memory.
`
Others have taught that processing and memory functions may be assumed, if the host's operation is halted, by a node of a network. Data may then be recovered from the node at a later time. Rawlings, et. al., U.S. Pat. No. 4,156,907, discloses a data communications subsystem which provides that in the event of a shutdown, both processing and memory functions are transferred from the host to each of several nodes. This information may then be uploaded back to the host after restoration of service.

The primary focus of most of the prior art is thus the use of redundant components for utilization during a failure of the host, or the restoration of the prior state of the system after the first components are replaced. A secondary focus is error detection and implementation of such a redundant system thereafter.

There is thus a need for a device which will enable the user to not only keep a constant backup of a data storage device, such as a magnetic disk, but also keep the backup in the form of a readily retrievable continuous record of the state of that storage device.

A secondary ability not found in the prior art is the physical replacement of a faulty device and the replacement of data on the substitute without halting the system. In a network system especially, where many users share the data storage space, the loss of a physical device means more than just lost data; there is a considerable cost in "downtime" as the unit is repaired or replaced. No system is known which enables the rapid and easy replacement of physical devices while maintaining a portion of the functionality of the network.
`
SUMMARY OF THE INVENTION

A method and apparatus for storing and retrieving any prior state of a computer system in which data is transferred, and such transfer changes the state of the system, is provided. A present preferred embodiment is installed in a standalone computer or network file server for the backup of the data storage devices, which are usually magnetic disk drives.

Upon installation, a base image corresponding to the initial state of the computer system is established and stored on a storage device. During the operation of the host computer, our system monitors changes to the state of the system and subsequently stores at least some of these changes on the storage device. In the preferred embodiment, these changes are the write actions to the magnetic disk. The changes are then mapped by location of the stored changes in an address table to enable subsequent retrieval of any selected changes. At a selected time subsequent to the establishment of the base image, when the address table has become full, at least one differential image of the state of the system is created. The differential image is created by compiling all changes to the base image which occurred between creation of the base image and the selected time.

A virtual image may thus be created of data stored on the storage device, as such data existed at a preselected prior point in time. The virtual image comprises the base image altered by selected stored changes, as directed by the address table.

A file from such a prior state may be stored and retrieved by establishing a base image corresponding to the initial state of the computer system; storing the base image on a storage device; monitoring changes to the state of the system; storing the changes to the state of the system on the storage device; mapping locations of the stored changes in an address table to enable subsequent retrieval of any selected changes; and creating, at a selected time subsequent to the establishment of the base image, at least one differential image of the state of the system. The differential image is created by compiling all changes to the base image which occurred between creation of the base image and the selected time. A virtual image of data as stored on the storage device, as such data existed at a preselected prior point in time, is then created. The virtual image comprises the base image altered by selected stored changes, as directed by the address table, and is presented as a second, independent disk to the host system. Relevant portions of the virtual image corresponding to a file or files are selected by the user, and these portions are then copied onto a data storage device associated with the host computer system.

A method for storing any prior state of the storage device, and retrieving the prior state in its entirety for restoration onto a storage device, is also provided. This process is particularly useful if, for example, an entire disk drive is defective, or otherwise needs to be replaced. The data is already stored on a separate medium, and the physical device may be simply disconnected, and a fresh unit installed. The data is then restored to the new unit.

This is accomplished by first establishing a base image corresponding to an initial state of the host computer system. This image is then stored on a storage device. Changes to the state of the host system are then continually monitored and stored on the same storage device. The device is adapted to map the locations of the stored changes in an address table to enable subsequent retrieval of any selected changes. At a selected time subsequent to the establishment of the base image, at least one differential image of the state of the system is then created. The differential image is created by compiling all changes to the base image which occurred between creation of the base image and the selected time. A virtual image is then compiled of data stored on the storage device, as such data existed at a preselected prior point in time. The virtual image comprises the base image altered by selected stored changes, as directed by the address table.

The software then interrogates the storage device to determine the type of the device which has been installed. The device is prepared to accept data according to preset parameters determined by the type of the device. The virtual image is then copied, sector by sector, onto the storage device.

Our apparatus is utilized for storing and retrieving any prior state of a computer system. The system is preferably utilized in conjunction with a personal computer which may or may not be serving as a local area network file server. The computer has at least one primary data source, which may be a magnetically read hard disk, to and from which data is transferred. Such transfer changes the state of the system, and in the preferred embodiment, these changes are write actions to the hard disk. The hardware of the system is utilized to completely control and monitor the primary data source, here a hard disk, which would generally be controlled by the host computer itself.

The apparatus of the system centers on processing means and memory means associated with the processing means for the transient storage of data. Encoded within the system is software which provides the operating instructions and logic utilized by the apparatus.
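The restore path described above, interrogate the replacement device, prepare it, then copy the virtual image sector by sector, can be sketched as follows. The `device` object and its two methods are hypothetical, standing in for a driver interface; no real API is implied.

```python
def restore_to_new_device(virtual_image, device, sector_size=512):
    """Sketch of the restore procedure: interrogate the replacement
    device for its parameters, then copy the virtual image onto it
    one sector at a time. `device` is a hypothetical driver object."""
    geometry = device.identify()            # interrogate the device type
    assert geometry["sector_size"] == sector_size
    for offset in range(0, len(virtual_image), sector_size):
        device.write_sector(offset // sector_size,
                            virtual_image[offset:offset + sector_size])


class FakeDevice:
    """Stand-in driver used only to exercise the sketch."""
    def __init__(self):
        self.sectors = {}
    def identify(self):
        return {"sector_size": 4}
    def write_sector(self, number, data):
        self.sectors[number] = data


dev = FakeDevice()
restore_to_new_device(b"abcdefgh", dev, sector_size=4)
```

Because the source of the copy is the virtual image rather than the failed disk, the restore needs nothing from the replaced hardware, which is what permits swapping a dead drive for a fresh one.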
`

`

The software is adapted to establish a base image corresponding to the initial state of the computer system upon installation of the system. Next, the base image is stored on the storage device, which may be a magnetically read hard disk, but is preferably an optically read disk. Changes to the state of the system are monitored by the software and associated hardware, and such changes to the state of the system are stored on the storage device. In the preferred embodiment, such changes constitute write actions to the primary data storage device. The locations of these changes are stored in an address table to enable subsequent retrieval of the changes as they are selected by the user. At least one differential image of the state of the system is also created, when the address table becomes full, by compiling all changes to the base image which occurred between creation of the base image and the selected time.

Means are provided for data communication between the primary data source and the processing means to enable the processing means to detect and record the changes to the state of the system within the memory means. This data communication may be either a Small Computer Systems Interface ("SCSI") type link or a Direct Memory Access ("DMA") type link.

At least one storage device is connected to the processing means for maintaining a record of the incremental changes to the primary data source. Additionally, a secondary storage device may be connected to the processing means wherein data from the primary data source may be mirrored onto the secondary storage device.

The software is also adapted to compile the virtual image of the storage device, as such data existed at a preselected point in time. As previously stated, the virtual image comprises the base image, altered by selected stored changes, as directed by the address table.
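The virtual-image idea, the base image altered by selected stored changes as directed by the address table, amounts to a per-block lookup: nothing is copied up front, and each block is resolved to its newest logged change no later than the chosen point in time. A sketch (names and structures are illustrative, not the patent's):

```python
def resolve_virtual_block(block, base, address_table, changes, cutoff):
    """Return the contents of `block` as they existed at `cutoff`:
    the newest logged change for the block whose sequence number is
    at most `cutoff`, falling back to the base image."""
    best = None
    for seq in address_table.get(block, ()):   # log positions, oldest first
        if seq <= cutoff:
            best = seq                          # later positions overwrite
    return changes[best] if best is not None else base[block]


base = ["a", "b"]
changes = {0: "b@t0", 2: "b@t2"}       # log position -> data written
table = {1: [0, 2]}                    # block 1 was written at positions 0 and 2
```

Presenting this lookup behind a disk interface is what makes the virtual image appear as a second, independent disk while sharing all of its storage with the base image and the change log.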
These and other advantages and features of the present invention will be more fully understood on reference to the presently preferred embodiments thereof and to the appended drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagrammatic view of the hardware of a present preferred embodiment of our invention.

FIG. 2 is a flowchart diagram of the initialization process for an optical direct access storage device of the system.

FIG. 3 is a flowchart diagram of the initialization of the SCSI ports of the system.

FIG. 4 is a flowchart diagram of the

FIG. 13 is a table diagram which illustrates the interconnection of the locational and archival data utilized in the system.

The flowchart diagrams illustrated in this application utilize a diagrammatic shorthand for iterative loop conditions. A circle having a cross is the initial conditional, while a terminal circle marks the end of the loop and the return to the initial condition. An upward arrow marks the end of a procedure or subroutine.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Our media fault tolerant disk backup system is designed to provide a means for providing fault tolerant storage and real-time data archiving/recovery for direct access storage devices (DASD). A virtual read/write disk constructed from archival information is present concurrently with the primary DASD, but hidden from the host system, and serves as a mirror of the primary DASD data. The system provides a means to create additional read-only virtual disks representing prior states of the primary DASD from the archival information, concurrently with the primary DASD and the current mirror image virtual disk.

The software consists of several elements, or layers. Each layer consists of at least one independent program or software driver which performs discrete functions. A multitasking, preemptive priority nucleus is provided for task management, task synchronization and message passing, dynamic memory allocation, hardware interrupt support and abstraction, DMA control and dynamic program loading.

A physical device driver layer is provided, consisting of dynamically loaded device drivers which support the specific DASD devices to be communicated with along the SCSI channels.

A logical function layer performs logical mapping of I/O requests to primary DASD devices, maintenance of differential, incremental and queue tables for the optical DASD, error recovery and virtual disk algorithms.

The backup system hardware is preferably installed in a server, or host, of a network of computers. The system may, however, be utilized in conjunction with any standalone computing system. The system is designed to provide a means for intelligent control of file server peripherals, uncomplicated management of unusual peripheral types, the creation of abstract peripheral devices, online image backups and fault tolerance at the media level through mirroring and/or incremental backup.

Referring to FIG. 1, a present preferred embodiment of the apparatus of our system is disclosed. The apparatus may be broken out into two major component areas: (a) the controller circuit board 1, which is inserted into a host computer 2, and (b) the peripherals section 3, having one or more magnetic primary DASDs 100, one or more optical DASDs 110, and an additional magnetic controller DASD 90. The controller DASD 90 is used by the controller circuit board 1 and its associated circuitry for storage of loadable physical device drivers, utilities and a queue for optical DASD 110. Each of the DASD devices has its own unique format. The controller circuit board 1 has three subsections: the host interface section, the processor section, and the I/O section.

We provide a single board computer with an SCSI interface (not shown) or DMA/FIFO (first in, first out) 10 to the host system bus. Processor 20, which is preferably an Intel 80186 processor, forms the hub of the
