`(12) Patent Application Publication (10) Pub. No.: US 2006/0080515 A1
`Spiers et al.
(43) Pub. Date: Apr. 13, 2006
`
`
(54) NON-VOLATILE MEMORY BACKUP FOR NETWORK STORAGE SYSTEM
`
`(75) Inventors: John Spiers, Louisville, CO (US);
`Mark Loffredo, Libertyville, IL (US);
`Mark G. Hayden, Fairfield, CA (US);
`Mike A. Hayward, Boulder, CO (US)
`Correspondence Address:
KENNETH C. WINTERTON
`HOLLAND & HART LLP
`P. O. BOX 8749
`DENVER, CO 80201-8749 (US)
(73) Assignee: LEFTHAND NETWORKS, INC., Boulder, CO (US)
(21) Appl. No.: 10/711,901
(22) Filed: Oct. 12, 2004

Publication Classification

(51) Int. Cl.
G06F 12/00 (2006.01)
(52) U.S. Cl. ............................................... 711/162; 714/14
`
`(57)
`ABSTRACT
`A data storage system including a primary data storage
`device and a backup data storage device stores data with
`enhanced performance. The primary data storage device has
`a primary data storage device memory for holding data, and
`the backup data storage device has a backup volatile
`memory, a backup non-volatile memory, and a processor.
`The backup storage device processor causes a copy of data
`provided to the primary data storage device to be provided
`to the backup data storage device volatile memory, and in
`the event of a power interruption moves the data from the
`backup volatile memory to the backup non-volatile memory.
`In Such a manner, data stored at the backup data storage
`device is not lost in the event of a power interruption. The
`backup data storage device further includes a backup power
`Source Such as a capacitor, a battery, or any other Suitable
`power source, and upon detection of a power interruption,
`Switches to the backup power source and receives power
`from the backup power source while moving the data from
`the backup volatile memory to the backup non-volatile
`memory.
`
[Representative figure (FIG. 6): storage controller flow chart — receive data to be stored from application (250); send command to backup device to store data (254); receive acknowledgement (258); report to application that data is stored (262); analyze physical address(es) of data and re-order data, and any other data present, based on physical address(es) (266); write data to storage device (270); verify data has been written to the storage device media (274); remove data from backup device (278).]
`Petitioners
`Ex. 1025, p. 1
`
`
`
[Sheet 1 of 14 — FIG. 1: block diagram of a network connecting applications (104) and network attached storage.]
[Sheet 2 of 14 — FIG. 2: NAS device block diagram — network interface (112), operating system (120), memory (124), bus (128), storage controller (132), backup device(s) (136), storage device(s) (140).]
[Sheet 3 of 14 — FIG. 3: storage controller (132) coupled by bus (128) to a storage device (140) containing a write-back cache (148) and to a backup device.]
[Sheet 4 of 14 — FIG. 4: backup device (144) block diagram — interface (152), power supply (156), processor (160), volatile memory (164), non-volatile memory (168).]
[Sheet 5 of 14 — FIG. 5: PCI backup device (144) block diagram — processor (180), EEPROM, SDRAM, and related components (reference numerals 176, 184, 190, 194, 198, 210, 218).]
[Sheet 6 of 14 — FIG. 6: storage controller flow chart — receive data to be stored from application (250); send command to backup device to store data (254); receive acknowledgement (258); report to application that data is stored (262); analyze physical address(es) of data and re-order data, and any other data present, based on physical address(es) (266); write data to storage device (270); verify data has been written to the storage device media (274); remove data from backup device (278).]
[Sheet 7 of 14 — FIG. 7: power-on flow chart — power on (300); load processor operating instructions from PROM (304); begin charging capacitors (308); initialize, test, and zero SDRAM (312); check NVRAM status in EEPROM (316); is NVRAM valid? (320); update EEPROM statistics (324); transfer NVRAM to SDRAM (328); mark SDRAM valid (332); capacitors charged? (336); enable writes (340); enable SDRAM to NVRAM transfer (344); mark NVRAM as invalid in EEPROM (348).]
[Sheet 8 of 14 — FIG. 8: reset flow chart — reset; is SDRAM valid?; is SDRAM-to-NVRAM transfer in progress?; abort SDRAM-to-NVRAM transfer; initialize, test, and zero SDRAM; check NVRAM status in EEPROM; is NVRAM valid?; update EEPROM statistics; transfer NVRAM to SDRAM; mark SDRAM valid; capacitors charged?; enable writes; enable SDRAM to NVRAM transfer; mark NVRAM as invalid in EEPROM; ready (reference numerals 356-412).]
[Sheet 9 of 14 — FIG. 9: command-dispatch flow chart — ready (420); is descriptor pointer FIFO empty? (424); read descriptor pointer FIFO and load descriptor base address (428); assert bus request (432); bus granted? (436); read descriptor and write descriptor data to local RAM (440); dispatch on descriptor type: SRC=HOST/DEST=SDRAM — transfer from host memory to SDRAM (468); SRC=SDRAM/DEST=HOST — transfer from SDRAM to host memory (476); SRC=SDRAM/DEST=NVRAM — transfer from SDRAM to NVRAM (484); SRC=NVRAM/DEST=SDRAM — transfer from NVRAM to SDRAM (492); SDRAM initialization — send SDRAM initialization cycles (500); otherwise increment bad descriptor count in EEPROM (456), generate bad descriptor interrupt, or generate unknown error interrupt.]
[Sheet 10 of 14 — FIG. 10: transfer from host memory to SDRAM (468) — read data from host memory; write data to SDRAM; generate CRC value; assert bus request; bus grant?; calculate descriptor CRC result address; store CRC result and descriptor status (reference numerals 508-540).]
[Sheet 11 of 14 — FIG. 11: transfer from SDRAM to host memory (476) — set SDRAM address; read SDRAM data; write data to INTF FIFO and generate CRC; assert bus request; bus grant?; read data from INTF FIFO and write data to bus; calculate descriptor CRC result address (576); store CRC result and descriptor status (580).]
[Sheet 12 of 14 — FIG. 12: transfer from SDRAM to NVRAM (484) — initialize NVRAM block erase address; send NVRAM block erase command; block erase done?; set SDRAM read address and initialize CRC; read SDRAM data; write data to INTF FIFO and generate CRC; send NVRAM page write command; read data from INTF FIFO RAM and write data to NVRAM page RAM; page burst done?; NVRAM write done?; send NVRAM page read command; read data from INTF FIFO and from NVRAM page RAM; compare OK?; on failure, increment bad block count, mark block as bad in page, and update NVRAM transfer address; on completion, mark NVRAM as valid, calculate descriptor CRC result address, and store CRC result and descriptor status.]
[Sheet 13 of 14 — FIG. 13: transfer from NVRAM to SDRAM (492) — set NVRAM read address (684); send NVRAM page read command (688); read data from NVRAM page RAM and write data to INTF FIFO (692); set SDRAM write address and initialize CRC (696); read data from INTF FIFO and generate CRC values (700); assert bus request (704); bus grant?; calculate descriptor CRC result address (712); store CRC result and descriptor status (716).]
[Sheet 14 of 14 — FIG. 14: power-failure flow chart — power fail detect (720); automatic power switch to capacitors (724); abort current PCI operation; increment power fail counter in EEPROM; is transfer enabled?; is VCAP > VCAP MIN?; init flash block erase address; send flash block erase command; set SDRAM read address, burst length, rotate amount, byte enables, and init CRC; start read of SDRAM data; data written to INTF FIFO with CRC values generated during write to FIFO; data read from INTF FIFO and written to flash page RAM; send flash page read command; data read from INTF FIFO and from flash page RAM; compare data OK?; on mismatch, increment bad block count in EEPROM, check bad block max, and mark flash block as bad in flash page; update flash transfer address (xfer address += page burst len); in EEPROM mark NVRAM valid or invalid; in EEPROM increment NVRAM copy count and stop LED; halt and power down.]
`
NON-VOLATILE MEMORY BACKUP FOR NETWORK STORAGE SYSTEM
`
`FIELD OF THE INVENTION
[0001] The present invention relates to non-volatile data backup in a storage system, and, more specifically, to a data backup device utilizing volatile memory and non-volatile memory.
`
`BACKGROUND OF THE INVENTION
[0002] Data storage systems are used in numerous applications and have widely varying complexity related to the application storing the data, the amount of data required to be stored, and numerous other factors. A common requirement is that the data storage system securely store data, meaning that stored data will not be lost in the event of a power loss or other failure of the storage system. In fact, many applications store data at primary data storage systems and this data is then backed up, or archived, at predetermined time intervals in order to provide additional levels of data security.
[0003] In many applications, a key measure of performance is the amount of time the storage system takes to store data sent to it from a host computer. Generally, when storing data, a host computer will send a write command, including data to be written, to the storage system. The storage system will store the data and report to the host computer that the data has been stored. The host computer generally keeps the write command open, or in a "pending" state, until the storage system reports that the data has been stored, at which point the host computer will close the write command. This is done so that the host computer retains the data to be written until the storage system has stored the data. In this manner, data is kept secure, and in the event of an error in the storage system, the host computer retains the data and may attempt to issue another write command.
[0004] When a host computer issues a write command, overhead within the computer is consumed while waiting for the storage system to report that the write is complete. This is because the host computer dedicates a portion of memory to the data being stored, and because the host computer uses computing resources to monitor the write command. The amount of time required for the storage system to write data depends on a number of factors, including the number of read/write operations pending when the write command was received, and the latency of the storage devices used by the storage system. Some applications utilize methods of reducing the amount of time required for the storage system to report that the write command is complete, such as, for example, utilizing a write-back cache, which reports that a write command is complete before the data is written to the media in the storage system. While this increases the performance of the storage system, if there is a failure within the storage system prior to the data being written to the media, the data may be lost.
`
`SUMMARY OF THE INVENTION
[0005] The present invention has recognized that a significant amount of resources may be consumed in performing write operations to write data to a data storage device within a data storage system. The resources consumed in such operations may be computing resources associated with a host computer, or other applications, which utilize the data storage system to store data. Computing resources associated with the host computer may be underutilized when the host computer is waiting to receive an acknowledgment that the data has been written to the storage device. This wait time is a result of the speed and efficiency with which the data storage system stores data.
[0006] The present invention increases resource utilization when storing data at a storage system by reducing the amount of time a host computer waits to receive an acknowledgment that data has been stored, by increasing the speed and efficiency of data storage in a data storage system. Consequently, in a computing system utilizing the present invention, host computing resources are preserved, thus enhancing the efficiency of the computing system.
[0007] In one embodiment, the present invention provides a data storage system comprising (a) a first data storage device including a first data storage device memory for holding data, and (b) a second data storage device including (i) a second data storage device volatile memory, (ii) a second data storage device non-volatile memory, and (iii) a processor for causing a copy of data provided to the first data storage device to be provided to the second data storage device volatile memory, and in the event of a power interruption moving the data from the second data storage device volatile memory to the second data storage device non-volatile memory. In such a manner, data stored at the second data storage device is not lost in the event of a power interruption.
[0008] The first data storage device, in an embodiment, comprises at least one hard disk drive having an enabled volatile write-back cache and a storage media capable of storing data. The first data storage device may, upon receiving data to be stored on the storage media, store the data in the volatile write-back cache and generate an indication that the data has been stored before storing the data on the media. The first data storage device may also include a processor executing operations to modify the order in which the data is stored on the media after the data is stored in the write-back cache. In the event of a power interruption, data in the write-back cache may be lost; however, a copy of the data will continue to be available at the second data storage device, and thus data is not lost in such a situation.
[0009] In an embodiment, the second data storage device further comprises a secondary power source. The secondary power source may comprise a capacitor, a battery, or any other suitable power source. The second data storage device, upon detection of a power interruption, switches to the secondary power source and receives power from the secondary power source while moving the data from the second data storage device volatile memory to the second data storage device non-volatile memory. Upon completion of moving the data from the second data storage device volatile memory to the second data storage device non-volatile memory, the second data storage device shuts down, thus preserving the secondary power source.
[0010] In one embodiment, the second data storage device non-volatile memory comprises an electrically erasable programmable read-only memory, or a flash memory. The second data storage device volatile memory may be a random access memory, such as a SDRAM. In this embodiment, upon detection of a power interruption, the processor reads the data from the second data storage device volatile memory, writes the data to the second data storage device non-volatile memory, and verifies that the data stored in the second data storage device non-volatile memory is correct. The processor may verify that the data stored in the second data storage device non-volatile memory is correct by comparing the data from the second data storage device non-volatile memory with the data from the second data storage device volatile memory, and re-writing the data to the second data storage device non-volatile memory when the comparison indicates that the data is not the same. In another embodiment, the processor, upon detection of a power interruption, reads the data from the second data storage device volatile memory, computes an ECC for the data, and writes the data and ECC to the second data storage device non-volatile memory.
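The power-interruption behavior described above — copy each unit of data from volatile memory to non-volatile memory, verify by read-back comparison, and re-write on mismatch — can be sketched as follows. This is an illustrative model only, not the device's actual firmware; the `FlakyNVRAM` class, the page-level granularity, and the retry limit are assumptions made for demonstration.

```python
class FlakyNVRAM:
    """Toy non-volatile memory model (hypothetical): the first write to
    each page is corrupted, so the verify-and-rewrite loop below has
    real work to do."""

    def __init__(self):
        self.pages = {}
        self._written_once = set()

    def write(self, page, data):
        if page not in self._written_once:
            self._written_once.add(page)
            self.pages[page] = data[:-1] + b"\x00"  # corrupt last byte once
        else:
            self.pages[page] = data

    def read(self, page):
        return self.pages.get(page)


def flush_on_power_fail(sdram, nvram, max_retries=3):
    """On power interruption, move every page from volatile `sdram`
    (a dict mapping page number -> bytes, standing in for SDRAM) into
    `nvram`, verifying each write by read-back comparison and
    re-writing on mismatch. Returns the pages that never verified."""
    failed = set()
    for page, data in sdram.items():
        for _ in range(max_retries):
            nvram.write(page, data)       # write page to NVRAM
            if nvram.read(page) == data:  # read back and compare
                break
        else:
            failed.add(page)              # exhausted retries
    return failed
```

In this sketch the corrupted first write is caught by the comparison and repaired by the second write, mirroring the re-write step of the embodiment.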
[0011] In a further embodiment, the first data storage device and second data storage device are operably interconnected to a storage server. The storage server is operable to cause data to be provided to each of the first and second data storage devices. The storage server may comprise an operating system, a CPU, and a disk I/O controller. The storage server, in an embodiment, (a) receives block data to be written to the first data storage device, the block data comprising unique block addresses within the first data storage device and data to be stored at the unique block addresses, (b) stores the block data in the second data storage device, (c) manipulates the block data, based on the unique block addresses, to enhance the efficiency of the first data storage device when the first data storage device stores the block data to the first data storage device memory, and (d) issues one or more write commands to the first data storage device to write the block data to the first data storage device memory. Manipulating the block data may include reordering the block data based on the unique block addresses such that seek time within the first data storage device is reduced.
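The reordering in that last step can be illustrated with a short sketch: pending block writes buffered by the storage server are sorted by their unique block addresses before being issued, so the drive sweeps across the media in one pass instead of seeking back and forth. The function names and the absolute-distance seek model below are hypothetical, for illustration only.

```python
def reorder_writes(pending):
    """Return the buffered (block_address, data) pairs sorted by
    ascending block address, so the drive can service them in a
    single sweep across the media."""
    return sorted(pending, key=lambda write: write[0])


def total_seek(writes, head_position=0):
    """Crude seek-cost proxy (an assumption for illustration): the
    sum of absolute jumps between successive block addresses."""
    distance, position = 0, head_position
    for address, _ in writes:
        distance += abs(address - position)
        position = address
    return distance
```

For a buffer such as `[(900, ...), (10, ...), (500, ...)]`, the sorted order `10, 500, 900` yields a strictly smaller total address distance than the arrival order, which is the efficiency gain the embodiment attributes to address-based reordering.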
[0012] Another embodiment of the invention provides a method for storing data in a data storage system. The method comprises: (a) providing a first data storage device comprising a first memory for holding data; (b) providing a second data storage device comprising a second volatile memory and a second non-volatile memory; (c) storing data to be stored at the first data storage device at the second data storage device in the second volatile memory; and (d) moving the data from the second volatile memory to the second non-volatile memory in the event of a power interruption. The first data storage device may comprise at least one hard disk drive having a volatile write-back cache and a storage media capable of storing the data. The first data storage device, upon receiving data to be stored on the storage media, stores the data in the volatile write-back cache and generates an indication that the data has been stored at the first data storage device before storing the data on the media.
[0013] In one embodiment, the second data storage device further comprises a secondary power source. The secondary power source may comprise a capacitor, a battery, or other suitable power source. In this embodiment, the moving step comprises: (a) switching the second memory device to the secondary power source; (b) reading the data from the second data storage device volatile memory; and (c) writing the data to the second data storage device non-volatile memory. In another embodiment, the moving step further comprises: (d) switching the second memory device off following the writing step. The moving step comprises, in another embodiment: (a) detecting a power interruption; (b) reading the data from the second data storage device volatile memory; (c) computing an ECC for the data; and (d) writing the data and ECC to the second data storage device non-volatile memory.
[0014] In another embodiment, the moving step comprises: (a) detecting a power interruption; (b) reading the data from the second data storage device volatile memory; (c) writing the data to the second data storage device non-volatile memory; and (d) verifying that the data stored in the second data storage device non-volatile memory is correct. The verifying step comprises, in an embodiment: (i) comparing the data from the second data storage device non-volatile memory with the data from the second data storage device volatile memory; and (ii) re-writing the data to the second data storage device non-volatile memory when the comparing step indicates that the data is not the same.
`
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 is a block diagram illustration of a network having applications and network attached storage;
[0016] FIG. 2 is a block diagram illustration of a data storage system of an embodiment of the present invention;
[0017] FIG. 3 is a block diagram illustration of a data storage system of another embodiment of the present invention;
[0018] FIG. 4 is a block diagram illustration of a backup device of an embodiment of the present invention;
[0019] FIG. 5 is a block diagram illustration of a PCI backup device of an embodiment of the present invention;
[0020] FIG. 6 is a flow chart diagram illustrating the operational steps performed by a storage controller of an embodiment of the present invention;
[0021] FIG. 7 is a flow chart diagram illustrating the operational steps performed by a backup device processor following the power on of the backup device of an embodiment of the present invention;
[0022] FIG. 8 is a flow chart diagram illustrating the operational steps performed by a backup device processor following a reset of the backup device of an embodiment of the present invention;
[0023] FIG. 9 is a flow chart diagram illustrating the operational steps performed by a backup device processor when receiving commands, for an embodiment of the present invention;
[0024] FIG. 10 is a flow chart diagram illustrating the operational steps performed by a backup device processor when transferring data from host memory to SDRAM, for an embodiment of the present invention;
[0025] FIG. 11 is a flow chart diagram illustrating the operational steps performed by a backup device processor when transferring data from SDRAM to host memory, for an embodiment of the present invention;
[0026] FIG. 12 is a flow chart diagram illustrating the operational steps performed by a backup device processor when transferring data from SDRAM to NVRAM, for an embodiment of the present invention;
[0027] FIG. 13 is a flow chart diagram illustrating the operational steps performed by a backup device processor when transferring data from NVRAM to SDRAM, for an embodiment of the present invention; and
[0028] FIG. 14 is a flow chart diagram illustrating the operational steps performed by a backup device processor when a power failure is detected, for an embodiment of the present invention.
`DETAILED DESCRIPTION
[0029] Referring to FIG. 1, a block diagram illustration of a computing network and associated devices of an embodiment of the present invention is shown. In this embodiment, a network 100 has various connections to applications 104 and network attached storage (NAS) devices 108. The network 100, as will be understood, may be any computing network utilized for communications between attached network devices, and may include, for example, a distributed network, a local area network, and a wide area network, to name but a few. The applications 104 may be any of a number of computing applications connected to the network, and may include, for example, a database application, an email server application, an enterprise resource planning application, a personal computer, and a network server application, to name but a few. The NAS devices 108 are utilized in this embodiment for storage of data provided by the applications 104. Such network attached storage is utilized to store data from one application, and make the data available to the same application, or another application. Furthermore, such NAS devices 108 may provide a relatively large amount of data storage, and also provide data storage that may be backed up, mirrored, or otherwise secured such that loss of data is unlikely. Utilizing such NAS devices 108 can reduce the requirements of individual applications to provide such measures to prevent data loss, and by storing data at one or more NAS devices 108, data may be securely retained with a reduced cost for the applications 104. Furthermore, such NAS devices 108 may provide increased performance relative to, for example, local storage of data. This improved performance may result from the relatively high speed at which the NAS devices 108 may store data.
[0030] A key performance measurement of NAS devices 108 is the rate at which data may be written to the devices and the rate at which data may be read from the devices. In one embodiment, the NAS devices 108 of the present invention receive data from applications 104, and acknowledge back to the application 104 that the data is securely stored at the NAS device 108, before the data is actually stored on storage media located within the NAS 108. In this embodiment, the performance of the NAS is increased, because there is no requirement for the NAS device to wait for the data to be stored at storage media. For example, one or more hard disk drives may be utilized in the NAS 108, with the NAS reporting to the application 104 that a data write is complete before the data is stored on storage media within the hard disk drive(s). In order to provide security to the data before it is stored on storage media, the NAS devices 108 of this embodiment store the data in a non-volatile memory, such that if a power failure, or other failure, occurs prior to writing the data to the storage media, the data may still be recovered.
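The early-acknowledgment flow just described can be modeled in a few lines: incoming data is copied to the backup device's memory, the application is acknowledged immediately, and the slower media write happens later, after which the backup copy is removed. The class and method names below are illustrative assumptions, not the NAS device's actual interface.

```python
from collections import deque


class EarlyAckNAS:
    """Hypothetical model of the flow above: writes are copied to the
    backup device's memory and acknowledged at once; the slower media
    write happens later, after which the backup copy is dropped."""

    def __init__(self):
        self.backup = {}        # backup device memory (power-safe copy)
        self.media = {}         # disk media, written lazily
        self.pending = deque()  # block addresses awaiting a media write

    def write(self, address, data):
        """Store `data` at the backup device and acknowledge at once,
        before anything reaches the disk media."""
        self.backup[address] = data
        self.pending.append(address)
        return "acknowledged"

    def drain(self):
        """Later (e.g. when the disk is idle), flush pending writes to
        the media, then remove the now-redundant backup copies."""
        while self.pending:
            address = self.pending.popleft()
            self.media[address] = self.backup[address]
            del self.backup[address]
```

The point of the model is the ordering: the acknowledgment returns while `media` is still empty, yet a power loss before `drain()` would find every unwritten block still held by the backup device.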
`
[0031] Referring now to FIG. 2, a block diagram illustration of a NAS device 108 of an embodiment of the present invention is now described. In this embodiment, the NAS 108 includes a network interface 112, which provides an appropriate physical connection to the network and operates as an interface between the network 100 and the NAS device 108. The network interface 112 may provide any available physical connection to the network 100, including optical fiber, coaxial cable, and twisted pair, to name but a few. The network interface 112 may also operate to send and receive data over the network 100 using any of a number of transmission protocols, such as, for example, iSCSI and Fibre Channel. The NAS 108 includes an operating system 120, with an associated memory 124. The operating system 120 controls operations for the NAS device 108, including the communications over the network interface 112. The NAS device 108 includes a data communication bus 128 that, in one embodiment, is a PCI bus. The NAS device 108 also includes a storage controller 132 that is coupled to the bus 128. The storage controller 132, in this embodiment, controls the operations for the storage and retrieval of data stored at the data storage components of the NAS device 108. The NAS device 108 includes one or more storage devices 140, which are utilized to store data. In one embodiment, the storage devices 140 include a number of hard disk drives. It will be understood that the storage device(s) 140 could be any type of data storage device, including storage devices that store data on storage media, such as magnetic media, tape media, and optical media. The storage devices may also include solid-state storage devices that store data in electronic components within the storage device. In one embodiment, as mentioned, the storage device(s) 140 comprise a number of hard disk drives. In another embodiment, the storage device(s) 140 comprise a number of hard disk drives configured in a RAID configuration. The NAS device 108 also includes one or more backup devices 144 connected to the bus 128. In the embodiment of FIG. 2, the NAS device 108 includes one backup device 144, having a non-volatile memory, in which the storage controller 132 causes a copy of data to be stored at storage devices 140 to be provided to the backup device 144 in order to help prevent data loss in the event of a power interruption or other failure within the NAS device 108. In other embodiments, more than one backup device 144 may be utilized in the NAS device 108.
[0032] Referring now to FIG. 3, a storage controller 132, storage device 140, and backup device 144 of an embodiment are described in more detail. In this embodiment, the storage device 140 is a hard disk drive having an enabled write-back cache 148. It will be understood that the storage device 140 may comprise a number of hard disk drives, and/or one or more other storage devices, and that the embodiment of FIG. 3 is described with a single hard disk drive for the purposes of discussion and illustration only. The principles and concepts as described with respect to FIG. 3 fully apply to other systems having more or other types of storage devices. As mentioned, the storage device 140 includes an enabled write-back cache 148. The write-back cache 148 is utilized in this embodiment to store data written to the storage device 140 before the data is actually written to the media within the storage device 140. When the data is stored in the write-back cache 148, the storage device 140 acknowledges that the data has been stored. By utilizing the write-back cache 148, the storage device 140 in most cases
`
has significantly improved performance relative to the performance of a storage device that does not have an enabled write-back cache.
[0033] As is understood, storage devices may utilize a write-back cache to enhance performance by reducing the time related to the latency within the storage device. For example, in a hard disk drive, prior to writing data to the storage media, the drive must first position the read/write head at the physical location on the media where the data is to be stored, referred to as a seek. Seek operations move an actuator arm having the read/write head located thereon to a target data track on the media. Once the read/write head is positioned at the proper track, it then waits for the particular portion of the media where the data is to be stored to rotate into position where data may then be read or written. The time required to position the actuator arm and wait for the media to move into the location where data may be read or written depends upon a number of factors, and is largely dependent upon the location of the actuator arm prior to moving it to the target track. In order to reduce seek times for write operations, a disk drive may evaluate data stored in the write-back cache 148, and select data to be written which requires a reduced seek time compared to other data in the write-back cache, taking into consideration the current location of the read/write head on the storage media. The data
`within the write-back cache may thus be written to the media
`in a