US007506187B2

(12) United States Patent
Maddock

(10) Patent No.: US 7,506,187 B2
(45) Date of Patent: Mar. 17, 2009

(54) METHODS, APPARATUS AND CONTROLLERS FOR A RAID STORAGE SYSTEM

(75) Inventor: Robert Frank Maddock, Dorset (GB)

(73) Assignee: International Business Machines Corporation, Armonk, NY (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 497 days.

(21) Appl. No.: 10/930,356

(22) Filed: Aug. 30, 2004

(65) Prior Publication Data
    US 2005/0050381 A1    Mar. 3, 2005

(30) Foreign Application Priority Data
    Sep. 2, 2003 (GB) ................................. 0320494.8

(51) Int. Cl.
    G06F 1/00 (2006.01)
(52) U.S. Cl. ............................ 713/310; 714/7; 711/114
(58) Field of Classification Search ............. 713/310; 711/114; 714/6, 7
    See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

    5,459,857 A      10/1995  Ludlam et al. ......... 395/182.04
    5,659,704 A  *    8/1997  Burkes et al. ......... 711/114
    5,666,512 A  *    9/1997  Nelson et al. ......... 711/114
    5,742,792 A       4/1998  Yanai et al. .......... 395/489
    5,848,230 A  *   12/1998  Walker ................ 714/7
    6,378,038 B1 *    4/2002  Richardson et al. ..... 711/114
    6,598,174 B1 *    7/2003  Parks et al. .......... 714/6
    6,807,605 B2 *   10/2004  Umberger et al. ....... 711/114
 2002/0124139 A1 *    9/2002  Baek et al. ........... 711/114
 2003/0105829 A1 *    6/2003  Hayward ............... 709/214
 2003/0196023 A1 *   10/2003  Dickson ............... 711/1
 2003/0200388 A1 *   10/2003  Hetrick ............... 711/114

* cited by examiner

Primary Examiner - Thomas Lee
Assistant Examiner - Jaweed A Abbaszadeh
(74) Attorney, Agent, or Firm - Harrington & Smith, PC

(57) ABSTRACT

Provided are RAID storage systems, methods, and controllers for RAID storage systems. A first method includes storing a first copy of the data in a first RAID array corresponding to a first RAID level providing redundancy (such as RAID-5), and storing a second copy of the data in a second RAID array corresponding to a second RAID level (such as RAID-0) which differs from the first RAID level. Data is read from the two RAID arrays in parallel for improved read performance. A controller is responsive to a disk failure which results in data becoming inaccessible from one of the arrays to retrieve the data from the other one of the arrays. The redundancy within the first RAID array also enables the controller to restore data following a failure of one disk drive by reference to the remaining disk drives of the first array.

10 Claims, 7 Drawing Sheets

[Front-page representative drawing: schematic of the RAID storage system; only reference numeral 30 is legible in the scan.]

Petitioners Microsoft Corporation and HP Inc. - Ex. 1042, p. 1
FIGURE 1 (Sheet 1 of 7)

[Drawing: schematic of a RAID controller connected via an I/O interface 40 to two arrays of disk drives; most labels are not recoverable from the scan.]
FIGURE 2 (Sheet 2 of 7)

[Drawing: a RAID CONTROLLER (40) distributing file blocks across four disk drives. Legible per-drive allocations:
FILE 1 (BLOCKS 1,5); FILE 2 (BLOCK 3); FILE 4 (BLOCK 2)
FILE 1 (BLOCKS 2,6); FILE 2 (BLOCK 4); FILE 4 (BLOCK 3)
FILE 1 (BLOCK 3); FILE 2 (BLOCK 1); FILE 3
FILE 1 (BLOCK 4); FILE 2 (BLOCK 2); FILE 4 (BLOCK 1)
Reference numerals 30, 60 and 70 are also legible.]
FIGURE 3 (Sheet 3 of 7)

[Drawing: data striping of four files A, B, C and D across the drives of a four-drive array; only the file labels A-D and the numerals 5 and 7 are recoverable.]
FIGURE 4 (Sheet 4 of 7)

[Drawing: block striping with distributed parity across the four drives 90, 100, 110, 120 of a RAID-5 array; legible labels include strips A, B, C and PARITY.]
FIGURE 5 (Sheet 5 of 7)

[Flow diagram of a write operation. Recoverable steps:
220: RAID CONTROLLER TRANSLATES ADDRESS INFORMATION FOR FIRST (RAID-5) ARRAY, AND TRANSLATES ADDRESS INFORMATION FOR SECOND (RAID-0) ARRAY
230: FIRST TRANSLATION IS USED BY DISK CONTROLLER TO MOVE WRITE HEAD TO TRACK OF FIRST ARRAY
240: OLD DATA + PARITY IS READ, NEW PARITY COMPUTED, AND NEW DATA + NEW PARITY ARE WRITTEN
250: PERFORMANCE OF 2 WRITES IS VERIFIED
260: SECOND TRANSLATION IS USED BY DISK CONTROLLER TO MOVE WRITE HEAD TO TRACK OF SECOND ARRAY
270: SINGLE DISK WRITE IS PERFORMED IN RAID-0 ARRAY
280: PERFORMANCE OF WRITE OPERATION IS VERIFIED]
FIGURE 6A (Sheet 6 of 7)

[Flow diagram of a read operation (first part). Recoverable steps:
RAID CONTROLLER RECEIVES READ REQUEST
RAID CONTROLLER CHECKS CACHE
330: RAID CONTROLLER RESPONDS TO REQUEST FROM CACHE
RAID CONTROLLER TRANSLATES ADDRESS INFORMATION TO A BLOCK ADDRESS AND ISSUES REQUEST
340: DISK CONTROLLER ACTIVATES MOVEMENT OF READ HEAD TO TRACK OF FIRST ARRAY
DISK CONTROLLER ACTIVATES HEAD TO READ SECTOR
360: IF READ IS SUCCESSFUL, CONTROL CIRCUIT PASSES DATA ACROSS I/O INTERFACE TO SYSTEM MEMORY
GO TO FIG. 6B]
FIGURE 6B (Sheet 7 of 7)

[Flow diagram of a read operation (continued). Recoverable steps (reference numerals 80, 390, 400 partially legible):
IF READ UNSUCCESSFUL, RAID CONTROLLER PERFORMS TRANSLATION OF ADDRESS TO BLOCK ADDRESS WITHIN SECOND ARRAY
DISK CONTROLLER TRANSLATES TO PHYSICAL ADDRESS AND CONTROLS MOVEMENT OF READ HEAD
WHEN IN POSITION, DISK CONTROLLER ACTIVATES READ HEAD TO READ SECTOR
IF READ IS SUCCESSFUL, CONTROL CIRCUIT PASSES DATA ACROSS I/O INTERFACE TO SYSTEM MEMORY]
METHODS, APPARATUS AND CONTROLLERS FOR A RAID STORAGE SYSTEM

FIELD OF INVENTION

The present invention relates to data storage for a computer system or network of computers, and in particular to methods, apparatus and controllers for a RAID storage system.
BACKGROUND OF THE INVENTION

In data storage systems, an array of independent storage devices can be configured to operate as a single virtual storage device using a technology known as RAID (Redundant Array of Independent Disks - first referred to as a 'Redundant Array of Inexpensive Disks' by researchers at the University of California at Berkeley). In this context, 'disk' is often used as a short-hand for 'disk drive'.

A RAID storage system includes an array of independent storage devices and at least one RAID controller. A RAID controller provides a virtualised view of the array of independent storage devices, such that a computer system configured to operate with the RAID storage system can perform input and output (I/O) operations as if the array of independent storage devices of the RAID storage system were a single storage device. The array of storage devices thus appears as a single virtual storage device with a sequential list of storage elements. The storage elements are commonly known as blocks of storage, and the data stored within the blocks are known as data blocks. I/O operations (such as read and write) are qualified with reference to one or more blocks of storage in the virtual storage device. When an I/O operation is performed on the virtual storage device, the RAID controller maps the I/O operation onto the array of independent storage devices. In order to virtualise the array of storage devices and map I/O operations, the RAID controller may employ standard RAID techniques that are well known in the art. Some of these techniques are considered below.

A RAID controller spreads data blocks across the array of independent storage devices. One way to achieve this is a technique known as striping. Striping involves spreading data blocks across storage devices in a round-robin fashion. When storing data blocks in a RAID storage system, a number of data blocks known as a strip is stored in each storage device. The size of a strip may be determined by a particular RAID implementation or may be configurable. A row of strips, comprising a first strip stored on a first storage device and subsequent strips stored on subsequent storage devices, is known as a stripe. The size of a stripe is the total size of all strips comprising the stripe.
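By way of illustration only, the round-robin striping arithmetic described above can be sketched as follows (the function and parameter names are invented for this sketch and do not appear in the specification):

```python
def locate_block(virtual_block: int, num_devices: int, strip_size_blocks: int):
    """Map a virtual block address to (device index, block offset on device)
    under round-robin striping.

    A strip holds strip_size_blocks consecutive blocks; one strip per device,
    taken across all devices, forms a stripe.
    """
    stripe_size_blocks = num_devices * strip_size_blocks
    # Which stripe the block falls in, and its position within that stripe.
    stripe_index, offset_in_stripe = divmod(virtual_block, stripe_size_blocks)
    # Which device's strip within the stripe, and the offset inside that strip.
    device, offset_in_strip = divmod(offset_in_stripe, strip_size_blocks)
    return device, stripe_index * strip_size_blocks + offset_in_strip
```

With four devices and one-block strips, virtual blocks 0, 1, 2, 3 land on devices 0-3 and block 4 wraps back to device 0, matching the round-robin allocation described above.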
The use of multiple independent storage devices to store data blocks in this way provides for high performance I/O operations when compared with a single storage device, because the multiple storage devices can act in parallel during I/O operations. Performance improvements are one of the major benefits of RAID technology. Hard disk drive performance is important in computer systems, because hard disk drives are some of the slowest internal components of a typical computer.

Physical storage devices such as hard disk drives are known for poor reliability, and yet hard disk drive reliability is critical because of the serious consequences of an irretrievable loss of data (or even a temporary inaccessibility of data). An important purpose of typical RAID storage systems is to provide reliable data storage.
One technique to provide reliability involves the storage of check information along with data in an array of independent storage devices. Check information is redundant information that allows regeneration of data which has become unreadable due to a single point of failure, such as the failure of a single storage device in an array of such devices. Unreadable data is regenerated from a combination of readable data and redundant check information. Check information is recorded as 'parity' data which may occupy a single strip in a stripe, and is calculated by applying the EXCLUSIVE OR (XOR) logical operator to all data strips in the stripe. For example, a stripe comprising data strips A, B and C would have an associated parity strip calculated as A XOR B XOR C. In the event of a single point of failure in the storage system, the parity strip is used to regenerate an inaccessible data strip. If a stripe comprising data strips A, B, C and PARITY is stored across four independent storage devices W, X, Y and Z respectively, and storage device X fails, strip B stored on device X would be inaccessible. Strip B can be computed from the remaining data strips and the PARITY strip through an XOR computation. This restorative computation is A XOR C XOR PARITY = B. This exploits the reversible nature of the XOR operation to yield any single lost strip, A, B or C. Of course, the same XOR computation can be applied if the lost data is the PARITY information.
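The parity calculation and regeneration described above can be sketched as follows (a minimal illustration in which strips are modelled as equal-length byte strings; the function names are invented):

```python
def xor_parity(strips):
    """Compute the parity strip as the byte-wise XOR of all given strips."""
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            parity[i] ^= byte
    return bytes(parity)

def recover_strip(surviving_strips, parity):
    """Regenerate a single lost strip by XOR-ing the parity strip with every
    surviving data strip; XOR is its own inverse, so the result is the lost
    strip regardless of which one was lost."""
    return xor_parity(list(surviving_strips) + [parity])
```

For a stripe with strips A, B and C, the parity is A XOR B XOR C, and if B is lost it is recovered as A XOR C XOR PARITY, exactly as in the worked example above.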
In addition to striping (for the performance benefits of parallel operation) and parity (for redundancy), another redundancy technique used in some RAID solutions is mirroring. In a RAID system using mirroring, all data in the system is written simultaneously to two hard disk drives. This protects against failure of either of the disks containing the duplicated data and enables relatively fast recovery from a disk failure (since the data is ready for use on one disk even if the other failed). These advantages have to be balanced against the disadvantage of increased cost (since half the disk space is used to store duplicate data). Duplexing is an extension of mirroring that duplicates the RAID controller as well as the disk drives, thereby protecting against a failure of a controller as well as against disk drive failure.

Different RAID implementations use different combinations of the above techniques. A number of standardized RAID methods are identified as single RAID "levels" 0 through 7, and "nested" RAID levels have also been defined. For example:

RAID 1 uses mirroring (or duplexing) for fault tolerance; whereas

RAID 0 uses block-level striping without parity - i.e. no redundancy and so without the fault tolerance of other RAID levels, and therefore good performance relative to its cost; RAID 0 is typically used for non-critical data (or data that changes infrequently and is backed up regularly) and where high speed and low cost are more important than reliability;

RAID 3 and RAID 7 use byte-level striping with parity; and

RAID 4, RAID 5 and RAID 6 use block-level striping with parity. RAID 5 uses a distributed parity algorithm, writing data and parity blocks across all the drives in an array (which improves write performance slightly and enables improved parallelism compared with the dedicated parity drive of RAID 4). Fault tolerance is maintained in RAID 5 by ensuring that the parity information for any given block of data is stored on a drive separate from the drive used to store the data itself. RAID 5 combines good performance, good fault tolerance and high capacity and storage efficiency, and has been considered the best compromise of
any single RAID level for applications such as transaction processing and other applications which are not write-intensive.

In addition to the single RAID levels described above, nested RAID levels are also used to further improve performance. For example, features of high performance RAID 0 may be combined in a nested configuration with features of redundant RAID levels such as 1, 3 or 5 to also provide fault tolerance.

RAID 01 is a mirrored configuration of two striped sets, and RAID 10 is a stripe across a number of mirrored sets. Both RAID 01 and RAID 10 can yield large arrays with (in most uses) high performance and good fault tolerance.

A RAID 15 array can be formed by creating a striped set with parity using multiple mirrored pairs as components. Similarly, RAID 51 is created by mirroring entire RAID 5 arrays - each member of either RAID 5 array is stored as a mirrored (RAID 1) pair of disk drives. The two copies of the data can be physically located in different places for additional protection. Excellent fault tolerance and availability are achievable by combining the redundancy methods of parity and mirroring in this way. For example, an eight drive RAID 15 array can tolerate failure of any three drives simultaneously. After a single disk failure, the data can still be read from a single disk drive, whereas RAID 5 would require a more complex rebuild.

However, the potential benefits of nested RAID levels must generally be paid for in terms of cost, complexity and low storage efficiency. Complexity has consequences for management and maintenance. A minimum of 6 identical hard disk drives (and possibly specialized hardware or software or multiple systems) are required for a RAID 15 or RAID 51 implementation. The performance of RAID 51, measured in throughput of disk operations per second, is low given the number of disks employed. This is because each write operation requires a read of the old data, a read of the old parity, and then four write operations.

Therefore, although RAID offers significant advantages over single hard disk drives, such as the potential for increased capacity, performance and reliability, there are also significant costs and other trade-offs involved in implementing a RAID system. Some of the benefits of different RAID levels have been obtained by designing arrays that combine RAID techniques in nested configurations, but the success of such implementations has been balanced by increased costs and complexity. For this reason, many single level RAID solutions are still in productive use.
SUMMARY

A first embodiment of the invention provides a method of operating a RAID storage system comprising the steps of: storing a first copy of the data in a first array of disk drives in accordance with a first RAID level providing redundancy; and storing a second copy of the data in a second array of disk drives in accordance with a second RAID level which differs from the first RAID level. If a disk failure in one array results in data becoming inaccessible from that array, the data can be retrieved from the other one of the arrays. The redundancy within the first RAID array enables recovery of the first array from a failure of one of its disks by reference to the remaining disk drives of the first array.
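The failover retrieval described above can be sketched as follows (a minimal illustration; the array objects and their read interface are invented for this sketch and are not taken from the specification):

```python
class DualArrayReader:
    """Read a block from the primary (e.g. RAID-5) array; if a disk failure
    has made it inaccessible there, retrieve the duplicate copy from the
    secondary (e.g. RAID-0) array.

    Assumes each array exposes read(block) returning bytes, raising IOError
    when the block cannot be read.
    """

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def read(self, block):
        try:
            return self.primary.read(block)
        except IOError:
            # The block is inaccessible in the first array; the second
            # array holds a full second copy of the data.
            return self.secondary.read(block)
```

The same pattern works symmetrically if the secondary array fails instead, since each array holds a complete copy of the data.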
The second array of disk drives preferably implements striping of data across the disk drives of the array without redundancy, such as in RAID-0. The first array of disk drives implements redundancy, and preferably implements block-level striping with parity (such as in RAID-4, RAID-5 and RAID-6). The replication of data between the first and second arrays provides additional redundancy to that provided within the first array.

In one embodiment, data is stored in the first array by block-level striping of data across the disk drives of the array, with parity information for data blocks being distributed across disk drives of the array other than the disk drives storing the corresponding data blocks, such as in RAID-5. In this embodiment, the RAID level of the second array is RAID-0. In such an embodiment, double redundancy is achieved, and so reliability is better than if RAID-5 is implemented alone. In this embodiment, a write operation requires two reads and three writes, giving increased throughput compared with a RAID-51 implementation, even though one less disk drive is employed. After a disk failure in one array, data can still be read from the other array with one read operation.
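The write path of this embodiment (two reads and three writes per update: a read-modify-write of data and parity in the RAID-5 array, plus one write of the second copy to the RAID-0 array) might be sketched as follows. The array objects and every method name are assumptions made for this illustration only:

```python
def dual_write(raid5, raid0, block, new_data):
    """Sketch of a single-block update in the dual-array scheme.

    RAID-5 read-modify-write (2 reads + 2 writes), then one write of the
    second copy to the RAID-0 array (write 3).
    """
    old_data = raid5.read_data(block)       # read 1
    old_parity = raid5.read_parity(block)   # read 2
    # New parity = old parity XOR old data XOR new data: removing the old
    # data's contribution and adding the new data's contribution.
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
    raid5.write_data(block, new_data)       # write 1
    raid5.write_parity(block, new_parity)   # write 2
    raid0.write(block, new_data)            # write 3: the second copy
```

This is one fewer disk operation than the six (two reads, four writes) attributed to RAID 51 in the Background section.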
Although RAID-51 protects against any three failures, double redundancy is sufficient for many applications, including many business-critical data processing applications, since double redundancy combined with a prompt repair strategy reduces the likelihood of any data loss to a very low level (unless the disk drives are very unreliable).

According to a further embodiment of the invention, a RAID-5 array and a RAID-0 array are located in different places for added protection, and can be implemented together with recovery techniques such as hot swapping or hot spares for applications which require high performance at all times and applications where loss of data is unacceptable.

In another embodiment, the RAID-0 array is used in combination with a RAID-4 array (which implements block-level striping with dedicated parity).

The steps of storing a first and a second copy of the data may be performed in parallel, and read operations may be allocated between the arrays in accordance with workload sharing. In a first embodiment, the storing and retrieval are performed under the control of a single RAID controller which manages the two disk drive arrays in parallel despite the arrays having different RAID levels. The single RAID controller is able to manage the simultaneous occurrence of a disk failure in each of the arrays.

In another embodiment, a separate RAID controller is used for each of the first and second arrays. Two cooperating RAID controllers can be advantageous when the first and second arrays are located in different locations. A separate coordinator (which may be implemented in software) can be used to manage the cooperation between two RAID controllers, or cooperation may be coordinated by the two RAID controllers themselves.

A further embodiment provides a RAID storage system, comprising: a first array of disk drives corresponding to a first RAID level providing redundancy; a second array of disk drives corresponding to a second RAID level which differs from the first RAID level; and at least one controller for controlling storing of a first copy of data in the first array and storing of a second copy of the data in the second array. The controller controls retrieval of stored data by disk access operations performed on the first and second arrays, and is responsive to a disk failure resulting in data becoming inaccessible from one of said arrays to retrieve the data from the other one of said arrays.

The 'at least one controller' may comprise first and second cooperating RAID controllers, including a first RAID controller for managing storing and retrieval of data for the first array and a second RAID controller for managing storing and retrieval of data for the second array.
Embodiments of the invention provide RAID storage systems and methods of operating such systems which combine the benefits of different RAID levels, while improving performance relative to reliability, cost and complexity when compared with typical nested RAID configurations. The performance and reliability characteristics of a RAID storage system according to an embodiment of the invention are particularly beneficial for applications in which double redundancy is sufficient, and yet better performance is desired than is achievable with known RAID-51 or RAID-15 implementations.

In an alternative embodiment, the first RAID array implements a nested configuration of RAID levels and the second RAID array implements non-redundant RAID-0.

A further embodiment provides a RAID controller comprising: means for controlling storing of a first copy of data in a first RAID array using a redundant storage technique corresponding to a first RAID level, to provide redundancy within the first array; means for controlling storing of a second copy of the data in a second array of disk drives using a storage technique corresponding to a RAID level different from said first RAID level; and means for controlling retrieval of stored data by disk access operations performed on the first and second arrays and, in response to a disk failure resulting in data becoming inaccessible from a first one of said arrays, for controlling retrieval of the data from the other one of said arrays.

The RAID controller may control storing of a second copy of the data in a second array of disk drives using a non-redundant storage technique. The RAID controller may control retrieval of stored data by disk access operations performed in parallel on the first and second arrays. The RAID controller may be implemented as a program product, or using dedicated hardware.
BRIEF DESCRIPTION OF DRAWINGS

One or more embodiments of the invention are described below in detail, by way of example, with reference to the accompanying drawings in which:

FIG. 1 is a schematic representation of a RAID controller connected to a pair of arrays, according to an embodiment of the invention;

FIG. 2 is a schematic representation of an example of striping of data, such as in a RAID-0 array;

FIG. 3 is a representation of an example of non-redundant data block striping, such as in a RAID-0 array;

FIG. 4 is a representation of an example of data block striping with distributed parity, such as in a RAID-5 array;

FIG. 5 is a flow diagram showing a sequence of steps of a write operation according to an embodiment of the invention; and

FIG. 6 is a flow diagram showing a sequence of steps of a read operation according to an embodiment of the invention.

DETAILED DESCRIPTION OF EMBODIMENTS
Disk Drive Overview

The components of a typical computer's hard disk drive are well understood by persons skilled in the art and will not be described in detail herein. In brief, a hard disk drive stores information in the form of magnetic patterns in a magnetic recording material which coats the surface of a number of discs ("platters"). The platters are stacked on a central spindle and are rotated at high speed by a spindle motor. Electromagnetic read/write heads are mounted on sliders and used both to record information onto and read data from the platters. The sliders are mounted on arms which are positioned over the surface of the platter by an actuator. An electronic circuit board and control program (located within the disk drive and hereafter collectively referred to as the disk controller) cooperate to control the activity of the other components of the disk drive and communicate with the rest of the computer via an interface connector (such as a SCSI connector).

To allow for easier and faster access to information within a disk drive, each platter has its information recorded in concentric circles (tracks) and each track is divided into a number of sectors (each of which holds 512 bytes of the total tens of billions of bits of information which can be held on each surface of each platter within the disk drive).

The extreme miniaturization of the components of a typical disk drive, with ever-increasing requirements for increased performance and increased data density for improved capacity, has resulted in disk drives being more prone to errors than many other components of a computer. Because hard disk drives are where the computer stores data and play an important role in overall computer system performance, both the reliability and performance of the hard disk drive are critical in typical computer systems.

The following is a simplified example of the operations involved each time a piece of information is read from a conventional disk drive (ignoring factors such as error correction):

1. An application program, operating system, system BIOS, and any special disk driver software work together to process a data request to determine which part of the disk drive to read;
2. The location information undergoes one or more translation steps and then a read request (expressed in terms of the disk drive geometry - i.e. cylinder, head and sector to be read) is sent to the disk drive over the disk drive connection interface;
3. The hard disk drive's disk controller checks whether the information is already in the hard disk drive's cache - if within the cache, the disk controller supplies the information immediately without looking on the surface of the disk;
4. If the information is not in the cache and the disks of the drive are not already spinning, the disk drive's disk controller activates the spindle motor to rotate the disks of the drive to operating speed;
5. The disk controller interprets the address received for the read, and performs any necessary additional translation steps that take into account the particular characteristics of the drive; the hard disk drive's disk controller then looks at the final number of the cylinder requested to identify which track to look at on the surface of the disk;
6. The disk controller instructs the actuator to move the read/write heads to the appropriate track;
7. When the heads are in the correct position, the disk controller activates the specific head in the required read location and the heads begin reading the track, looking for the sector that was requested while the disk is rotated underneath the head;
8. When the correct sector is found, the head reads the contents of the sector;
9. The disk controller coordinates the flow of information from the hard disk drive into a temporary storage area and then sends the information over the disk drive connection interface (usually to the computer system's memory) to satisfy the request for data.
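The sequence above can be condensed into a short sketch (purely illustrative; the drive object and every method name below are invented, and steps 1-2 of host-side translation are assumed to have already produced the geometry address):

```python
def disk_read(drive, cylinder, head, sector):
    """Condensed sketch of steps 3-9 above: serve the request from the
    drive's cache when possible, otherwise spin up, translate the geometry
    address, seek, read the sector, and buffer it for transfer."""
    key = (cylinder, head, sector)
    if key in drive.cache:                    # step 3: cache hit, no disk access
        return drive.cache[key]
    if not drive.spinning:                    # step 4: spin up to operating speed
        drive.spin_up()
    track = drive.translate(cylinder, head)   # step 5: drive-specific translation
    drive.seek(track)                         # step 6: actuator moves the heads
    data = drive.read_sector(track, sector)   # steps 7-8: locate and read sector
    drive.cache[key] = data                   # step 9: buffer, then transfer out
    return data
```

A second read of the same sector then returns immediately from the cache, as in step 3.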
RAID Storage System

RAID storage systems use multiple hard disk drives in an array for faster performance and/or improved reliability through redundancy (without requiring very expensive specialized, high capacity disk drives). The RAID storage system 10 described below, and shown schematically in FIG. 1, uses a combination of a RAID-5 array 20, a RAID-0 array 30, and a RAID controller 40 which manages the two arrays cooperatively to obtain desired performance and reliability characteristics. A read instruction from the RAID controller 40 initiates the above-described sequence of read operations controlled by a disk controller within a disk drive. Similarly, the RAID controller 40 initiates write operations by sending a write instruction to a disk drive. The RAID controller 40 is shown bidirectionally coupled to a block 5, such as a PC or a server.

RAID-0 and RAID-5 Arrays

The RAID arrays 20, 30 used in the RAID storage system 10 can be conventional RAID-0 and RAID-5 arrays.

The RAID-0 array 30 uses block-level striping of data across the disk drives within the array without storing parity information. FIG. 2 shows, in schematic form, an example RAID striping configuration. A RAID controller 40 (implemented in hardware, software, or a combination) splits files into blocks and distributes the blocks across several hard disk drives 50, 60, 70, 80 in a round-robin manner. The block size determines how the file is divided up. In the example shown, the first block of file 1 is sent to a first disk drive 50, then the second block to a second disk drive 60, etc. After one block of file 1 has been allocated to each of the four disk drives, the fifth block is stored on the first disk drive 50, the sixth block on the second drive 60, and so on. This continues until the whole file is stored. Some files may be smaller than the block size, such that they are stored on one disk drive.

FIG. 3 shows an example of data striping of files of different sizes between drives on a four-disk-drive, 16 kilobyte (kB) stripe size RAID-0 array. A first file labelled A is 8 kB in size. A second file labelled B is 16 kB. A third file labelled C is 96 kB. A fourth file D is 504 kB in size.

The RAID-5 array uses block-level striping with parity information distributed across the disk drives in the array, striping both data and parity information across three or more drives. The parity information for a given block of data is placed on a drive separate from the drive(s) used to store the data itself. FIG. 4 illustrates a number of files distributed between the disk drives 90, 100, 110, 120 of a four-disk-drive RAID-5 array (using, for example, a 16 kB stripe size). A first file labelled A is 8 kB in size; a second file labelled B is 16 kB in size; a third file is 96 kB in size and a fourth file is 504 kB in size. The RAID-5 array can tolerate loss of one of the drives in the array without loss of data (although performance is affected if a disk drive fails, due to reduced parallelism).

RAID Controller

The RAID controller of the RAID storage system is located outside the individual disk drives and communicates with the disk drives via their connection interface (the RAID controller is separate from the disk controller referred to above). The RAID controller can be a conventional RAID controller which is programmed (or, equivalently, control circuitry is modified) to manage the two arrays described above cooperatively and in parallel. In an alternative embodiment, a coordinator component is used to manage cooperation between two separate, conventional RAID controllers which each manage one of the two arrays.

Many RAID implementations use dedicated hardware including a dedicated processor to control the array. In a typical personal computer (PC), a specialized RAID controller is installed into the PC and the disk drives of the array are connected to the RAID controller. The RAID controller interfaces to the drives using SCSI or IDE/ATA and sends data to the rest of the PC over a system bus. Some motherboards (e.g. those intended for server systems) include an integrated RAID controller instead of a bus-based card. In higher-end systems, the RAID controller may be an external device which manages the drives in the array and then presents the logical drives o
