DHPN-1007 / Dell Inc. vs. Electronics and Telecommunications, IPR2013-00635

FOR THE PURPOSES OF INFORMATION ONLY

Codes used to identify States party to the PCT on the front pages of pamphlets publishing international applications under the PCT.

AL Albania                  AM Armenia                 AT Austria
AU Australia                AZ Azerbaijan              BA Bosnia and Herzegovina
BB Barbados                 BE Belgium                 BF Burkina Faso
BG Bulgaria                 BJ Benin                   BR Brazil
BY Belarus                  CA Canada                  CF Central African Republic
CG Congo                    CH Switzerland             CI Cote d'Ivoire
CM Cameroon                 CN China                   CU Cuba
CZ Czech Republic           DE Germany                 DK Denmark
EE Estonia                  ES Spain                   FI Finland
FR France                   GA Gabon                   GB United Kingdom
GE Georgia                  GH Ghana                   GN Guinea
GR Greece                   HU Hungary                 IE Ireland
IL Israel                   IS Iceland                 IT Italy
JP Japan                    KE Kenya                   KG Kyrgyzstan
KP Democratic People's Republic of Korea               KR Republic of Korea
KZ Kazakstan                LC Saint Lucia             LI Liechtenstein
LK Sri Lanka                LR Liberia                 LS Lesotho
LT Lithuania                LU Luxembourg              LV Latvia
MC Monaco                   MD Republic of Moldova     MG Madagascar
MK The former Yugoslav Republic of Macedonia           ML Mali
MN Mongolia                 MR Mauritania              MW Malawi
MX Mexico                   NE Niger                   NL Netherlands
NO Norway                   NZ New Zealand             PL Poland
PT Portugal                 RO Romania                 RU Russian Federation
SD Sudan                    SE Sweden                  SG Singapore
SI Slovenia                 SK Slovakia                SN Senegal
SZ Swaziland                TD Chad                    TG Togo
TJ Tajikistan               TM Turkmenistan            TR Turkey
TT Trinidad and Tobago      UA Ukraine                 UG Uganda
US United States of America                            UZ Uzbekistan
VN Viet Nam                 YU Yugoslavia              ZW Zimbabwe

WO 99/38067    PCT/US99/01282

AN APPARATUS AND METHOD FOR AUTOMATIC CONFIGURATION OF A RAID CONTROLLER

Field of the Invention

The present invention relates generally to peripheral controllers. More particularly, the invention relates to the automatic configuration of Redundant Array of Independent Disks (RAID) controllers.

RAID is a technology used to improve the I/O performance and reliability of mass storage devices. Data is stored across multiple disks in order to provide immediate access to the data despite one or more disk failures. RAID technology is typically associated with a taxonomy of techniques, where each technique is referred to by a RAID level. There are six basic RAID levels, each having its own benefits and disadvantages. RAID level 2 uses non-standard disks and as such is not commercially feasible.

RAID level 0 employs "striping", where the data is broken into a number of stripes that are stored across the disks in the array. This technique provides higher performance in accessing the data but provides no redundancy to protect against disk failures.

RAID level 1 employs "mirroring", where each unit of data is duplicated or "mirrored" onto another disk drive. Mirroring requires two or more disk drives. For read operations, this technique is advantageous since the read operations can be performed in parallel. A drawback of mirroring is that it achieves a storage efficiency of only 50%.

In RAID level 3, a data block is partitioned into stripes which are striped across a set of drives. A separate parity drive is used to store the parity bytes associated with the data block. The parity is used for data redundancy; when there is a single drive failure, the data can be regenerated from the data on the remaining drives and the parity drive. This type of data management is advantageous since it requires less space than mirroring and only a single parity drive. In addition, the data is accessed in parallel from each drive, which is beneficial for large file transfers. However, performance is poor for applications with a high volume of I/O transactions, since every request must access each drive in the array.

In RAID level 4, an entire data block is written to a disk drive. Parity for each data block is stored on a single parity drive. Since each disk is accessed independently, this technique is beneficial for applications with many I/O transactions. A drawback of this technique is that the single parity disk becomes a bottleneck, since it must be accessed for each write operation. This is especially burdensome when there are many small I/O operations scattered randomly across the disks in the array.

In RAID level 5, a data block is partitioned into stripes which are striped across the disk drives. Parity for the data blocks is distributed across the drives, thereby reducing the bottleneck inherent in level 4, which stores the parity on a single disk drive. This technique offers fast throughput for small data files but performs poorly for large data files.

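To make the stripe and parity placement of these levels concrete, the short Python sketch below maps a stripe index to physical drives for RAID 0, RAID 1, and RAID 5. It is illustrative only and is not part of the specification; the function names are invented, and the RAID 5 parity rotation shown is one common convention rather than a layout the patent prescribes.

```python
# Illustrative sketch only: a simplified block-placement model for the RAID
# levels discussed above. The rotation scheme for RAID 5 is a common textbook
# convention used here purely for explanation.

def raid0_placement(stripe_index, num_drives):
    """RAID 0: stripes rotate across all drives, no redundancy."""
    return {"data_drive": stripe_index % num_drives}

def raid1_placement(stripe_index, num_drives=2):
    """RAID 1: every unit of data is written to a primary drive and a mirror."""
    primary = stripe_index % (num_drives // 2)
    return {"data_drive": primary, "mirror_drive": primary + num_drives // 2}

def raid5_placement(stripe_index, num_drives):
    """RAID 5: parity rotates across the drives, so no single parity drive
    becomes a bottleneck (contrast with RAID 4)."""
    row = stripe_index // (num_drives - 1)           # one parity unit per row
    parity_drive = (num_drives - 1 - row) % num_drives
    data_slot = stripe_index % (num_drives - 1)      # position within the row
    data_drive = (parity_drive + 1 + data_slot) % num_drives
    return {"data_drive": data_drive, "parity_drive": parity_drive}

if __name__ == "__main__":
    for i in range(6):
        print(i, raid0_placement(i, 4), raid5_placement(i, 4))
```
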
A typical data storage system can contain a number of disk storage devices that can be arranged in accordance with one or more RAID levels. A RAID controller is a device that is used to manage one or more arrays of RAID disk drives. The RAID controller is responsible for configuring the physical drives in a data storage system into logical drives, where each logical drive is managed in accordance with one of the RAID levels.

RAID controllers are complex and difficult to configure. This is due in part to the numerous possible configurations that can be achieved, the knowledge required of a user to configure such a system, and the time consumed by a user in configuring the controller. One existing RAID controller configuration procedure provides an automatic configuration feature that attempts to reduce the user's input by automatically configuring a number of devices at system initialization. However, this automatic configuration feature is very limited and only operates where all the physical disk drives are of the same physical size and where there are between 3 and 8 disk drives. In this case, the automatic configuration feature configures the disk drives as a single drive group defined as a RAID level 5 system drive with no spare drives. This configuration is limited, providing no alternative configurations.

Accordingly, there exists a need for an automatic RAID controller configuration mechanism that can accommodate various types of RAID level configurations and disk drives having various physical dimensions.

Summary of the Invention

The present invention pertains to an apparatus and method for automatically configuring disk drives connected to a RAID controller. The automatic configuration mechanism is able to generate a full configuration of the disk drives connected to a RAID controller both at system initialization or bootup and at runtime. The mechanism uses robust criteria to configure the disk drives, which allow the drives to be configured in accordance with one or more RAID levels and with various default settings that affect the operation of the disk array.

In a preferred embodiment, the automatic configuration mechanism includes a startup configuration procedure that provides the automatic configuration capability at system initialization and a runtime configuration procedure that automatically configures disk drives connected to the RAID controller at runtime. The startup configuration procedure generates a full configuration of the disk drives. The configuration specifies the logical drives that are formed and the associated operational characteristics for each logical drive, which include the RAID level, the capacity, and other information. The startup configuration procedure can accommodate previously existing configurations and partial configurations. In addition, the startup configuration procedure can configure unconfigured drives in accordance with criteria that consider the existing configuration of the disk drives and that select an appropriate RAID level suitable for optimizing the overall computer system's performance.

The runtime configuration procedure is used to configure disk drives connected to the RAID controller while the system is operational. The inserted disk drives can be part of an existing configuration or can be unconfigured. The runtime configuration procedure can incorporate the configured drives into the current configuration as well as configure the unconfigured drives. The unconfigured drives are configured in accordance with criteria that use the inserted disk drives to replace dead or failed drives, that add the inserted disk drives to certain logical drives that can support the additional capacity at the defined RAID level, and that form additional logical drives as needed.

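The ordering of the runtime criteria just described can be summarized in a short sketch. Everything in it, the function and method names and the data model, is an assumption made for illustration; it is not the patent's implementation.

```python
# Hypothetical sketch of the runtime-configuration criteria: inserted
# unconfigured drives are consumed first to replace dead drives, then to add
# capacity to logical drives whose RAID level supports it, and finally to
# form additional logical drives.

def apply_runtime_criteria(inserted_drives, configuration):
    remaining = list(inserted_drives)

    # 1. Replace dead or failed drives in the existing configuration.
    for logical in configuration.logical_drives():
        for dead in logical.dead_drives():
            if not remaining:
                return
            logical.replace(dead, remaining.pop(0))

    # 2. Add capacity to logical drives that can absorb another drive
    #    at their defined RAID level.
    for logical in configuration.logical_drives():
        while remaining and logical.can_add_capacity():
            logical.add_drive(remaining.pop(0))

    # 3. Form additional logical drives from whatever is left over.
    if remaining:
        configuration.create_logical_drive(remaining)
```
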
The automatic configuration mechanism is advantageous since it eliminates the user interaction required to configure disk drives connected to a RAID controller. In addition, the mechanism allows disk drives to be configured into one or more RAID levels in a manner that considers the current state of the disk drives and that optimizes overall system performance. The mechanism is flexible, performing the automatic configuration both at runtime and at system initialization.

Brief Description of the Drawings

For a better understanding of the nature and objects of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings, in which:

FIGS. 1A-1B illustrate a computer system in accordance with the preferred embodiments of the present invention.

FIG. 2 illustrates a RAID controller in accordance with a preferred embodiment of the present invention.

FIG. 3 is a flow chart illustrating the steps used to manually configure a set of disk drives.

FIGS. 4-5 illustrate the process of forming and ordering drive groups from physical drives in a preferred embodiment of the present invention.

FIG. 6 illustrates an exemplary assignment of RAID levels to a set of logical drives in accordance with a preferred embodiment of the present invention.

FIG. 7 is a flow chart illustrating the steps used in the startup configuration procedure in a preferred embodiment of the present invention.

FIG. 8 is a flow chart illustrating the steps used to scan the physical devices connected to the RAID controller in a preferred embodiment of the present invention.

FIG. 9 is a flow chart illustrating the logical drive order rules of a preferred embodiment of the present invention.

FIG. 10 is a flow chart illustrating the steps used in the runtime configuration procedure in a preferred embodiment of the present invention.

FIG. 11 is a flow chart illustrating the steps used to add capacity in accordance with a preferred embodiment of the present invention.

Like reference numerals refer to corresponding parts throughout the several views of the drawings.

Detailed Description of the Preferred Embodiments

Fig. 1A illustrates a host system 100 utilizing the RAID controller 102 in a first preferred embodiment of the present invention. There is shown the RAID controller 102 connected to a host peripheral bus 104 and one or more Small Computer System Interface (SCSI) channels 106A-106N. In a preferred embodiment, the RAID controller 102 can be any of the Mylex RAID controllers, such as but not limited to the DAC960 series of RAID controllers. The operation of SCSI channels is well known in the art and a more detailed description can be found in Ancot Corporation, Basics of SCSI, third edition (1992-1996), which is hereby incorporated by reference.

The host peripheral bus 104 is connected to a host central processing unit (CPU) and memory 108. The host peripheral bus 104 can be any type of peripheral bus, including but not limited to the Peripheral Component Interconnect (PCI) bus, Industry Standard Architecture (ISA) bus, Extended Industry Standard Architecture (EISA) bus, Micro Channel Architecture, and the like. The host CPU and memory 108 include an operating system (not shown) that interacts with the RAID controller 102.

Each SCSI channel 106 contains one or more peripheral devices 110A-110Z, such as but not limited to disk drives, tape drives, various types of optical disk drives, printers, scanners, processors, communication devices, medium changers, and the like. A SCSI channel 106A can be used to access peripheral devices located within the host system 100, or a SCSI channel 106N can be used to access peripheral devices external to the host system 100.

Fig. 1B illustrates a computer system in accordance with a second preferred embodiment of the present invention. In this embodiment, the RAID controller 102 is external to the host system 100. The RAID controller 102 is connected to the host system 100 through a SCSI channel 106A and is connected to one or more peripheral devices through one or more SCSI channels 106B-106N. The RAID controller 102 and the SCSI channels 106 are similar to what was described above with respect to Fig. 1A.

Fig. 2 illustrates the components of the RAID controller 102. There is shown a CPU 112 connected to the host peripheral bus 104. The CPU 112 is also connected to a secondary peripheral bus 114 coupled to one or more SCSI I/O processors 116A-116N. A SCSI I/O processor 116 can be coupled to a SCSI channel 106A and acts as an interface between the secondary peripheral bus 114 and the SCSI channel 106. The CPU 112 is also coupled to a local bus 118 connected to a first memory device (memory1) 120, a second memory device (memory2) 122, and a coprocessor 124. The coprocessor 124 is coupled to an on-board cache memory 126 which is under the control of the coprocessor 124. The coprocessor 124 and cache memory 126 are used to stage data read from and written to the peripheral devices 110, as well as to perform error correction code (ECC) encoding and decoding on that data. The cache memory 126 can employ either a write-through or write-back caching strategy.

In a preferred embodiment, the CPU 112 is a 32-bit Intel i960 RISC microprocessor, the first memory device 120 is a flash erasable/programmable read only memory (EPROM), the second memory device 122 is a non-volatile random access memory (NVRAM), the host peripheral bus 104 is a primary PCI bus, and the secondary peripheral bus 114 is a secondary PCI bus. In the first memory device 120 there can be stored a startup configuration procedure 128 and a runtime configuration procedure 130. The startup configuration procedure 128 is used to automatically configure the disk drives at system initialization or bootup, which occurs before the operating system is installed. The runtime configuration procedure 130 is used to configure the disk drives while the operating system is operational. In the second memory device 122 there can be stored a configuration file 132 containing the current configuration of the RAID disk drives.

In addition, each physical disk drive associated with the RAID controller 102 includes a configuration file 134 that includes data indicating the configuration of the drive.

In an alternate embodiment, the second memory device 122 on the controller holds only configuration labels which identify the configuration files 134 on the physical disk drives, rather than holding the entire configuration information.

The foregoing description has described the computer system utilizing the technology of the present invention. It should be noted that the present invention is not constrained to the configuration described above and that other configurations can be utilized. Attention now turns to a brief overview of the terminology that will be used to describe a preferred embodiment of the present invention. This terminology is explained in the context of the manual configuration procedure.

The manual configuration procedures are used to create logical disk drives from an array of physical disk drives. Typically the configuration process is a manual procedure that is initiated by a user. Fig. 3 illustrates the steps used in the manual configuration process. First, a user identifies one or more drive groups (step 172), orders the drive groups (step 174), and creates and configures one or more logical drives in each drive group with a RAID level as well as other parameter settings (step 176). The configuration information is stored in each physical drive and in the RAID controller (step 178). The logical drives are then initialized (step 180) and the configuration is presented by the RAID controller 102 to the host operating system (step 182). These steps will be described in more detail below.

Fig. 4 illustrates a number of physical disk drives arranged in one or more drive groups 140, 142, 144 (step 172). The physical disk drives are connected to one of the SCSI channels 106 and can be arranged into one or more drive groups 140, 142, 144. A drive group 140, 142, 144 is used to create logical drives having a defined capacity, a RAID level, and other device settings. The capacity is based on the aggregate of the capacities of each of the disk drives in the drive group and depends on the RAID level. In a preferred embodiment, the RAID controller 102 can support up to eight drive groups, and a drive group can include one to eight physical drives. Drives that are not included in any drive group are considered standby or hot spare drives. A standby drive is a redundant disk drive that is used when a disk drive fails.

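The statement that capacity depends on the RAID level can be illustrated with the conventional usable-capacity relationships for the levels discussed in this document. The patent does not give these formulas, so the sketch below is an illustrative assumption only.

```python
# Standard usable-capacity relationships for the RAID levels discussed above.
# Illustrative assumption: the patent states only that capacity depends on
# the RAID level; these are the conventional formulas for each level.

def usable_capacity(raid_level, drive_capacities):
    n = len(drive_capacities)
    smallest = min(drive_capacities)   # arrays are limited by the smallest member
    if raid_level == "JBOD":
        return sum(drive_capacities)   # drives used independently
    if raid_level == "RAID0":
        return n * smallest            # striping, no redundancy
    if raid_level == "RAID1":
        return smallest                # mirroring keeps one usable copy
    if raid_level in ("RAID3", "RAID5"):
        return (n - 1) * smallest      # one drive's worth of parity
    if raid_level == "RAID0+1":
        return (n * smallest) // 2     # striped mirrors: half the raw capacity
    raise ValueError(f"unknown RAID level: {raid_level}")

# Example: three 9 GB drives under RAID 5 yield 18 GB of usable space.
print(usable_capacity("RAID5", [9, 9, 9]))
```
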
As shown in Fig. 4, the disk drives are configured into three drive groups referred to as drive group A 140, drive group B 142, and drive group C 144, with one standby drive 146. Drive group A 140 contains three disk drives located on SCSI channel 106A, drive group B 142 includes three disk drives located on SCSI channel 106B, and drive group C 144 includes two disk drives situated on SCSI channel 106A and one disk drive situated on SCSI channel 106B. Disk drive 146 is considered the hot spare drive.

After all the drive groups have been identified, the drive groups are ordered (step 174). The drive groups are ordered with a sequential numeric ordering from 1 to n, where n is the highest order number. The ordering is used for certain operating system purposes and in particular to designate a primary logical drive that can serve as a boot drive. A boot drive is used to "boot up" the physical drives in a configuration, which is described in more detail below.

Next, logical drives in each drive group are created and configured (step 176). A logical or system drive is that portion of a drive group seen by the host operating system as a single logical device. There can be more than one logical drive associated with a particular drive group. A user creates a logical drive by indicating the portions of the drive group that will be part of a particular logical drive. For example, as shown in Fig. 5, drive group A includes three physical drives 150, 152, 154 and three logical drives A0, A1, and A2. Logical drive A0 spans a designated portion of each physical drive 150, 152, and 154 in drive group A. Similarly, logical drives A1 and A2 each span a designated portion of each physical drive 150, 152, and 154 in drive group A.

Each logical drive within a drive group is ordered. This order is derived from the manner in which the logical drives are created. The logical drives are created based on the order of their respective drive groups. For instance, the first logical drive created in the first drive group is considered the first logical drive, the second logical drive created in the first drive group is considered the second logical drive, and so on. As noted above, the order is used to define the logical drive that serves as the boot drive. A boot drive is used at system initialization to boot up the physical drives in the configuration. The first logical drive (i.e., logical drive 0) is considered the boot drive. In addition, the order is used to add disk capacity, which is discussed in more detail below. As shown in Fig. 5, logical drive A0 is considered the first logical drive, or boot drive, of drive group A, logical drive A1 is considered the second logical drive, and logical drive A2 is considered the third logical drive.

Each logical drive is configured by defining a capacity, a cache write policy, and a RAID level. The capacity of a logical drive includes any portion of a drive group up to the total capacity of that drive group.

The RAID controller 102 has a cache memory 126 that is used to increase the performance of data retrieval and storage operations. The cache memory 126 can be operated in a write-back or write-through mode. In write-back mode, write data is temporarily stored in the cache 126 and written out to disk at a subsequent time. An advantage of this mode is that it increases the controller's performance: the RAID controller 102 notifies the operating system that the write operation succeeded even though the write data has not yet been stored on the disk. However, in the event of a system crash or power failure, data in the cache 126 is lost unless a battery backup is used.

In write-through mode, write data is written from the cache 126 to the disk before a completion status is returned to the operating system. This mode is more secure since the data is not lost in the event of a power failure. However, the write-through operation lowers the performance of the controller 102 in many environments.

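The difference between the two policies is essentially when completion status is returned relative to the disk write. The following minimal sketch illustrates that difference; the class and method names are assumptions and do not represent the controller firmware.

```python
# Minimal sketch of the two cache write policies described above. The class
# and method names are illustrative assumptions, not the controller's design.

class CacheLine:
    def __init__(self, lba, data):
        self.lba, self.data, self.dirty = lba, data, True

class ControllerCache:
    def __init__(self, policy, disk):
        self.policy = policy          # "write-back" or "write-through"
        self.disk = disk
        self.lines = {}

    def write(self, lba, data):
        self.lines[lba] = CacheLine(lba, data)
        if self.policy == "write-through":
            # Data reaches the disk before completion status is returned.
            self.disk.write(lba, data)
            self.lines[lba].dirty = False
        # Write-back: report success now; the data is flushed later, so a
        # power failure before the flush loses it unless a battery backup
        # preserves the cache contents.
        return "complete"

    def flush(self):
        for line in self.lines.values():
            if line.dirty:
                self.disk.write(line.lba, line.data)
                line.dirty = False
```
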
Each logical drive has a RAID level which is based on the number of drives in the drive group in which it is created. In a preferred embodiment, the following RAID levels are supported:

RAID LEVEL or JBOD             DESCRIPTION

RAID 0                         Striping. Requires a minimum of 2 drives and a maximum of 8 drives. This RAID level does not support redundancy.

RAID 1                         Mirroring. Requires 2 drives. This RAID level supports redundancy.

RAID 3                         Requires a minimum of 3 drives and a maximum of 8 drives. This RAID level supports redundancy.

RAID 5                         Requires a minimum of 3 drives and a maximum of 8 drives. This RAID level supports redundancy.

RAID 0+1                       Combination of RAID 0 (striping) and RAID 1 (mirroring). Requires a minimum of 3 drives and a maximum of 8 drives. This RAID level supports redundancy.

Just a Bunch of Disks (JBOD)   Each drive functions independently of the others. No redundancy is supported.

TABLE I

Fig. 6 illustrates an exemplary assignment of RAID levels for a particular drive group, drive group B. In this example, logical drive B0 spans three physical drives 156, 158, 160, and logical drive B1 spans the same physical drives 156, 158, 160. Logical drive B0 is assigned RAID level 5 and logical drive B1 is assigned RAID level 0+1.

Once the configuration procedure is completed, the configuration for each drive group is stored in each physical drive in a configuration file 134, preferably located in the last 64K bytes of the drive (step 178). The logical drives are then initialized (step 180) and the RAID controller 102 presents the configuration to the host operating system (step 182).

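The placement of the configuration file 134 in the last 64K bytes of each drive can be illustrated as follows. The on-disk record format (JSON in this sketch) and the drive access helpers are assumptions made for illustration; the patent specifies only the location and the kind of information stored.

```python
# Illustrative sketch of storing the per-drive configuration file 134 in the
# last 64K bytes of a drive (step 178). The record format is an assumption.

import json

CONFIG_AREA_SIZE = 64 * 1024   # reserved region at the end of each drive

def write_configuration(drive, record):
    """record: e.g. {'drive_group': 'B', 'logical_drives': [...], 'raid_level': 5}"""
    payload = json.dumps(record).encode().ljust(CONFIG_AREA_SIZE, b"\x00")
    offset = drive.capacity_bytes - CONFIG_AREA_SIZE
    drive.write_at(offset, payload)

def read_configuration(drive):
    offset = drive.capacity_bytes - CONFIG_AREA_SIZE
    raw = drive.read_at(offset, CONFIG_AREA_SIZE).rstrip(b"\x00")
    return json.loads(raw) if raw else None   # None: drive never configured
```
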
The foregoing description has described the steps that can be used by a user to manually configure the disk drives connected to a RAID controller and has introduced the terminology used in a preferred embodiment of the present invention. Attention now turns to the methods and procedures that are used to automatically configure the disk drives connected to a RAID controller.

There are two procedures that are used to automatically configure the disk drives 110 connected to a RAID controller 102. A startup configuration procedure 128 is used when the controller 102 is powered up or started, before the operating system is operational. A runtime configuration procedure 130 is used to alter the configuration at runtime when additional disk drives are connected to the controller 102. At runtime, the RAID controller 102 is operational and servicing I/O activity.

Fig. 7 illustrates the steps used by the startup configuration procedure 128 to create a configuration for the disk drives connected to the controller 102. At power-up, the controller 102 scans all the devices 110 connected to it in order to obtain the physical capacity of each disk drive and to obtain the configuration data from the disk drive (step 200).

Fig. 8 illustrates this step (step 200) in further detail. The startup configuration procedure 128 scans each SCSI channel 106 (step 202) and each device 110 connected to the SCSI channel 106 (step 204). In a preferred embodiment, the startup configuration procedure 128 can issue the SCSI command TEST UNIT READY to determine if a peripheral connected to a SCSI channel 106 is powered up and operational. The startup configuration procedure 128 can also issue an INQUIRY command to determine the type of the peripheral device 110. If the device 110 is not a disk drive (step 206-N), then the startup configuration procedure 128 continues on to the next device 110. Otherwise (step 206-Y), the startup configuration procedure 128 obtains the capacity of the disk drive (step 208). This can be accomplished by issuing a READ CAPACITY command to the device. The READ CAPACITY command returns the maximum logical block address (LBA) of a disk drive, which serves as an indicator of the capacity of the device 110.

Next, the startup configuration procedure 128 attempts to read the configuration file 134 stored on the disk drive (step 210). As noted above, the configuration file 134 is preferably located in the last 64K of the disk drive. A configuration file 134 is present if the disk has been previously configured; otherwise, the configuration file 134 will not exist. The configuration file 134 contains configuration information such as the drive group identifier, the logical drive identifier, the RAID level, and other information.

The startup configuration procedure 128 repeats steps 202-210 for each disk drive 110 located on each channel 106 that is connected to the RAID controller 102.

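The scan loop of Fig. 8 can be sketched as follows. Only the SCSI command names come from the text; the scsi_command() helper, its return values, and the read_configuration() helper (sketched earlier) are assumptions made for illustration.

```python
# Illustrative sketch of the device scan of Fig. 8 (steps 202-210).

DISK_DRIVE = 0x00   # SCSI peripheral device type code for direct-access disks

def scan_devices(channels):
    inventory = []
    for channel in channels:                                   # step 202
        for device in channel.devices:                         # step 204
            if not scsi_command(device, "TEST UNIT READY"):
                continue                      # device not powered up or ready
            inquiry = scsi_command(device, "INQUIRY")
            if inquiry.device_type != DISK_DRIVE:
                continue                      # step 206-N: skip non-disks
            capacity = scsi_command(device, "READ CAPACITY")   # step 208
            config = read_configuration(device)                # step 210
            inventory.append({
                "device": device,
                "max_lba": capacity.max_lba,  # indicator of drive capacity
                "configuration": config,      # None if never configured
            })
    return inventory
```
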
Referring back to Fig. 7, the startup configuration procedure 128 then analyzes the configuration information (step 212). The startup configuration procedure 128 uses the configuration information to determine the validity of each configuration and to determine which devices have not been configured.

A complete configuration is one where all the physical drives identified in the configuration are connected to the controller 102. A partial configuration is one where some of the physical drives identified in the configuration are not connected to the controller 102. A valid configuration is one that is either a complete configuration or a partial configuration in which the logical drives are at least in degraded mode. Degraded mode refers to the situation where two conditions are met: first, the logical drives are configured with redundancy (i.e., RAID level 1, 3, 5, or 0+1), and second, a physical drive is dead or not operational but the logical drive can still operate without any data loss. In degraded mode, the drive group is functioning and all data is available, but the array cannot sustain a further drive failure without potential data loss.

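These definitions can be expressed as a small classification helper. The data model below is an assumption made for illustration; only the complete, partial, valid, and degraded rules come from the text.

```python
# Hypothetical helper applying the definitions above to the configuration
# data gathered during the scan.

REDUNDANT_LEVELS = {"RAID1", "RAID3", "RAID5", "RAID0+1"}

def classify(configured_drives, present_drives):
    """configured_drives: drives named in the stored configuration.
    present_drives: drives actually found connected to the controller."""
    missing = set(configured_drives) - set(present_drives)
    return "complete" if not missing else "partial"

def is_valid(configuration, present_drives):
    if classify(configuration.drives, present_drives) == "complete":
        return True
    # A partial configuration is still valid if every logical drive is at
    # least in degraded mode: a redundant RAID level with no more than one
    # drive missing or dead, so no data has been lost.
    for logical in configuration.logical_drives:
        missing = [d for d in logical.drives if d not in present_drives]
        if logical.raid_level not in REDUNDANT_LEVELS or len(missing) > 1:
            return False
    return True
```
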
After analyzing the configuration information, the startup configuration procedure 128 determines whether any partial configurations are present (step 214). As noted above, a partial configuration is one where some of the physical drives identified in the configuration are not connected to the controller 102. If so (step 214-Y), the startup configuration procedure 128 performs corrective actions to configure the drive group as a valid configuration, attempting to place the logical drives in degraded mode (step 216).

When the corrective action cannot be performed (step 218-N), the startup configuration procedure 128 terminates processing and an appropriate error message can be displayed to the user (step 220). In the case where the corrective action is successful (step 218-Y), the startup configuration procedure 128 continues processing.

The startup configuration procedure 128 then determines whether there are any unconfigured drives (step 222). An unconfigured drive is one that does not have a configuration file 134 associated with it. If there are any unconfigured drives, the startup configuration procedure 128 configures them with the following configurable default parameter settings (step 224):

Stripe size: The stripe size is the amount of data written on one drive before moving to the next drive. The stripe size is used to tune controller performance for a specific environment or application. Typically, a smaller stripe size provides better performance for random I/O and a larger stripe size provides better performance for sequential transfers.

Cache line size: The cache line size represents the size of the data that is read or written. The cache line size is based on the stripe size.

SCSI transfer rate: The SCSI transfer rate sets the maximum transfer rate for each drive channel.

Spin-up option: The spin-up option controls how the SCSI drives in the array are started. There are two spin-up modes that may be selected: Automatic and On Power. The Automatic option causes the controller to spin up all connected drives, two at a time at six-second intervals, until every drive in the array is spinning. The On Power option assumes that all drives are already spinning.

Controller read ahead: The controller read ahead option allows the controller to read a full cache line of data into the cache at a time. When this option is enabled, the percentage of cache hits is improved.

Automatic add capacity: This option automatically adds capacity to a logical drive.

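The default parameter settings listed above can be gathered into a single structure. The field names and example values below are assumptions; the patent names the parameters but does not specify their encoding or defaults.

```python
# Hypothetical container for the configurable default parameter settings
# listed above; values shown are placeholders, not values from the patent.

from dataclasses import dataclass

@dataclass
class DefaultSettings:
    stripe_size_kb: int = 64           # data written per drive before moving on
    cache_line_size_kb: int = 64       # derived from the stripe size
    scsi_transfer_rate_mb_s: int = 40  # maximum rate per drive channel
    spin_up: str = "Automatic"         # "Automatic" or "On Power"
    read_ahead: bool = True            # read a full cache line at a time
    automatic_add_capacity: bool = True
```
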
In addition, the startup configuration procedure 128 associates a RAID level with the unconfigured drives based on the following parameter settings and the number of inserted unconfigured drives (step 224). The parameter settings that affect the determination of the RAID level are as follows:

Redundancy: This option specifies whether or not data redundancy is required.

Redundancy method: This option specifies one of two redundancy methods: mirroring or parity. Mirroring refers to the 100% duplication of data from one disk drive to another disk drive. Parity, or rotated XOR redundancy, refers to a method of providing complete data redundancy while requiring only a fraction of the storage capacity of mirroring. For example, in a system configured under RAID 5, all data and parity blocks are divided between the drives in such a way that if any single drive is removed or fails, the data on it can be reconstructed using the data on the remaining drives.

Spare disposition: The controller allows for the replacement of failed hard disk drives without interruption of system service. A hot spare is a standby drive that is used in the event of a disk failure to rebuild the data of the failed disk.

A RAID level is assigned to the unconfigured drives based on the aforementioned parameter settings and the number of inserted unconfigured drives, in accordance with the following rules, which are illustrated in Table II below:

(1) If the number of unconfigured drives = 1, then the unconfigured drive is a JBOD.

(2) If redundancy is not needed, the RAID level is set to RAID 0.

(3) If redundancy is needed and the number of drives = 2, then the RAID level is set to RAID 1.

(4) If redundancy is needed, the number of drives >=
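A sketch of how the stated rules might be applied is shown below. It encodes only the rules listed above; the remaining rules of Table II are not reproduced here, and the function and parameter names are assumptions.

```python
# Illustrative sketch applying the RAID-level assignment rules stated above.
# Only rules (1) through (3) are encoded; the later rules of Table II are
# omitted. Names are assumptions made for illustration.

def assign_raid_level(num_unconfigured, redundancy_required):
    if num_unconfigured == 1:
        return "JBOD"                  # rule (1)
    if not redundancy_required:
        return "RAID0"                 # rule (2)
    if num_unconfigured == 2:
        return "RAID1"                 # rule (3)
    # Later rules select among the remaining redundant levels for three or
    # more drives based on the redundancy method; they are not shown here.
    return None
```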
