`US008010740B2
`
(12) United States Patent
Arcedera et al.

(10) Patent No.: US 8,010,740 B2
(45) Date of Patent: Aug. 30, 2011
`
`(54) OPTIMIZING MEMORY OPERATIONS IN AN
`ELECTRONIC STORAGE DEVICE
`
`(75)
`
`Inventors: Mark I. Arcedera, Paranaque (PH);
`Ritchie Babaylan, Cavite (PH); Reyjan
`Lanuza, Gen. T. De Leon Valenzuela
`(PH)
`
`(73) Assignee: BiTMICRO Networks, Inc., Fremont,
`CA (US)
`
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 181 days.
`
`(21) Appl. No.: 12/323,461
`
(22) Filed: Nov. 25, 2008

(65) Prior Publication Data
US 2009/0077306 A1    Mar. 19, 2009
`
`Related U.S. Application Data
`
`(63) Continuation-in-part of application No. 11/450,005,
`filed on Jun. 8, 2006, now Pat. No. 7,506,098.
`
`(51)
`
`Int. Cl.
`G06F 12100
`(2006.01)
`(52) U.S. Cl. ........................................ 711/103; 711/203
`(58) Field of Classification Search .................. 711/103;
`771/203
`See application file for complete search history.
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`2003/0217202 Al * 11/2003 Zilberman et al. .............. 710/22
`5/2005 Chou et al ..................... 711/103
`2005/0114587 Al*
`2009/0300318 Al* 12/2009 Allen et al. ................... 711/206
`OTHER PUBLICATIONS
`
Lee W. Young, PCT/US07/70660 International Search Report (ISR), Nov. 18, 2008, p. 2, International Searching Authority/US, Alexandria, Virginia.
Lee W. Young, PCT/US07/70660 Written Opinion (WO), Nov. 18, 2008, Box V and Supplemental Box, International Searching Authority/US, Alexandria, Virginia.
`
`* cited by examiner
`Primary Examiner - Hashem Farrokh
`(74) Attorney, Agent, or Firm - Stephen R. Uriarte
(57) ABSTRACT
To optimize memory operations, a mapping table may be used that includes: logical fields representing a plurality of LBA sets, including first and second logical fields for representing respectively first and second LBA sets, the first and second LBA sets each representing a consecutive LBA set; PBA fields representing PBAs, including a first PBA disposed for representing a first access parameter set and a second PBA disposed for representing a second access parameter set, each PBA associated with a physical memory location in a memory store, and these logical fields and PBA fields disposed to associate the first and second LBA sets with the first and second PBAs; and, upon receiving an I/O transaction request associated with the first and second LBA sets, the mapping table causes optimized memory operations to be performed on memory locations respectively associated with the first and second PBAs.
`40 Claims, 2 Drawing Sheets
`
[Representative drawing: mapping table with logical fields 26, PBA fields 28, access parameter fields 30 (bus Id fields 34, FDE Id fields 36, group Id fields 38), and memory index fields 32.]
`
`
`
[FIG. 1 (Sheet 1 of 2): block diagram of electronic storage device 2, showing memory system 4 with DMA controller 10 (flash buffer controller 16 and FDEs 22-01, 22-02, 22-11, and 22-12), storage device controller 24, mapping table 9, buses 8-0 and 8-1, and memory store 6 containing flash memory devices 12-01 and 12-02 (each with flash blocks 23-0 through 23-n) arranged in flash array banks 14-0 and 14-1 and flash array bank interleaves 14-0-0, 14-0-1, 14-1-0, and 14-1-1; host 18 issues I/O transaction request 20 to storage device 2.]
`
`
`
[FIG. 2 (Sheet 2 of 2): mapping table 9. Logical fields 26, each representing an LBA set (LBAs 00-07, 08-15, 16-23, 24-31), are mapped to PBA fields 28; each PBA comprises access parameter fields 30 (bus Id fields 34, FDE Id fields 36, group Id fields 38) and memory index fields 32. The rows shown map LBAs 00-07 to PBA 46 (bus 8-0, FDE 22-01, group 15-0, index 0x00), LBAs 08-15 to PBA 48 (bus 8-0, FDE 22-02, group 15-1, index 0x08), LBAs 16-23 to PBA 50 (bus 8-1, FDE 22-11, group 15-0, index 0x00), and LBAs 24-31 to PBA 51 (bus 8-1, FDE 22-12, group 15-1, index 0x08).]
`
`
`
`1
`OPTIMIZING MEMORY OPERATIONS IN AN
`ELECTRONIC STORAGE DEVICE
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
`
This application is a continuation-in-part application and claims the benefit of U.S. patent application Ser. No. 11/450,005, filed 8 Jun. 2006, entitled "Optimized Placement Policy for Solid State Storage Devices," now U.S. Pat. No. 7,506,098, which is hereby incorporated by reference as if fully set forth herein.
`
`FIELD OF INVENTION
`
The present invention relates to solutions for optimizing memory operations in a memory system suitable for use in an electronic storage device. More particularly, the present invention relates to solutions for increasing the likelihood that the operational load imposed on the memory system by these memory operations, performed in response to an I/O transaction request, will be optimally distributed across resources used by the memory system, increasing operational efficiency, reducing memory operation latency, or both.
`
`BACKGROUND
`
Electronic storage devices that respectively employ a memory system that includes memory devices, memory chips, or modules that use non-volatile memory cells are commonly known and are sometimes referred to as solid state storage devices. The computer device industry has increased the adoption of these solid state storage devices due to certain advantages offered by these types of storage devices over other forms of storage devices, such as rotational drives. The adoption of solid state storage devices as an enhancement or even as replacements to rotational drives is not without some difficulty because many conventional computer devices, sometimes referred to as "hosts", use host operating systems, file systems, or both that are optimized for use with rotational drives rather than solid state storage devices. For example, unlike rotational drives, solid state storage devices that use NAND flash memory, also referred to as "flash drives", suffer from write cycle limitations and, to a certain degree, bad blocks. In addition, flash drives use block addressing rather than byte addressing, and these flash drives use block addresses that are usually much larger than the block address used by the host. Block addressing may impose an additional level of complexity and additional processing cycles when performing a write operation, which in turn may increase write operation latency. This additional level of complexity may include performing a read-modify-write transaction to complete the write operation.

Many conventional host operating and file systems, however, do not employ mechanisms to solve these write cycle, bad block, or block level addressing issues. Consequently, flash drive manufacturers sometimes employ, at the storage device level, flash memory related techniques that include read-modify-write transactions, wear leveling, bad block management, or any combination of these to minimize or manage the write cycle limitation and bad blocks common to flash drives. Other host file systems, such as file systems that employ a write in place method, might use a "flash translation layer" to avoid excessive wear to a particular block of flash memory.

The use of these flash memory related techniques at the storage device level, or a flash translation layer at the host level, however, may decrease storage device performance, increase transaction or memory operation latency, or render the storage device a primary bottleneck to the processing efficiency of a host or a computing network. In response, some solid state storage device manufacturers have implemented "brute force" or active solutions that include using more complex algorithms and "faster" processing devices, such as CPUs, to handle and process these algorithms. These active solutions, however, increase device costs and, in most cases, design complexity.

Consequently, a need exists for a solution that optimizes memory operations performed by or in an electronic storage device with a relatively minimal increase in device complexity and costs. These optimizing memory operations include increasing the likelihood that, in response to an I/O transaction initiated by a host, the operational load imposed on the storage device by these memory operations will be optimally distributed across different storage device resources, such as by interleaving or parallel memory operations, reducing memory operation latency, increasing operational device efficiency, or both.

SUMMARY

Solutions for optimizing memory operations performed by or in an electronic storage device in response to receiving an I/O transaction request initiated by a host are disclosed. In one implementation, a mapping table is used and disposed to include: a set of logical fields, including a first logical field and a second logical field, with these logical fields respectively disposed for representing a plurality of LBA sets, including the first logical field disposed for representing a first LBA set and the second logical field disposed for representing a second LBA set, the first and second LBA sets each representing a set of consecutive LBAs. The mapping table further includes a set of PBA fields, including a first PBA field and a second PBA field, with these PBA fields respectively disposed for representing a set of PBAs, including a first PBA disposed for representing a first set of access parameters and a second PBA disposed for representing a second set of access parameters, with these PBAs each associated with a physical memory location in a memory store, and the set of logical fields and the set of PBA fields disposed to associate the first and second LBA sets with the first and second PBAs; and wherein, in response to receiving the I/O transaction request, the mapping table causes the storage device to perform optimized memory operations on memory locations respectively associated with the first and second PBAs, if the I/O transaction request is associated with the first and second LBA sets.
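
For illustration only, the mapping table arrangement described in this summary may be sketched as a C data structure. This is a minimal sketch under assumed type names and field widths; access_params_t, pba_t, map_entry_t and their members are hypothetical and are not taken from this disclosure:

    #include <stdint.h>

    /* One mapping table row: a logical field (an LBA set) associated with
     * a PBA field, where the PBA carries a set of access parameters plus a
     * memory index.  All names and widths are assumptions for illustration. */
    typedef struct {
        uint8_t bus_id;    /* bus identifier, e.g. bus 8-0 or 8-1        */
        uint8_t fde_id;    /* FDE identifier on that bus                 */
        uint8_t group_id;  /* group identifier within the interleave     */
    } access_params_t;

    typedef struct {
        access_params_t ap;         /* access parameter fields            */
        uint32_t        mem_index;  /* starting address boundary, e.g. 0x00 */
    } pba_t;

    typedef struct {
        uint32_t first_lba;  /* lowest LBA of the consecutive LBA set     */
        uint32_t lba_count;  /* number of LBAs represented by the set     */
        pba_t    pba;        /* PBA associated with the LBA set           */
    } map_entry_t;

An I/O transaction request touching two LBA sets would then resolve to two such rows whose PBAs can point at different buses, FDEs, or groups, which is what allows the resulting memory operations to be distributed.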
`
`BRIEF DESCRIPTION OF DRAWINGS
`
FIG. 1 is a block diagram of an electronic storage device that includes a memory system which uses a memory table for increasing the likelihood that optimized memory operations will be performed in accordance with one embodiment of the present invention.
FIG. 2 is a mapping table for an electronic storage device, such as the electronic storage device illustrated in FIG. 1, in accordance with another embodiment of the present invention.
`
`DETAILED DESCRIPTION OF THE INVENTION
`
For clarity purposes, not all of the routine features of the embodiments described herein are shown or described. It is appreciated that in the development of any actual implementation of these embodiments, numerous implementation-specific decisions may be made to achieve a developer's specific goals. These specific goals will vary from one implementation to another and from one developer to another. This development effort might be complex and time-consuming but would nevertheless be a routine engineering undertaking for those of ordinary skill in the art having the benefit of this disclosure. In addition, the elements used in the embodiments described herein may be referred to using the variable symbol n, which is intended to reflect any number greater than one (1).
The various embodiments of the present invention disclosed herein relate to solutions for optimizing memory operations in a memory system suitable for use in an electronic storage device. More particularly, the present invention pertains to solutions that increase the likelihood that, in response to an I/O transaction initiated by a host, the operational load imposed on the memory system by these memory operations will be optimally distributed across resources used by the memory system, reducing memory operation latency, increasing operational device efficiency, or both. This memory system, such as memory system 4, may be used with or implemented in an electronic storage device 2, in the manner illustrated in FIG. 1 in accordance with one embodiment of the present invention. Memory system 4 is coupled to a memory store 6 via a set of buses, such as buses 8-0 and 8-1, and uses a mapping table 9. Memory system 4 also includes a DMA controller 10 that includes a flash buffer controller 16 and a set of direct memory access engines, which are suitable for performing DMA operations on memory devices used by memory store 6.
When designed to operate with flash memory devices, these direct memory access engines may also be referred to herein as FDEs, and in the example shown in FIG. 1, are illustrated as FDEs 22-01, 22-02, 22-11 and 22-12. An FDE represents a device that is capable of controlling a flash memory device and performing DMA operations on the flash memory device in response to commands generated by storage device 2 through storage device controller 24. These DMA operations include facilitating high speed data transfers to and from memory devices that are included as part of memory store 6, such as flash memory devices 12-01 and 12-02. Buses 8-0 and 8-1 respectively function as a bus interface that is used by an FDE to control and communicate with selected flash memory devices in memory store 6. For example, bus 8-0 may be used by FDE 22-01 to control flash memory devices, such as flash memory devices 12-01, that are coupled to bus 8-0 and that have chip enables (not shown) which are controllable by FDE 22-01, while bus 8-1 may be used by FDE 22-11 to control flash memory devices, such as flash memory devices 12-01, that are coupled to bus 8-1 and that have chip enables (not shown) which are controllable by FDE 22-11. In another example, bus 8-0 may be used by FDE 22-02 to control flash memory devices, such as flash memory devices 12-02, that are coupled to bus 8-0 and that have chip enables which are controllable by FDE 22-02, while bus 8-1 may be used by FDE 22-12 to control flash memory devices, such as flash memory devices 12-02, that are coupled to bus 8-1 and that have chip enables which are controllable by FDE 22-12.
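
As an informal illustration of the bus and FDE pairing described in the preceding example, the FIG. 1 arrangement could be tabulated as follows. The structure and strings below are simply a reading of the figure, not code or identifiers defined by this disclosure:

    /* Hypothetical tabulation of the FIG. 1 topology: each FDE drives one
     * bus and controls the flash memory devices on that bus whose chip
     * enables it owns.  Labels mirror the reference numerals of FIG. 1. */
    struct fde_binding {
        const char *fde;      /* direct memory access engine        */
        const char *bus;      /* bus interface used by that FDE     */
        const char *devices;  /* flash memory devices it controls   */
    };

    static const struct fde_binding fig1_topology[] = {
        { "FDE 22-01", "bus 8-0", "flash memory devices 12-01 on bus 8-0" },
        { "FDE 22-02", "bus 8-0", "flash memory devices 12-02 on bus 8-0" },
        { "FDE 22-11", "bus 8-1", "flash memory devices 12-01 on bus 8-1" },
        { "FDE 22-12", "bus 8-1", "flash memory devices 12-02 on bus 8-1" },
    };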
DMA controller 10 also includes a flash buffer controller 16. Flash buffer controller 16 is disposed to drive each bus utilized by memory system 4 and to translate command signals asserted by FDEs 22-01 through 22-12 into native memory device commands that are compatible with the memory device selected to receive these commands. In the example shown in FIG. 1, these memory device commands are in the form of native flash memory device commands. The number of buses, flash memory devices and FDEs shown in FIG. 1 is not intended to be limiting in any way and may be selected according to a variety of factors, such as the performance, cost, capacity, and any combination of these which may be desired.
One of ordinary skill in the art having the benefit of this disclosure would readily recognize that DMA controller 10 may be implemented using a different configuration than disclosed herein and may include other types of processors, DMA engines, or equivalent components. The term "host" means any device that can initiate an I/O transaction request which can be received by an electronic storage device that is targeted for receiving the I/O transaction request. For example, in FIG. 1, host 18 can initiate an I/O transaction request 20, which is received and processed by storage device 2 as described herein.
Storage device 2 represents any storage device that may be used with the optimized memory operation solutions and embodiments disclosed herein. For example, storage device 2 may be further configured to include a processing unit, such as a CPU, a bus, a working memory, such as DRAM, and an I/O interface, which may be in the form of a SATA, iSCSI, Fibre Channel, USB, or eSATA interface, a network adapter, a PCI or PCI-e bus bridge, or the like. These storage device components enable storage device 2 to execute an embedded operating system (not shown) necessary for processing I/O transactions that are initiated by a host through a suitable network, and that are eventually received and processed by storage device 2 through various device resources, including memory system 4, buses 8-0 and 8-1, and memory store 6. The processing unit, bus, DRAM, embedded operating system, and I/O interface are generally known, and thus are hereinafter referred to and shown collectively in the form of a storage device controller 24 to avoid overcomplicating the herein disclosure. In addition, although DMA controller 10 is shown as a separate component device in FIG. 1, DMA controller 10 may be integrated with any other component employed by a storage device and thus, the form factor of DMA controller 10 is not intended to be limiting in any way. For example and although not shown in FIG. 1, DMA controller 10 may be part of or integrated within a storage device controller that has a function similar to storage device controller 24.
In another example, DMA controller 10 may be implemented in the form of Flash DMA controller 109, which is part of hybrid storage controller 102, which are further disclosed in FIG. 1 of United States patent application Ser. No. 11/450,023, filed on 8 Jun. 2006, entitled "Hybrid Multi-Tiered Caching Storage System", now U.S. Pat. No. 7,613,876, which is hereby incorporated by reference as if fully set forth herein.
Memory store 6 may be configured to include a set of solid state memory devices that are coupled to a set of buses and that are controlled by a set of FDEs. In FIG. 1, these memory devices, buses, and FDEs are implemented in the form of flash memory devices 12-01 and 12-02; buses 8-0 and 8-1; and FDEs 22-01, 22-02, 22-11, and 22-12, respectively. The term "flash memory device" is intended to include any form of non-volatile memory, including those that use blocks of non-volatile memory cells, named flash blocks. Each memory cell may be single or multi-level. In FIG. 1, flash blocks 23-0 through 23-n are shown for each flash memory device illustrated. In addition, the number of flash memory devices, bus interfaces, and FDEs illustrated in FIG. 1 is not intended to be limiting in any way. Instead, the number of these devices may be increased or decreased, depending on the performance, cost and storage capacity desired.
A flash memory device permits memory operations, such as a write or read operation, to be performed on these flash blocks according to a protocol supported by the flash memory device. A flash memory device may be implemented using a flash memory device that complies with the Open NAND Flash Interface Specification, commonly referred to as the ONFI Specification. The ONFI Specification is a known device interface standard created by a consortium of technology companies, called the "ONFI Workgroup". The ONFI Workgroup develops open standards for NAND flash memory devices and for devices that communicate with these NAND flash memory devices. The ONFI Workgroup is headquartered in Hillsboro, Oreg. Using a flash memory device that complies with the ONFI Specification is not intended to limit the embodiment disclosed. One of ordinary skill in the art having the benefit of this disclosure would readily recognize that other types of flash memory devices employing different device interface protocols may be used, such as protocols compatible with the standards created through the Non-Volatile Memory Host Controller Interface ("NVMHCI") working group. Members of the NVMHCI working group include Intel Corporation of Santa Clara, Calif., Dell Inc. of Round Rock, Tex. and Microsoft Corporation of Redmond, Wash.
Mapping table 9 may be implemented in the manner shown in FIG. 2, which is discussed below in conjunction with FIG. 1. Mapping table 9 includes a set of fields, named logical fields 26, and a set of PBA fields 28. Each PBA field from the set of PBA fields 28 is disposed to represent a unique addressable physical memory location, named a "PBA", in memory store 6, such as PBA 46, 48, 50 or 51. For each PBA field, this physical memory location may be represented by using a set of access parameter fields 30 and a memory index field 32. The set of access parameter fields 30 includes a bus identifier field 34, a FDE identifier field 36, and a group identifier field 38.
Bus identifier field 34 is disposed to represent a bus identifier, which is an identifier that is used for representing a bus interface, such as bus 8-0 or 8-1. FDE identifier field 36 is disposed to represent a FDE identifier, which is an identifier for identifying one FDE from another FDE from a set of FDEs that are coupled to the same bus interface. The forms of the bus and FDE identifiers are not intended to be limiting in any way as long as the bus identifiers and FDE identifiers employed enable the selection of a specific bus from a set of buses, and the selection of a specific FDE from a set of FDEs that are coupled to the same bus via flash buffer controller 16. Flash memory devices coupled to the same bus may be hereinafter referred to as a flash array bank. For example, flash memory devices 12-01 and 12-02 that are coupled to bus 8-0 are in a flash array bank 14-0, while flash memory devices 12-01 and 12-02 that are coupled to bus 8-1 are in a flash array bank 14-1.
Flash memory devices that are controlled by the same FDE may be hereinafter referred to as a flash array bank interleave. For instance, in flash array bank 14-0, flash memory devices 12-01 are controlled by FDE 22-01 and are thus in one flash array bank interleave 14-0-0, while flash memory devices 12-02 are controlled by FDE 22-02 and are thus in another flash array bank interleave, such as flash array bank interleave 14-0-1. Similarly, in flash array bank 14-1, flash memory devices 12-01 are controlled by FDE 22-11 and are thus in flash array bank interleave 14-1-0, while flash memory devices 12-02 are controlled by FDE 22-12 and are thus in flash array bank interleave 14-1-1. Selection of a flash memory device may be provided by coupling the chip select line (not shown) of the flash memory device to a FDE.
Group identifier field 38 is disposed to represent a group identifier, which is an identifier for identifying one flash memory device from another flash memory device from a set of flash devices that are controlled by the same FDE. For example, in flash array bank interleave 14-0-0, since flash memory devices 12-01 are controlled by FDE 22-01, each of these flash memory devices is represented using a different group identifier, such as group identifiers 15-0 and 15-1.
Group identifiers are selected and assigned to flash memory devices so that no more than one flash memory device, which is within the same flash array bank interleave or which is controlled by the same FDE, is associated with the same group identifier. Flash memory devices may share the same group identifier as long as these flash memory devices are not controlled by the same FDE. In the example embodiment in FIG. 1, flash memory devices 12-01 and 12-02 share the same group identifier, such as group identifier 15-0 or 15-1, because these flash devices are controlled by different FDEs, such as FDEs 22-01 or 22-02, respectively.
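
The assignment rule just stated, namely that no two flash memory devices controlled by the same FDE may share a group identifier, can be checked mechanically. The following sketch assumes a hypothetical flash_dev record and is offered only as an illustration of the rule:

    #include <stdbool.h>
    #include <stddef.h>

    struct flash_dev {
        int fde_id;    /* FDE that controls the device            */
        int group_id;  /* group identifier assigned to the device */
    };

    /* Returns true when no two devices under the same FDE share a group
     * identifier, i.e. the assignment rule described above holds. */
    static bool group_ids_valid(const struct flash_dev *devs, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (devs[i].fde_id == devs[j].fde_id &&
                    devs[i].group_id == devs[j].group_id)
                    return false;
        return true;
    }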
In FIG. 2, mapping table 9 reflects the embodiment disclosed in FIG. 1. PBA 46 and PBA 50 include access parameters that are associated with the same group identifier of 15-0 but pertain to physical memory locations within different flash memory devices in memory store 6. These access parameters are associated with different FDE identifiers, such as FDE identifiers 22-01 and 22-11, and with different bus identifiers 8-0 and 8-1, respectively. Using their respective access parameters, PBA 46 is associated with a physical memory location that is within a flash memory device 12-01 coupled to bus 8-0, controlled by FDE 22-01, and in group 15-0, while PBA 50 is associated with another physical memory location that is within a flash memory device 12-01 coupled to bus 8-1, controlled by FDE 22-11, and in group 15-0 in FIG. 1.
The term "LBA", which may also be referred to herein as a logical block address, is intended to represent an address that is part of a logical addressing system (not shown) used by a host, such as host 18, and this host may use one or more LBAs in an I/O transaction request, such as I/O transaction request 20. Mapping table 9 associates LBAs from this host logical addressing system to the device addressing system used by storage device 2. This device addressing system may include using a mapping table that includes the functionality described herein, such as mapping table 9. These LBAs may be grouped into sets of consecutive LBAs, named "LBA sets". These LBA sets may then be represented using logical fields 26 from mapping table 9, where each logical field in the set of logical fields 26 is respectively disposed to represent an LBA set. For example, LBA sets 40, 42, 44, and 45 may be used to represent consecutive LBAs used by host 18, such as LBAs 00 through 07, LBAs 08 through 15, LBAs 16 through 23, and LBAs 24 through 31, respectively.
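
Because each logical field covers a run of consecutive LBAs (eight per LBA set in the example above), the LBA set that owns a given LBA follows from integer division. The helper below is an assumed convenience, not part of this disclosure:

    #include <stdint.h>

    #define LBAS_PER_SET 8u  /* example of FIG. 2: LBAs 00-07, 08-15, ... */

    /* Index of the logical field (LBA set) that contains a given LBA. */
    static inline uint32_t lba_set_index(uint32_t lba)
    {
        return lba / LBAS_PER_SET;  /* e.g. LBA 17 falls in set 2 (LBAs 16-23) */
    }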
Logical fields in mapping table 9 may be initialized to represent LBA sets that are serially arranged in mapping table 9 so that LBAs from adjacent LBA sets are contiguous. In the example shown in FIG. 2, LBA sets 40, 42, 44 and 45 represent serially arranged LBAs since the highest LBA in one LBA set is contiguous with the lowest LBA in an adjacent LBA set. Specifically, LBA sets 40 and 42 are disposed in adjacent rows and thus are adjacent to each other. These adjacent LBA sets are also contiguous because LBA 7, which corresponds to the highest LBA in LBA set 40, is contiguous with LBA 8, which is the lowest LBA in LBA set 42. Similarly, LBA sets 42 and 44 are also contiguous because LBA 15, which corresponds to the highest LBA in LBA set 42, is contiguous with LBA 16, which is the lowest LBA in LBA set 44. LBA set 44 is also contiguous with LBA set 45.
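
The contiguity property described for adjacent rows, where the highest LBA of one set is immediately followed by the lowest LBA of the next, can be stated as a one-line check; this snippet is illustrative only:

    #include <stdbool.h>
    #include <stdint.h>

    /* Two adjacent LBA sets are contiguous when the lowest LBA of the
     * second set immediately follows the highest LBA of the first, e.g.
     * LBA set 40 (LBAs 0-7) followed by LBA set 42 (LBAs 8-15). */
    static bool lba_sets_contiguous(uint32_t first_lba_a, uint32_t count_a,
                                    uint32_t first_lba_b)
    {
        return first_lba_a + count_a == first_lba_b;
    }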
In addition, storage device 2 initializes mapping table 9 by creating an association between these LBA sets and PBAs so that adjacent LBA sets are mapped to PBAs, and any two adjacent PBAs differ by at least one access parameter. The term "two adjacent PBAs" is intended to include any two PBAs that are respectively mapped to adjacent LBAs in mapping table 9. This mapping association among LBA sets and PBAs increases the likelihood that memory operations resulting from an I/O transaction request will be optimized because, if these memory operations involve data accesses associated with contiguous LBA sets, these data accesses will in all likelihood occur on different PBAs. This increases the likelihood that different storage resources will be used, such as different buses, different FDEs, different groups or any combination of these, which provides a form of interleaving by bus, FDE, flash memory device group or any combination of these.
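
A sketch of one possible initialization pass that yields this property is shown below: consecutive LBA sets are spread across FDEs first and buses second, so any two adjacent mapping table entries differ in at least one access parameter, echoing the FIG. 2 rows. The loop, parameters, and index arithmetic are assumptions introduced here, not the algorithm of this disclosure:

    #include <stddef.h>
    #include <stdint.h>

    struct pba_entry {
        unsigned bus;        /* bus identifier index        */
        unsigned fde;        /* FDE identifier index on bus */
        unsigned group;      /* group identifier index      */
        uint32_t mem_index;  /* starting address boundary   */
    };

    /* Illustrative initializer: cycle through FDEs and buses so that
     * adjacent entries never share all of their access parameters. */
    static void init_mapping(struct pba_entry *map, size_t num_sets,
                             unsigned num_buses, unsigned fdes_per_bus,
                             uint32_t index_stride)
    {
        unsigned combos = num_buses * fdes_per_bus;  /* distinct parameter tuples */

        for (size_t i = 0; i < num_sets; i++) {
            map[i].fde   = (unsigned)(i % fdes_per_bus);
            map[i].bus   = (unsigned)((i / fdes_per_bus) % num_buses);
            map[i].group = (unsigned)(i % fdes_per_bus);  /* like groups 15-0/15-1 */
            map[i].mem_index = (uint32_t)((i % fdes_per_bus)
                                          + (i / combos) * fdes_per_bus)
                               * index_stride;
        }
    }

With num_buses = 2, fdes_per_bus = 2, and index_stride = 0x08, the first four entries reproduce the FIG. 2 rows (bus 8-0, FDE 22-01, group 15-0, index 0x00; then bus 8-0, FDE 22-02, group 15-1, index 0x08; and so on).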
Each LBA set also includes an LBA set size, which is a function of the number of LBAs used in each LBA set and the LBA size of each LBA. For example, each LBA set may be associated with eight LBAs, and if each LBA has an LBA size equal to 512 bytes, then LBA set 40 has an LBA set size of 4 Kilobytes (KB). If a flash block within a given flash memory device has a block size of 16 KB, an LBA set size of 4 KB would permit a maximum of four PBAs that can be mapped to this flash block. Similarly, if a flash block has a flash block size of 32 KB, an LBA set size of 4 KB would permit a maximum of eight PBAs that can be mapped to this flash block. In the example shown in FIG. 2, the LBA set size selected is set equal to a section size (not illustrated in the figure). The term "section size" refers to the size of the minimum portion in which data may be transferred or relocated between physical memory locations in memory store 6. In addition, the section size may be limited to a size that is at least equal to the product of a positive integer multiplied by the native page size used by a flash block. For example, if the native page size for flash block 23-0 is 4 KB, then the section sizes that may be used would be limited to 4 KB, 8 KB, 12 KB, and so on.
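
The sizing arithmetic in this paragraph can be restated compactly; the constants are simply the example values given above (eight LBAs of 512 bytes each, and flash blocks of 16 KB or 32 KB):

    #include <stdio.h>

    int main(void)
    {
        unsigned lbas_per_set = 8;                        /* LBAs per LBA set  */
        unsigned lba_size     = 512;                      /* bytes per LBA     */
        unsigned set_size     = lbas_per_set * lba_size;  /* 4096 bytes = 4 KB */

        unsigned block_16kb = 16 * 1024;
        unsigned block_32kb = 32 * 1024;

        /* Maximum number of PBAs that can be mapped into one flash block. */
        printf("LBA set size: %u bytes\n", set_size);                 /* 4096 */
        printf("PBAs per 16 KB block: %u\n", block_16kb / set_size);  /* 4    */
        printf("PBAs per 32 KB block: %u\n", block_32kb / set_size);  /* 8    */
        return 0;
    }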
In an alternative embodiment, if a flash memory device supports more than one page write cycle that can be performed on a page before requiring an erase cycle to be performed on the flash block containing the page, then the section size may be set to a size that is at least equal to the quotient that results from dividing the native page size of the flash block by the number of page writes supported by the flash memory device. For example, if flash block 23-0 from flash memory device 12-01 supports two page writes on a page before requiring an erase cycle on flash block 23-0 and the flash block 23-0 has a native page size of 4 KB, then the section size may be set at least equal to 2 KB. Since the LBA set size selected is equal to the section size, in this alternative embodiment, the LBA set used would have an LBA set size of 2 KB.
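
The alternative sizing rule, a section size at least equal to the native page size divided by the number of page writes supported before an erase, reduces to a single division using the example values from this paragraph (a 4 KB page and two page writes):

    #include <stdio.h>

    int main(void)
    {
        unsigned page_size       = 4 * 1024;  /* native page size of flash block 23-0 */
        unsigned writes_per_page = 2;         /* page writes allowed before an erase  */

        /* Minimum section size, and hence LBA set size, in this alternative. */
        unsigned section_size = page_size / writes_per_page;  /* 2048 bytes = 2 KB */
        printf("section size: %u bytes\n", section_size);
        return 0;
    }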
Besides access parameter fields 30, each PBA field also includes a memory index field 32, which may be used to represent a memory index, such as 0x00 or 0x08 in FIG. 2. Each memory index represents the starting address boundary of a physical memory location in a flash memory device in memory store 6, such as flash memory device 12-01 or 12-02. This physical memory location is used as the target physical memory location for reading or writing data that is associated with an LBA set that is mapped to the PBA associated with the memory index. A physical memory location is disposed to have a size, named "physical memory location size", that is at least equal to the LBA set size of this LBA s