(12) United States Patent
Gorobets et al.

(10) Patent No.: US 8,250,333 B2
(45) Date of Patent: Aug. 21, 2012

(54) MAPPING ADDRESS TABLE MAINTENANCE IN A MEMORY DEVICE

(75) Inventors: Sergey Anatolievich Gorobets, Edinburgh (GB); Alexander Paley, Kfar-Saba (IL); Eugene Zilberman, Richmond Hill (CA); Alan David Bennett, Edinburgh (GB); Shai Traister, San Jose, CA (US)

(73) Assignee: SanDisk Technologies Inc., Plano, TX (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b).

(21) Appl. No.: 12/348,782

(22) Filed: Jan. 5, 2009

(65) Prior Publication Data: US 2010/0174869 A1, Jul. 8, 2010

(51) Int. Cl.: G06F 12/12 (2006.01)

(52) U.S. Cl.: 711/207; 711/135; 711/103

(58) Field of Classification Search: None. See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

2004/0088474 A1* 5/2004 Lin .................. 711/103
2004/0186946 A1  9/2004 Lee
2005/0172082 A1* 8/2005 Liu et al. ........... 711/144
2007/0101095 A1  5/2007 Gorobets
2007/0300037 A1* 12/2007 Rogers et al. ....... 711/202
2008/0162792 A1* 7/2008 Wu et al. ............ 711/103
2008/0162793 A1* 7/2008 Chu et al. ........... 711/103
2008/0177935 A1* 7/2008 Lasser et al. ........ 711/103
2010/0030999 A1  2/2010 Hinz ................. 711/206

* cited by examiner

OTHER PUBLICATIONS

International Search Report issued in International Application No. PCT/US2010/020027, mailed Apr. 23, 2010 (6 pages).
Written Opinion issued in International Application No. PCT/US2010/020027, mailed Apr. 23, 2010 (7 pages).

Primary Examiner — Kaushikkumar Patel
(74) Attorney, Agent, or Firm — Brinks Hofer Gilson & Lione

(57) ABSTRACT

A method and system maintains an address table for mapping logical groups to physical addresses in a memory device. The method includes receiving a request to set an entry in the address table and selecting and flushing entries in an address table cache depending on the existence of the entry in the cache and whether the cache meets a flushing threshold criteria. The flushed entries include less than the maximum capacity of the address table cache. The flushing threshold criteria includes whether the address table cache is full or if a page exceeds a threshold of changed entries. The address table and/or the address table cache may be stored in a non-volatile memory and/or a random access memory. Improved performance may result using this method and system due to the reduced number of write operations and time needed to partially flush the address table cache to the address table.

22 Claims, 9 Drawing Sheets
[Sheet 1 of 9 — FIG. 1: the main hardware components of a memory system: a host, the memory system 20, the controller 100 with its processor and controller RAM, and the flash memory 200.]
[Sheet 2 of 9 — FIG. 2: the host 10 (application, OS/file system, logical sectors and clusters, optional host-side memory manager) and the memory system 20 with flash memory 200, whose memory-side memory manager includes logical-to-physical address translation, an update block manager, an erased block manager, and a metablock link manager.]
[Sheet 3 of 9 — FIGS. 3A and 3B: FIG. 3A shows the mapping between a logical group (logical sectors 0 to N-1) and a physical group (metablock); FIG. 3B shows the mapping between logical groups and metablocks through logical-to-physical directories.]
[Sheet 4 of 9 — FIG. 4: the metablock management system in the controller 100 and flash memory, including the host interface, logical-to-physical address translation, and control data such as the group address table (GAT) and GAT Delta.]
[Sheet 5 of 9 — FIG. 5: the GAT block 500 in an initial state, including the master index page 502.]
[Sheet 6 of 9 — FIG. 6: the GAT block 500 after data is re-written, showing the GAT Delta, the GAT page indices, and GAT pages with their logical group ranges (e.g., GAT Page 0 (LGs 0...415), GAT Page 1 (LGs 416...831), GAT Page 3 (LGs 1248...1663)).]
[Sheet 7 of 9 — FIG. 7: the GAT block 500 after a GAT Delta flush, showing new metablocks, re-written logical groups, the master index page (free block list, GAT Delta, GAT page indices), GAT Page 0 (LGs 0...415), and GAT Page 1 (LGs 416...831).]
[Sheet 8 of 9 — FIG. 8, method 800: Request to Set GAT Entry (802) → GAT Entry Exists in GAT Delta? (804). If yes: Set GAT Entry in GAT Delta (814). If no: GAT Delta Meets Criteria? (806). If yes: Select GAT Page to Update (808) → Flush Changed Entries from GAT Delta to GAT (810) → Allocate Entry in GAT Delta (812) → Set GAT Entry in GAT Delta (814) → End. If no: Allocate Entry in GAT Delta (812) → Set GAT Entry in GAT Delta (814) → End.]
[Sheet 9 of 9 — FIG. 9, method 900: Write to GAT Delta Copy in RAM (method 800) → Flush GAT Delta Copy in RAM to GAT Delta in Flash? If yes: Update GAT Delta in Flash with GAT Delta in RAM.]
MAPPING ADDRESS TABLE MAINTENANCE IN A MEMORY DEVICE

TECHNICAL FIELD

This application relates generally to memory devices. More specifically, this application relates to maintenance of logical to physical mapping address tables in reprogrammable non-volatile semiconductor flash memory.
BACKGROUND

Non-volatile memory systems, such as flash memory, have been widely adopted for use in consumer products. Flash memory may be found in different forms, for example in the form of a portable memory card that can be carried between host devices or as a solid state disk (SSD) embedded in a host device. When writing data to a conventional flash memory system, a host typically writes data to, and reads data from, addresses within a logical address space of the memory system. The memory system then commonly maps data between the logical address space and the physical blocks or metablocks of the memory, where data is stored in fixed logical groups corresponding to ranges in the logical address space. Generally, each fixed logical group is stored in a separate physical block of the memory system. The memory system keeps track of how the logical address space is mapped into the physical memory, but the host is unaware of this. The host keeps track of the addresses of its data files within the logical address space, but the memory system generally operates without knowledge of this mapping.

An address table in the memory system includes the mapping of the logical address space to the physical memory. In particular, the address table includes pages indicating the mapping of logical groups to physical blocks in the memory system. When the host writes data to logical groups that have already been mapped, the address table may be updated with the pertinent mapping information.

Some memory systems contain a cache to the address table to temporarily store changes to the address table when data is written. Writing to an address table cache instead of to the address table may save some time and write operation overhead. Memory systems with an address table cache may periodically synchronize the changed entries in the cache with the address table by updating the entire cache to the address table, regardless of the amount of actual changed entries in the cache. However, in a large memory system that may have over one hundred pages in the address table, updating the entire address table with changed entries in the address table cache may negatively affect performance and delay other operations in the memory system. Pages in the address table may be unnecessarily rewritten if no changes were made. The flash memory cells used to store the address table may also be worn out prematurely when the entire address table is written.

SUMMARY

In order to address the problems noted above, a method and system for maintaining an address table mapping logical groups to physical addresses is disclosed.

According to a first aspect of the invention, a method is disclosed for maintaining an address table for mapping logical groups to physical addresses in a memory device. The method includes receiving a request to set an entry in the address table, where the entry maps a logical group to a physical address. The address table includes a plurality of pages.
The method also includes determining whether the entry exists in an address table cache that stores changes to the address table and determining whether the address table cache meets a flushing threshold criteria. If the entry does not exist in the address table cache and the address table cache meets the flushing threshold criteria, a quantity of pages of the address table is selected. The quantity of pages selected is less than the total number of pages in the address table. The pages in the address table include changed and unchanged entries. Changed entries in the selected pages are flushed from the address table cache to the address table. The requested entry is allocated and set in the address table cache.
In some embodiments, the flushing threshold criteria may include the maximum capacity of the address table cache, and determining whether the address table cache meets the criteria includes determining whether the number of entries in the address table cache is at the maximum capacity of the address table cache. In other embodiments, the flushing threshold criteria may include a threshold of changed entries, and determining whether the address table cache meets the criteria includes determining whether a number of changed entries in the pages of the address table exceeds the threshold of changed entries.

Selecting the quantity of pages to flush in the address table may include selecting the pages with the greatest number of changed entries. Alternatively, selecting the quantity of pages to flush in the address table may include selecting the pages that have a number of changed entries above a predetermined threshold of changed entries. The quantity of pages selected to be flushed may be one. Flushing changed entries from the address table cache to the address table may include updating the entries in the address table with the changed entries in the address table cache.
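The two selection policies just described can be pictured with a brief Python sketch. The function names, the page-size constant, and the representation of the cache as a set of changed logical groups are illustrative assumptions, not details taken from the patent.

    # Illustrative sketches of the two page-selection policies described
    # above. A "changed entry" is a cache entry whose logical group falls
    # in a given address-table page.
    from collections import Counter

    ENTRIES_PER_PAGE = 416   # exemplary page size used later in FIG. 5

    def changed_counts(cache_entries):
        """Count changed entries per address-table page."""
        return Counter(lg // ENTRIES_PER_PAGE for lg in cache_entries)

    def select_most_changed(cache_entries, quantity=1):
        """Policy 1: the `quantity` pages with the greatest number of
        changed entries (the quantity may be one)."""
        counts = changed_counts(cache_entries)
        return [page for page, _ in counts.most_common(quantity)]

    def select_above_threshold(cache_entries, threshold):
        """Policy 2: every page whose changed-entry count exceeds a
        predetermined threshold."""
        counts = changed_counts(cache_entries)
        return [page for page, n in counts.items() if n > threshold]

Either policy returns fewer than all pages, which is the point of the partial flush: only the returned pages are rewritten.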
The method may further include updating the existing entry for the logical group in the address table cache if the entry already exists in the address table cache. The method may also include allocating and setting the entry for the logical group in the address table cache if the entry does not exist in the address table cache and the address table cache does not meet the flushing threshold criteria. The address table and/or the address table cache may be stored in one or more of a non-volatile memory or a random access memory.

According to another aspect, a memory device includes an address table for mapping logical groups to physical addresses in the memory device, an address table cache that stores changes to the address table, and a controller. The controller is configured to receive a request to set an entry in the address table, where the entry maps a logical group to a physical address. The address table includes a plurality of pages. The controller is also configured to determine whether the entry exists in an address table cache and determine whether the address table cache meets a flushing threshold criteria. If the controller determines the entry does not exist in the address table cache and that the address table cache meets the flushing threshold criteria, a quantity of pages of the address table is selected. The quantity of pages selected is less than the total number of pages in the address table. The pages in the address table include changed and unchanged entries. The controller flushes changed entries in the selected pages from the address table cache to the address table. The requested entry is allocated and set in the address table cache by the controller.

In some embodiments, the flushing threshold criteria may include the maximum capacity of the address table cache, and determining whether the address table cache meets the criteria includes the controller being configured to determine whether the number of entries in the address table cache is at the maximum capacity of the address table cache.
In other embodiments, the flushing threshold criteria may include a threshold of changed entries, and determining whether the address table cache meets the criteria includes the controller being configured to determine whether a number of changed entries in the pages of the address table exceeds the threshold of changed entries.

Selecting the quantity of pages to flush in the address table may include selecting the pages with the greatest number of changed entries. Alternatively, selecting the quantity of pages to flush in the address table may include selecting the pages that have a number of changed entries above a predetermined threshold of changed entries. The quantity of pages selected to be flushed may be one. Flushing changed entries from the address table cache to the address table may include updating the entries in the address table with the changed entries in the address table cache.

The controller may be further configured to update the existing entry for the logical group in the address table cache if the entry already exists in the address table cache. The controller may also be configured to allocate and set the entry for the logical group in the address table cache if the entry does not exist in the address table cache and the address table cache does not meet the flushing threshold criteria. The address table and/or the address table cache may be stored in one or more of a non-volatile memory or a random access memory.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates the main hardware components of a memory system suitable for implementing embodiments of the invention.
FIG. 2 illustrates the memory being organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to an embodiment.
FIG. 3A illustrates the mapping between a logical group and a metablock, according to an embodiment.
FIG. 3B illustrates the mapping between logical groups and metablocks.
FIG. 4 is a schematic block diagram of the metablock management system as implemented in the controller and flash memory.
FIG. 5 illustrates the group address table block in an initial state.
FIG. 6 illustrates the group address table block after data is re-written.
FIG. 7 illustrates the group address table block after a group address table cache update.
FIG. 8 is a flow diagram illustrating a method of maintaining a group address table and group address table cache that map logical groups to physical addresses.
FIG. 9 is a flow diagram illustrating a method of maintaining a group address table cache in random access memory and flash memory.
BRIEF DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS

FIG. 1 illustrates the main hardware components of a memory system suitable for implementing embodiments of the invention. The memory system 20 typically operates with a host 10 through a host interface. The memory system is typically in the form of a memory card or an embedded memory system, such as a solid state disk (SSD) drive. The memory system 20 includes a memory 200 whose operations are controlled by a controller 100. The memory 200 comprises one or more arrays of non-volatile memory cells distributed over one or more integrated circuit chips. The controller 100 includes an interface 110, a processor 120, an optional coprocessor 121, ROM 122 (read only memory), RAM 130 (random access memory), and optionally programmable nonvolatile memory 124. The interface 110 has one component interfacing the controller to a host and another component interfacing to the memory 200. Firmware stored in nonvolatile ROM 122 and/or the optional nonvolatile memory 124 provides code for the processor 120 to implement the functions of the controller 100. Error correction codes may be processed by the processor 120 or the optional coprocessor 121. In an alternative embodiment, the controller 100 is implemented by a state machine (not shown). In yet another embodiment, the controller 100 is implemented within the host.
FIG. 2 illustrates the memory being organized into physical groups of sectors (or metablocks) and managed by a memory manager of the controller, according to an embodiment. The memory 200 is organized into metablocks MB0, ..., MBi, where each metablock is a group of physical sectors S0, ..., SN-1 that are erasable together.

The host 10 accesses the memory 200 when running an application under a file system or operating system. Typically, the host system addresses data in units of logical sectors where, for example, each sector may contain 512 bytes of data. Also, it is usual for the host to read or write to the memory system in units of logical clusters, each consisting of one or more logical sectors. In some host systems, an optional host-side memory manager may exist to perform lower level memory management at the host. In most cases during read or write operations, the host 10 essentially issues a command to the memory system 20 to read or write a segment containing a string of logical sectors of data with contiguous addresses.

A memory-side memory manager is implemented in the controller 100 of the memory system 20 to manage the storage and retrieval of the data of host logical sectors among metablocks of the flash memory 200. In the preferred embodiment, the memory manager contains a number of software modules for managing erase, read and write operations of the metablocks. The memory manager also maintains system control and directory data associated with its operations among the flash memory 200 and the controller RAM 130.

FIGS. 3A(i)-3A(iii) illustrate the mapping between a logical group and a metablock, according to an embodiment. The metablock of the physical memory has N physical sectors for storing N logical sectors of data of a logical group. FIG. 3A(i) shows the data from a logical group LGi, where the logical sectors are in contiguous logical order 0, 1, ..., N-1. FIG. 3A(ii) shows the same data being stored in the metablock in the same logical order. The metablock when stored in this manner is said to be "sequential." In general, the metablock may have data stored in a different order, in which case the metablock is said to be "non-sequential" or "chaotic."

There may be an offset between the lowest address of a logical group and the lowest address of the metablock to which it is mapped. In this case, the logical sector address wraps around as a loop from the bottom back to the top of the logical group within the metablock. For example, in FIG. 3A(iii), the metablock stores in its first location beginning with the data of logical sector k. When the last logical sector N-1 is reached, it wraps around to sector 0 and finally stores data associated with logical sector k-1 in its last physical sector. In the preferred embodiment, a page tag is used to
identify any offset, such as identifying the starting logical sector address of the data stored in the first physical sector of the metablock. Two blocks will be considered to have their logical sectors stored in similar order when they only differ by a page tag.
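As a worked example of the wrap-around just described, the physical sector that holds a given logical sector follows directly from the page tag k; this small function is an illustrative sketch, not text from the patent.

    def physical_sector(logical_sector, page_tag, n_sectors):
        """Physical slot of a logical sector in a metablock whose first
        physical sector holds logical sector `page_tag` (FIG. 3A(iii))."""
        return (logical_sector - page_tag) % n_sectors

    # With N = 8 and page tag k = 3: logical sector 3 lands in physical
    # sector 0, and logical sector 2 (i.e., k-1) wraps around into the
    # last physical sector, 7.
    assert physical_sector(3, 3, 8) == 0
    assert physical_sector(2, 3, 8) == 7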
FIG. 3B illustrates the mapping between logical groups and metablocks. Each logical group is mapped to a unique metablock, except for a small number of logical groups in which data is currently being updated. After a logical group has been updated, it may be mapped to a different metablock. The mapping information is maintained in a set of logical to physical directories, such as a group address table and group address table cache, as described below.
FIG. 4 is a block diagram of the metablock management system as implemented in the controller and flash memory. The metablock management system comprises various functional modules implemented in the controller 100 and maintains various control data in tables and lists in the flash memory 200 and the controller RAM 130. The function modules implemented in the controller 100 include an interface module 110 and a logical-to-physical address translation module 140. The interface 110 allows the metablock management system to interface with a host system. The logical-to-physical address translation module 140 maps the logical address from the host to a physical memory location.

During operation the metablock management system generates and works with control data such as addresses, control and status information. Since much of the control data tends to be frequently changing data of small size, it cannot be readily stored and maintained efficiently in a flash memory with a large block structure. A hierarchical and distributed scheme may be employed to store the more static control data in the nonvolatile flash memory while locating the smaller amount of the more varying control data in controller RAM for more efficient update and access. In the event of a power shutdown or failure, the scheme allows the control data in the volatile controller RAM to be rebuilt quickly by scanning a small set of control data in the nonvolatile memory.
The non-volatile flash memory 200 may store control data such as the group address table (GAT) 210 and the group address table cache (GAT Delta) 220. The GAT 210 keeps track of the mapping between logical groups of sectors and their corresponding metablocks. The GAT 210 contains one entry for each logical group, ordered sequentially according to logical address. The GAT 210 includes a plurality of pages, with each page including entries defining metablock addresses for every logical group in the memory system. The GAT Delta 220 acts as a cache that is a list of changed entries in the mappings of the GAT 210. In one embodiment, the GAT 210 and GAT Delta 220 are both stored in the flash memory 200. Flushing of changed entries from the GAT Delta 220 to the GAT 210 takes place within the flash memory 200 in this embodiment.
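The relationship between the GAT and the GAT Delta can be pictured with simple in-memory data structures. The dictionary-based GAT Delta and per-page GAT arrays below are illustrative modeling choices, not the patent's on-flash layout.

    # Illustrative in-memory model of the GAT and GAT Delta; the real
    # structures live in flash memory 200 with a page-based layout.
    ENTRIES_PER_PAGE = 416    # matches the exemplary GAT pages in FIG. 5

    # GAT: one metablock address per logical group, ordered by logical
    # address, split into pages of 416 entries each.
    gat = [[None] * ENTRIES_PER_PAGE for _ in range(4)]   # Pages 0..3

    # GAT Delta: a list of changed entries (logical group -> metablock).
    gat_delta = {}

    def lookup(logical_group):
        """The GAT Delta supersedes the GAT, so it is consulted first."""
        if logical_group in gat_delta:
            return gat_delta[logical_group]
        page, offset = divmod(logical_group, ENTRIES_PER_PAGE)
        return gat[page][offset]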
In some embodiments, the RAM 130 may include a GAT Delta Copy 132. The GAT Delta Copy 132 may contain the same list of changed entries as in the GAT Delta 220. Periodically, the controller may synchronize the GAT Delta Copy 132 and the GAT Delta 220 so that they contain the same information. This process is detailed more below in reference to FIG. 9.
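A minimal sketch of that synchronization follows. The patent only says the controller synchronizes the two copies "periodically," so the write-count trigger and all names here are assumptions for illustration.

    # Illustrative sketch of keeping the GAT Delta Copy 132 (RAM) and the
    # GAT Delta 220 (flash) in sync, as outlined for FIG. 9.
    gat_delta_flash = {}      # stands in for GAT Delta 220 in flash
    gat_delta_ram = {}        # stands in for GAT Delta Copy 132 in RAM
    FLUSH_EVERY = 32          # hypothetical trigger: every 32 RAM writes
    writes_since_flush = 0

    def write_entry(logical_group, metablock):
        """Record a changed entry in RAM first; mirror to flash periodically."""
        global writes_since_flush
        gat_delta_ram[logical_group] = metablock
        writes_since_flush += 1
        if writes_since_flush >= FLUSH_EVERY:
            gat_delta_flash.clear()
            gat_delta_flash.update(gat_delta_ram)   # one flash update, not per write
            writes_since_flush = 0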
FIGS. 5-7 illustrate the group address table (GAT) block 500 in (1) an initial state with an empty GAT Delta; (2) after logical groups are re-written with an updated GAT Delta; and (3) after the GAT Delta is partially flushed to the GAT. FIG. 8 is a flow diagram illustrating a method 800 of maintaining a group address table and group address table cache that map logical groups to physical addresses. Each of the steps
described in FIG. 8 for the method 800 may be performed alone or in combination with other steps.
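The numbered steps of method 800 can be summarized in code. The Python sketch below mirrors the steps of FIG. 8; reducing steps 808-810 to flushing the single most-changed page is one policy consistent with the description, not a mandated choice, and the names are assumptions.

    ENTRIES_PER_PAGE = 416   # exemplary page size from FIG. 5

    def method_800(gat, gat_delta, logical_group, metablock, criteria_met):
        """Sketch of FIG. 8. `gat` maps page number -> {logical group:
        metablock}; `criteria_met` stands in for the flushing threshold
        test of step 806."""
        # Step 802: a request to set a GAT entry arrives.
        if logical_group in gat_delta:                 # step 804: entry exists?
            gat_delta[logical_group] = metablock       # step 814: update in place
            return
        if criteria_met(gat_delta):                    # step 806
            page = select_page_to_update(gat_delta)    # step 808
            flush_page(gat, gat_delta, page)           # step 810
        gat_delta[logical_group] = metablock           # steps 812 and 814

    def select_page_to_update(gat_delta):
        """Step 808: pick the page with the most changed entries (one policy)."""
        pages = [lg // ENTRIES_PER_PAGE for lg in gat_delta]
        return max(set(pages), key=pages.count)

    def flush_page(gat, gat_delta, page):
        """Step 810: move only the selected page's changed entries to the GAT."""
        for lg in [lg for lg in gat_delta if lg // ENTRIES_PER_PAGE == page]:
            gat.setdefault(page, {})[lg] = gat_delta.pop(lg)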
FIG. 5 illustrates the group address table (GAT) block 500 in an initial state. The GAT block 500 includes a master index page 502 containing a free block list (FBL) 504, a GAT Delta 506, and a GAT 508 that includes a plurality of pages, where each page includes entries mapping metablock addresses for logical groups that have been written to. The FBL 504 lists available free blocks that may be later mapped to logical groups. The FBL 504 may be in the order the free blocks were previously allocated. In FIG. 5, the exemplary FBL 504 lists metablocks F, G, H, J, and K as free blocks.

In an initial state of the memory system, written logical groups are already assigned to physical metablocks in entries of the pages in the GAT 508. The exemplary GAT 508 in FIG. 5 includes Pages 0, 1, 2, and 3 that each has 416 entries corresponding to logical groups. The GAT Delta 506 is empty in the initial state because no changes have been made to the GAT 508 yet. In other words, in the initial state shown in FIG. 5, the GAT 508 contains the most updated mapping for the logical groups to physical metablocks.
FIG. 6 illustrates the GAT block 500 after data is re-written by the host. The old copy of the master index page 502 shown in FIG. 5 is not shown in FIG. 6. When data is written from the host, a request to set an entry in the GAT 508 mapping a logical group to a physical metablock may be received at step 802 of the method 800 shown in FIG. 8. Because the GAT Delta 506 may contain more recent mapping information than the GAT 508, the GAT Delta 506 is checked to see if the entry specified in the set request already exists at step 804. If the entry already exists at step 804, then it is updated with the new mapping information from the request at step 814. In some embodiments, if the entry already exists in the GAT Delta 506, the corresponding page in the GAT 508 containing the logical group for the already existing entry may be updated immediately from the GAT Delta 506. In other embodiments, if the entry already exists in the GAT Delta 506, a new entry may be allocated and set in the GAT Delta 506 for the logical group specified in the request.
However, if the entry does not already exist in the GAT Delta 506 at step 804, the entry may be allocated and set if a flushing threshold criteria is not met at step 806. In this case, the entry is allocated and set in the GAT Delta 506 based on the request at steps 812 and 814. The flushing threshold criteria includes the GAT Delta 506 reaching its maximum capacity, if one or more pages in the GAT 508 have a number of changed entries above a threshold, or after a certain time period has elapsed. Flushing threshold criteria may also include doing a preemptive flush if the master index page 502 is updated for other reasons.

Another flushing threshold criteria may include a case when the GAT 508 is compacted. In one embodiment, when the GAT 508 is compacted, valid pages in the GAT 508 are copied from the compacted GAT block and the updated indices for the GAT 508 are written to the master index page 502. In another embodiment, when the GAT 508 is compacted, pages in the GAT 508 are copied and updated with changed entries from the GAT Delta 506. In this compaction case, the GAT Delta 506 may be partially or fully flushed after compaction and updating of the GAT 508. At a minimum, in this compaction case, the GAT Delta 506 would no longer include any entries for the pages in the GAT 508 that were compacted.
Therefore, if the entry does not exist in the GAT Delta 506 and the flushing threshold criteria is not met at step 806, the GAT Delta 506 is not flushed and synchronized with the GAT 508. When the host rewrites data to logical groups, the corresponding physical metablock the data is written to is
recorded in entries in the GAT Delta 506 at steps 812 and 814 instead of directly in the GAT 508. For example, in FIG. 6, the host rewrites logical groups 410, 411, 520, 413, and 1101. A request to set entries for these logical groups may be received at step 802. Because the GAT Delta 506 is empty at step 804, the entries for these logical groups do not already exist in the GAT Delta 506 of FIG. 5. In addition to the entries' nonexistence in the GAT Delta 506, because the flushing threshold criteria for the GAT Delta 506 is not met at step 806, the entries for the logical groups are allocated and set at steps 812 and 814. Following steps 812 and 814, the GAT Delta 506 is no longer empty and includes entries mapping logical groups 410, 411, 520, 413, and 1101 to physical metablocks F, G, H, J, and K, respectively. The memory controller writes the data for these blocks into free blocks F, G, H, J, and K, respectively, based on the available free blocks listed in FBL 504.

The logical groups 410, 411, 520, 413, and 1101 had previously been mapped to physical metablocks A, B, C, D, and E, respectively. At this point, the GAT 508 still contains this original mapping; however, the entries for these logical groups in the GAT 508 are now superseded by the entries in the GAT Delta 506. FIG. 6 shows the superseded physical metablocks A, B, C, D, and E in the GAT 508 as grayed out. Because physical metablocks A, B, C, D, and E no longer contain valid data, the FBL 504 now lists these metablocks as free blocks.
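The FIG. 6 example can be replayed numerically. The snippet below uses plain dictionaries and a list as stand-ins for the GAT 508, GAT Delta 506, and FBL 504; this is an illustrative model, not the on-flash format.

    # Replaying the FIG. 6 example with plain data structures.
    gat_delta = {}                                  # GAT Delta 506: empty (FIG. 5)
    gat = {410: 'A', 411: 'B', 520: 'C',
           413: 'D', 1101: 'E'}                     # relevant GAT 508 entries
    fbl = ['F', 'G', 'H', 'J', 'K']                 # FBL 504: free blocks

    # Host rewrites logical groups 410, 411, 520, 413, 1101 (steps 802-814):
    for lg in [410, 411, 520, 413, 1101]:
        freed = gat[lg]              # old metablock (A..E) no longer holds valid data
        gat_delta[lg] = fbl.pop(0)   # new mapping recorded in the GAT Delta
        fbl.append(freed)            # superseded block returns to the free list

    assert gat_delta == {410: 'F', 411: 'G', 520: 'H', 413: 'J', 1101: 'K'}
    assert fbl == ['A', 'B', 'C', 'D', 'E']         # shown grayed out in FIG. 6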
The master index page 502 that includes the FBL 504, GAT Delta 506, and GAT 508 may be updated in a single write operation and/or be contained within a single data structure. The master index page 502 may also include other information that is updated in the single write operation, e.g., update block information (e.g., logical groups that have been updated, block locations, and written length), pointers to active binary cache blocks, indices for pages of the GAT 508, wear leveling counters, and other information. The master index page 502 provides a synchronous snapshot of the tables contained within and does not need to be entirely rewritten after every write to a page of the GAT 508.

All physical block references in the master index page 502 are updated in the single write operation such that there are no lost blocks or double-referenced blocks. All physical blocks are referenced by the GAT 508, the master index page 502, or in control blocks (e.g., pointers to blocks in the GAT 508, binary cache blocks, etc.). When a block is taken from the FBL 504, the block may optionally be referenced in the update block information in the master index page 502. The block taken from the FBL 504 is then referenced by the GAT Delta 506 and the GAT 508. The reference for a block is changed in multiple places when a single write operation is performed on the master index page 502, e.g., in the FBL 504, where a new allocated block disappears and a new free block appears, and in the GAT 508 as a new reference. Therefore, instead of updating the FBL 504 and the GAT 508 at the same time using an atomic write operation with separate writes, a single write operation on the master index page 502 collectively updates the FBL 504, the GAT 508, block information, and the logical-to-physical table.
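One way to picture the single-write update is as a commit that serializes all of the cross-referencing tables into one page image, so that block references can never be observed half-updated. The sketch below is an illustrative model of that atomicity only; the serialization format and the `flash.write_page` interface are assumptions, not the actual flash layout.

    import json

    def commit_master_index(fbl, gat_delta, gat_page_indices, flash):
        """Sketch of the single-write update: the FBL, GAT Delta, and GAT
        page indices are serialized into one master index page image and
        written in a single operation, keeping their block references
        mutually consistent (no lost or double-referenced blocks).
        `flash` is a stand-in object with a write_page(data) method."""
        snapshot = json.dumps({
            'fbl': fbl,                     # free block list 504
            'gat_delta': gat_delta,         # changed entries 506
            'gat_pages': gat_page_indices,  # indices for pages of the GAT 508
        })
        flash.write_page(snapshot)          # one write commits everything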
FIG. 7 illustrates the GAT block 500 after a GAT update, including the old copy of the master index page 502 from FIG. 6. The flushing threshold criteria may trigger a partial update of the GAT 508 with the corresponding changed entries from the GAT Delta 506. For example, as shown in the GAT block 500 of FIG. 7, if (1) a request to set an entry in the GAT 508 mapping a logical group to a physical metablock is received at step 802; (2) the entry does not already exist in the GAT Delta 506 at step 804; and (3) the flushing threshold criteria is met at step 806, then a partial update of the GAT 508 occurs at steps 808 and 810.