United States Patent [19]
Bruce et al.

[11] Patent Number: 6,000,006
[45] Date of Patent: Dec. 7, 1999

[54] UNIFIED RE-MAP AND CACHE-INDEX TABLE WITH DUAL WRITE-COUNTERS FOR WEAR-LEVELING OF NON-VOLATILE FLASH RAM MASS STORAGE

[75] Inventors: Ricardo H. Bruce, Union City; Rolando H. Bruce, South San Francisco; Earl T. Cohen; Allan J. Christie, both of Fremont, all of Calif.

[73] Assignee: BIT Microsystems, Inc., Fremont, Calif.

[21] Appl. No.: 08/918,203

[22] Filed: Aug. 25, 1997

[51] Int. Cl.6 .................... G06F 12/10
[52] U.S. Cl. ..................... 711/103; 711/206
[58] Field of Search .............. 711/103, 206; 712/37; 365/230.03, 185.33, 218

[56] References Cited

U.S. PATENT DOCUMENTS
5,222,046   6/1993  Kreifels et al.    365/230.06
5,297,148   3/1994  Harari et al.      371/10.2
5,341,339   8/1994  Wells              365/218
5,371,709  12/1994  Fisher et al.      365/226
5,379,401   1/1995  Robinson et al.    395/425
5,388,083   2/1995  Assar et al.       365/218
5,396,468   3/1995  Harari et al.      365/218
5,406,529   4/1995  Asano              365/230.03
5,432,748   7/1995  Hsu et al.         365/230.01
5,448,577   9/1995  Wells et al.       371/10.1
5,459,850  10/1995  Clay et al.        395/497.02
5,479,638  12/1995  Assar et al.       395/430
5,485,595   1/1996  Assar et al.       395/430
5,488,711   1/1996  Hewitt et al.      395/430
5,500,826   3/1996  Hsu et al.         365/230.01
5,509,134   4/1996  Fandrich et al.    395/430
5,513,138   4/1996  Manabe et al.      365/185.33
5,524,231   6/1996  Brown              395/428
5,530,828   6/1996  Kaki et al.        395/430
5,535,328   7/1996  Harari et al.      395/182.05
5,535,356   7/1996  Kim et al.         395/430
5,542,082   7/1996  Solhjell           395/442
5,548,741   8/1996  Watanabe           395/442
5,559,956   9/1996  Sukegawa           395/182.06
5,568,423  10/1996  Jou et al.         365/185.33
5,568,439  10/1996  Harari             365/218
5,572,466  11/1996  Sukegawa           365/185.33
5,594,883   1/1997  Pricer             395/440
5,602,987   2/1997  Harari et al.      395/182.06
5,603,001   2/1997  Sukegawa et al.    395/430
5,606,529   2/1997  Honma et al.       365/230.03
5,606,532   2/1997  Lambrache et al.   365/238.5
5,619,470   4/1997  Fukumoto           365/228
5,627,783   5/1997  Miyauchi           365/185.33
5,640,349   6/1997  Kakinuma et al.    365/185.33
5,737,742   4/1998  Achiwa et al.      711/103
5,802,554   9/1998  Caceres et al.     711/103
5,819,307  10/1998  Iwamoto et al.     711/103
Primary Examiner: Eddie P. Chan
Assistant Examiner: Gary J. Portka
Attorney, Agent, or Firm: Stuart T. Auvinen

[57] ABSTRACT
A flash-memory system provides solid-state mass storage as a replacement for a hard disk. A unified re-map table in RAM is used to arbitrarily re-map all logical addresses from a host system to physical addresses of flash-memory devices. Each entry in the unified re-map table contains a physical block address (PBA) of the flash memory allocated to the logical address, a cache valid bit, and a cache index. When the cache valid bit is set, the data is read from or written to a line in the cache pointed to by the cache index. A separate cache tag RAM is not needed. When the cache valid bit is cleared, the data is read from the flash memory block pointed to by the PBA. Two write-count values are stored with the PBA in the table entry. A total-write count indicates the total number of writes to the flash block since manufacture. An incremental-write count indicates the number of writes since the last wear-leveling operation that moved the block. Wear-leveling is performed on a block being written when both the total and incremental counts exceed system-wide total and incremental thresholds. The incremental-write count is cleared after a block is wear-leveled, but the total-write count is never cleared. The incremental-write count prevents moving a block again immediately after wear-leveling. The thresholds are adjusted as the system ages to provide even wear.

19 Claims, 7 Drawing Sheets
[Front-page figure: a unified re-map table entry (physical block address, incremental-write count, cache valid, cache index, other fields) feeding an INCR_THRESH comparison and wear-level controller 70, and selecting between the cache and the non-volatile flash memory (NVMEM).]
[Drawing Sheet 1 of 7: FIG. 1 (prior art) shows NVRAM 10 with 512-byte data fields 12 and 16-byte pointer fields 14; defective page 1 is re-mapped by its pointer field to spare page 6 at address 110. FIG. 2 (prior art) shows bit-map table 15 between logical and physical block addresses, with physical block 001 marked bad and spare block 1001 used as a substitute.]

[Drawing Sheet 2 of 7: FIG. 3 shows the re-map table translating logical block addresses to physical block addresses, with defective PBA 001 unused and good PBA 1000 held as a spare. FIG. 4 shows the flash-memory system: host requests pass through unified re-map table 20 to cache (RAM) 22 or to NVMEM devices 24, 26 over bus 32, with backup re-map table 28 and block pointers 30 stored in flash.]

[Drawing Sheet 3 of 7: FIG. 5 shows re-map table entry 50 with physical block address field 44, total-writes field 46, incremental-writes field 48, cache valid field 52, cache index field 54, and other field 56. FIG. 6 shows the entry selecting between cache 22 and NVMEM, with the write counts compared against TOT_THRESH and INCR_THRESH by wear-level controller 70.]

[Drawing Sheets 4-6 of 7: FIGS. 7A-7E show example re-map tables listing LBA, PBA, total writes, and incremental writes for successive wear-leveling operations, with TOT_THRESH = 250,000 and INCR_THRESH = 25,000 (TOT_THRESH raised to 300,000 by FIG. 7E). FIG. 8 shows the flash-block layout: 512-byte data fields 102, ECC fields 104, and one system byte 106 per page, giving 16 bytes per block for the LBA, total-write, and incremental-write values.]

[Drawing Sheet 7 of 7: FIG. 9 shows a multi-level re-mapping table: a master directory pointing to 2K-entry sub-directory blocks stored in NVMEM devices 24, 26.]
UNIFIED RE-MAP AND CACHE-INDEX TABLE WITH DUAL WRITE-COUNTERS FOR WEAR-LEVELING OF NON-VOLATILE FLASH RAM MASS STORAGE
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to non-volatile memory storage systems, and more particularly to wear-leveling, caching, and re-mapping of flash memory.

2. Description of the Related Art
Non-volatile semiconductor memory is useful for relatively long-term storage in computer systems. Often the computer's hard disk is replaced by an array of non-volatile random-access memories (NVRAM's) or non-volatile flash memories. Battery-backed DRAM is sometimes used. These memory devices use electrically-erasable programmable read-only-memory (EEPROM) technology for storage cells. Floating polysilicon gates in these storage cells retain charge and state when power is lost, providing non-volatile storage.
Flash EEPROM chips are divided into pages and blocks. A 64-Mbit flash chip typically has 512-byte pages, which happens to match the sector size for IDE and SCSI hard disks. Rather than writing just one word in a page, the entire page must be written at the same time; individual bytes cannot be written. The page must be cleared of any previous data before being written; clearing is accomplished by a flash erase cycle. An entire block of pages (typically 16 pages) is erased at once. Thus a block of 16 pages must be erased together, while all 512 bytes on a page must be written together. EEPROM memory cells are not as reliable as static or dynamic RAM cells. Indeed, one or two percent of the pages on a new flash chip are usually defective when sold to the customer. EEPROM cells wear out as they are written and erased because some charge is trapped in the insulating oxide layers surrounding the floating gate. Eventually this trapped charge increases until it prevents an applied voltage from sufficiently reading the cell. Thus a scheme is needed to identify such defective memory cells and replace them with good cells.
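To make these write and erase granularities concrete, the following minimal C sketch models a hypothetical flash part with 512-byte pages and 16-page blocks. The constants, type, and helper names are assumptions chosen for illustration only; real devices are programmed through their own command interfaces.

    #include <stdint.h>
    #include <string.h>

    /* Assumed geometry from the text: 512-byte pages, 16 pages per block. */
    #define PAGE_SIZE        512
    #define PAGES_PER_BLOCK  16
    #define BLOCK_SIZE       (PAGE_SIZE * PAGES_PER_BLOCK)

    /* A block modeled in RAM for illustration. */
    typedef struct {
        uint8_t bytes[BLOCK_SIZE];
    } flash_block_t;

    /* Erase operates only on a whole block: all 8K bytes return to the erased state. */
    static void flash_erase_block(flash_block_t *blk) {
        memset(blk->bytes, 0xFF, BLOCK_SIZE);
    }

    /* Writes operate only on a whole page: all 512 bytes are programmed together,
     * and the page must have been erased (as part of its block) beforehand. */
    static void flash_write_page(flash_block_t *blk, unsigned page,
                                 const uint8_t data[PAGE_SIZE]) {
        memcpy(&blk->bytes[page * PAGE_SIZE], data, PAGE_SIZE);
    }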
FIG. 1 is a prior-art replacement scheme for re-mapping defective pages of flash memory. Flash chip 10 has a memory array of EEPROM cells arranged into pages. Each page contains a 512-byte data field 12 and an additional 16-byte pointer field 14. Pointer field 14 contains a re-map bit (not shown) that is set when a defect is found in any of the 512 bytes of data in the page. When a page is read, the re-map bit is also read to determine if the page is defective. If defective, a pointer to another page is read from pointer field 14. The new page pointed to contains the replacement data.

For example, page 1 of FIG. 1 is defective. When page 1 at address 001 is read and found to be defective, pointer field 14 is also read. Pointer field 14 for page 1 contains the pointer 110. The address is changed to 110 so that the replacement page at address 110 is read to obtain the data for page 1. Thus address 001 to page 1 is re-mapped by pointer field 14 to replacement page 6 at address 110. Page 6 was initially reserved as a spare page for later replacement of a defective page.
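A minimal sketch of this prior-art lookup is shown below. The page layout and the read_page() accessor are assumptions for illustration; only the control flow follows the text: read the page, test the re-map bit, and follow pointer field 14 to the replacement page when the page is defective.

    #include <stdint.h>

    /* Assumed per-page layout after FIG. 1: a 512-byte data field plus a 16-byte pointer field. */
    typedef struct {
        uint8_t  data[512];     /* data field 12 */
        uint8_t  remap_bit;     /* set when this page is defective */
        uint32_t remap_addr;    /* pointer to the replacement page */
        uint8_t  spare[11];     /* rest of pointer field 14 */
    } flash_page_t;

    /* Assumed accessor supplied by the flash driver. */
    extern const flash_page_t *read_page(uint32_t page_addr);

    /* Prior-art read path: a defective page costs an extra flash read. */
    const uint8_t *read_data_prior_art(uint32_t page_addr) {
        const flash_page_t *p = read_page(page_addr);
        if (p->remap_bit) {
            p = read_page(p->remap_addr);   /* follow pointer field 14 */
        }
        return p->data;
    }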
An extra read of the flash memory may be necessary to read the bad data along with the pointer. Another problem with using pointer field 14 for re-mapping is that the 16 bytes of the pointer field must be good. When a defect occurs in both the data field 12 of a page and in its pointer field 14, the defective page cannot be re-mapped and bad data can be read. It is thus critical that pointer field 14 contain good memory cells.
FIG. 2 shows a prior-art re-mapping scheme using a bit-map table. To work around the problems of using the pointer fields in the flash memory pages, a separate re-map table in SRAM is used. SRAM re-map table 15 is accessed when the flash memory chip is accessed. Most logical addresses are simply passed through without translation or re-mapping, since the corresponding bit in re-map table 15 is a zero. However, when a logical address's entry in re-map table 15 is a one, the logical address is re-mapped to a different physical address so that a replacement page is accessed.

When a one is encountered when reading re-map table 15, a separate address re-map table (not shown) is consulted to find the physical address. This address re-map table is typically stored in the last block of each flash device. Unfortunately, this scheme requires that the last block of the flash device be defect-free and never wear out; otherwise the address re-map table can re-map to the wrong page.
More complex re-mapping or address-translation tables have been used with flash memory devices. Assar et al., in U.S. Pat. Nos. 5,479,638 and 5,485,595, assigned to Cirrus Logic of Fremont, Calif., teach a content-addressable memory for a re-map table, using CAM, EEPROM, DRAM, or SRAM cells. This table includes an erase-counter field which is incremented as a page is erased and written. Once a page reaches a threshold erase count, it is moved to an unused page. Once all unused pages are depleted, a clean-out erase cycle is performed to clear all erase counters to zero. The process then repeats. Thus the wear on pages of flash memory is spread out, or leveled, among all pages.

Periodically clearing the erase counters is undesirable because there is no way to determine the total number of erase/write cycles to a given block of flash memory. Some blocks can be used heavily while others are lightly used; yet their erase counters are periodically cleared to zero regardless of the usage. Thus while wear-leveling occurs between erase-counter clears, longer-term wear-leveling beyond the erase-counter clears is not performed.

Not periodically clearing the erase counters allows the total number of erases and writes to be stored, but then pages can be swapped unnecessarily when their erase counts hover around the threshold. This thrashing of pages is similar to the problem seen in processor caches.
Other flash memory systems with wear-leveling are taught by Kaki et al. in U.S. Pat. No. 5,530,828, assigned to Hitachi of Tokyo, Japan, and Harari et al. in U.S. Pat. No. 5,602,987, assigned to SanDisk Corp. of Sunnyvale, Calif. Kaki et al. use a management table with a counter that counts bytes written as the 512 bytes are written to a page. Harari et al. use a data cache to buffer writes to the flash memory and reduce the total number of writes to flash.

While these flash memory systems are useful, a more effective flash memory system is desired. A more efficient and exact wear-leveling scheme is desired. It is desired to minimize excess writes to flash memory while re-mapping addresses to pages of flash memory. A unified table for re-mapping, wear-leveling, and caching flash memories is desirable.
SUMMARY OF THE INVENTION

A unified re-mapping table for a flash-memory system has a plurality of entries. An entry is selected by a logical address from a host system.

Each entry in the plurality of entries has a physical-block-address field that contains a physical block address of a block in an array of flash-memory devices. Each flash-memory device contains non-volatile storage cells that retain data when a power supply is no longer applied to the flash-memory device.
A total-write-counter field indicates a total number of write-erase cycles of the block identified by the physical-block-address field. An incremental-write-counter field indicates an incremental number of write-erase cycles since a wear-leveling operation for the block. Both the total number and the incremental number from the entry must exceed thresholds to initiate wear-leveling for the block.

The block is wear-leveled by moving the physical block address and the total number to a different entry in the unified re-mapping table. Thus each entry includes the physical block address for address translation to the flash-memory devices and two write-counters for wear-leveling.
In further aspects of the invention a total-threshold register contains a total-write threshold for the flash-memory system. An incremental-threshold register contains an incremental-write threshold for the flash-memory system. A total compare means is coupled to the total-threshold register and receives the total number from the entry selected by a current logical address. The total compare means activates a first signal when the total number from the entry in the unified re-mapping table exceeds the total-write threshold.

An incremental compare means is coupled to the incremental-threshold register and receives the incremental number from the entry selected by the current logical address. It activates a second signal when the incremental number from the entry in the unified re-mapping table exceeds the incremental-write threshold.

A wear-leveler is activated by a write to the current logical address when both the first signal and the second signal are activated. It wear-levels the block by moving the physical block address and the total number to a different entry in the unified re-mapping table. It replaces the physical block address with a new physical block address for a physical block with a smaller total number of write-erase cycles. Thus the total number from the entry must exceed the total-write threshold and the incremental number must exceed the incremental-write threshold to initiate wear-leveling.
In still further aspects the total-write-counter field is never cleared but continues to increase over a lifetime of the flash-memory system. However, the incremental-write-counter field is reset when the block is allocated to a different logical address and moved to a different entry in the plurality of entries. Thus the total-write-counter field is never reset, but the incremental-write-counter field is reset by wear-leveling.
In further aspects, each page in the flash-memory devices contains a data field and a system field. The system field contains only one byte for mapping, while all other bytes in the system field are available for an error-correction code for the data field. Thus all except one byte of the system field is used for the error-correction code.

In further aspects the one byte for mapping for each page in a block of pages is combined with the one byte for mapping for other pages in the block to form a block pointer. The block pointer contains a copy of the total number and the incremental number of writes from the entry in the unified re-mapping table for the block.

In other aspects the block pointer includes the logical address for data in the data field of pages in the block. Thus the block pointer includes a reverse map, with the logical address mapped to the physical block address for the block.
In further aspects of the invention each entry in the plurality of entries has a cache valid field that indicates when data identified by the logical address resides in a cache. The cache has volatile memory that loses data when power is lost. A cache index field contains a cache index when the cache valid field indicates that the data resides in the cache. The cache index identifies a location in the cache of the data for the logical address. Thus each entry identifies the location in cache for the data or the location in the flash-memory devices.

In other aspects the cache contains only data, while cache tags containing cache addresses are absent for the cache. Thus the cache index in the unified re-mapping table eliminates the cache tags.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a prior-art replacement scheme for re-mapping defective pages of flash memory.

FIG. 2 shows a prior-art re-mapping scheme using a bit-map table.

FIG. 3 is a diagram illustrating re-mapping of all accesses to a flash memory system.

FIG. 4 is a diagram of a flash-memory system using a write-back cache.

FIG. 5 is a diagram of an entry in the unified re-map table for a block of flash memory.

FIG. 6 shows how the entry in the unified re-map table is used to access data in the cache or in flash memory, and how the wear-leveling counters are used to determine when wear-leveling is needed.

FIGS. 7A-7E show examples of wear-leveling operations using dual write counters for total and incremental writes to flash blocks.

FIG. 8 shows that block-granularity rather than page-granularity for re-mapping and wear-leveling provides additional bits for the error-correction code.

FIG. 9 shows a multi-level unified re-mapping table.
DETAILED DESCRIPTION
The present invention relates to an improvement in flash-memory controllers. The following description is presented to enable one of ordinary skill in the art to make and use the invention as provided in the context of a particular application and its requirements. Various modifications to the preferred embodiment will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed.
Dual Write Counters

The inventors have realized that more efficient yet flexible wear-leveling can be accomplished if both the total number of writes and the incremental number of writes to a block are stored. Dual write counters keep track of:

1. the total number of writes to a block over its entire existence, and
2. the incremental number of writes since the block's last wear-leveling operation.

The total-write counter is never reset; the incremental-write counter is reset after each wear-leveling swap. Both the total and incremental write counts are stored for every block. Each time a block is written, its total-write count is compared to a total-write threshold, while the block's incremental-write count is compared to an incremental-write threshold. Both the total and incremental thresholds must be exceeded for a wear-leveling swap to occur.

The incremental-write counter prevents thrashing, where a block is continually swapped back and forth during successive wear-leveling operations. The total-write counter allows the wear-leveling firmware to determine which blocks have had the most writes and are most likely to wear out. The total and incremental thresholds can be adjusted over the life of the flash-memory system, or tweaked to improve performance. Wear-leveling firmware can search for a replacement block that has both a low number of total writes and a low number of recent (incremental) writes.
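A minimal sketch of this dual-threshold test follows; the structure and function names are illustrative assumptions, and the threshold values are taken from the examples of FIGS. 7A-7D.

    #include <stdbool.h>
    #include <stdint.h>

    /* Per-block write counters kept in the block's re-map table entry. */
    typedef struct {
        uint32_t total_writes;        /* never cleared over the life of the system */
        uint32_t incremental_writes;  /* cleared after each wear-leveling swap */
    } wear_counters_t;

    /* System-wide thresholds; both can be raised as the system ages. */
    static uint32_t tot_thresh  = 250000;
    static uint32_t incr_thresh = 25000;

    /* Called on each write to a block: bump both counters, then test both thresholds.
     * The total count finds worn blocks; the incremental count prevents thrashing a
     * block that was only recently moved. */
    static bool write_triggers_wear_leveling(wear_counters_t *c) {
        c->total_writes++;
        c->incremental_writes++;
        return (c->total_writes > tot_thresh) && (c->incremental_writes > incr_thresh);
    }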
Unified Re-Map Table Has Wear-Leveling Counters

The inventors have also realized that the dual write counters are preferably stored in a unified re-map table along with address-translation information. All flash accesses are first routed through the unified re-map table to determine the physical block address of the data in flash memory. This one-step re-mapping is faster for bad pages than the prior-art technique of mechanical mapping followed by reading re-map pointers of bad pages.
Unified Re-Map Table Replaces Cache Tags

Frequently-written data may also be buffered in a cache. The frequent writes can be merged together in a copy-back cache before being written to the flash memory. Such write-buffering by an SRAM or DRAM cache can reduce the number of writes to flash memory, thus extending its life.

Rather than have a separate tag RAM for the cache, the cache tags are merged into the unified re-map table. Since the re-map table is accessed with the logical address, the tag portion of the address is not stored. Instead, the cache index is directly stored in the re-map table, along with the cache valid bits, dirty bits, etc.

A single lookup of the re-map table for a logical address produces the physical block address (PBA) of the data in flash memory, the write counts for wear-leveling, and the cache index (a pointer to the data in the cache) should the data currently be in the cache.
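A sketch of one such unified entry and its single lookup is shown below in C. The field widths, field names, and table size are assumptions; the patent requires only that one entry carry the PBA, both write counts, and the cache valid/index information.

    #include <stdint.h>

    /* One entry of the unified re-map table, indexed directly by the LBA. */
    typedef struct {
        uint32_t pba;           /* physical block address in the flash array  */
        uint32_t total_writes;  /* total writes since manufacture             */
        uint32_t incr_writes;   /* writes since the last wear-leveling swap   */
        uint16_t cache_index;   /* cache line holding this block's data       */
        uint8_t  cache_valid;   /* set when the data resides in the cache     */
        uint8_t  cache_dirty;   /* cache copy modified, not yet written back  */
    } remap_entry_t;

    #define NUM_LBAS 8192       /* assumed logical capacity for the sketch */
    static remap_entry_t remap_table[NUM_LBAS];

    /* A single lookup yields translation, wear counts, and cache state;
     * no separate tag RAM is consulted. */
    static remap_entry_t *remap_lookup(uint32_t lba) {
        return &remap_table[lba];
    }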
Arbitrary Re-Mapping

FIG. 3 is a diagram illustrating re-mapping of all accesses to a flash memory system. Logical block addresses (LBA's) from a user or host system are received and used as an index into a re-map table. The re-map table includes a physical block address (PBA) for each possible logical block address. Since all accesses first pass through the re-map table, logical addresses can be arbitrarily translated into physical addresses for the flash devices. The physical addresses do not have to correspond to the logical addresses, allowing more-efficient wear-leveling to occur. Flash blocks can be swapped and frequently-written addresses allocated to different flash blocks over time without regard to the logical address. The access time is the same whether a block is re-mapped or not.

The physical address space is larger than the logical address space for the flash memory, as can be seen in FIG. 3. Additional blocks of flash memory are needed as flash blocks wear out and must be replaced. More flash memory is needed for system information such as storing non-volatile backups of the re-map table.

Rather than map flash pages, entire blocks are mapped. Since the block is the smallest unit for erase, while the page is the smallest unit for write, it is more efficient to keep block-based wear-leveling and re-mapping information. Thus blocks rather than pages are re-mapped by the unified re-map table. A block is typically composed of 16 pages. A page is usually 512 bytes in size.
FIG. 3 shows that logical block address (LBA) 000 is re-mapped to PBA 100, while LBA 001 is re-mapped to PBA 1001. Defective PBA 001 is not used, while good PBA 1000 is not used but is available as a replacement PBA should one of the other PBA's fail. In actual systems, the LBA's and PBA's have more address bits than shown in FIG. 3, such as 32 bits for the LBA and 32 bits for the PBA.
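Assuming the 16-page, 512-byte-page geometry above, with one host sector per page, a host sector number can be split into the block-level LBA that indexes the table and a page offset within the block, as in the following sketch; the bit positions follow only from those assumed sizes.

    #include <stdint.h>

    #define PAGES_PER_BLOCK   16
    #define PAGE_OFFSET_BITS  4    /* log2(16) bits select the page within a block */

    typedef struct {
        uint32_t lba;          /* logical block address: index into the re-map table */
        uint32_t page_offset;  /* page within the 16-page block */
    } logical_addr_t;

    /* Split a host sector number into the table index and the in-block page offset. */
    static logical_addr_t split_host_sector(uint32_t sector) {
        logical_addr_t a;
        a.lba         = sector >> PAGE_OFFSET_BITS;
        a.page_offset = sector & (PAGES_PER_BLOCK - 1);
        return a;
    }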
Flash-Memory-System Block Diagram-FIG. 4

FIG. 4 is a diagram of a flash-memory system using a write-back cache. Such a system is useful as a mass-storage replacement for a hard disk on a personal computer and is sometimes referred to as a flash "disk", even though no rotating disk is used.

Host requests for access to flash storage are first re-mapped by unified re-map table 20. These host requests contain a logical address which includes a logical-block address (LBA) and an offset address that identifies a page within a flash block, and perhaps a byte within a page for a read access. Some embodiments only read a page or a block at a time and thus do not use offset address bits.

Unified re-map table 20 contains entries, preferably for all LBA's. The entry for the current LBA is read from unified re-map table 20, and a cache-valid bit is checked to determine if the LBA's data is in cache 22. More-recently accessed LBA's have their data stored in cache 22 so that writes to the same pages of flash memory can be merged together before updating the flash memory. Write-caching reduces the number of writes to flash memory and thus reduces wear and improves performance.

When the cache valid bit from the entry retrieved from unified re-map table 20 indicates that a copy of the data is stored in cache 22, the data is either read from or written to cache 22. A cache-index field contained in the entry from unified re-map table 20 is used to locate the data in cache 22. Thus cache 22 does not have a tag RAM as most caches do.

When the cache valid bit indicates that the data is not stored in cache 22, a PBA field in the entry retrieved from unified re-map table 20 is used to access the data in flash memory. Flash memory is contained in several non-volatile (NV) flash memories 24, 26, which can be individual flash-memory chips or cards of flash-memory chips such as flash PC cards. Bus 32 is used to access flash memories 24, 26.

Since unified re-map table 20 is contained in volatile SRAM memory, a back-up copy 28 of the re-map table is stored in flash memory 26. When power fails, unified re-map table 20 can be re-loaded from back-up copy 28. A second back-up copy may also be stored in flash memory 26. A third level of back-up protection is available by reconstructing unified re-map table 20 from block-pointer fields 30 at the end of each block in flash memories 24, 26.
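The read path described above might be sketched as follows. The entry type is a trimmed view of the hypothetical layout given earlier, and the cache and flash accessors are assumed stand-ins for the controller logic; the only point carried over from the text is that one table lookup chooses between cache 22 and the flash devices.

    #include <stdint.h>

    #define PAGE_SIZE 512

    /* Minimal view of a unified re-map table entry (fields named after FIG. 5). */
    typedef struct {
        uint32_t pba;          /* PBA field 44         */
        uint16_t cache_index;  /* cache index field 54 */
        uint8_t  cache_valid;  /* cache valid field 52 */
    } remap_entry_t;

    /* Assumed accessors for the table, the cache RAM, and the flash array. */
    extern const remap_entry_t *remap_lookup(uint32_t lba);
    extern void cache_read(uint16_t cache_index, uint32_t page, uint8_t out[PAGE_SIZE]);
    extern void flash_read(uint32_t pba, uint32_t page, uint8_t out[PAGE_SIZE]);

    /* Read one page for the host: the single lookup selects cache or flash. */
    void read_page_for_host(uint32_t lba, uint32_t page, uint8_t out[PAGE_SIZE]) {
        const remap_entry_t *e = remap_lookup(lba);
        if (e->cache_valid) {
            cache_read(e->cache_index, page, out);  /* data is in cache 22 */
        } else {
            flash_read(e->pba, page, out);          /* go to flash via the PBA */
        }
    }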
Unified Re-Map Table Entry-FIG. 5

FIG. 5 is a diagram of an entry in the unified re-map table for a block of flash memory. The entry includes the address translation, a cache valid bit, and dual write counters for total and incremental writes to the block. Unified re-map table 20 contains a plurality of entries 50, each for a different logical block address (LBA). The LBA, or a portion of the LBA, is used as an index into unified re-map table 20 to select one of the entries 50.

Entry 50 contains physical block address (PBA) field 44, which contains the physical address of the flash memory device containing the data of the LBA. This physical address includes a portion that identifies a flash memory chip or card, and another portion that identifies a block within a flash chip.

Wear-leveling information is contained in dual write-counter fields 46, 48 in entry 50 for each physical flash block. As physical blocks, and thus PBA field 44, are swapped for wear-leveling, the values in counter fields 46, 48 are also moved to other entries along with the contents of PBA field 44. Total-write-counter field 46 contains a value representing the total number of writes to the physical block since system manufacture. Incremental-write-counter field 48 contains the number of writes since this PBA was allocated to the LBA, or since it was last wear-leveled. Incremental-write-counter field 48 is preferably cleared when physical flash blocks are re-assigned to an LBA, but the total-write count from total-write-counter field 46 is transferred over from the old entry to the entry for the newly allocated LBA.

Cache valid field 52 contains a valid bit that is set when the LBA's data resides in a cache. Cache index field 54 then contains an index or pointer to the data in the cache. Cache valid field 52 may also contain a dirty bit to indicate when the data in the cache has been modified but not yet copied back to the flash memory. Other cache-management information can be contained in cache valid field 52, or in other field 56.
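One plausible reading of this bookkeeping during a wear-leveling swap is sketched below: the PBA and its total-write count travel together to the new entry, and both incremental counts restart because both physical blocks have just been re-assigned. The entry layout and the choice of swap partner are assumptions for illustration.

    #include <stdint.h>

    typedef struct {
        uint32_t pba;           /* PBA field 44                                   */
        uint32_t total_writes;  /* total-write-counter field 46: never reset      */
        uint32_t incr_writes;   /* incremental-write-counter field 48: reset here */
    } remap_counters_t;

    /* Swap the physical blocks behind two logical addresses. Each total-write
     * count follows its physical block to the new entry; both incremental
     * counts are cleared because both blocks were just wear-leveled. */
    static void wear_level_swap(remap_counters_t *hot_entry, remap_counters_t *cold_entry) {
        remap_counters_t tmp = *hot_entry;

        hot_entry->pba           = cold_entry->pba;
        hot_entry->total_writes  = cold_entry->total_writes;
        hot_entry->incr_writes   = 0;

        cold_entry->pba          = tmp.pba;
        cold_entry->total_writes = tmp.total_writes;
        cold_entry->incr_writes  = 0;
    }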
Use of Unified Wear-Leveling, Cache & Flash Entry-FIG. 6

FIG. 6 shows how the entry in the unified re-map table is used to access data in the cache or in flash memory, and how the wear-leveling counters are used to determine when wear-leveling is needed. An entry 50 from the unified re-map table is selected by a logical block address (LBA).

Cache valid field 52 is used to determine when data in cache 22 is valid for the entry's LBA. When cache valid field 52 indicates that the cache is valid, cache index field 54 is used to find the data in cache 22. Otherwise, the physical block address from PBA field 44 is used to access the data in flash memory 24.

To determine when wear-leveling is required, the total writes for this physical block in
