`
`Series Editor
`Anantha Chandrakasan
`Massachusetts Institute of Technology
`Cambridge, Massachusetts
`
`For further volumes, go to
`http://www.springer.com/series/7236
`
`Limestone Memory Systems, LLC – Exhibit 2013, p. 1
`
`
`
`
`
`
Masashi Horiguchi · Kiyoo Itoh
`
`Nanoscale Memory Repair
`
`
`
`
`Dr. Masashi Horiguchi
`Renesas Electronics Corporation
`5-20-1, Josuihon-cho
`Kodaira-shi, Tokyo, 187-8588
`Japan
`masashi.horiguchi.kc@renesas.com
`
`Dr. Kiyoo Itoh
`Hitachi Ltd.
`Central Research Laboratory
`1-280, Higashi-Koigakubo
`Kokubunji-shi, Tokyo, 185-8601
`Japan
`kiyoo.itoh.pt@hitachi.com
`
ISSN 1558-9412
ISBN 978-1-4419-7957-5
e-ISBN 978-1-4419-7958-2
DOI 10.1007/978-1-4419-7958-2
Springer New York Dordrecht Heidelberg London
`
© Springer Science+Business Media, LLC 2011
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street,
New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis.
Use in connection with any form of information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they
are not identified as such, is not to be taken as an expression of opinion as to whether or not they are
subject to proprietary rights.
`
`Printed on acid-free paper
`
Springer is part of Springer Science+Business Media (www.springer.com)
`
`
`
`
`Preface
`
`Repair techniques for nanoscale memories are becoming more important to cope
`with ever-increasing “errors” causing degraded yield and reliability. In fact, without
repair techniques, even modern CMOS LSIs, such as MPUs/SoCs, in which memories
have dominated the chip area and performance, could not have been successfully
`designed. Indeed, various kinds of errors have been prominent with larger capacity,
`smaller feature size, and lower voltage operations of such LSIs. The errors are
`categorized as hard/soft errors, timing/voltage margin errors, and speed-relevant
`errors. Hard/soft errors and timing/voltage margin errors, which occur in a chip, are
`prominent in a memory array because the array comprises memory cells having
`the smallest size and largest circuit count in the chip. In particular, coping with the
`margin errors is vital for low-voltage nanoscale LSIs, since the errors rapidly
`increase with device and voltage scaling. Increase in operating voltage is one of
`the best ways to tackle the issue. However, this approach is unacceptable due
`to intolerably increased power dissipation, calling for other solutions by means of
`devices and circuits. Speed-relevant errors, which are prominent at a lower voltage
`operation, comprise speed-degradation errors of the chip itself and intolerably wide
`chip-to-chip speed-variation errors caused by the ever-larger interdie design-
`parameter variation. They must also be solved with innovative devices and circuits.
`For the LSI industry, in order to flourish and proliferate, the problems must be
`solved based on in-depth investigation of the errors.
`Despite the importance, there are few authoritative books on repair techniques
`because the solutions to the problems lie across different fields, e.g., mathematics
`and engineering, logic and memories, and circuits and devices. This book system-
`atically describes the issues, based on the authors’ long careers in developing
`memories and low-voltage CMOS circuits. This book is intended for both students
`and engineers who are interested in the yield, reliability, and low-voltage operation
`of nanoscale memories. Moreover, it is instructive not only to memory designers,
`but also to all digital and mixed-signal LSI designers who are at the leading edge of
`such LSI developments.
`Chapter 1 describes the basics of repair techniques. First, after categorizing
`sources of hard/soft errors, the reductions by means of redundancy, error checking
`and correction (ECC), and their combination are comprehensively described. Sec-
`ond, after defining the minimum operating voltage Vmin, reductions of timing/
`voltage margin errors are described in terms of Vmin. Finally, reduction techniques
`for speed-relevant errors are briefly discussed.
`Chapter 2 deals with a detailed explanation of the redundancy techniques
`for repairing hard errors (faults), where faulty memory cells are replaced by spare
`memory cells provided on the chip in advance. Various yield models and calcula-
`tions are introduced and various practical circuits and architectures that the authors
`regard as important for higher yield and reliability are discussed. The chapter also
`describes the devices for memorizing the addresses of faults and testing techniques
`for redundancy.
`Chapter 3 describes the details of the ECC techniques to cope with both hard and
`soft errors, where extra bits (check bits) are added to original data bits, thereby
`enabling error detection and/or correction. After mathematical preparations, vari-
`ous error-correcting codes used for the techniques and their practical implementa-
`tions in various memory LSIs are discussed. This is followed by the estimation of
`the reduction in hard-error and soft-error rates using ECC. Testing techniques for
`ECC are also described.
`Chapter 4 deals with the combination of the redundancy and ECC. Combining
`both the techniques generates a synergistic effect and dramatically enhances the
`repair capability. It is especially effective for random-bit errors. After quantitative
`estimation of the synergistic effect, the application to the repair of faults due to
`device mismatch is discussed as a promising application of the effect.
`Chapter 5 systematically describes challenges to ultra-low-voltage nanoscale
`memories and the repair techniques to accomplish the issues. After clarifying that
`reduction in the minimum operating voltage VDD (i.e., Vmin) is the key to reducing
the voltage and timing margin error, adaptive circuits and relevant technologies to
reduce Vmin are proposed, and the general features are described. Then, the Vmins of
logic gates, SRAMs, and DRAMs are compared. After that, devices (e.g., fully
`depleted planar SOI and FinFET structures), circuits (e.g., gate-source reverse
`biasing schemes accepting low threshold voltage (Vt) MOSFETs), and subsystems
`to widen the margins through reducing Vmin are described.
`Chapter 6 briefly describes device/circuit techniques to cope with two kinds of
`speed-relevant errors, namely, the speed degradation error and the interdie speed
`variation error. After specifying reduced gate-over-drive voltage of MOSFETs as
the source of the speed degradation error, some solutions (e.g., using low-Vt0
circuits and dynamic Vt circuits utilizing double-gate FD-SOI structures) are exem-
`plified. Moreover, after specifying the so-called global variation of design para-
`meters in the wafer as the source of the interdie speed variation error, some
`solutions such as power management for compensating for the variation with static
`or quasi-static controls of internal supply voltages are presented.
`
`Tokyo, Japan
`
`Masashi Horiguchi
`Kiyoo Itoh
`
`
`
`
`Contents
`
`1 An Introduction to Repair Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
`1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
`1.2 Hard and Soft Errors and Repair Techniques . . . . . . . . . . . . . . . . . . . . . 1
`1.2.1 Hard and Soft Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
`1.2.2 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
`1.2.3 ECC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
`1.2.4 Combination of Redundancy and ECC . . . . . . . . . . . . . . . . . . . . . 8
`1.2.5 Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
`1.3 Margin Errors and Repair Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . 9
`1.3.1 Device and Process Variations . . . . . . . . . . . . . . . . . . . . . . . . . . 11
`1.3.2 Timing and Voltage Margin Errors . . . . . . . . . . . . . . . . . . . . . . . 11
`1.3.3 Reductions of Margin Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
`1.4 Speed-Relevant Errors and Repair Techniques . . . . . . . . . . . . . . . . . . . 15
`References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
`
`2 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
`2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
`2.2 Models of Fault Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
`2.2.1 Poisson Distribution Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
`2.2.2 Negative-Binomial Distribution Model . . . . . . . . . . . . . . . . . . . . 22
`2.3 Yield Improvement Through Redundancy . . . . . . . . . . . . . . . . . . . . . . 25
`2.4 Replacement Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
`2.4.1 Principle of Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
`2.4.2 Circuit Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
`2.5 Intrasubarray Replacement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
`2.5.1 Simultaneous and Individual Replacement . . . . . . . . . . . . . . . . . 39
`2.5.2 Flexible Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
`2.5.3 Variations of Intrasubarray Replacement . . . . . . . . . . . . . . . . . . . 49
`2.6 Intersubarray Replacement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
`2.7 Subarray Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
`
`
`2.8 Devices for Storing Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
`2.8.1 Fuses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
`2.8.2 Antifuses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
`2.8.3 Nonvolatile Memory Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
`2.9 Testing for Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
`References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
`
3 Error Checking and Correction (ECC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
`3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
`3.2 Linear Algebra and Linear Codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
`3.2.1 Coding Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
`3.2.2 Decoding Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
`3.3 Galois Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
`3.4 Error-Correcting Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
`3.4.1 Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
`3.4.2 Number of Check Bits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
`3.4.3 Single Parity Check Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
`3.4.4 Hamming Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
`3.4.5 Extended Hamming Code and Hsiao Code . . . . . . . . . . . . . . . . . 84
`3.4.6 Bidirectional Parity Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
`3.4.7 Cyclic Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
`3.4.8 Nonbinary Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
`3.5 Coding and Decoding Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
`3.5.1 Coding and Decoding Circuits for Hamming Code . . . . . . . . . . . 92
`3.5.2 Coding and Decoding Circuits for Cyclic Hamming Code . . . . . . 97
`3.5.3 Coding and Decoding Circuits for Nonbinary Code. . . . . . . . . . 102
`3.6 Theoretical Reduction in Soft-Error and Hard-Error Rates . . . . . . . . . 105
`3.6.1 Reduction in Soft-Error Rate. . . . . . . . . . . . . . . . . . . . . . . . . . . 105
`3.6.2 Reduction in Hard-Error Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 108
`3.7 Application of ECC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
`3.7.1 Application to Random-Access Memories . . . . . . . . . . . . . . . . . 112
`3.7.2 Application to Serial-Access Memories . . . . . . . . . . . . . . . . . . . 126
`3.7.3 Application to Multilevel-Storage Memories . . . . . . . . . . . . . . . 130
`3.7.4 Application to Other Memories . . . . . . . . . . . . . . . . . . . . . . . . . 133
`3.8 Testing for ECC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
`References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
`
`4 Combination of Redundancy and Error Correction . . . . . . . . . . . . . . . . . . 139
`4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
`4.2 Repair of Bit Faults Using Synergistic Effect . . . . . . . . . . . . . . . . . . . 139
`4.2.1 Principle of Synergistic Effect. . . . . . . . . . . . . . . . . . . . . . . . . . 139
`4.2.2 Yield Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
`4.3 Application of Synergistic Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
`4.3.1 Threshold-Voltage Variations . . . . . . . . . . . . . . . . . . . . . . . . . . 149
`4.3.2 Estimated Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
`References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
`
`
5 Reduction Techniques for Margin Errors of Nanoscale Memories . . 157
`5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.2 Definition of Vmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
`5.3 Reduction of Vmin for Wider Margins. . . . . . . . . . . . . . . . . . . . . . . . . 160
`5.3.1 General Features of Vmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
`5.3.2 Comparison of Vmin for Logic Block, SRAMs, and DRAMs . . . 165
`5.4 Advanced MOSFETs for Wider Margins . . . . . . . . . . . . . . . . . . . . . . 165
`5.4.1 Planar FD-SOI MOSFETs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
`5.4.2 FinFETs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
`5.5 Logic Circuits for Wider Margins . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
`5.5.1 Gate-Source Offset Driving. . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
`5.5.2 Gate-Source Differential Driving. . . . . . . . . . . . . . . . . . . . . . . . 178
`5.5.3 Combined Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
`5.5.4 Instantaneous Activation of Low-Vt0 MOSFETs . . . . . . . . . . . . 181
`5.5.5 Gate Boosting of High-Vt0 MOSFETs . . . . . . . . . . . . . . . . . . . . 181
`5.6 SRAMs for Wider Margins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
`5.6.1 Ratio Operations of the 6-T Cell . . . . . . . . . . . . . . . . . . . . . . . . 182
`5.6.2 Shortening of Datalines and Up-Sizing of the 6-T Cell . . . . . . . 183
`5.6.3 Power Managements of the 6-T Cell . . . . . . . . . . . . . . . . . . . . . 185
`5.6.4 The 8-T Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
`5.7 DRAMs for Wider Margins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
`5.7.1 Sensing Schemes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
`5.7.2 Vmin(SA) of Sense Amplifier . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
`5.7.3 Vmin(Cell) of Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.7.4 Comparison Between Vmin(SA) and Vmin(Cell) . . . . . . . . . . . . . 190
5.7.5 Low-Vt0 Sense Amplifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
`5.7.6 FD-SOI Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
`5.8 Subsystems for Wider Margins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
`5.8.1 Improvement of Power Supply Integrity . . . . . . . . . . . . . . . . . . 194
`5.8.2 Reduction in Vt0 at Subsystem Level . . . . . . . . . . . . . . . . . . . . . 195
`5.8.3 Low-Vt0 Power Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
`References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
`
6 Reduction Techniques for Speed-Relevant Errors of Nanoscale Memories . . . . . . . . . . . . 203
`6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
`6.2 Reduction Techniques for Speed-Degradation Errors . . . . . . . . . . . . . 204
`6.3 Reduction Techniques for Interdie Speed-Variation Errors . . . . . . . . . 205
`6.3.1 On-Chip VBB Compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
`6.3.2 On-Chip VDD Compensation and Others . . . . . . . . . . . . . . . . . . 211
`References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
`
`Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
`
`
`
`
`Chapter 2
`Redundancy
`
2.1 Introduction
`
When designing a redundancy circuit, estimating the advantages and disadvantages
is indispensable. The introduction of redundancy in a memory chip results in
yield improvement and fabrication-cost reduction. However, it also incurs the
following penalties. First, the spare memory cells that replace faulty cells, the
programmable devices that memorize faulty addresses, and the associated control
circuitry increase the chip size. Second, the time required to judge whether or not
the input address is faulty is added to the access time. Third, special process steps
to fabricate the programmable devices and extra test time to store faulty addresses
into the devices are required.
`Therefore, the design of redundancy circuit requires a trade-off between yield
`improvement and these penalties. The estimation of yield improvement requires a
fault-distribution model. Two representative models, the Poisson distribution
model and the negative-binomial distribution model, are often used for the yield
analysis of memory LSIs. The "replacement" of normal memory elements by spare elements
`requires checking whether the accessed address includes faulty elements, and if yes,
`inhibiting the faulty element from being activated and activating a spare element
instead. These procedures should be realized with as small a penalty as possible. One
`of the major issues for the replacement is memory-array division. Memory arrays
`are often divided into subarrays for the sake of access-time reduction, power
`reduction, and signal/noise ratio enhancement. There are two choices for memories
`with array division: (1) a faulty element in a subarray is replaced only by a spare
`element in the same subarray (intrasubarray replacement) and (2) a faulty element
`in a subarray may be replaced by a spare element in another subarray (intersubarray
`replacement). The former has smaller access penalty, while the latter realizes higher
`replacement efficiency. It is also possible that a subarray is replaced by a spare
subarray. The devices for memorizing faulty addresses and the testing needed to find an
effective replacement are also important issues for redundancy.
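The replacement flow described here can be illustrated behaviorally. The following is a hypothetical Python model for illustration only (it is not a circuit from this book, and all names are invented): programmed faulty addresses are compared with the input address, and on a match the normal element is inhibited and a spare element is activated instead.

```python
class RedundantDecoder:
    """Behavioral model of row redundancy: faulty rows remapped to spare rows."""

    def __init__(self, n_rows: int, n_spares: int):
        self.n_rows = n_rows
        self.n_spares = n_spares
        self.spare_map = {}          # programmed faulty address -> spare index

    def program(self, faulty_addr: int) -> bool:
        """Store a faulty address (as fuses would); False if out of spares."""
        if len(self.spare_map) >= self.n_spares or faulty_addr in self.spare_map:
            return faulty_addr in self.spare_map
        self.spare_map[faulty_addr] = len(self.spare_map)
        return True

    def select(self, addr: int):
        """Return ('spare', i) on an address match, else ('normal', addr)."""
        if addr in self.spare_map:   # this comparison is the access-time penalty
            return ('spare', self.spare_map[addr])
        return ('normal', addr)

dec = RedundantDecoder(n_rows=512, n_spares=2)
dec.program(37)
print(dec.select(37))   # ('spare', 0)
print(dec.select(40))   # ('normal', 40)
```

The dictionary lookup stands in for the parallel address comparators of a real redundancy circuit; performing the comparison on every access is exactly the access-time penalty mentioned above.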
`The fault distribution models are presented in Sect. 2.2. The yield improvement
`analysis using the models is described in Sect. 2.3. Section 2.4 describes the circuit
`techniques for realizing the replacement. The intrasubarray replacement, inter-
`subarray replacement, and subarray replacement are described in Sects. 2.5, 2.6,
`
M. Horiguchi and K. Itoh, Nanoscale Memory Repair, Integrated Circuits and Systems 1,
DOI 10.1007/978-1-4419-7958-2_2, © Springer Science+Business Media, LLC 2011
`
`
`and 2.7, respectively. The programmable devices for storing faulty addresses are
`described in Sect. 2.8. Finally, testing techniques for redundancy are explained in
`Sect. 2.9.
`
`2.2 Models of Fault Distribution
`
`2.2.1 Poisson Distribution Model
`
`Let us consider a memory chip with N “elements” (Fig. 2.1). Here, an element may
`be a memory cell, a row of memory cells, a column of memory cells, a subarray,
`and so on. If faults are randomly distributed in the chip, the probability of an
`element being faulty, p, is independent of the probability of other elements being
faulty or nonfaulty. Therefore, the probability that K particular elements are faulty and
the remaining (N − K) elements are not faulty is expressed as the product of their
probabilities, p^K (1 − p)^(N−K). Since the number of ways of selecting K faulty
elements out of N elements is expressed by

\binom{N}{K} = {}_{N}C_{K} = \frac{N!}{(N-K)!\,K!} = \frac{N(N-1)\cdots(N-K+1)}{K!}, \quad (2.1)

the probability that K faulty elements exist in the chip is expressed as

P(K) = \binom{N}{K} p^{K} (1-p)^{N-K}. \quad (2.2)
`
[Fig. 2.1 Probability of existing K faulty elements out of N elements when the faulty probability of an element is p: K faulty elements (probability p each) and (N − K) non-faulty elements (probability 1 − p each) among the N memory elements]
`
`Limestone Memory Systems, LLC – Exhibit 2013, p. 12
`
`
`
`
This is called binomial distribution, and the coefficient \binom{N}{K} is called the binomial
coefficient. Usually, N is very large and p is very small. When N \to \infty, keeping
\lambda = Np constant, (2.2) becomes

P(K) = \frac{N(N-1)\cdots(N-K+1)}{K!}\, p^{K} (1-p)^{N-K}
     = \frac{\lambda^{K}}{K!} \left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right)\cdots\left(1-\frac{K-1}{N}\right)\left(1-\frac{\lambda}{N}\right)^{N-K}
     \to \frac{\lambda^{K}}{K!} \exp(-\lambda) \quad (N \to \infty). \quad (2.3)
`
`
This is called Poisson distribution. Figure 2.2 shows examples of the distribution.
The probability P(K) monotonically decreases with K for λ < 1, and has a
peak around K ≈ λ for λ > 1. Poisson distribution is characterized by only one
parameter λ. The average K̄ and standard deviation σ(K) of the number of faulty
elements are expressed as

\bar{K} = \sum_{K=0}^{\infty} K P(K) = \exp(-\lambda) \sum_{K=1}^{\infty} \frac{\lambda^{K}}{(K-1)!} = \lambda, \quad (2.4)

\sigma(K) = \sqrt{\sum_{K=0}^{\infty} K^{2} P(K) - \bar{K}^{2}}
          = \sqrt{\exp(-\lambda) \sum_{K=1}^{\infty} \frac{K \lambda^{K}}{(K-1)!} - \lambda^{2}}
          = \sqrt{\exp(-\lambda) \left\{ \sum_{K=2}^{\infty} \frac{\lambda^{K}}{(K-2)!} + \sum_{K=1}^{\infty} \frac{\lambda^{K}}{(K-1)!} \right\} - \lambda^{2}}
          = \sqrt{\exp(-\lambda)\,(\lambda^{2} \exp\lambda + \lambda \exp\lambda) - \lambda^{2}} = \sqrt{\lambda}. \quad (2.5)
`
[Fig. 2.2 Examples of Poisson distribution: (a) λ = 0.5, (b) λ = 1.0, and (c) λ = 2.0. Horizontal axis: number of faults K; vertical axis: probability P(K)]
`
`
Thus, the parameter λ is equal to the average number of faults and is expressed as

\lambda = AD, \quad (2.6)

where A is the chip area and D is the fault density. The probability of a chip having
no faulty elements (the raw yield, i.e., yield without redundancy) is expressed as

P(0) = \exp(-\lambda) = \exp(-AD). \quad (2.7)

The Poisson distribution model is often used for yield analysis because of its mathe-
matical simplicity [1–5]. It is useful for rough yield estimation or for comparing
redundancy techniques. More precise yield estimation, however, requires a model
that takes into account the "fault clustering" described below.
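As a quick numerical check of (2.3), (2.4), (2.6), and (2.7), the Poisson model can be evaluated directly. A minimal Python sketch (the parameter values are illustrative only, not from the book):

```python
import math

def poisson_pk(lam: float, k: int) -> float:
    """Probability of exactly k faulty elements, Poisson model (2.3)."""
    return lam ** k / math.factorial(k) * math.exp(-lam)

def raw_yield(area: float, density: float) -> float:
    """Raw yield P(0) = exp(-A*D), from (2.6) and (2.7)."""
    return math.exp(-area * density)

# For lambda = A*D = 1.0 the raw yield is e^-1 ~ 36.8%,
# and the average number of faults equals lambda itself, checking (2.4).
lam = 1.0
print(poisson_pk(lam, 0))                               # ~0.368
print(sum(k * poisson_pk(lam, k) for k in range(60)))   # ~1.0
```

Truncating the infinite sums at a few tens of terms is harmless here because the Poisson tail decays faster than geometrically.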
`
`2.2.2 Negative-Binomial Distribution Model
`
It has been reported that actual faults are not randomly distributed but clustered,
and that the number of faulty elements does not match the Poisson distribution model
[6, 7]. In this case, the parameter λ is no longer constant but is itself distributed.
The compound Poisson distribution model

P(K) = \int_{0}^{\infty} \frac{\lambda^{K}}{K!} \exp(-\lambda)\, f(\lambda)\, d\lambda \quad (2.8)

was proposed [6] as a distribution model for nonconstant λ. The first factor in the
integral is the Poisson distribution and the second factor f(λ) is a function called the
"compounder", representing the distribution of λ. The average K̄ and standard
deviation σ(K) of the number of faulty elements are given by the following
equations:
`
\bar{K} = \sum_{K=0}^{\infty} K P(K)
        = \int_{0}^{\infty} \exp(-\lambda) f(\lambda) \sum_{K=1}^{\infty} \frac{\lambda^{K}}{(K-1)!}\, d\lambda
        = \int_{0}^{\infty} \exp(-\lambda) f(\lambda)\, \lambda \exp(\lambda)\, d\lambda
        = \int_{0}^{\infty} f(\lambda)\, \lambda\, d\lambda = \bar{\lambda}, \quad (2.9)

\sigma(K) = \sqrt{\sum_{K=0}^{\infty} K^{2} P(K) - \bar{K}^{2}}
          = \sqrt{\int_{0}^{\infty} \exp(-\lambda) f(\lambda) \left\{ \sum_{K=1}^{\infty} \frac{K \lambda^{K}}{(K-1)!} \right\} d\lambda - \bar{\lambda}^{2}}
          = \sqrt{\int_{0}^{\infty} \exp(-\lambda) f(\lambda) \left[ \sum_{K=2}^{\infty} \frac{\lambda^{K}}{(K-2)!} + \sum_{K=1}^{\infty} \frac{\lambda^{K}}{(K-1)!} \right] d\lambda - \bar{\lambda}^{2}}
          = \sqrt{\int_{0}^{\infty} \exp(-\lambda) f(\lambda)\, (\lambda^{2} \exp\lambda + \lambda \exp\lambda)\, d\lambda - \bar{\lambda}^{2}}
          = \sqrt{\int_{0}^{\infty} f(\lambda)\, \lambda\, d\lambda + \int_{0}^{\infty} f(\lambda)\, \lambda^{2}\, d\lambda - \bar{\lambda}^{2}}
          = \sqrt{\bar{\lambda} + \{\sigma(\lambda)\}^{2}}. \quad (2.10)
`
The candidates for f(λ) include the uniform distribution and the triangular distribution.
However, the gamma distribution

f(\lambda) = \frac{\lambda^{\alpha-1} \exp(-\lambda/\beta)}{\Gamma(\alpha)\,\beta^{\alpha}} \quad (2.11)

has been shown to be the most suitable for actual fault distributions¹ [6, 8]. The meanings
of the parameters α and β are as follows: α corresponds to fault clustering (a smaller α
means stronger clustering), and the product αβ is equal to the average of λ, denoted λ₀. The
standard deviation of λ is equal to β√α. Figure 2.3 shows examples of (2.11) for
various parameters maintaining λ₀ = αβ = 1.0. When α → ∞, the distribution
becomes the delta function, corresponding to no λ distribution.
`
[Fig. 2.3 Probability density function f(λ) of the gamma distribution as a compounder (average of λ = 1.0): curves for α = ∞ (β = 0), α = 8 (β = 1/8), α = 4 (β = 1/4), α = 2 (β = 1/2), and α = β = 1]
`
¹ Γ(α) is the gamma function, defined as \Gamma(\alpha) = \int_{0}^{\infty} t^{\alpha-1} \exp(-t)\, dt. Γ(α) = (α − 1)! for integer α.
`
`
`
`
Substituting (2.11) and β = λ₀/α into (2.8) results in

P(K) = \int_{0}^{\infty} \frac{\lambda^{K} \exp(-\lambda)}{K!} \cdot \frac{\lambda^{\alpha-1} \exp(-\lambda/\beta)}{\Gamma(\alpha)\,\beta^{\alpha}}\, d\lambda
     = \frac{1}{K!\,\Gamma(\alpha)\,\beta^{\alpha}} \int_{0}^{\infty} \lambda^{K+\alpha-1} \exp\!\left(-\left(1+\frac{1}{\beta}\right)\lambda\right) d\lambda
     = \frac{1}{K!\,\Gamma(\alpha)\,\beta^{\alpha}} \left(\frac{\beta}{\beta+1}\right)^{K+\alpha} \int_{0}^{\infty} t^{K+\alpha-1} \exp(-t)\, dt
     = \frac{\Gamma(K+\alpha)\,\beta^{K}}{K!\,\Gamma(\alpha)\,(\beta+1)^{K+\alpha}}
     = \frac{\alpha(\alpha+1)\cdots(\alpha+K-1)\,(\lambda_{0}/\alpha)^{K}}{K!\,(1+\lambda_{0}/\alpha)^{K+\alpha}}. \quad (2.12)

This is called negative binomial distribution [9]. The average and standard
deviation of the number of faulty elements are calculated from (2.9) and (2.10):

\bar{K} = \bar{\lambda} = \lambda_{0}, \quad (2.13)

\sigma(K) = \sqrt{\bar{\lambda} + \{\sigma(\lambda)\}^{2}} = \sqrt{\lambda_{0} + \beta^{2}\alpha} = \sqrt{\lambda_{0}\,(1+\lambda_{0}/\alpha)}. \quad (2.14)

Comparing (2.14) with (2.5), we find that the standard deviation of the
negative-binomial distribution is larger than that of the Poisson distribution by a factor
of \sqrt{1+\lambda_{0}/\alpha}. The raw yield is expressed as

P(0) = \frac{1}{(1+\lambda_{0}/\alpha)^{\alpha}} = \frac{1}{(1+AD/\alpha)^{\alpha}}. \quad (2.15)

When α → ∞, (2.15) becomes identical to (2.7). Figures 2.4 and 2.5 show
examples of the distribution with α = 4.0 (weaker fault clustering) and α = 1.0
(stronger fault clustering), respectively. Compared with Fig. 2.2 (corresponding to
the case α = ∞), the probability for K = 0 and that for large K increase, and the
probability for medium K decreases, as α decreases. Equations (2.7) and (2.15) are
plotted in Fig. 2.6. The raw yield using the Poisson distribution model appears
as a straight line (α = ∞) on a semilog scale. The raw yield using the negative-
binomial distribution model appears as a concave-up curve and is greater than
that using the Poisson model.
The negative-binomial distribution model is often used for yield estimation of
memory LSIs [10–12] because it gives good agreement with actual fault distribu-
tions. In order to use this model, however, we must determine the two parameters
λ₀ (average number of faults) and α (fault clustering factor) from experimental
data. In addition, it should be noted that the parameter α may depend on the kind of
memory element.
`
`
[Fig. 2.4 Examples of negative binomial distribution with α = 4.0: (a) λ₀ = 0.5, (b) λ₀ = 1.0, and (c) λ₀ = 2.0. Horizontal axis: number of faults K; vertical axis: probability P(K)]
[Fig. 2.5 Examples of negative binomial distribution with α = 1.0: (a) λ₀ = 0.5, (b) λ₀ = 1.0, and (c) λ₀ = 2.0. Horizontal axis: number of faults K; vertical axis: probability P(K)]
`
`2.3 Yield Improvement Through Redundancy
`
`In this section, yield improvement through redundancy is analyzed using the
models described above. We assume the following for simplicity:
`
`1. Faults on spare elements are neglected.
2. Fatal faults are neglected. A fatal fault is defined as a fault that makes the entire
chip unusable. For example, a defect in the peripheral circuitry of a memory LSI may
cause a fatal fault.
[Fig. 2.6 Comparison of raw yield using the Poisson and negative-binomial distribution models: raw yield P(0) versus average number of faults λ₀, for α = 1.0, 2.0, 4.0, and ∞ (Poisson)]

[Fig. 2.7 Principle of redundancy: K faulty elements among the N memory elements are replaced by R spare memory elements]

Without redundancy, the yield Y0 is equal to P(0) as shown in Fig. 2.6,
because only chips without faulty elements are accepted. If R spare elements
are added in the chip, chips with K faulty elements (K ≤ R) become acceptable
by replacing the faulty elements with spares, as shown in Fig. 2.7. Therefore, the
yield becomes
`
Y = \sum_{K=0}^{R} P(K). \quad (2.16)
`
Figures 2.8 and 2.9 show the calculated yield using the Poisson distribution and
the negative-binomial distribution models, respectively. The raw yield Y0 (R = 0)
is lower, but the yield improvement is larger, with the Poisson distribution model
than with the negative-binomial model. This is apparent in Figs. 2.10 and 2.11,
where the relationships between the yields with and without redundancy are plotted.
Thus, it should be noted that using the Poisson distribution model tends to
underestimate Y0 and overestimate the yield improvement.
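Equation (2.16), combined with either fault model, generates the curves of Figs. 2.8 and 2.9. A small Python sketch (illustrative parameter values, not from the book):

```python
import math

def poisson_pk(lam: float, k: int) -> float:
    """Poisson probability of k faults, (2.3)."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def nb_pk(lam0: float, alpha: float, k: int) -> float:
    """Negative-binomial probability of k faults, last line of (2.12)."""
    rising = 1.0
    for i in range(k):
        rising *= alpha + i
    return rising * (lam0 / alpha) ** k / (
        math.factorial(k) * (1.0 + lam0 / alpha) ** (k + alpha))

def yield_with_spares(pk, R: int) -> float:
    """Y = sum_{K=0}^{R} P(K), eq. (2.16): at most R faults are repairable."""
    return sum(pk(K) for K in range(R + 1))

lam = 2.0
y_poisson = yield_with_spares(lambda k: poisson_pk(lam, k), R=4)
y_nb      = yield_with_spares(lambda k: nb_pk(lam, 1.0, k), R=4)
print(y_poisson, y_nb)   # ~0.947 (Poisson) vs ~0.868 (negative binomial)
```

Note that although the negative-binomial raw yield (R = 0) is the higher of the two, at R = 4 the Poisson-model yield has overtaken it, which is the overestimation of yield improvement warned about above.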
`
`
[Fig. 2.8 Yield improvement through redundancy using the Poisson distribution model: yield Y versus number of spare elements R, for λ = 0.5, 1, 2, and 4]
`
[Fig. 2.9 Yield improvement through redundancy using the negative-binomial distribution model (α = 1.0): yield Y versus number of spare elements R, for λ = 0.5, 1, 2, and 4]