UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Apple Inc.,
Petitioner,

v.

California Institute of Technology,
Patent Owner.

Case: IPR2017-00700

DECLARATION OF JAMES A. DAVIS, PH.D.
REGARDING U.S. PATENT NO. 7,421,032

Apple 1004


TABLE OF CONTENTS

I. Background
II. Legal Principles
III. Overview of the Technology
    A. Error-Correcting Codes in General
    B. Coding Rate
    C. Performance of Error-Correcting Codes
    D. LDPC Codes, Turbo Codes, and Repeat-Accumulate Codes
    E. Mathematical Representations of Error-Correcting Codes
    F. Irregularity
    G. Message Passing and Belief Propagation Decoder
IV. Overview of Primary Prior Art References
    A. Ping
    B. MacKay
    C. Divsalar
    D. Luby97
    E. Pfister
V. Person of Ordinary Skill in the Art
VI. Overview of the '032 Patent
    A. Summary
    B. Summary of the Specification
    C. Claim Construction
        "irregular"
        "Tanner graph" (Claims 11, 18)
VII. The Challenged Claims Are Invalid
    A. Ground 1: Claims 11, 12, and 14-16 Are Obvious over Ping in View of MacKay and Further in View of Divsalar
    B. Ground 2: Claims 13 and 18-23 Are Obvious over Ping in View of MacKay, Divsalar, and Luby97
    C. Ground 3: Claim 17 Is Obvious over Ping in View of MacKay, Divsalar, and Pfister
VIII. Availability for Cross-Examination
IX. Right to Supplement
X. Signature


I, James A. Davis, Ph.D., declare as follows:

1. My name is James A. Davis.

I. BACKGROUND

2. I am a Professor of Mathematics at the University of Richmond in Richmond, Virginia.

3. I received a B.S. in Mathematics (with honors) from Lafayette College in 1983 and an M.S. and Ph.D. in Mathematics from the University of Virginia in 1985 and 1987, respectively.

4. After receiving my doctorate, I taught for one year at Lafayette College before accepting a position at the University of Richmond as an Assistant Professor of Mathematics in 1988. I became an Associate Professor of Mathematics in 1994 and a Full Professor of Mathematics in 2001.

5. Since joining the faculty of the University of Richmond in 1988, I have been engaged in research in Coding Theory, Algebra, and Combinatorics. My research has appeared in journals such as IEEE Transactions on Information Theory, the Journal of Combinatorial Theory Series A; Designs, Codes, and Cryptography; the Proceedings of the American Mathematical Society; and the Journal of Algebra.

6. I have made several major contributions to the field of coding theory in wireless communication and sequence design. I co-discovered the connection between sequences with good power control and Reed-Muller codes, an important step in making OFDM communication practical. I co-discovered a technique for constructing difference sets that has been applied to constructions of bent functions. I co-wrote the paper on the non-existence of Barker arrays.

7. I was a co-Principal Investigator of a $1.5 million National Science Foundation grant designed to engage undergraduates in long-term research projects in mathematics.

8. I have taught mathematics courses in Calculus, Statistics, Linear Algebra, Abstract Algebra, Coding Theory, and Cryptography, among others. I have directed 12 honors projects and 76 summer research experiences for undergraduates in the general area of Coding Theory and Combinatorics.

9. I spent two years (academic years 1995-96 and 2000-01) working at Hewlett-Packard Laboratories in Bristol, England. During this time I was in a communications lab, an industrial research lab focused on applications of Coding Theory to wireless communication and storage devices. I am a co-inventor on 16 patents based on my work during this time.

10. I served as Chair of the Department of Mathematics and Computer Science from 1997 to 2000.

11. I have authored or co-authored over 50 peer-reviewed academic publications in the fields of Coding Theory, Combinatorics, Finite Geometry, and Algebra.

12. A copy of my curriculum vitae is attached as Appendix A.

13. I have reviewed the specification and claims of U.S. Patent No. 7,421,032 (the "'032 patent"; Ex. 1001). I have been informed that the '032 patent is a continuation of U.S. Patent No. 7,116,710, which claims priority to provisional applications filed on May 18, 2000 and August 18, 2000.

14. I have also reviewed the following references, all of which I understand to be prior art to the '032 patent:

• L. Ping, W. K. Leung, N. Phamdo, "Low Density Parity Check Codes with Semi-random Parity Check Matrix," Electron. Letters, Vol. 35, No. 1, pp. 38-39, published on January 7, 1999 ("Ping"; Ex. 1003).

• D. J. C. MacKay, S. T. Wilson, and M. C. Davey, "Comparison of Constructions of Irregular Gallager Codes," IEEE Trans. Commun., Vol. 47, No. 10, pp. 1449-54, published in Oct. 1999 ("MacKay"; Ex. 1002).

• D. Divsalar, H. Jin, and R. J. McEliece, "Coding Theorems for 'Turbo-like' Codes," Proc. 36th Allerton Conf. on Comm., Control and Computing, Allerton, Illinois, pp. 201-10, March 1999 ("Divsalar"; Ex. 1017).

• Luby, M. et al., "Practical Loss-Resilient Codes," STOC '97, pp. 150-159, published in 1997 ("Luby97"; Ex. 1008).

• Pfister, H. and Siegel, P., "The Serial Concatenation of Rate-1 Codes Through Uniform Random Interleavers," 37th Allerton Conf. on Comm., Control and Computing, Monticello, Illinois, published on or before September 24, 1999 ("Pfister"; Ex. 1022).

15. I am being compensated at my normal consulting rate for my work.

16. My compensation is not dependent on and in no way affects the substance of my statements in this Declaration.

17. I have no financial interest in Petitioner. I similarly have no financial interest in the '032 patent.

II. LEGAL PRINCIPLES

18. I have been informed that a claim is invalid as anticipated under Pre-AIA 35 U.S.C. § 102(a) if "the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent." I have also been informed that a claim is invalid as anticipated under Pre-AIA 35 U.S.C. § 102(b) if "the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States." Further, I have been informed that a claim is invalid as anticipated under Pre-AIA 35 U.S.C. § 102(e) if "the invention was described in ... an application for patent, published under section 122(b), by another filed in the United States before the invention by the applicant for patent ...." It is my understanding that for a claim to be anticipated, all of the limitations must be present in a single prior art reference, either expressly or inherently.

19. I have been informed that a claim is invalid as obvious under Pre-AIA 35 U.S.C. § 103(a):

    if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which [the] subject matter pertains.

20. I understand that a claimed invention would have been obvious, and therefore not patentable, if the subject matter claimed would have been considered obvious to a person of ordinary skill in the art at the time that the invention was made. I understand that when there are known elements that perform in known ways and produce predictable results, the combination of those elements is probably obvious. Further, I understand that when there is a predictable variation and a person would see the benefit of making that variation, implementing that predictable variation is probably not patentable. I have also been informed that obviousness does not require absolute predictability of success, but that what does matter is whether the prior art gives direction as to what parameters are critical and which of many possible choices may be successful.

III. OVERVIEW OF THE TECHNOLOGY

21. The '032 patent relates to the field of channel coding and error-correcting codes. This section provides an introduction to channel coding and error-correcting codes, highlighting developments relevant to the '032 patent.

A. Error-Correcting Codes in General

22. Most computing devices and other digital electronics use bits to represent information. A bit is a binary unit of information that may have one of two values: 1 or 0. Any type of information, including, for example, text, music, images, and video information, can be represented digitally with bits.

23. When transmitting binary information over an analog communication channel, the data bits representing the information to be communicated (also called "information bits") are converted into an analog signal that can be transmitted over the channel. This process is called modulation. The transmitted signal is then received by a device and converted back into binary form. This process, in which a received analog waveform is converted into bits, is called demodulation. The steps of modulation and demodulation are illustrated in the figure below:

[Figure: Modulation, Transmission, and Demodulation. At the transmitter, digital information (bits, e.g., 11000010) is modulated into an analog signal (waves); the signal is transmitted and then demodulated back into digital information (bits) at the receiver.]

24. Transmission over physical channels is rarely 100% reliable. The transmitted signal can be corrupted during transmission by "noise" caused by, for example, obstructions in the signal path, interference from other signals, or electrical/magnetic disturbances. Noise can cause bits to "flip" during transmission: for example, because of noise, a bit that was transmitted as a 1 can be corrupted during transmission and demodulated as a 0, and vice versa.

25. Error-correcting codes were developed to combat such transmission errors. Using the bits representing the information to be communicated (called "information bits"), an error-correcting code generates "parity bits" that allow the receiver to verify that the bits were transmitted correctly, and to correct transmission errors that occurred.

`

`

`U.S. Patent 7,421,032
`Declaration of James A. Davis, Ph.D.
`
`26.
`
`Bits are encoded by an encoder, which receives a sequence of
`
`information bits as input, generates parity bits with an encoding algorithm, and
`
`outputs a sequence of encodedbits called a codeword. The codeword produced by
`
`the encoderis then modulated and transmitted as an analog signal.
`
`27.
`
`At the receiver, the signal is demodulated, and the codeword is passed
`
`to the decoder, which uses a decoding algorithm to recoverthe original
`
`information bits.
`
[Figure: Encoding and Decoding. Information bits (11000010) are passed through an encoder to produce a codeword (00101010010), which is modulated and transmitted; at the receiver, the demodulated codeword is passed through a decoder to recover the information bits (11000010).]

28. Error-correcting codes work by adding redundant information to the original message. Due to redundancy, the information represented by a given information bit is spread across multiple bits of the codeword. Thus, even if one of those bits is flipped during transmission, the original information bit can still be recovered from the others.

29. As a simple example, consider an encoding scheme, called "repeat-three," that outputs three copies of each information bit. In this scheme, the information bits "1 0 1" would be encoded as "111 000 111." Upon receipt, the decoder converts instances of "111" into "1" and instances of "000" into "0" to produce the decoded bits "1 0 1," which match the original information bits.

30. Suppose a bit is flipped during transmission, changing "000" to "010." The decoder can detect a transmission error, because "010" is not a valid "repeat-three" codeword. Using a "majority vote" rule, the decoder can infer that the original information bit was a 0, correcting the transmission error. Thus, due to the redundancy of the codeword, no information was lost due to the transmission error.
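
For illustration only, the following short Python sketch (my own illustration; the function names are mine and are not drawn from the '032 patent or any cited reference) implements the "repeat-three" encoder and the majority-vote decoder described above:

    def encode_repeat3(bits):
        # Output three copies of each information bit.
        return [b for b in bits for _ in range(3)]

    def decode_repeat3(received):
        # Majority vote over each group of three received bits.
        decoded = []
        for i in range(0, len(received), 3):
            group = received[i:i + 3]
            decoded.append(1 if sum(group) >= 2 else 0)
        return decoded

    assert encode_repeat3([1, 0, 1]) == [1, 1, 1, 0, 0, 0, 1, 1, 1]
    # One flipped bit ("000" received as "010") is detected and corrected:
    assert decode_repeat3([1, 1, 1, 0, 1, 0, 1, 1, 1]) == [1, 0, 1]
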
31. Error-correcting codes are either systematic or non-systematic. (Ex. 1020, p. 12, "Also, a [binary convolutional encoder] can be systematic or non-systematic."; id. pp. 12-13, Figures 2.1 and 2.2, showing systematic and non-systematic encoders.) In a systematic code, both the parity bits and information bits are included in the codeword. (Ex. 1020, p. 14, "a systematic encoder is one for which the encoder input (the data) forms a substring of the output (the codeword)"; Ex. 1021, pp. 6, 229.) In a non-systematic code, the encoded data only includes the parity bits.

32. Systematic and non-systematic codes had been known in the art for decades prior to May 18, 2000, the claimed priority date of the '032 patent.

B. Coding Rate

33. Many error-correcting codes encode information bits in groups, or blocks, of fixed length: an encoder receives a k-bit block of information bits as input and produces a corresponding n-bit codeword. The ratio k/n is called the rate of the code. Because the codeword generally includes redundant information, n is generally greater than k, and the rate k/n of an error-correcting code is generally less than one. The "repeat-three" code described above, for example, produces a 3-bit codeword for each information bit, and therefore has rate 1/3.

C. Performance of Error-Correcting Codes

34. The effectiveness of an error-correcting code may be measured using a variety of metrics.

35. One tool used to assess the performance of a code is its bit-error rate (BER). The BER is defined as the number of corrupted information bits divided by the total number of information bits during a particular time interval. For example, if a decoder outputs one thousand bits in a given time period, and ten of those bits are corrupted (meaning they differ from the information bits originally received by the encoder), then the BER of the code during that time period is (10 bit errors) / (1000 total bits) = 0.01, or 1%.¹

¹ Note that as used herein, BER refers to the information BER, which measures the percentage of bits that remain incorrect after decoding. This is different than the transmission BER, which measures the percentage of bits that are incorrect when received by the decoder, but before the decoding process begins.
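
The BER computation in the example above can be expressed as a simple comparison of the decoder's output against the original information bits. The following Python sketch is my own, purely illustrative (the function name is mine):

    def bit_error_rate(original, decoded):
        # Information BER: fraction of decoded bits that differ from the
        # information bits originally received by the encoder.
        errors = sum(a != b for a, b in zip(original, decoded))
        return errors / len(original)

    # Ten corrupted bits out of one thousand gives a BER of 0.01, or 1%.
    assert bit_error_rate([0] * 1000, [1] * 10 + [0] * 990) == 0.01
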
`
36. The BER of a transmission depends on the amount of noise in the communication channel and the strength of the transmitted signal (meaning the power that is used to transmit the signal). An increase in noise tends to increase the error rate, and an increase in signal strength tends to decrease the error rate. The ratio of signal strength to noise, called the "signal-to-noise ratio," is often used to characterize the channel over which the encoded signal is transmitted. The signal-to-noise ratio can be expressed mathematically as Eb/N0, in which Eb is the amount of energy used to transmit each bit of the signal, and N0 is the density of the noise. The BER of an error-correcting code is often measured for multiple values of Eb/N0 to determine how the code performs under various channel conditions.

37. Error-correcting codes may also be assessed based on their computational complexity. The complexity of a code is a rough estimate of how many calculations are required for the encoder to generate the encoded parity bits and how many calculations are required for the decoder to reconstruct the information bits from the parity bits. If a code is too complex, it may be impractical to build encoders/decoders that are fast enough to use it.

D. LDPC Codes, Turbo Codes, and Repeat-Accumulate Codes

38. In 1963, Robert Gallager described a set of error-correcting codes called Low Density Parity Check ("LDPC") codes. Gallager described how LDPC codes provide one method of generating parity bits from information bits using a matrix populated with mostly 0s and relatively few 1s, as explained in more detail below. (See Gallager, R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press, 1963; Ex. 1005.)

39. Gallager's work was largely ignored over the following decades, as researchers continued to discover other algorithms for calculating parity bits. These algorithms included, for example, convolutional encoding with Viterbi decoding and cyclic code encoding with bounded distance decoding. In many cases, these new codes could be decoded using low-complexity decoding algorithms.

40. In 1993, researchers discovered "turbocodes," a class of error-correcting codes capable of transmitting information at a rate close to the so-called "Shannon Limit," the maximum rate at which information can be transmitted over a channel. A standard turbocoder encodes a sequence of information bits using two "convolutional" coders. The information bits are passed to the first convolutional coder in their original order. At the same time, a copy of the information bits that have been reordered by an interleaver is passed to the second convolutional coder. The figure below shows the structure of a typical turbocoder. See Berrou et al., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," ICC '93, Technical Program, Conference Record 1064, Geneva 1993 (Ex. 1006).

[Figure: structure of a typical turbocoder, showing two recursive systematic coders, the second receiving interleaved input. Caption: "Fig. 2 Recursive Systematic codes (37, 21) with parallel concatenation" (Berrou, Ex. 1006).]

41. The main drawback of convolutional codes is that they do not perform well over channels in which errors are clustered tightly together. Turbo codes overcome this deficiency by encoding the input bits twice. The input bits are fed to a first convolutional encoder in their normal order to produce a first set of parity bits. The same input bits are also reordered by an interleaver and then encoded by a second convolutional encoder to produce a second set of parity bits. The two sets of parity bits together with the information bits form a single codeword. Using a turbo code, a small number of errors will not result in loss of information unless the errors happen to fall close together in both the original data stream and in the permuted data stream, which is unlikely.

42. In 1995, David J. C. MacKay rediscovered Gallager's work from 1963 relating to low-density parity-check (LDPC) codes, and demonstrated that they have performance comparable to that of turbocodes. See MacKay, D. J. C., and Neal, R. M., "Near Shannon Limit Performance of Low Density Parity Check Codes," Electronics Letters, vol. 32, pp. 1645-46, 1996 (Ex. 1016). This rediscovery was met with wide acclaim. Turbocodes and LDPC codes have common characteristics: both codes use pseudo-random permutations to spread out redundancy, and both use iterative decoding algorithms.

43. In 1995 and 1996, researchers began to explore "concatenated" convolutional codes. See Benedetto, S. et al., Serial Concatenation of Block and Convolutional Codes, 32.10 Electronics Letters 887-88, 1996 (Ex. 1007). While turbo codes use two convolutional coders connected in parallel, concatenated convolutional codes use two convolutional coders connected in series: the information bits are encoded by a first encoder, the output of the first encoder is interleaved, and the interleaved sequence is encoded by a second convolutional code. In such codes, the first and second encoders are often called the "outer coder" and the "inner coder," respectively.

44. In 1998, researchers developed "repeat-accumulate," or "RA," codes by simplifying the principles underlying turbocodes. (See Ex. 1017.) In RA codes, the information bits are first passed to a repeater that repeats (duplicates) the information bits and outputs a stream of repeated bits (the encoder described above in the context of the "repeat-three" coding scheme is one example of a repeater). The repeated bits are then optionally reordered by an interleaver, and are then passed to an accumulator where they are "accumulated" to form the parity bits.

45. Accumulation is a running-sum process whereby each input bit is added to the previous input bits to produce a sequence of running sums, each of which represents the sum of all input bits yet received. More formally, if an accumulator receives a sequence of input bits i1, i2, i3, ..., in, it will produce output bits o1, o2, o3, ..., on such that:

    o1 = i1
    o2 = i1 ⊕ i2
    o3 = i1 ⊕ i2 ⊕ i3
    ...
    on = i1 ⊕ i2 ⊕ i3 ⊕ ... ⊕ in

where the ⊕ symbol denotes modulo-2 addition. Accumulators can be implemented simply, allowing accumulate codes to be encoded rapidly and cheaply.
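
This running sum requires only one XOR operation per input bit, which is one reason accumulators are so inexpensive to implement. The following Python sketch is my own illustration (the function name is mine):

    def accumulate(bits):
        # Each output bit is the modulo-2 sum (XOR) of all input bits so far.
        out, running = [], 0
        for b in bits:
            running ^= b
            out.append(running)
        return out

    # o1 = i1, o2 = i1 XOR i2, o3 = i1 XOR i2 XOR i3, ...
    assert accumulate([1, 1, 0, 1]) == [1, 0, 0, 1]
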
E. Mathematical Representations of Error-Correcting Codes

1. Linear Transformations

46. Coding theorists often think of error-correcting codes in linear-algebraic terms. For example, a k-bit block of information bits is a k-dimensional vector of bits, and an n-bit codeword is an n-dimensional vector of bits (a "dimensional vector" of bits is a sequence of bits). The encoding process, which converts blocks of information bits into codewords, is a linear transformation that maps k-dimensional bit vectors to n-dimensional bit vectors. This transformation is represented by an n × k matrix G called a generator matrix. For a vector of information bits u, the corresponding codeword x is given by:

    x = Gu

that is, each codeword bit xv is the modulo-2 sum of the products gvt · ut over t = 1, ..., k. The n-dimensional vectors that can be written in the form Gu are valid codewords.
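
Encoding by a generator matrix is an ordinary matrix-vector product, except that all sums are taken modulo 2. The Python sketch below is my own illustration (the matrix G shown is a hypothetical rate-2/3 example, not a code drawn from any cited reference):

    def encode(G, u):
        # x = Gu over GF(2): each codeword bit is the modulo-2 sum of the
        # information bits selected by the corresponding row of G.
        return [sum(g * b for g, b in zip(row, u)) % 2 for row in G]

    # n = 3, k = 2: the codeword carries both information bits (rows 1-2)
    # plus one parity bit equal to their modulo-2 sum (row 3).
    G = [[1, 0],
         [0, 1],
         [1, 1]]
    assert encode(G, [1, 1]) == [1, 1, 0]
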
47. Most n-dimensional vectors are not valid codewords. It is this property that allows a decoder to determine when there has been an error during transmission. This determination is made using an (n - k) × n matrix H, called a parity check matrix. Using a parity check matrix, a vector x is a valid codeword if and only if Hx = 0. If the parity check shows that Hx does not equal 0, then the decoder detects a transmission error (much like the "010" example in the "repeat-three" code above).

48. Each of the n - k rows of the parity-check matrix H represents an equation that a valid codeword must satisfy. For example, consider a codeword x and a parity check matrix H given as follows:

        [x1]
    x = [x2]        H = [0 0 1 1]
        [x3]            [1 1 0 0]
        [x4]

49. If x is a valid codeword, the product Hx must be equal to 0, so we have:

    Hx = [x3 + x4] = [0]
         [x1 + x2]   [0]

As this equation shows, the first row of H represents the constraint that x3 + x4 = 0, and the second row of H represents the constraint that x1 + x2 = 0. If the vector x satisfies both of these constraints, it is a valid codeword. In practice, parity-check matrices often have hundreds or thousands of rows, each of which represents an equation of the form xa + xb + ... + xz = 0, similar to those shown in the above example. These equations are called parity-check equations.
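
The worked example above can be checked mechanically. The Python sketch below is my own illustration, using the same 2 × 4 parity-check matrix H:

    def is_valid_codeword(H, x):
        # x is a valid codeword if and only if every parity-check equation
        # (every row of H) sums to 0 modulo 2, i.e., Hx = 0.
        return all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)

    H = [[0, 0, 1, 1],   # first row:  x3 + x4 = 0
         [1, 1, 0, 0]]   # second row: x1 + x2 = 0
    assert is_valid_codeword(H, [1, 1, 0, 0])      # satisfies both equations
    assert not is_valid_codeword(H, [1, 0, 0, 0])  # violates x1 + x2 = 0
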
2. Repeating and Permuting as Examples of LDGM Codes


50. As explained above, the encoding process can be represented using a generator matrix. This is true of all linear codes, including the "repeat," "interleave," and "accumulate" phases of the "repeat-accumulate" codes discussed above. More specifically, the "repeat" and "interleave" phases can be represented using generator matrices whose entries are mostly zeros, with a relatively low proportion of 1s. Such generator matrices are called "low-density generator matrices" (LDGMs). The following is a generator matrix that repeats each information bit twice:

    G_REPEAT2 = [1 0 0 0 0 0 0 0 0]
                [1 0 0 0 0 0 0 0 0]
                [0 1 0 0 0 0 0 0 0]
                [0 1 0 0 0 0 0 0 0]
                [0 0 1 0 0 0 0 0 0]
                [0 0 1 0 0 0 0 0 0]
                [0 0 0 1 0 0 0 0 0]
                [0 0 0 1 0 0 0 0 0]
                [0 0 0 0 1 0 0 0 0]
                [0 0 0 0 1 0 0 0 0]
                [0 0 0 0 0 1 0 0 0]
                [0 0 0 0 0 1 0 0 0]
                [0 0 0 0 0 0 1 0 0]
                [0 0 0 0 0 0 1 0 0]
                [0 0 0 0 0 0 0 1 0]
                [0 0 0 0 0 0 0 1 0]
                [0 0 0 0 0 0 0 0 1]
                [0 0 0 0 0 0 0 0 1]

51. Using this generator matrix, encoding a sequence of input bits "101010101" would result in the encoded sequence "110011001100110011" (the number of times each input bit is repeated is determined by the number of "1s" in the column corresponding to that input bit). In the 18 × 9 matrix above, 11.1% of the entries are 1s, and the rest are 0s. The relative scarcity of 1s means that G_REPEAT2 is a low-density generator matrix (LDGM).
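
The structure of G_REPEAT2, the example encoding, and the 11.1% density can all be verified programmatically. The Python sketch below is my own illustration (the function name is mine):

    def repeat2_generator(k):
        # 2k x k matrix: column j has 1s in rows 2j and 2j+1 (zero-indexed),
        # so each information bit appears twice in the codeword.
        return [[1 if col == row // 2 else 0 for col in range(k)]
                for row in range(2 * k)]

    G = repeat2_generator(9)   # the 18 x 9 matrix G_REPEAT2 shown above
    u = [1, 0, 1, 0, 1, 0, 1, 0, 1]
    x = [sum(g * b for g, b in zip(row, u)) % 2 for row in G]
    assert "".join(map(str, x)) == "110011001100110011"
    assert round(sum(map(sum, G)) / (18 * 9), 3) == 0.111  # about 11.1% ones
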
52. Permutation is another linear transform that is represented by a low-density generator matrix. For example, the matrix below is a permutation matrix that will output bits in a different order than the bits are input:

    G_SHUFFLE = [0 1 0 0 0 0 0 0]
                [1 0 0 0 0 0 0 0]
                [0 0 0 0 0 0 1 0]
                [0 0 0 0 1 0 0 0]
                [0 0 0 1 0 0 0 0]
                [0 0 0 0 0 0 0 1]
                [0 0 0 0 0 1 0 0]
                [0 0 1 0 0 0 0 0]

53. Permutation matrices, like G_SHUFFLE above, have exactly one 1 per row and one 1 per column. The order of the output bits is determined by the position of each of these 1s within its row/column. The matrix above, for example, would transform input bits i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3. Permutation matrices are square, with dimensions n × n, because the input sequence and the output sequence have the same length. With only one 1 per row/column, the density of 1s in a permutation matrix is 1/n (G_SHUFFLE, shown above, has a density of 1/8, or 12.5%). With linear codes, which often operate on blocks of more than a thousand bits at a time, a permutation matrix might comprise 0.1% 1s or fewer, and is therefore a very low-density generator matrix (LDGM).²

² In fact, for a given n and k, there are no viable codes whose generator matrices have a lower density than repeat-codes or permute-codes. Both of these codes have matrices with exactly one 1 per row; a matrix with lower density would necessarily have at least one row without any 1s at all, which would result in every codeword having a 0 in a particular bit position, conveying no information about the original message.
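
Because each row of a permutation matrix contains a single 1, applying the matrix amounts to reading off, for each output position, which input bit that row selects. The Python sketch below is my own illustration using G_SHUFFLE from above:

    def permute(P, bits):
        # Row r of a permutation matrix has exactly one 1; its column index
        # identifies the input bit that becomes output bit r.
        return [bits[row.index(1)] for row in P]

    G_SHUFFLE = [[0, 1, 0, 0, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 0, 1, 0],
                 [0, 0, 0, 0, 1, 0, 0, 0],
                 [0, 0, 0, 1, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 0, 0, 1],
                 [0, 0, 0, 0, 0, 1, 0, 0],
                 [0, 0, 1, 0, 0, 0, 0, 0]]
    inputs = ["i1", "i2", "i3", "i4", "i5", "i6", "i7", "i8"]
    expected = ["i2", "i1", "i7", "i5", "i4", "i8", "i6", "i3"]
    assert permute(G_SHUFFLE, inputs) == expected
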
3. Tanner Graphs

54. Another popular mathematical representation of error-correcting codes is the "Tanner graph." Tanner graphs are bipartite graphs, wherein the nodes can be divided into two groups. Every edge connects a node in one group to a node in the other (meaning no two nodes in the same group are connected by an edge). A bipartite graph is shown below:

[Figure: A Simple Bipartite Graph]

55. A Tanner graph includes one group of nodes called variable nodes that correspond to the information and parity bits, and a second group of nodes called check nodes that represent constraints that parity and information bits must satisfy. In particular, when a set of variable nodes are connected to a particular check node, it means that the information/parity bits corresponding to those variable nodes must sum to 0.

56. These two mathematical descriptions of linear codes, one using matrices and one using Tanner graphs, are two different ways of describing the same thing. Matrices and Tanner graphs describe the same set of linear codes, in much the same way that "0.5" and "1/2" are two different ways of describing the same number. Every generator matrix corresponds to a Tanner graph, and vice versa.
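
The correspondence between the matrix description and the Tanner graph description is direct: each 1 in a parity-check matrix is an edge between a variable node (a column) and a check node (a row). The Python sketch below is my own illustration, reusing the 2 × 4 parity-check matrix from the earlier example:

    def tanner_edges(H):
        # One edge per 1 in H, connecting check node c (row index)
        # to variable node v (column index).
        return [(c, v) for c, row in enumerate(H)
                for v, entry in enumerate(row) if entry == 1]

    H = [[0, 0, 1, 1],
         [1, 1, 0, 0]]
    # Check node 0 connects to variables x3 and x4; check node 1 to x1 and x2.
    assert tanner_edges(H) == [(0, 2), (0, 3), (1, 0), (1, 1)]
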
F. Irregularity

57. Irregular LDPC codes were first introduced in a 1997 paper by Luby. See Luby, M. et al., "Practical Loss-Resilient Codes," STOC '97, 1997 (Ex. 1008). The paper showed that irregular codes perform better than regular codes on certain types of noisy channels.

58. Regular codes are codes in which each information bit contributes to the same number of parity bits, while irregular codes are codes in which different information bits or groups of information bits contribute to different numbers of parity bits. This is supported by the Board's construction of "irregular" in prior proceedings. (See, e.g., IPR2015-00060, Paper 18, p. 12.) Irregularity can also be defined in terms of Tanner graphs. A regular code is one with a Tanner graph in which each variable node corresponding to an information bit is connected to the same number of check nodes. Irregular codes are those with Tanner graphs in which some variable nodes corresponding to information bits are connected to more check nodes than others. These two formulations of irregularity are different (and well-known) ways of describing the same concept.
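
In matrix terms, the number of check nodes connected to the variable node for a given information bit is the number of 1s in the corresponding column of the parity-check matrix, so regularity can be tested by comparing column weights. The Python sketch below is my own illustration (the small matrix shown is hypothetical):

    def is_regular(H, info_cols):
        # A code is regular if every information-bit variable node has the
        # same degree, i.e., every such column of H has the same weight.
        weights = [sum(row[j] for row in H) for j in info_cols]
        return len(set(weights)) == 1

    H = [[1, 1, 0],
         [1, 0, 1],
         [1, 0, 0]]
    # Bit 0 contributes to three parity checks; bits 1 and 2 to one each,
    # so this (hypothetical) code is irregular.
    assert not is_regular(H, info_cols=[0, 1, 2])
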
59. After Luby's initial paper describing irregularity, the same team of researchers published a second paper, expanding the theory of irregularity in error-correcting codes. (See Luby, M. et al., "Analysis of Low Density Codes and Improved Designs Using Irregular Graphs," STOC '98, pp. 249-59, published in 1998.) (Ex. 1009.) At the time, both of these Luby papers were widely read by coding theorists, and motivated extensive research into irregularity.

60. For example, the 1998 Luby paper was the first reference cited in a paper by Frey and MacKay titled "Irregular Turbocodes," presented at the 1999 Allerton Conference on Communications, Control, and Computing. (See Ex. 1002.) The second author, MacKay, was the same researcher who had rediscovered LDPC codes in 1995. In this paper, the authors applied the concept of irregularity to turbo codes by explaining how to construct irregular turbo codes in which some information bits connect to more check nodes than others. The experimental results presented in the Frey paper demonstrated that these irregular turbo codes perform better than the regular turbocodes that were known in the art. (See Ex. 1002.)

61. By May 18, 2000, the claimed priority date of the '032 patent, it was common knowledge that the performance of error-correcting codes could be improved with irregularity.
