UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Apple Inc.,
Petitioner

v.

California Institute of Technology,
Patent Owner.

Case: IPR2017-00701

DECLARATION OF JAMES A. DAVIS, PH.D.
REGARDING U.S. PATENT NO. 7,421,032
CLAIMS 1-10
TABLE OF CONTENTS

II. BACKGROUND
III. LEGAL PRINCIPLES
IV. OVERVIEW OF THE TECHNOLOGY
    A. Error-Correcting Codes in General
    B. Coding Rate
    C. Performance of Error-Correcting Codes
    D. LDPC Codes, Turbo Codes, and Repeat-Accumulate Codes
    E. Mathematical Representations of Error-Correcting Codes
    F. Irregularity
V. OVERVIEW OF PRIMARY PRIOR ART REFERENCES
    A. Ping
    B. MacKay
    C. Divsalar
    D. Luby97
    E. Pfister
VI. PERSON OF ORDINARY SKILL IN THE ART
VII. OVERVIEW OF THE ’032 PATENT
    A. Claims
    B. Summary of the Specification
VIII. CLAIM CONSTRUCTION
    A. Equation in “generating” step of claim 1
    B. “irregular”
IX. THE CHALLENGED CLAIMS ARE INVALID
    A. Ground 1: Claims 1-10 Are Obvious over Ping in View of MacKay, Divsalar, and Luby97
X. AVAILABILITY FOR CROSS-EXAMINATION
XI. RIGHT TO SUPPLEMENT
XII. JURAT
I, James A. Davis, Ph.D., declare as follows:

1. My name is James A. Davis.

II. BACKGROUND

2. I am a Professor of Mathematics at the University of Richmond in Richmond, Virginia.

3. I received a B.S. in Mathematics (with honors) from Lafayette College in 1983 and an M.S. and Ph.D. in Mathematics from the University of Virginia in 1985 and 1987, respectively.

4. After receiving my doctorate, I taught for one year at Lafayette College before accepting a position at the University of Richmond as an Assistant Professor of Mathematics in 1988. I became an Associate Professor of Mathematics in 1994 and a Full Professor of Mathematics in 2001.

5. Since joining the faculty of the University of Richmond in 1988, I have been engaged in research in Coding Theory, Algebra, and Combinatorics. My research has appeared in journals such as IEEE Transactions on Information Theory, the Journal of Combinatorial Theory Series A, Designs, Codes, and Cryptography, the Proceedings of the American Mathematical Society, and the Journal of Algebra.

6. I have made several major contributions to the field of coding theory in wireless communication and sequence design. I co-discovered the connection between sequences with good power control and Reed-Muller codes, an important step in making OFDM communication practical. I co-discovered a technique for constructing difference sets that has been applied to constructions of bent functions. I co-wrote the paper on the non-existence of Barker arrays.
7. I was a co-Principal Investigator of a $1.5 million National Science Foundation grant designed to engage undergraduates in long-term research projects in mathematics.

8. I have taught mathematics courses in Calculus, Statistics, Linear Algebra, Abstract Algebra, Coding Theory, and Cryptography, among others. I have directed 12 honors projects and 76 summer research experiences for undergraduates in the general area of Coding Theory and Combinatorics.

9. I spent two years (academic years 1995-96 and 2000-01) working at Hewlett-Packard Laboratories in Bristol, England. I was in a communications lab during this time, an industrial research lab focused on applications of Coding Theory to wireless communication and storage devices. I am a co-inventor on 16 patents based on my work during this time.

10. I served as Chair of the Department of Mathematics and Computer Science from 1997 to 2000.
11. I have authored or co-authored over 50 peer-reviewed academic publications in the fields of Coding Theory, Combinatorics, Finite Geometry, and Algebra.

12. A copy of my curriculum vitae is attached as Appendix A.

13. I have reviewed the specification and claims of U.S. Patent No. 7,421,032 (the “’032 patent”; Ex. 1101). I have been informed that the ’032 patent is a continuation of U.S. Patent No. 7,116,710, which claims priority to provisional applications filed on May 18, 2000 and August 18, 2000.

14. I have also reviewed the following references, all of which I understand to be prior art to the ’032 patent:

• L. Ping, W. K. Leung, and N. Phamdo, “Low Density Parity Check Codes with Semi-random Parity Check Matrix,” Electron. Letters, Vol. 35, No. 1, pp. 38-39, published on January 7, 1999 (“Ping”; Ex. 1103).

• D. J. C. MacKay, S. T. Wilson, and M. C. Davey, “Comparison of Constructions of Irregular Gallager Codes,” IEEE Trans. Commun., Vol. 47, No. 10, pp. 1449-54, published in Oct. 1999 (“MacKay”; Ex. 1102).

• D. Divsalar, H. Jin, and R. J. McEliece, “Coding Theorems for ‘Turbo-like’ Codes,” Proc. 36th Allerton Conf. on Comm., Control and Computing, Allerton, Illinois, pp. 201-10, March 1999 (“Divsalar”; Ex. 1117).

• M. Luby et al., “Practical Loss-Resilient Codes,” STOC ’97, pp. 150-159, published in 1997 (“Luby97”; Ex. 1108).

• H. Pfister and P. Siegel, “The Serial Concatenation of Rate-1 Codes Through Uniform Random Interleavers,” 37th Allerton Conf. on Comm., Control and Computing, Monticello, Illinois, published on or before September 24, 1999 (“Pfister”; Ex. 1122).

15. I am being compensated at my normal consulting rate for my work.

16. My compensation is not dependent on and in no way affects the substance of my statements in this Declaration.

17. I have no financial interest in Petitioners. I similarly have no financial interest in the ’032 patent.
III. LEGAL PRINCIPLES

18. I have been informed that a claim is invalid as anticipated under Pre-AIA 35 U.S.C. § 102(a) if “the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent.” I have also been informed that a claim is invalid as anticipated under Pre-AIA 35 U.S.C. § 102(b) if “the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States.” Further, I have been informed that a claim is invalid as anticipated under Pre-AIA 35 U.S.C. § 102(e) if “the invention was described in … an application for patent, published under section 122(b), by another filed in the United States before the invention by the applicant for patent ….” It is my understanding that for a claim to be anticipated, all of the limitations must be present in a single prior art reference, either expressly or inherently.

19. I have been informed that a claim is invalid as obvious under Pre-AIA 35 U.S.C. § 103(a):

    if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which [the] subject matter pertains.

20. I understand that a claimed invention would have been obvious, and therefore not patentable, if the subject matter claimed would have been considered obvious to a person of ordinary skill in the art at the time that the invention was made. I understand that when there are known elements that perform in known ways and produce predictable results, the combination of those elements is probably obvious. Further, I understand that when there is a predictable variation and a person would see the benefit of making that variation, implementing that predictable variation is probably not patentable. I have also been informed that obviousness does not require absolute predictability of success, but that what does matter is whether the prior art gives direction as to what parameters are critical and which of many possible choices may be successful.
IV. OVERVIEW OF THE TECHNOLOGY

21. The ’032 patent relates to the field of channel coding and error-correcting codes. This section provides an introduction to channel coding and error-correcting codes, highlighting developments relevant to the ’032 patent.

A. Error-Correcting Codes in General

22. Most computing devices and other digital electronics use bits to represent information. A bit is a binary unit of information that may have one of two values: 1 or 0. Any type of information, including, for example, text, music, images, and video, can be represented digitally with bits.

23. When transmitting binary information over an analog communication channel, the data bits representing the information to be communicated (also called “information bits”) are converted into an analog signal that can be transmitted over the channel. This process is called modulation. The transmitted signal is then received by a device and converted back into binary form. This process, in which a received analog waveform is converted into bits, is called demodulation. The steps of modulation and demodulation are illustrated in the figure below:

[Figure: Modulation, Transmission, and Demodulation]

24. Transmission over physical channels is rarely 100% reliable. The transmitted signal can be corrupted during transmission by “noise” caused by, for example, obstructions in the signal path, interference from other signals, or electrical/magnetic disturbances. Noise can cause bits to “flip” during transmission: for example, because of noise, a bit that was transmitted as a 1 can be corrupted during transmission and demodulated as a 0, and vice versa.

25. Error-correcting codes were developed to combat such transmission errors. Using the bits representing the information to be communicated (the “information bits”), an error-correcting code generates “parity bits” that allow the receiver to verify that the bits were transmitted correctly, and to correct transmission errors that occurred.
26. Bits are encoded by an encoder, which receives a sequence of information bits as input, generates parity bits with an encoding algorithm, and outputs a sequence of encoded bits called a codeword. The codeword produced by the encoder is then modulated and transmitted as an analog signal.

27. At the receiver, the signal is demodulated, and the codeword is passed to the decoder, which uses a decoding algorithm to recover the original information bits.

[Figure: Encoding and Decoding]

28. Error-correcting codes work by adding redundant information to the original message. Due to redundancy, the information represented by a given information bit is spread across multiple bits of the codeword. Thus, even if one of those bits is flipped during transmission, the original information bit can still be recovered from the others.
29. As a simple example, consider an encoding scheme, called “repeat-three,” that outputs three copies of each information bit. In this scheme, the information bits “1 0 1” would be encoded as “111 000 111.” Upon receipt, the decoder converts instances of “111” into “1” and instances of “000” into “0” to produce the decoded bits “1 0 1,” which match the original information bits.

30. Suppose a bit is flipped during transmission, changing “000” to “010.” The decoder can detect a transmission error, because “010” is not a valid “repeat-three” codeword. Using a “majority vote” rule, the decoder can infer that the original information bit was a 0, correcting the transmission error. Thus, due to the redundancy of the codeword, no information was lost to the transmission error.
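The “repeat-three” scheme and its majority-vote decoder are simple enough to capture in a few lines of code. The sketch below is my own illustration of the scheme described above; it is not code drawn from the ’032 patent or from any cited reference:

```python
def encode_repeat3(bits):
    """Encode by outputting three copies of each information bit."""
    return [b for b in bits for _ in range(3)]

def decode_repeat3(received):
    """Decode by majority vote over each group of three received bits."""
    decoded = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        decoded.append(1 if sum(group) >= 2 else 0)
    return decoded

codeword = encode_repeat3([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
corrupted = list(codeword)
corrupted[4] = 1                       # noise flips "000" to "010"
assert decode_repeat3(corrupted) == [1, 0, 1]  # the flipped bit is corrected
```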
31. Error-correcting codes are either systematic or non-systematic. (Ex. 1120, p. 12, “Also, a [binary convolutional encoder] can be systematic or non-systematic.”; id. pp. 12-13, Figures 2.1 and 2.2, showing systematic and non-systematic encoders.) In a systematic code, both the parity bits and the information bits are included in the codeword. (Ex. 1120, p. 14, “a systematic encoder is one for which the encoder input (the data) forms a substring of the output (the cod[e]word)”; Ex. 1121, pp. 6, 229.) In a non-systematic code, the encoded data includes only the parity bits.

32. Systematic and non-systematic codes had been known in the art for decades prior to May 18, 2000, the claimed priority date of the ’032 patent.
B. Coding Rate

33. Many error-correcting codes encode information bits in groups, or blocks, of fixed length: an encoder receives a k-bit block of information bits as input and produces a corresponding n-bit codeword. The ratio k/n is called the rate of the code. Because the codeword generally includes redundant information, n is generally greater than k, and the rate k/n of an error-correcting code is generally less than one. The “repeat-three” scheme described above, for example, turns each information bit into three codeword bits, and thus has rate 1/3.
C. Performance of Error-Correcting Codes

34. The effectiveness of an error-correcting code may be measured using a variety of metrics.

35. One tool used to assess the performance of a code is its bit-error rate (BER). The BER is defined as the number of corrupted information bits divided by the total number of information bits during a given time interval. For example, if a decoder outputs one thousand bits in a given time period, and ten of those bits are corrupted (meaning they differ from the information bits originally received by the encoder), then the BER of the code during that time period is (10 bit errors) / (1000 total bits) = 0.01, or 1%.¹

¹ Note that as used herein, BER refers to the information BER, which measures the percentage of bits that remain incorrect after decoding. This is different from the transmission BER, which measures the percentage of bits that are incorrect when received by the decoder, before the decoding process begins.
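The BER calculation is straightforward to express in code. The following sketch is my own illustration, using the numbers from the example above; it compares the decoder output against the original information bits:

```python
def information_ber(original_bits, decoded_bits):
    """Information BER: fraction of bits still wrong after decoding."""
    errors = sum(o != d for o, d in zip(original_bits, decoded_bits))
    return errors / len(original_bits)

# The example from the text: 10 corrupted bits out of 1000 decoded bits.
original = [0] * 1000
decoded = [1] * 10 + [0] * 990
print(information_ber(original, decoded))  # 0.01, i.e. 1%
```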
36. The BER of a transmission depends on the amount of noise in the communication channel and the strength of the transmitted signal (meaning the power that is used to transmit the signal). An increase in noise tends to increase the error rate, and an increase in signal strength tends to decrease the error rate. The ratio of signal strength to noise, called the “signal-to-noise ratio,” is often used to characterize the channel over which the encoded signal is transmitted. The signal-to-noise ratio can be expressed mathematically as Eb/N0, in which Eb is the amount of energy used to transmit each bit of the signal and N0 is the density of the noise. The BER of an error-correcting code is often measured for multiple values of Eb/N0 to determine how the code performs under various channel conditions.

37. Error-correcting codes may also be assessed based on their computational complexity. The complexity of a code is a rough estimate of how many calculations are required for the encoder to generate the encoded parity bits and how many calculations are required for the decoder to reconstruct the information bits from the parity bits. If a code is too complex, it may be impractical to build encoders/decoders that are fast enough to use it.
D. LDPC Codes, Turbo Codes, and Repeat-Accumulate Codes

38. In 1963, Robert Gallager described a set of error-correcting codes called Low Density Parity Check (“LDPC”) codes. Gallager described how LDPC codes provide one method of generating parity bits from information bits using a matrix populated with mostly 0s and relatively few 1s, as explained in more detail below. (See Gallager, R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press, 1963; Ex. 1105.)

39. Gallager’s work was largely ignored over the following decades, as researchers continued to discover other algorithms for calculating parity bits. These algorithms included, for example, convolutional encoding with Viterbi decoding and cyclic code encoding with bounded distance decoding. In many cases, these new codes could be decoded using low-complexity decoding algorithms.
40. In 1993, researchers discovered “turbocodes,” a class of error-correcting codes capable of transmitting information at a rate close to the so-called “Shannon Limit,” the maximum rate at which information can be transmitted over a channel. A standard turbocoder encodes a sequence of information bits using two “convolutional” coders. The information bits are passed to the first convolutional coder in their original order. At the same time, a copy of the information bits that has been reordered by an interleaver is passed to the second convolutional coder. The figure below shows the structure of a typical turbocoder. (See Berrou et al., “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes,” ICC ’93, Technical Program, Conference Record 1064, Geneva 1993; Ex. 1106.)

[Figure: structure of a typical turbocoder, with two convolutional coders in parallel and an interleaver feeding the second]
41. The main drawback of convolutional codes is that they do not perform well over channels in which errors are clustered tightly together. Turbo codes overcome this deficiency by encoding the input bits twice. The input bits are fed to a first convolutional encoder in their normal order to produce a first set of parity bits. The same input bits are also reordered by an interleaver and then encoded by a second convolutional encoder to produce a second set of parity bits. The two sets of parity bits together with the information bits form a single codeword. Using a turbo code, a small number of errors will not result in loss of information unless the errors happen to fall close together in both the original data stream and in the permuted data stream, which is unlikely.
42. In 1995, David J. C. MacKay rediscovered Gallager’s work from 1963 relating to low-density parity-check (LDPC) codes, and demonstrated that they have performance comparable to that of turbocodes. (See MacKay, D. J. C., and Neal, R. M., “Near Shannon Limit Performance of Low Density Parity Check Codes,” Electronics Letters, vol. 32, pp. 1645-46, 1996; Ex. 1116.) This rediscovery was met with wide acclaim. Turbocodes and LDPC codes have common characteristics: both codes use pseudo-random permutations to spread out redundancy, and both use iterative decoding algorithms.
43. In 1995 and 1996, researchers began to explore “concatenated” convolutional codes. (See Benedetto, S., et al., “Serial Concatenation of Block and Convolutional Codes,” 32.10 Electronics Letters 887-8, 1996; Ex. 1107.) While turbo codes use two convolutional coders connected in parallel, concatenated convolutional codes use two convolutional coders connected in series: the information bits are encoded by a first encoder, the output of the first encoder is interleaved, and the interleaved sequence is encoded by a second convolutional code. In such codes, the first and second encoders are often called the “outer coder” and the “inner coder,” respectively.
44. In 1998, researchers developed “repeat-accumulate” (“RA”) codes by simplifying the principles underlying turbocodes. (See Ex. 1117.) In RA codes, the information bits are first passed to a repeater that repeats (duplicates) the information bits and outputs a stream of repeated bits (the encoder described above in the context of the “repeat-three” coding scheme is one example of a repeater). The repeated bits are then optionally reordered by an interleaver and passed to an accumulator, where they are “accumulated” to form the parity bits.
45. Accumulation is a running-sum process whereby each input bit is added to the previous input bits to produce a sequence of running sums, each of which represents the sum of all input bits yet received. More formally, if an accumulator receives a sequence of input bits i1, i2, i3, …, in, it will produce output bits o1, o2, o3, …, on, such that:

    o1 = i1
    o2 = i1 ⊕ i2
    o3 = i1 ⊕ i2 ⊕ i3
    ...
    on = i1 ⊕ i2 ⊕ … ⊕ in

where the ⊕ symbol denotes modulo-2 addition. Accumulators can be implemented simply, allowing accumulate codes to be encoded rapidly and cheaply.
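The accumulator, and the repeat-interleave-accumulate pipeline described in the preceding paragraphs, can be sketched in a few lines. This is a minimal illustration under assumptions of my own (a repeat factor of three and a seeded pseudo-random interleaver); it is not the particular construction of Divsalar or of the ’032 patent:

```python
import random

def accumulate(bits):
    """Running modulo-2 sum: o_j = i1 XOR i2 XOR ... XOR ij."""
    out, running = [], 0
    for b in bits:
        running ^= b
        out.append(running)
    return out

def ra_encode(info_bits, repeat=3, seed=0):
    """Toy repeat-accumulate encoder: repeat, interleave, accumulate."""
    repeated = [b for b in info_bits for _ in range(repeat)]
    order = list(range(len(repeated)))
    random.Random(seed).shuffle(order)      # a fixed pseudo-random interleaver
    interleaved = [repeated[j] for j in order]
    return accumulate(interleaved)          # the parity bits

print(accumulate([1, 0, 1, 1]))  # [1, 1, 0, 1]
print(ra_encode([1, 0, 1]))      # nine parity bits for three info bits
```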
E. Mathematical Representations of Error-Correcting Codes

1. Linear Transformations

46. Coding theorists often think of error-correcting codes in linear-algebraic terms. For example, a k-bit block of information bits is a k-dimensional vector of bits, and an n-bit codeword is an n-dimensional vector of bits (an n-dimensional vector of bits is simply a sequence of n bits). The encoding process, which converts blocks of information bits into codewords, is a linear transformation that maps k-dimensional bit vectors to n-dimensional bit vectors. This transformation is represented by an n × k matrix G called a generator matrix. For a vector of information bits u, the corresponding codeword x is given by:

    x = Gu

The n-dimensional vectors that can be written in the form Gu are valid codewords.
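The product x = Gu is ordinary matrix-vector multiplication, except that the arithmetic is carried out modulo 2. The sketch below is my own illustration; the 6 × 3 generator matrix (which repeats each of three information bits twice) is a made-up example, not a matrix from the ’032 patent or any cited reference:

```python
def gf2_matvec(G, u):
    """Compute the codeword x = G u over GF(2) (modulo-2 arithmetic)."""
    return [sum(g_ij * u_j for g_ij, u_j in zip(row, u)) % 2 for row in G]

# Hypothetical 6 x 3 generator matrix (n = 6, k = 3): each column has
# two 1s, so each information bit appears twice in the codeword.
G = [[1, 0, 0],
     [1, 0, 0],
     [0, 1, 0],
     [0, 1, 0],
     [0, 0, 1],
     [0, 0, 1]]
print(gf2_matvec(G, [1, 0, 1]))  # [1, 1, 0, 0, 1, 1]
```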
47. Most n-dimensional vectors are not valid codewords. It is this property that allows a decoder to determine when there has been an error during transmission. This determination is made using an (n - k) × n matrix H, called a parity-check matrix. Using a parity-check matrix, a vector x is a valid codeword if and only if Hx = 0. If the parity check shows that Hx does not equal 0, then the decoder detects a transmission error (much like the “010” example in the “repeat-three” code above).
48. Each of the n - k rows of the parity-check matrix H represents an equation that a valid codeword must satisfy. For example, consider a codeword x = (x1, x2, x3, x4) and a parity-check matrix H given as follows:

    H = [ 0 0 1 1 ]
        [ 1 1 0 0 ]

49. If x is a valid codeword, the product Hx must be equal to 0, so we have:

    Hx = [ x3 + x4 ] = [ 0 ]
         [ x1 + x2 ]   [ 0 ]

As this equation shows, the first row of H represents the constraint that x3 + x4 = 0, and the second row of H represents the constraint that x1 + x2 = 0. If the vector x satisfies both of these constraints, it is a valid codeword. In practice, parity-check matrices often have hundreds or thousands of rows, each of which represents an equation of the form xa + xb + … + xz = 0, similar to those shown in the above example. These equations are called parity-check equations.
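A parity check is again modulo-2 matrix-vector multiplication. The following sketch is my own illustration, reusing the two-row H from the example above, and tests whether a vector is a valid codeword:

```python
def gf2_parity_check(H, x):
    """Return True when H x = 0 over GF(2), i.e. x is a valid codeword."""
    return all(sum(h_ij * x_j for h_ij, x_j in zip(row, x)) % 2 == 0
               for row in H)

H = [[0, 0, 1, 1],   # row 1: the constraint x3 + x4 = 0
     [1, 1, 0, 0]]   # row 2: the constraint x1 + x2 = 0
print(gf2_parity_check(H, [1, 1, 0, 0]))  # True: both constraints hold
print(gf2_parity_check(H, [1, 0, 0, 0]))  # False: x1 + x2 = 1, an error
```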
2. Repeating and Permuting as Examples of LDGM Codes

50. As explained above, the encoding process can be represented using a generator matrix. This is true of all linear codes, including the “repeat,” “interleave,” and “accumulate” phases of the “repeat-accumulate” codes discussed above. More specifically, the “repeat” and “interleave” phases can be represented using generator matrices whose entries are mostly zeros, with a relatively low proportion of 1s. Such generator matrices are called “low-density generator matrices” (LDGMs). The following is a generator matrix that repeats each of nine information bits twice (column j contains 1s in rows 2j - 1 and 2j, and 0s elsewhere):

    GREPEAT2 = [ 1 0 0 0 0 0 0 0 0 ]
               [ 1 0 0 0 0 0 0 0 0 ]
               [ 0 1 0 0 0 0 0 0 0 ]
               [ 0 1 0 0 0 0 0 0 0 ]
               [ 0 0 1 0 0 0 0 0 0 ]
               [ 0 0 1 0 0 0 0 0 0 ]
               ...
               [ 0 0 0 0 0 0 0 0 1 ]
               [ 0 0 0 0 0 0 0 0 1 ]

51. Using this generator matrix, encoding a sequence of input bits “101010101” would result in the encoded sequence “110011001100110011” (the number of times each input bit is repeated is determined by the number of 1s in the column corresponding to that input bit). In the 18×9 matrix above, 11.1% of the entries are 1s, and the rest are 0s. The relative scarcity of 1s means that GREPEAT2 is a low-density generator matrix (LDGM).
52. Permutation is another linear transformation that is represented by a low-density generator matrix. For example, the matrix below is a permutation matrix that outputs bits in a different order than they are input:

    GSHUFFLE = [ 0 1 0 0 0 0 0 0 ]
               [ 1 0 0 0 0 0 0 0 ]
               [ 0 0 0 0 0 0 1 0 ]
               [ 0 0 0 0 1 0 0 0 ]
               [ 0 0 0 1 0 0 0 0 ]
               [ 0 0 0 0 0 0 0 1 ]
               [ 0 0 0 0 0 1 0 0 ]
               [ 0 0 1 0 0 0 0 0 ]

53. Permutation matrices, like GSHUFFLE above, have exactly one 1 per row and one 1 per column. The order of the output bits is determined by the position of each of these 1s within its row/column. The matrix above, for example, would transform input bits i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3. Permutation matrices are square, with dimensions n×n, because the input sequence and the output sequence have the same length. With only one 1 per row/column, the density of 1s in a permutation matrix is 1/n (GSHUFFLE, shown above, has a density of 1/8, or 12.5%). Linear codes often operate on blocks of more than a thousand bits at a time, so a permutation matrix might comprise 0.1% 1s or fewer; such matrices are therefore very low-density generator matrices (LDGMs).²

² In fact, for a given n and k, there are no viable codes whose generator matrices have a lower density than repeat codes or permute codes. Both of these codes have matrices with exactly one 1 per row; a matrix with lower density would necessarily have at least one row without any 1s at all, which would result in every codeword having a 0 in a particular bit position, conveying no information about the original message.
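Because a permutation matrix has a single 1 per row, multiplying by it simply reorders the input. The sketch below, my own illustration, applies the reordering performed by GSHUFFLE above:

```python
def permute(bits, order):
    """Apply a permutation matrix: output position r takes input order[r].
    (1-indexed here to match the i1 ... i8 notation in the text.)"""
    return [bits[j - 1] for j in order]

# The reordering performed by the GSHUFFLE matrix above.
order = [2, 1, 7, 5, 4, 8, 6, 3]
inputs = ["i1", "i2", "i3", "i4", "i5", "i6", "i7", "i8"]
print(permute(inputs, order))
# ['i2', 'i1', 'i7', 'i5', 'i4', 'i8', 'i6', 'i3']
```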
3. Tanner Graphs

54. Another popular mathematical representation of error-correcting codes is the “Tanner graph.” Tanner graphs are bipartite graphs, meaning that their nodes can be divided into two groups such that every edge connects a node in one group to a node in the other (no two nodes in the same group are connected by an edge). A bipartite graph is shown below:

[Figure: a bipartite graph, in which every edge joins a node in one group to a node in the other]

55. A Tanner graph includes one group of nodes, called variable nodes, that correspond to the information and parity bits, and a second group of nodes, called check nodes, that represent constraints that the parity and information bits must satisfy. Specifically, when a set of variable nodes is connected to a given check node, it means that the information/parity bits corresponding to those variable nodes must sum to 0.
56. These two mathematical descriptions of linear codes, one using matrices and one using Tanner graphs, are two different ways of describing the same thing, in much the same way that “0.5” and “½” are two different ways of describing the same number. Every generator matrix corresponds to a Tanner graph, and vice versa.
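This correspondence can be made concrete: the neighbors of each check node are exactly the positions of the 1s in the corresponding row of the parity-check matrix. The sketch below is my own illustration, reusing the two-constraint example from above, and expresses the same code as adjacency lists:

```python
# Each check node lists the variable nodes it touches; these adjacency
# lists carry the same information as the rows of the parity-check
# matrix H in the earlier example (positions are 0-indexed).
check_nodes = [[2, 3],   # check node 1: x3 + x4 must sum to 0
               [0, 1]]   # check node 2: x1 + x2 must sum to 0

def satisfies_all_checks(bits, check_nodes):
    """Valid codeword iff each check node's neighbors sum to 0 mod 2."""
    return all(sum(bits[v] for v in neighbors) % 2 == 0
               for neighbors in check_nodes)

print(satisfies_all_checks([1, 1, 0, 0], check_nodes))  # True
print(satisfies_all_checks([1, 0, 0, 0], check_nodes))  # False
```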
F. Irregularity

57. Irregular LDPC codes were first introduced in a 1997 paper by Luby. (See Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, 1997; Ex. 1108.) The paper showed that irregular codes perform better than regular codes on certain types of noisy channels.
58. Regular codes are codes in which each information bit contributes to the same number of parity bits, while irregular codes are codes in which different information bits or groups of information bits contribute to different numbers of parity bits. This is supported by the Board’s construction of “irregular” in prior proceedings. (See, e.g., IPR2015-00060, Paper 18, p. 12.) Irregularity can also be defined in terms of Tanner graphs. A regular code is one with a Tanner graph in which each variable node corresponding to an information bit is connected to the same number of check nodes. Irregular codes are those with Tanner graphs in which some variable nodes corresponding to information bits are connected to more check nodes than others. These two formulations of irregularity are different (and well-known) ways of describing the same concept.
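Under the first formulation, regularity can be checked by counting, for each information bit, how many parity-check equations it participates in. The following sketch is my own illustration; it assumes, purely as a convention for this example, that the first k columns of H correspond to the information bits:

```python
def is_irregular(H, k):
    """A code is irregular if its information-bit columns of H have
    differing weights (differing numbers of check equations)."""
    weights = [sum(row[j] for row in H) for j in range(k)]
    return len(set(weights)) > 1

# A made-up 3 x 5 parity-check matrix with k = 2 information bits.
H = [[1, 1, 0, 1, 0],
     [1, 0, 1, 0, 1],
     [1, 1, 1, 0, 0]]
print(is_irregular(H, 2))  # True: the two columns have weights 3 and 2
```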
59. After Luby’s initial paper describing irregularity, the same team of researchers published a second paper expanding the theory of irregularity in error-correcting codes. (See Luby, M. et al., “Analysis of Low Density Codes and Improved Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in 1998; Ex. 1109.) At the time, both of these Luby papers were widely read by coding theorists, and they motivated extensive research into irregularity.

60. For example, the 1998 Luby paper was the first reference cited in a paper by Frey and MacKay titled “Irregular Turbocodes,” presented at the 1999 Allerton Conference on Communications, Control, and Computing. (See Ex. 1110.) The second author, MacKay, was the same researcher who had rediscovered LDPC codes in 1995. In this paper, the authors applied the concept of irregularity to turbo codes, explaining how to construct irregular turbo codes in which some information bits connect to more check nodes than others. The experimental results presented in the Frey paper demonstrated that these irregular turbo codes perform better than the regular turbocodes that were known in the art. (See Ex. 1110.)

61. By May 18, 2000, the claimed priority date of the ’032 patent, it was common knowledge that the performance of an error-correcting code could be improved with irregularity.
V. OVERVIEW OF PRIMARY PRIOR ART REFERENCES

A. Ping

62. L. Ping, W. K. Leung, and N. Phamdo, “Low Density Parity Check Codes with Semi-random Parity Check Matrix,” Electron. Letters, Vol. 35, No. 1, pp. 38-39 (“Ping”; Ex. 1103) was published in January 1999, more than a year before the filing of the provisional application to which the ’032 patent claims priority. I understand that Ping is thus prior art to the ’032 patent under 35 U.S.C. § 102(a) and (b). Ping was not