`Petition for Inter Partes Review
`
`DOCKET NO.: 1033300-00287
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
`7,421,032
`PATENT:
`INVENTORS: HUI JIN, AAMOD KHANDEKAR, ROBERT J. MCELIECE
`FILED:
`OCTOBER 3, 2006
`ISSUED:
`SEPTEMBER 2, 2008
`TITLE:
`SERIAL CONCATENATION OF INTERLEAVED
`CONVOLUTIONAL CODES FORMING TURBO-LIKE
`CODES
`___________________________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________________________________________
`
`Apple Inc.
`Petitioner
`
`v.
`
`California Institute of Technology
`Patent Owner
`
`Case IPR2017-00700
`
`
`
`PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 7,421,032
`UNDER 35 U.S.C. § 312 AND 37 C.F.R. § 42.104
`
`
`
`
`
`ActiveUS 160600254v.1
`
`
`
`U.S. Patent 7,421,032
`Petition for Inter Partes Review
`
`TABLE OF CONTENTS
`
`I. Mandatory Notices ............................................................................................... 3
`A. Real Party-in-Interest ..................................................................................... 3
`B. Related Matters .............................................................................................. 3
`C. Counsel .......................................................................................................... 4
`D. Service Information ....................................................................................... 4
`II. Certification of Grounds for Standing ................................................................. 4
`III. Overview of Challenge and Relief Requested ..................................................... 4
`A. Prior Art Patents and Printed Publications .................................................... 4
`B. Relief Requested ............................................................................................ 6
`IV. Overview of the Technology ................................................................................ 6
`A. Error-Correcting Codes in General ............................................................... 6
`B. Coding Rate ................................................................................................... 9
`C. Performance of Error-Correcting Codes ..................................................... 10
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes ....................... 10
`E. Mathematical Representations of Error-Correcting Codes ......................... 13
F. Irregularity ................................................................................................... 19
`V. Overview of the ’032 Patent .............................................................................. 21
`A. Claims .......................................................................................................... 21
`B. Summary of the Specification ..................................................................... 21
`C. Level of Ordinary Skill in the Art ............................................................... 23
`VI. Claim Construction ............................................................................................ 23
`A. “irregular” .................................................................................................... 24
`B. “Tanner graph” (Claims 11, 18) .................................................................. 25
`VII. Overview of Primary Prior Art References ................................................ 27
`A. Ping .............................................................................................................. 27
`B. MacKay ........................................................................................................ 33
`C. Divsalar ........................................................................................................ 34
`D. Luby97 ......................................................................................................... 37
`
`
`E. Pfister ........................................................................................................... 37
`VIII. Grounds for Challenge ................................................................................ 38
`A. Ground 1: Claims 11, 12, and 14-16 Are Obvious over Ping in View of
`MacKay and Further in View of Divsalar ................................................... 39
`B. Ground 2: Claim 13 Is Obvious over Ping in View of MacKay, Divsalar,
`and Luby97 .................................................................................................. 64
`C. Ground 3: Claim 17 Is Obvious over Ping in View of MacKay, Divsalar,
`and Pfister .................................................................................................... 71
`IX. Conclusion .......................................................................................................... 73
`
`
`
`
`
`
`I. MANDATORY NOTICES
`A. Real Party-in-Interest
`Apple Inc. (“Apple” or “Petitioner”) and Broadcom Corp. are the real
`
`parties-in-interest.
`
`B. Related Matters
`U.S. Pat. No. 7,421,032 (the “’032 patent,” Ex. 1001) is assigned to the
`
California Institute of Technology (“Caltech” or “Patent Owner”). On May 26,
`
`2016, Caltech sued Apple, Broadcom Corp., and Avago Technologies, Ltd. in the
`
`U.S. District Court for the Central District of California, claiming that Apple
`
`products compliant with the 802.11n and 802.11ac wireless communication
`
`standards infringe the ’032 patent (and three others). On August 15, 2016, Caltech
`
`amended its complaint to assert patent infringement against Cypress
`
`Semiconductor Corp. See Amended Complaint, California Institute of Technology
`
`v. Broadcom, Ltd. et al. (Case 2:16-cv-03714), Docket No. 36. The ’032 patent
`
`was also asserted by Caltech against Hughes Communications Inc. in California
`
Institute of Technology v. Hughes Communs., Inc. (Case 2:13-cv-07245), and its
`
`claims were challenged in one petition for inter partes review, IPR2015-00060.
`
`Patents in the priority chain of the ’032 patent were challenged in IPR2015-00068,
`
IPR2015-00067, IPR2015-00059, IPR2015-00061, IPR2015-00081, IPR2017-
`
`00210, IPR2017-00211, IPR2017-00219, IPR2017-00297, and IPR2017-00423.
`
`
`C. Counsel
`Lead Counsel: Richard Goldenberg (Registration No. 38,895)
`
`Backup Counsel: Brian M. Seeve (Registration No. 71,721)
`
D. Service Information
`Petitioner consents to electronic service.
`
`E-mail: richard.goldenberg@wilmerhale.com
`
`Post and Hand Delivery: WilmerHale, 60 State St., Boston MA 02109
`
`Telephone: 617-526-6548
`
`II. CERTIFICATION OF GROUNDS FOR STANDING
`Petitioner certifies pursuant to Rule 42.104(a) that the patent for which
`
`review is sought is available for inter partes review and that Petitioner is not
`
`barred or estopped from requesting an inter partes review challenging the patent
`
`claims on the grounds identified in this Petition.
`
`III. OVERVIEW OF CHALLENGE AND RELIEF REQUESTED
`Pursuant to Rules 42.22(a)(1) and 42.104(b)(1)-(2), Petitioner challenges
`
`claims 11-17 of the ’032 Patent (“the challenged claims”) and requests that each
`
`challenged claim be canceled.
`
A. Prior Art Patents and Printed Publications
`Petitioner relies upon the patents and printed publications listed in the Table
`
`of Exhibits, including:
`
`1. L. Ping, W. K. Leung, N. Phamdo, “Low Density Parity Check
`
`Codes with Semi-random Parity Check Matrix.” Electron. Letters,
`
`Vol. 35, No. 1, pp. 38-39, published on January 7, 1999 (“Ping”;
`
`Ex. 1003.)
`
`2. D. J. C. MacKay, S. T. Wilson, and M. C. Davey, “Comparison of
`
`Constructions of Irregular Gallager Codes,” IEEE Trans.
`
`Commun., Vol. 47, No. 10, pp. 1449-54, published in Oct. 1999
`
`(“MacKay”; Ex. 1002.)
`
`3. D. Divsalar, H. Jin, and R. J. McEliece, “Coding Theorems for
`
`‘Turbo-like’ Codes,” Proc. 36th Allerton Conf. on Comm., Control
`
`and Computing, Allerton, Illinois, pp. 201-10, September 1998
`
`(“Divsalar”; Ex. 1017.)
`
`4. Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, pp.
`
`150-59, published in 1997 (“Luby97”; Ex. 1008.)
`
`5. Pfister, H. and Siegel, P., “The Serial Concatenation of Rate-1
`
`Codes Through Uniform Random Interleavers,” 37th Allerton
`
`Conf. on Comm., Control and Computing, Monticello, Illinois,
`
`published on or before September 24, 1999 (“Pfister”; Ex. 1022.)
`
`The ’032 patent is a continuation of U.S. Patent No. 7,116,710, which claims
`
`priority to provisional applications filed on May 18, 2000 and August 18, 2000.
`
`The Petitioner reserves the right to dispute that the challenged claims are entitled to
`
`these priority dates. However, even if the challenged claims are entitled to the
`
`earliest claimed priority date of May 18, 2000, the references relied upon in this
`
`Petition qualify as prior art under 35 U.S.C. § 102.
`
`B. Relief Requested
`Petitioner requests that the Patent Trial and Appeal Board cancel the
`
`challenged claims because they are unpatentable under 35 U.S.C. § 103 as set forth
`
`in this Petition. This conclusion is further supported by the declaration of Professor
`
`James Davis, Ph.D. (“Davis Declaration,” Ex. 1004), filed herewith.
`
`IV. OVERVIEW OF THE TECHNOLOGY
`The ’032 patent relates to the field of channel coding and error-correcting
`
`codes. This section provides a brief introduction to channel coding and error-
`
`correcting codes, and highlights a few of the developments in the field that are
`
`relevant to the ’032 patent. (Ex. 1004, ¶21.)
`
`A. Error-Correcting Codes in General
`Most digital electronics use bits to represent information. A bit is a binary
`
`unit of information that may be a 1 or 0. Any type of information can be
`
`represented digitally as a collection of bits. (Ex. 1004, ¶22.)
`
`When transmitting binary information over an analog communication
`
`channel, the data bits representing the information to be communicated
`
`(“information bits”) are converted into an analog signal. This process is called
`
`modulation. The transmitted signal is then received by a receiving device and
`
`converted back into binary form through demodulation.
`
[Figure: Modulation, Transmission, and Demodulation]
`Transmission over physical channels is rarely 100% reliable. The transmitted
`
`
`
`signal can be corrupted during transmission by “noise” caused by, e.g.,
`
`obstructions in the signal path, interference from other signals, or
`
`electrical/magnetic disturbances. Noise can cause bits to “flip” during
`
`transmission, causing a bit transmitted as a 1 to be demodulated as 0, and vice
`
`versa. (Ex. 1004, ¶¶23-24.)
`
`Error-correcting codes were developed to mitigate transmission errors.
`
`Using the information bits, an error-correcting code generates “parity bits” that
`
`allow the receiver to verify that the information bits were transmitted correctly, and
`
`to correct transmission errors. (Ex. 1004, ¶25.)
`
`Bits are encoded by an encoder, which receives a sequence of information
`
`bits as input, generates parity bits according to a particular encoding algorithm, and
`
`outputs a sequence of encoded bits (or data elements) called a codeword. The
`
`codeword produced by the encoder is then modulated and transmitted as an analog
`
`signal. (Ex. 1004, ¶26.)
`
`At the receiver, the signal is received and demodulated, and the codeword is
`
`passed to the decoder, which uses a decoding algorithm to recover the original
`
`information bits.
`
[Figure: Encoding and Decoding]
`Error-correcting codes work by adding redundant information to the original
`
`
`
`message. Due to redundancy, the information represented by a given information
`
`bit is spread across multiple bits of the codeword. Thus, if a codeword bit is flipped
`
`during transmission, the original information bit can still be recovered from the
`
`other codeword bits. (Ex. 1004, ¶¶27-28.)
`
`As a simple example, consider an encoding scheme, called “repeat-three,”
`
`that outputs three copies of each information bit. In this scheme, the information
`
`bits “1 0 1” would be encoded as “111 000 111.” Upon receipt, the decoder
`
`converts instances of “111” into “1” and instances of “000” into “0” to produce the
`
`decoded bits “1 0 1,” which match the original information bits. (Ex. 1004, ¶29.)
`
`Suppose a bit is flipped during transmission, changing “000” to “010.” The
`
`decoder will be able to detect a transmission error, because “010” is not a valid
`
`“repeat-three” codeword. Using a “majority vote” rule, the decoder can infer that
`
`the original information bit was 0, correcting the transmission error. Thus, due to
`
`the redundancy incorporated into the codeword, no information was lost. (Ex.
`
`1004, ¶30.)
`
`Error-correcting codes may be either systematic or non-systematic. (Ex.
`
`1020, p. 12 “Also, a BCI can be systematic or non-systematic.”; pp. 12-13, Figures
`
`2.1 and 2.2 captions showing both systematic and non-systematic encoders.) In a
`
`systematic code, parity bits and original information bits are included in the
`
`codeword. (Ex. 1020, p. 14 “For example, a systematic encoder is one for which
`
the encoder input (the data) forms a substring of the output (the codeword).”; Ex.
`
`1021, pp. 6, 229.) In a non-systematic code, the encoded data includes only parity
`
`bits. (Id.) Systematic and non-systematic codes had been known for decades before
`
`May 18, 2000, the claimed priority date of the ʼ032 patent. (Ex. 1004, ¶¶31-32.)
`
`B. Coding Rate
`Many error-correcting codes encode information bits in groups, or blocks of
`
`fixed length n. An encoder receives a k-bit block of information bits as input, and
`
`produces a corresponding n-bit codeword. The ratio k/n is called the rate of the
`
`code. Because the codeword generally includes redundant information, n is
`
`generally greater than k, and the rate k/n of an error-correcting code is generally
`
`less than one. (Ex. 1004, ¶33.)
`
C. Performance of Error-Correcting Codes
`One metric used to assess the performance of an error-correcting code is its
`
bit-error rate (BER). The BER is defined as the number of corrupted information

bits divided by the total number of information bits during a particular interval. For example,
`
`if a decoder outputs one thousand bits and ten are corrupted (i.e., they differ from
`
`the original information bits), then the BER of the code during that interval is (10
`
`bit errors) / (1000 total bits) = 0.01 or 1%. (Ex. 1004, ¶¶34-35.)
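The BER calculation above can be expressed as a short Python sketch (illustrative only):

```python
def bit_error_rate(sent, received):
    # BER = number of corrupted bits / total number of bits in the interval.
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors / len(sent)

# 1000 decoded bits, 10 of which differ from the original information bits:
sent = [0] * 1000
received = [1] * 10 + [0] * 990
rate = bit_error_rate(sent, received)  # 10 / 1000 = 0.01, i.e., 1%
```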
`
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes
`In 1963, Robert Gallager described a set of error-correcting codes called
`
`Low Density Parity Check (“LDPC”) codes. Gallager described how LDPC codes
`
`provide one method of generating parity bits from information bits using a matrix
`
`populated with mostly 0s and relatively few 1s, as explained below. (See Gallager,
`
`R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press, 1963; Ex. 1005.)
`
`(Ex. 1004, ¶38.)
`
`Gallager’s work was largely ignored over the following decades, as
`
`researchers continued to discover other algorithms for calculating parity bits. These
`
`algorithms included, for example, convolutional encoding with Viterbi decoding
`
`and cyclic code encoding with bounded distance decoding. In many cases, these
`
`new codes could be decoded using low-complexity decoding algorithms. (Ex.
`
`1004, ¶39.)
`
`In 1993, researchers discovered “turbocodes,” a class of error-correcting
`
`codes capable of transmitting information at a rate close to the so-called “Shannon
`
`Limit” – the maximum rate at which information can be transmitted over a
`
`channel. As explained further in the Davis Declaration, the main drawback of
`
`convolutional codes is that they do not perform well over channels in which errors
`
`are clustered tightly together. Turbocodes overcome this deficiency: using a
`
`turbocode, a small number of errors will not result in loss of information unless the
`
`errors happen to fall close together in both the original data stream and in the
`
`permuted data stream, which is unlikely. (Ex. 1004, ¶¶40-41.)
`
`In 1995, David J. C. MacKay rediscovered Gallager’s work from 1963
`
`relating to low-density parity-check (LDPC) codes and demonstrated that they
`
`have performance comparable to that of turbocodes. See MacKay, D. J. C, and
`
`Neal, R. M. “Near Shannon Limit Performance of Low Density Parity Check
`
`Codes,” Electronics Letters, vol. 32, pp. 1645-46, 1996 (Ex. 1016.) This
`
`rediscovery was met with wide acclaim. Turbocodes and LDPC codes have some
`
`common characteristics: both codes use permutations to spread out redundancy,
`
`and both use iterative decoding algorithms. (Ex. 1004, ¶42.)
`
`In 1995 and 1996, researchers began to explore “concatenated”
`
`convolutional codes. See Benedetto, S. et al., Serial Concatenation of Block and
`
`Convolutional Codes, 32.10 Electronics Letters 887-8, 1996 (Ex. 1007.) While
`
`turbocodes use two convolutional coders connected in parallel, concatenated
`
`convolutional codes use two convolutional coders connected in series: the
`
`information bits are encoded by a first encoder, the output of the first encoder is
`
`interleaved, and the interleaved sequence is encoded by a second encoder. In such
`
`codes, the first and second encoders are often called the “outer coder” and the
`
`“inner coder,” respectively. (Ex. 1004, ¶43.)
`
`In 1998, researchers developed “repeat-accumulate,” or “RA codes,” by
`
`simplifying the principles underlying turbocodes. (See Ex. 1017.) In RA codes,
`
`information bits are first passed to a repeater that repeats (i.e., duplicates) them and
`
`outputs a stream of repeated bits (the encoder described above in the “repeat three”
`
`coding scheme is one example of a repeater). The repeated bits are then optionally
`
`reordered by an interleaver and passed to an “accumulator” to form parity bits. (Ex.
`
`1004, ¶44.)
`
`In accumulation, each input bit is added to the previous input bits to produce
`
`a sequence of running sums, each of which represents the sum of all input bits yet
`
`received. More formally, if an accumulator receives a sequence of input bits i1, i2,
`
`i3, … in, it will produce output bits o1, o2, o3, … on, such that:
`
`
`The (cid:134) symbol denotes modulo-2 addition.1 Accumulators can be
`
`implemented simply, allowing accumulate codes to be encoded quickly and
`
`cheaply. (Ex. 1004, ¶45.)
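The accumulator described above, in which each output bit is the modulo-2 running sum of all input bits received so far, can be sketched in Python (illustrative only):

```python
def accumulate(bits):
    # o_k = i_1 XOR i_2 XOR ... XOR i_k (modulo-2 running sum).
    out, running = [], 0
    for b in bits:
        running ^= b
        out.append(running)
    return out

parity = accumulate([1, 1, 1, 0, 1])  # -> [1, 0, 1, 1, 0]
```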
`
`E. Mathematical Representations of Error-Correcting Codes
`1.
`Linear Transformations
`Coding theorists often think of error-correcting codes in linear-algebraic
`
`terms. For instance, a k-bit block of information bits is a k-dimensional vector of
`
`bits and an n-bit codeword is an n-dimensional vector of bits (a “k-dimensional
`
`vector of bits” is a sequence of k bits). The encoding process, which converts
`
`blocks of information bits into codewords, is a linear transformation that maps k-
`
`dimensional bit vectors to n-dimensional bit vectors. This transformation is
`
`
1 Modulo-2 addition (also called “XOR”) is an addition operation that is performed

on bits, defined as follows: 1⊕1=0, 1⊕0=1, 0⊕1=1, and 0⊕0=0.
`
`represented by an n × k matrix G called a generator matrix. For a vector of
`
information bits u, the corresponding codeword x is given by:

x = Gu

The n-dimensional vectors that can be written in the form Gu are valid codewords.
`
`(Ex. 1004, ¶46.)
`
`Most n-dimensional vectors are not valid codewords. It is this property that
`
`allows a decoder to determine when there has been an error during transmission.
`
`This determination is made using an (n-k) × n matrix H, called a parity check
`
`matrix. Using a parity check matrix, a given vector x is a valid codeword if and
`
`only if Hx = 0. As explained in the Davis Declaration, if the parity check shows
`
`that Hx does not equal 0, then the decoder determines there has been a
`
`transmission error (much like the “010” example in the “repeat three” code above).
`
`(Ex. 1004, ¶47.)
`
`Each of the n-k rows of the parity-check matrix H represents an equation
`
`that a valid codeword must satisfy. For example, consider a codeword x and a
`
parity check matrix H given as follows:

        [ x1 ]
        [ x2 ]          [ 0  0  1  1 ]
    x = [ x3 ],    H =  [ 1  1  0  0 ]
        [ x4 ]

If x is a valid codeword, the product Hx must be equal to 0, so we have:

    Hx = [ x3 + x4 ] = [ 0 ]
         [ x1 + x2 ]   [ 0 ]
`As this equation shows, the first row of H represents the constraint that x3+x4=0,
`
`and the second row of H represents the constraint that x1+x2=0. If the vector x
`
`satisfies both of these constraints, it is a valid codeword. These can be referred to
`
`as “parity constraints.” In practice, parity-check matrices often have hundreds or
`
`thousands of rows, each of which represents an equation of the form xa+xb+…+xz =
`
`0. These equations are called parity-check equations. (Ex. 1004, ¶¶48-49.)
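The parity check Hx = 0 described above, using the two constraints x3+x4=0 and x1+x2=0, can be sketched in Python (illustrative only):

```python
def is_valid_codeword(H, x):
    # x is a valid codeword iff every row of H (a parity-check
    # equation) sums to 0 modulo 2 over the bits it selects.
    return all(sum(h * xj for h, xj in zip(row, x)) % 2 == 0 for row in H)

H = [[0, 0, 1, 1],   # parity constraint: x3 + x4 = 0
     [1, 1, 0, 0]]   # parity constraint: x1 + x2 = 0

is_valid_codeword(H, [1, 1, 0, 0])  # True: both constraints satisfied
is_valid_codeword(H, [1, 0, 0, 0])  # False: x1 + x2 = 1, a transmission error
```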
`
`2.
`Repeating and Permuting as Examples of LDGM Codes
`As explained above, the encoding process can be represented using a
`
`generator matrix. This is true of all linear codes, including the “repeat,”
`
`“interleave,” and “accumulate” phases of the “repeat-accumulate” codes discussed
`
`above. Specifically, the “repeat” and “interleave” phases can be represented using
`
`generator matrices whose entries are mostly 0s, with a relatively low proportion of
`
`1s. Such generator matrices are called “low-density generator matrices” (LDGMs).
`
The following is a generator matrix that repeats each information bit twice:

[GREPEAT2: an 18×9 matrix in which column j contains 1s in rows 2j-1 and 2j and 0s elsewhere, so that each of the nine input bits appears twice in the output]
`Using this generator matrix, encoding a sequence of input bits “101010101” would
`
`result in the encoded sequence “110011001100110011” (the number of times each
`
`input bit is repeated is determined by the number of “1s” in the column
`
`corresponding to that input bit). In the 18×9 matrix above, 11.1% of the entries are
`
`1s, and the rest are 0s, making GREPEAT2 a low-density generator matrix (LDGM).
`
`(Ex. 1004, ¶¶50-51.)
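The repeat-twice generator matrix and the encoding x = Gu can be sketched in Python; the sketch is illustrative only, and builds the matrix programmatically rather than writing it out:

```python
def repeat2_generator(k):
    # 2k-by-k matrix: column j (0-indexed) has 1s in rows 2j and 2j+1,
    # so each input bit appears twice in the output.
    G = [[0] * k for _ in range(2 * k)]
    for j in range(k):
        G[2 * j][j] = 1
        G[2 * j + 1][j] = 1
    return G

def encode(G, u):
    # Codeword x = Gu, with arithmetic modulo 2.
    return [sum(g * b for g, b in zip(row, u)) % 2 for row in G]

G = repeat2_generator(9)                    # the 18x9 matrix discussed above
x = encode(G, [1, 0, 1, 0, 1, 0, 1, 0, 1])  # -> the bits of "110011001100110011"
density = sum(map(sum, G)) / (18 * 9)       # 18 ones / 162 entries = 11.1%
```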
`
`Permutation is another linear transform represented by a low-density
`
`generator matrix. For example, the matrix below is a permutation matrix that will
`
output bits in a different order than the input bits:

                 [ 0 1 0 0 0 0 0 0 ]
                 [ 1 0 0 0 0 0 0 0 ]
                 [ 0 0 0 0 0 0 1 0 ]
    GSHUFFLE  =  [ 0 0 0 0 1 0 0 0 ]
                 [ 0 0 0 1 0 0 0 0 ]
                 [ 0 0 0 0 0 0 0 1 ]
                 [ 0 0 0 0 0 1 0 0 ]
                 [ 0 0 1 0 0 0 0 0 ]
`Permutation matrices, like GSHUFFLE above, have exactly one 1 per row and one 1
`
`per column. The order of the output bits is determined by the position of each 1
`
`within its row/column. The matrix above, for example, would transform input bits
`
`i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3. Permutation
`
`matrices are square, with dimensions n×n, because the input sequence and the
`
`output sequence have the same length. With only one 1 per row/column of
`
`permutation matrices, the density of 1s is 1/n (GSHUFFLE has a density of 1/8, or
`
`12.5%). With linear codes, which often operate on blocks of more than a thousand
`
bits, a permutation matrix might comprise 0.1% 1s or fewer and, as explained in

the Davis Declaration, is therefore a very low-density generator matrix

(LDGM). (Ex. 1004, ¶¶52-53.)
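A permutation matrix of the kind described above can be sketched in Python. The sketch is illustrative only; `order` lists, for each output position, which input bit (1-indexed, as in the text) is emitted:

```python
def permutation_matrix(order):
    # Row k has a single 1 in column order[k]-1, so output k+1
    # is input bit number order[k].
    n = len(order)
    P = [[0] * n for _ in range(n)]
    for k, src in enumerate(order):
        P[k][src - 1] = 1
    return P

def permute(P, seq):
    # Apply the permutation: output k is the input at the 1 in row k.
    return [seq[row.index(1)] for row in P]

P = permutation_matrix([2, 1, 7, 5, 4, 8, 6, 3])
out = permute(P, ['i1', 'i2', 'i3', 'i4', 'i5', 'i6', 'i7', 'i8'])
# out -> ['i2', 'i1', 'i7', 'i5', 'i4', 'i8', 'i6', 'i3']
density = sum(map(sum, P)) / 64  # one 1 per row: 8/64 = 1/8 = 12.5%
```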
`
`3.
`Tanner Graphs
`Another widespread mathematical representation of error-correcting codes is
`
`the “Tanner graph.” Tanner graphs are bipartite graphs wherein nodes can be
`
`divided into two groups. Every edge connects a node in one group to a node in the
`
other (i.e., no two nodes in the same group are connected by an edge). A bipartite

graph is shown below:

[Figure: a bipartite graph in which every edge runs between the two groups of nodes]
`A Tanner graph includes one group of nodes called variable nodes that correspond
`
`to the information and parity bits, and a second group of nodes called check nodes
`
`that represent constraints that parity and information bits must satisfy. In particular,
`
`when a set of variable nodes are connected to a particular check node, it means that
`
`the information and parity bits corresponding to those variable nodes must sum to
`
`zero. (Ex. 1004, ¶¶54-55.)
`
`These two mathematical descriptions of linear codes – one using matrices,
`
`one using Tanner graphs – are two different ways of describing the same thing, in
`
`much the same way that “0.5” and “½” describe the same number. Every generator
`
`matrix corresponds to a Tanner graph, and vice versa. (Ex. 1004, ¶56.)
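The check-node constraint described above, under which the bits connected to each check node must sum to zero, can be sketched in Python. The graph below is the Tanner-graph counterpart of the two parity constraints in the matrix example, and is illustrative only:

```python
def tanner_check(checks, bits):
    # checks: for each check node, the variable nodes connected to it.
    # The assignment is valid iff each check node's connected bits
    # sum to 0 modulo 2.
    return all(sum(bits[v] for v in vs) % 2 == 0 for vs in checks)

# Check node 0 connects variables 2 and 3 (x3 + x4 = 0);
# check node 1 connects variables 0 and 1 (x1 + x2 = 0).
checks = [[2, 3], [0, 1]]

tanner_check(checks, [1, 1, 0, 0])  # True: both check nodes satisfied
tanner_check(checks, [1, 0, 0, 0])  # False: check node 1 sums to 1
```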
`
`
F. Irregularity
`Irregular LDPC codes were first introduced in a 1997 paper by Luby et al.
`
`See Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, 1997 (Ex. 1008).
`
`The paper showed that irregular codes perform better than regular codes on certain
`
`types of noisy channels. (Ex. 1004, ¶57.)
`
`In regular codes, each information bit contributes to the same number of
`
`parity bits, while in irregular codes different information bits or groups of
`
`information bits contribute to different numbers of parity bits.2 This is consistent
`
`with the Board’s construction of “irregular” in prior proceedings. (See, e.g.,
`
`IPR2015-00060, Paper 18, p. 12.) Irregularity can also be defined in terms of
`
`Tanner graphs. A regular code has a Tanner graph in which each variable node
`
`corresponding to an information bit is connected to the same number of check
`
`nodes. Irregular codes have Tanner graphs in which some variable nodes
`
`corresponding to information bits are connected to more check nodes than others.
`
`These two formulations of irregularity are alternative (and well-known) ways of
`
`describing the same concept. (Ex. 1004, ¶58.)
`
`
`2 An information bit contributes to a parity bit if the column in the generator matrix
`
`corresponding to the information bit has a 1 in the row corresponding to the parity
`
`bit.
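The regular/irregular distinction can be tested mechanically from a generator matrix, since the number of parity bits an information bit contributes to is the number of 1s in its column (see footnote 2). The following Python sketch applies that definition; the matrices are illustrative only:

```python
def column_weights(G):
    # Column j's weight = number of parity bits that information
    # bit j contributes to (1s in column j).
    return [sum(row[j] for row in G) for j in range(len(G[0]))]

def is_irregular(G):
    # Irregular iff different information bits contribute to
    # different numbers of parity bits.
    return len(set(column_weights(G))) > 1

G_regular = [[1, 0], [1, 0], [0, 1], [0, 1]]  # both columns have weight 2
G_irreg = [[1, 0], [1, 0], [1, 1], [0, 1]]    # column weights 3 and 2

is_irregular(G_regular)  # False: each bit contributes to 2 parity bits
is_irregular(G_irreg)    # True: bit 1 contributes to 3, bit 2 to 2
```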
`
`After Luby’s initial paper describing irregularity, the same team of
`
`researchers published a second paper, expanding the theory of irregularity in error-
`
`correcting codes. (See Luby, M. et al., “Analysis of Low Density Codes and
`
`Improved Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in
`
`1998; Ex. 1009). At the time, both of these papers were widely read by coding
`
`theorists, and motivated extensive research into irregularity. (Ex. 1004, ¶59.)
`
`For example, the 1998 Luby paper was the first reference cited in a paper by
`
`Frey and MacKay titled “Irregular Turbocodes,” presented at the 1999 Allerton
`
`Conference on Communications, Control, and Computing. (See Ex. 1010.) The
`
`second author, MacKay, was the same researcher who had rediscovered LDPC
`
`codes in 1995. In “Irregular Turbocodes,” the authors applied the concept of
`
`irregularity to turbo codes by explaining how to construct irregular turbo codes in
`
`which some information bits connect to more check nodes than others. The
`
`experimental results presented in the paper demonstrated that these irregular
`
`turbocodes perform better than regular turbocodes known in the art. (See Ex. 1010,
`
`p. 7.) (Ex. 1004, ¶60.)
`
`By May 18, 2000, the claimed priority date of the ’032 patent, it was
`
`common knowledge that the performance of any type of error-correcting code
`
`could be improved with irregularity. (Ex. 1004, ¶61.)
`
`
`V. OVERVIEW OF THE ’032 PATENT
`A. Claims
`The ’032 patent includes 23 claims, of which claims 1, 11, and 18 are
`
`independent. Claims 11-17 are challenged in this petition. Independent claim 11 is
`
`directed to an encoder that generates a sequence of parity bits from a collection of
`
`message bits in accordance with a particular Tanner graph. Independent claim 18 is
`
`directed to a device for decoding a data stream that has been encoded in
`
`accordance with the same Tanner graph. (Ex. 1001, 7:61-10:57.)
`
B. Summary of the Specification
`The specification of the ’032 patent is generally directed to irregular RA
`
`codes (or “IRA” codes). Figure 2 of the specification, below, shows the structure
`
`of an IRA encoder:
`
[Figure: Ex. 1001, Fig. 2]
`Explaining this figure, the patent describes encoding data using an outer coder 202
`
`
`
`connected to an inner coder 206 via an interleaver 204 (labeled “P”) (Ex. 1001,
`
`2:35-42.) (Ex. 1004, ¶¶100-101.)
`
`Outer coder 202 receives a block of information bits and duplicates each bit
`
`a given number of times, producing a sequence of repeated bits at its output. (Ex.
`
`1001, 2:52-60.) The outer coder repeats bits irregularly – i.e., it outputs more
`
`duplicates of some information bits than others. (Id.) (Ex. 1004, ¶102.)
`
`The repeated bits are passed to an interleaver 204, where they are scrambled.
`
`(Ex. 1001, 3:24-28.) The scrambled bits are then passed to the inner coder 206,
`
`where they are accumulated to form parity bits. (Id., 2:66-3:18.) According to the
`
`specification:
`
[Quotation from Ex. 1001 at 3:3-18]
`The specification describes systematic and non-systematic codes. In a systematic
`
`
`
`code, the encoder outputs a copy of the information bits in addition to the parity
`
`bits output by inner coder 206 (the systematic output is represented in Fig. 2. as an
`
`arrow running toward the right along the top of the figure). In a non-systematic
`
code, the codeword output by the encoder consists of parity bits only. (Ex.
`
`1004, ¶¶103-104.)
`
`C. Level of Ordinary Skill in the Art
`A person of ordinary skill in the art is a person with a Ph.D. in mathematics,
`
`electrical or computer engineering, or computer science with emphasis in signal
`
`processing, communications, or coding, or a master’s degree in the above area with
`
`at least three years of work experience in this field at the time of the alleged
`
`invention. (Ex. 1004, ¶98.)
`
`VI. CLAIM CONSTRUCTION
`A claim in inter partes review is given the “broadest reasonable construction
`
`in light of the specification.” 37 C.F.R. § 42.100(b). See Cuozzo Speed
`
Technologies, LLC v. Lee, No. 15-446, 579 U.S. ____, slip op. at 13 (U.S. Jun. 20,
`
`2016). Any claim term which lacks a definition in the specification is also given a
`
`broad interpretation. In re ICON Health & Fitness, Inc., 496 F.3d 1374, 1379 (Fed.
`
`Cir. 2007).
`
`Should the Patent Owner, seeking to avoid the prior art, contend that the
`
`claims warrant a narrower construction, the appropriate course is for the Patent
`
`Owner to seek to amend the claims to expressly correspond to its contentions in
`
`this proceeding. See 77 Fed. Reg. 48764 (Aug. 14, 2012).
`
`Consistent with this standard, this section proposes constructions of terms
`
`and provides support for these proposed constructions. Terms not included in this
`
`section have their broadest reasonable meaning in light of the specification as
`
`commonly understood by those of ordinary skill.
`
A. “irregular”
`In IPR2015-00060, the Board stated that the term “‘irregular’ refers to the
`
`notion that different message bits or groups of message bits contribute to different
`
`numbers of parity bits.” (IPR2015-00060, Paper 18, p. 11.) Petitioner agrees with
`
`the Board’s previous construction of this term.
`
`As explained above, in regular codes, each information bit contributes to the
`
`same number of parity bits. In irregular codes, different information bits or groups
`
`of information bits contribute to different numbers of parity bits. (Ex. 10