`Petition for Inter Partes Review
`
`DOCKET NO.: 1033300-00287
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
PATENT: 7,116,710

INVENTORS: HUI JIN, AAMOD KHANDEKAR, ROBERT J. MCELIECE

FILED: MAY 18, 2001

ISSUED: OCTOBER 3, 2006

TITLE: SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES
`
`___________________________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________________________________________
`
`Apple Inc.
`Petitioner
`
`v.
`
`California Institute of Technology
`Patent Owner
`
`Case IPR2017-00211
`
`
`
`PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 7,116,710
`UNDER 35 U.S.C. § 312 AND 37 C.F.R. § 42.104
`
`
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`TABLE OF CONTENTS
`
`I. Mandatory Notices ............................................................................................... 3
`A. Real Party-in-Interest .................................................................................... 3
`B. Related Matters ............................................................................................. 3
`C. Counsel .......................................................................................................... 4
`D. Service Information ....................................................................................... 4
`II. Certification of Grounds for Standing ................................................................. 4
`III. Overview of Challenge and Relief Requested ..................................................... 4
`A. Prior Art Patents and Printed Publications .................................................... 5
`B. Relief Requested ........................................................................................... 6
`IV. Overview of the Technology ................................................................................ 6
`A. Error-Correcting Codes in General ............................................................... 6
`B. Coding Rate ................................................................................................. 10
`C. Performance of Error-Correcting Codes ..................................................... 10
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes ....................... 11
`E. Mathematical Representations of Error-Correcting Codes ......................... 15
F. Irregularity ................................................................................................... 20
`V. The ’710 Patent .................................................................................................. 22
`A. Claims ......................................................................................................... 22
`B. Summary of the Specification ..................................................................... 22
`C. Level of Ordinary Skill in the Art ............................................................... 24
`VI. Claim Construction ............................................................................................ 24
`A. “close to one” (Claims 1 and 3) .................................................................. 25
VII. Overview of Primary Prior Art References................................................ 26
`A. Frey.............................................................................................................. 26
`B. Frey Slides ................................................................................................... 29
`C. Divsalar ....................................................................................................... 31
`D. Luby97 ........................................................................................................ 34
`E. Pfister .......................................................................................................... 36
`
`1
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`VIII. Grounds for Challenge ................................................................................ 37
`A. Ground 1: Claims 1 and 3 Are Anticipated by the Frey Slides .................. 37
`B. Ground 2: Claims 1-8 and 11-14 Are Obvious Over Divsalar in View of
`the Frey Slides ............................................................................................. 46
`C. Ground 3: Claims 15-17, 19-22, and 24-33 Are Obvious Over Divsalar in
`View of the Frey Slides, and Further in View of Luby97 .......................... 62
`D. Ground 4: Claim 10 Is Obvious in View of Divsalar and the Frey Slides,
`and Further in View of Pfister..................................................................... 74
`E. Ground 5: Claim 23 Is Obvious in View of Divsalar, the Frey Slides, and
`Luby97, and Further in View of Pfister ...................................................... 76
`IX. Conclusion .......................................................................................................... 77
`
`
`
`
`
`2
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`I. MANDATORY NOTICES
`A. Real Party-in-Interest
`
`Apple Inc. (“Apple” or “Petitioner”) and Broadcom Corp. are the real
`
`parties-in-interest.
`
`B. Related Matters
`
`U.S. Pat. No. 7,116,710 (the “’710 patent,” Ex. 1101) is assigned to the
`
California Institute of Technology (“Caltech” or “Patent Owner”). On May 26,
`
`2016, Caltech sued Apple, Broadcom Corp., and Avago Technologies, Ltd. in the
`
`U.S. District Court for the Central District of California, claiming that Apple
`
`products compliant with the 802.11n and 802.11ac wireless communication
`
`standards infringe the ’710 patent (along with three additional patents that claim
`
`priority to the ’710 patent). On August 15, 2016, Caltech amended its complaint to
`
`add a claim of patent infringement against Cypress Semiconductor Corp. See
`
`Amended Complaint for Patent Infringement, California Institute of Technology v.
`
`Broadcom, Ltd. et al. (Case 2:16-cv-03714), Docket No. 36. The ’710 patent was
`
`also asserted by Caltech against Hughes Communications Inc. in California
`
Institute of Technology v. Hughes Communs., Inc. (Case 2:13-cv-07245), and its
`
claims were challenged in two prior petitions for inter partes review, IPR2015-

00068 and IPR2015-00067. Patents claiming priority to the ’710 patent were
`
`3
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
challenged in IPR2015-00060, IPR2015-00061, IPR2015-00081, and IPR2015-
`
`00059.
`
`C. Counsel
`
`Lead Counsel:
`
`Richard Goldenberg (Registration No. 38,895)
`
`Backup Counsel: Brian M. Seeve (Registration No. 71,721)
`
D. Service Information
`
`Petitioner consents to electronic service.
`
`E-mail: richard.goldenberg@wilmerhale.com
`
`Post and Hand Delivery: WilmerHale, 60 State St., Boston MA 02109
`
`Telephone: 617-526-6548
`
`Fax No. 617-526-5000
`
`II. CERTIFICATION OF GROUNDS FOR STANDING
`Petitioner certifies pursuant to Rule 42.104(a) that the patent for which
`
`review is sought is available for inter partes review and that Petitioner is not
`
`barred or estopped from requesting an inter partes review challenging the patent
`
`claims on the grounds identified in this Petition.
`
`III. OVERVIEW OF CHALLENGE AND RELIEF REQUESTED
`Pursuant to Rules 42.22(a)(1) and 42.104(b)(1)-(2), Petitioner challenges
`
`claims 1-8, 10-17, and 19-33 of the ’710 Patent (“the challenged claims”) and
`
`requests that each challenged claim be canceled.
`
`4
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
A. Prior Art Patents and Printed Publications
`
`Petitioner relies upon the patents and printed publications listed in the Table
`
`of Exhibits, including:
`
`1. Frey, B. J. and MacKay, D. J. C., “Irregular Turbocodes,” Proc. 37th
`
`Allerton Conf. on Comm., Control and Computing, Monticello,
`
Illinois, published on or before March 20, 2000 (“Frey”; Ex. 1102).
`
`2. Frey, B. J. and MacKay, D. J. C., “Irregular Turbo-Like Codes,” 37th
`
`Allerton Conf. on Comm., Control and Computing, Monticello,
`
`Illinois, published on or before September 24, 1999 (“Frey Slides”; Ex.
`
`1113).
`
3. D. Divsalar, H. Jin, and R. J. McEliece, “Coding theorems for ‘turbo-

like’ codes,” Proc. 36th Allerton Conf. on Comm., Control and

Computing, Allerton, Illinois, pp. 201-10, March 1999 (“Divsalar”;

Ex. 1103).
`
`4. Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, pp. 150-
`
`159, published in 1997 (“Luby97”; Ex. 1111).
`
5. Pfister, H. and Siegel, P., “The Serial Concatenation of Rate-1 Codes
`
`Through Uniform Random Interleavers,” 37th Allerton Conf. on
`
`Comm., Control and Computing, Monticello, Illinois, published on or
`
before September 24, 1999 (“Pfister”; Ex. 1105).
`
`5
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`The ’710 patent claims priority to a provisional application filed on May 18,
`
`2000, and to U.S. application Ser. No. 09/922,852, filed on Aug. 18, 2000. The
`
`Petitioner reserves the right to dispute that the challenged claims are entitled to a
`
`priority date of May 18, 2000. However, even if the challenged claims are entitled
`
`to the benefit of this priority date, the references relied upon in this Petition qualify
`
as prior art under 35 U.S.C. § 102.
`
`B. Relief Requested
`
`Petitioner requests that the Patent Trial and Appeal Board cancel the
`
`challenged claims because they are unpatentable under 35 U.S.C. §§ 102 and 103
`
`as set forth in this Petition. This conclusion is further supported by the declaration
`
`of Professor James Davis, Ph.D. (“Davis Declaration,” Ex. 1106), filed herewith.
`
`IV. OVERVIEW OF THE TECHNOLOGY
`The ’710 patent relates to the field of channel coding and error-correcting
`
`codes. This section provides a brief introduction to channel coding and error-
`
`correcting codes, and highlights a few of the developments in the field that are
`
`relevant to the ’710 patent. (Ex. 1106, ¶21.)
`
`A. Error-Correcting Codes in General
`
`Most computing devices and other digital electronics use bits to represent
`
`information. A bit is a binary unit of information that may have one of two values:
`
`6
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`1 or 0. Any type of information, including, e.g., text, music, images and video
`
`information, can be represented digitally as a collection of bits. (Ex. 1106, ¶22.)
`
`When transmitting binary information over an analog communication
`
`channel, the data bits representing the information to be communicated (also called
`
`“information bits”) are converted into an analog signal that can be transmitted over
`
`the channel. This process is called modulation. The transmitted signal is then
`
`received by a receiving device and converted back into binary form. This process,
`
`in which a received analog waveform is converted into bits, is called
`
`demodulation. The steps of modulation and demodulation are shown in the figure
`
`below:
`
`
`
`Modulation, Transmission, and Demodulation
`
`Transmission over physical channels is rarely 100% reliable. The transmitted
`
`signal can be corrupted during transmission by “noise” caused by, e.g.,
`
`obstructions in the signal path, interference from other signals, or
`
`electrical/magnetic disturbances. Noise can cause bits to “flip” during
`
`7
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`transmission: for example, because of noise, a bit that was transmitted as a 1 can be
`
`corrupted during transmission and demodulated as 0, and vice versa. (Ex. 1106,
`
`¶¶23-24.)
`
`Error-correcting codes were developed to mitigate such transmission errors.
`
`Using the bits representing the information to be communicated (called
`
“information bits”), an error-correcting code generates “parity bits” that allow the
`
`receiver to verify that the bits were transmitted correctly, and to correct
`
`transmission errors that occurred. (Ex. 1106, ¶25.)
`
`Bits are encoded by an encoder, which receives a sequence of information
`
`bits as input, generates parity bits based on the information bits according to a
`
`particular encoding algorithm, and outputs a sequence of encoded bits (or data
`
`elements) called a codeword. The codeword produced by the encoder is then
`
`modulated and transmitted as an analog signal. (Ex. 1106, ¶26.)
`
`At the receiver, the signal is received and demodulated, and the codeword is
`
`passed to the decoder, which uses a decoding algorithm to recover the original
`
`information bits.
`
`
`
`8
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`Encoding and Decoding
`
`Error-correcting codes work by adding redundant information to the original
`
`message. Due to redundancy, the information represented by a given information
`
`bit is spread across multiple bits of the codeword. Thus, even if one of those bits is
`
`flipped during transmission, the original information bit can still be recovered from
`
`the others. (Ex. 1106, ¶¶27-28.)
`
`As a simple example, consider an encoding scheme, called “repeat-three,”
`
`that outputs three copies of each information bit. In this scheme, the information
`
`bits “1 0 1” would be encoded as “111 000 111.” Upon receipt, the decoder
`
`converts instances of “111” into “1” and instances of “000” into “0” to produce the
`
`decoded bits “1 0 1,” which match the original information bits. (Ex. 1106, ¶29.)
`
`Suppose a bit is flipped during transmission, changing “000” to “010.” The
`
`decoder will be able to detect a transmission error, because “010” is not a valid
`
`“repeat-three” codeword. Using a “majority vote” rule, the decoder can infer that
`
`the original information bit was a 0, correcting the transmission error. Thus, due to
`
`the redundancy incorporated into the codeword, no information was lost due to the
`
`transmission error. Error-correcting codes may be either systematic or non-
`
`systematic. In a systematic code, both the parity bits and the original information
`
`bits are included in the codeword. In a non-systematic code, the encoded data only
`
`includes the parity bits. Systematic and non-systematic codes had been known in
`
`9
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`the art for decades prior to May 18, 2000, the claimed priority date of the ’710
`
`patent. (Ex. 1106, ¶30.)
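As an illustration of the “repeat-three” scheme and majority-vote decoding described above, the encoder and decoder can be sketched in a few lines of Python. This sketch is added here for clarity only; the function names are invented and nothing in it is drawn from the record.

```python
# Illustrative sketch of the "repeat-three" code: each information bit
# is transmitted three times, and the decoder takes a majority vote.

def encode_repeat3(bits):
    # Output three copies of each information bit.
    return [b for b in bits for _ in range(3)]

def decode_repeat3(received):
    # Majority vote over each group of three received bits.
    decoded = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        decoded.append(1 if sum(group) >= 2 else 0)
    return decoded

codeword = encode_repeat3([1, 0, 1])     # [1,1,1, 0,0,0, 1,1,1]
corrupted = codeword[:]
corrupted[4] = 1                         # noise flips "000" to "010"
assert decode_repeat3(corrupted) == [1, 0, 1]   # the error is corrected
```

The middle group “010” is not a valid repeat-three codeword, so the decoder detects the error, and the majority vote recovers the original 0.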
`
`B. Coding Rate
`
Many error-correcting codes encode information bits in groups, or blocks, of
`
`fixed length n. An encoder receives a k-bit block of information bits as input, and
`
`produces a corresponding n-bit codeword. The ratio k/n is called the rate of the
`
`code. Because the codeword generally includes redundant information, n is
`
`generally greater than k, and the rate k/n of an error-correcting code is generally
`
`less than one. (Ex. 1106, ¶33.)
`
C. Performance of Error-Correcting Codes
`
`The effectiveness of an error-correcting code may be measured using a
`
`variety of metrics.
`
One tool used to assess the performance of a code is its bit-error rate (BER).
`
`The BER is defined as the number of corrupted information bits divided by the
`
`total number of information bits during a particular time interval. For example, if a
`
`decoder outputs one thousand bits in a given time period, and ten of those bits are
`
`corrupted (i.e., they differ from the information bits originally received by the
`
`encoder), then the BER of the code during that time period is (10 bit errors) / (1000
`
`total bits) = 0.01 or 1%. (Ex. 1106, ¶35.)
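The BER arithmetic in the example above can be sketched as follows (an illustration added for clarity; the function name is invented and not part of the record):

```python
# Sketch of the bit-error-rate computation: corrupted bits divided by
# total bits over a given interval.

def bit_error_rate(sent, received):
    # Count positions where the decoder output differs from the
    # information bits originally received by the encoder.
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors / len(sent)

sent = [0] * 1000
received = [0] * 990 + [1] * 10    # ten corrupted bits out of a thousand
assert bit_error_rate(sent, received) == 0.01   # 1%, as in the example
```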
`
`10
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`The BER of a transmission depends on the amount of noise that is present in
`
`the communication channel and the strength of the transmitted signal (i.e., the
`
`power that is used to transmit the modulated waveform). An increase in noise tends
`
`to increase the error rate and an increase in signal strength tends to decrease the
`
`error rate. The ratio of the signal strength to the noise, called the “signal-to-noise
`
`ratio,” is often used to characterize the channel over which the encoded signal is
`
`transmitted. The signal-to-noise ratio can be expressed mathematically as Eb/N0, in
`
`which Eb is the amount of energy used to transmit each bit of the signal, and N0 is
`
`the density of the noise on the channel. The BER of an error-correcting code is
`
`often measured for multiple values of Eb/N0 to determine how the code performs
`
`under various channel conditions. (Ex. 1106, ¶36.)
`
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes
`
In 1963, Robert Gallager described a set of error-correcting codes called
`
`Low Density Parity Check (“LDPC”) codes. Gallager described how LDPC codes
`
`provide one method of generating parity bits from information bits using a matrix
`
`populated with mostly 0s and relatively few 1s, as explained in more detail below.
`
`(See Gallager, R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press,
`
`1963; Ex. 1107.) (Ex. 1106, ¶38.)
`
`Gallager’s work was largely ignored over the following decades, as
`
`researchers continued to discover other algorithms for calculating parity bits. These
`
`11
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`algorithms included, for example, convolutional encoding with Viterbi decoding
`
`and cyclic code encoding with bounded distance decoding. In many cases, these
`
`new codes could be decoded using low-complexity decoding algorithms. (Ex.
`
`1106, ¶39.)
`
`In 1993, researchers discovered “turbocodes,” a class of error-correcting
`
`codes capable of transmitting information at a rate close to the so-called “Shannon
`
`Limit” – the maximum rate at which information can be transmitted over a
`
`channel. As explained further in the Davis Declaration, the main drawback of
`
`convolutional codes is that they do not perform well over channels in which errors
`
`are clustered tightly together. Turbocodes overcome this deficiency by encoding
`
`the input bits twice. The input bits are fed to a first convolutional encoder in their
`
`normal order to produce a first set of parity bits. The same input bits are also
`
`reordered by an interleaver (i.e., a component that shuffles the input bits and
`
`outputs them in a different order) and then encoded by a second convolutional
`
`encoder to produce a second set of parity bits. The two sets of parity bits together
`
`with the information bits form a single codeword. Using a turbocode, a small
`
`number of errors will not result in loss of information unless the errors happen to
`
`fall close together in both the original data stream and in the permuted data stream,
`
`which is unlikely. (Ex. 1106, ¶¶40-41.)
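For illustration only (not part of the record), the parallel turbocode structure described above can be sketched in Python. A real turbocode uses recursive convolutional encoders; here a simple running modulo-2 sum stands in for each constituent encoder, and the interleaver permutation is chosen arbitrarily.

```python
# Schematic sketch of a parallel turbocode: encode the input bits in
# normal order, encode an interleaved copy, and transmit the
# information bits with both sets of parity bits.

def constituent_encoder(bits):
    # Stand-in for a convolutional encoder: running XOR of inputs.
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def turbo_encode(info, interleaver):
    parity1 = constituent_encoder(info)           # normal order
    permuted = [info[j] for j in interleaver]     # shuffled by interleaver
    parity2 = constituent_encoder(permuted)       # second encoding
    # Codeword = information bits plus both parity sets (systematic form).
    return info + parity1 + parity2

codeword = turbo_encode([1, 0, 1, 1], interleaver=[2, 0, 3, 1])
assert codeword == [1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
```

Because the two encoders see the bits in different orders, a burst of channel errors is unlikely to fall close together in both sequences.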
`
`12
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`In 1995, David J. C. MacKay rediscovered Gallager’s work from 1963
`
`relating to low-density parity-check (LDPC) codes, and demonstrated that they
`
have performance comparable to that of turbocodes. See MacKay, D. J. C., and

Neal, R. M., “Near Shannon Limit Performance of Low Density Parity Check

Codes,” Electronics Letters, vol. 32, pp. 1645-46, 1996 (Ex. 1109).
`
`rediscovery was met with wide acclaim. Turbocodes and LDPC codes have some
`
`common characteristics: both codes use permutations to spread out redundancy,
`
`and both use iterative decoding algorithms. (Ex. 1106, ¶42.)
`
`In 1995 and 1996, researchers began to explore “concatenated”
`
convolutional codes. See Benedetto, S. et al., “Serial Concatenation of Block and

Convolutional Codes,” Electronics Letters, vol. 32, no. 10, pp. 887-88, 1996 (Ex. 1110). While
`
`turbocodes use two convolutional coders connected in parallel, concatenated
`
`convolutional codes use two convolutional coders connected in series: the
`
`information bits are encoded by a first encoder, the output of the first encoder is
`
interleaved, and the interleaved sequence is encoded by a second convolutional

encoder. In such codes, the first and second encoders are often called the “outer
`
`coder” and the “inner coder,” respectively. (Ex. 1106, ¶43.)
`
In 1998, researchers developed “repeat-accumulate,” or “RA,” codes by
`
`simplifying the principles underlying turbocodes. (See Ex. 1103.) In RA codes, the
`
`information bits are first passed to a repeater that repeats (i.e., duplicates) the
`
`13
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`information bits and outputs a stream of repeated bits (the encoder described above
`
`in the context of the “repeat three” coding scheme is one example of a repeater).
`
`The repeated bits are then optionally reordered by an interleaver, and are then
`
`passed to an accumulator where they are “accumulated” to form the parity bits.
`
`(Ex. 1106, ¶44.)
`
`Accumulation is a running sum process whereby each input bit is added to
`
`the previous input bits to produce a sequence of running sums, each of which
`
`represents the sum of all input bits yet received. More formally, if an accumulator
`
`receives a sequence of input bits i1, i2, i3, … in, it will produce output bits o1, o2, o3,
`
… on, such that:

o1 = i1
o2 = i1 ⊕ i2
o3 = i1 ⊕ i2 ⊕ i3
…
on = i1 ⊕ i2 ⊕ … ⊕ in

where the symbol ⊕ denotes modulo-2 addition. 1 Accumulators can be
`
`implemented simply, allowing accumulate codes to be encoded quickly and
`
`cheaply. (Ex. 1106, ¶45.)
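The accumulator defined above can be sketched directly from its recurrence (an illustration added for clarity, not drawn from the record):

```python
# Sketch of an accumulator: each output bit is the modulo-2 sum (XOR)
# of all input bits received so far.

def accumulate(bits):
    out, running = [], 0
    for b in bits:
        running ^= b       # modulo-2 addition of the next input bit
        out.append(running)
    return out

# o1 = i1, o2 = i1 ⊕ i2, o3 = i1 ⊕ i2 ⊕ i3, ...
assert accumulate([1, 1, 0, 1]) == [1, 0, 0, 1]
```

The single XOR and single stored bit per input are why, as noted above, accumulators can be implemented simply and cheaply.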
`
`E. Mathematical Representations of Error-Correcting Codes
`
`1. Linear Transformations
`Coding theorists often think of error-correcting codes in linear-algebraic
`
`terms. For instance, a k-bit block of information bits is a k-dimensional vector of
`
bits, and an n-bit codeword is an n-dimensional vector of bits (where an “n-

dimensional vector” of bits is a sequence of n bits). The encoding process, which
`
`converts blocks of information bits into codewords, is a linear transformation that
`
`maps k-dimensional bit vectors to n-dimensional bit vectors. This transformation is
`
`represented by an n × k matrix G called a generator matrix. For a vector of
`
information bits u, the corresponding codeword x is given by:

x = Gu
`
` 1
`
` Modulo-2 addition (also called “XOR”) is an addition operation that is performed
`
`on bits, defined as follows: 11=0, 10=1, 01=1, and 00=0.
`
`15
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`
`The n-dimensional vectors that can be written in the form Gu are valid codewords.
`
`Most n-dimensional vectors are not valid codewords. It is this property that
`
`allows a decoder to determine when there has been an error during transmission.
`
`This determination is made using an (n - k) × n matrix H, called a parity check
`
`matrix. Using a parity check matrix, a given vector x is a valid codeword if and
`
`only if Hx = 0. As explained in the Davis declaration, if the parity check shows
`
`that Hx does not equal 0, then the decoder knows there has been a transmission
`
`error (much like the “010” example in the “repeat three” code above). (Ex. 1106,
`
`¶XXX.) In practice, parity-check matrices often have hundreds or thousands of
`
`rows, each of which represents an equation of the form xa + xb + … + xz = 0. These
`
`equations are called parity-check equations. (Ex. 1106, ¶¶47-50.)
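Encoding as x = Gu and error detection as Hx = 0 can be illustrated with the repeat-three code from earlier as a small concrete case. This sketch is added for clarity; the helper name and the particular H matrix are chosen here for illustration and are not from the record.

```python
# Sketch of generator-matrix encoding and parity-check error detection
# over GF(2), using the repeat-three code (k=1, n=3).

def mat_vec_mod2(M, v):
    # Matrix-vector product over GF(2): products are 0/1, sums are mod 2.
    return [sum(row[j] * v[j] for j in range(len(v))) % 2 for row in M]

G = [[1], [1], [1]]            # 3 x 1 generator matrix: repeat-three
H = [[1, 1, 0],                # 2 x 3 parity-check matrix; rows encode the
     [0, 1, 1]]                # parity-check equations x1+x2=0 and x2+x3=0

x = mat_vec_mod2(G, [1])       # encode the single information bit 1
assert x == [1, 1, 1]
assert mat_vec_mod2(H, x) == [0, 0]          # valid codeword: Hx = 0
assert mat_vec_mod2(H, [0, 1, 0]) != [0, 0]  # "010" fails the parity check
```

As in the “010” example above, a nonzero result of Hx tells the decoder that a transmission error occurred.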
`
`2. Repeating and Permuting as Examples of LDGM Codes
`As explained above, the encoding process can be represented using a matrix
`
`called a generator matrix. This is true of all linear codes, including the “repeat,”
`
`“interleave,” and “accumulate” phases of the “repeat-accumulate” codes discussed
`
`16
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
above. More specifically, the “repeat” and “interleave” phases can be represented
`
`using generator matrices whose entries are mostly zeros, with a relatively low
`
`proportion of 1s. Such generator matrices are called “low-density generator
`
`matrices” (LDGMs). The following is a generator matrix that can be used to repeat
`
each information bit twice:

[GREPEAT2: an 18 × 9 generator matrix with two 1s per column]
`
`
`
`Using the generator matrix above, encoding a sequence of input bits “101010101”
`
`would result in the encoded sequence “110011001100110011” (the number of
`
`times each input bit is repeated is determined by the number of “1s” appearing in
`
`the column corresponding to that input bit). In the 18×9 matrix above, 11.1% of
`
`the entries are 1s, and the rest are 0s. The relative scarcity of 1s means that
`
`GREPEAT2 is a low-density generator matrix (LDGM). (Ex. 1106, ¶¶51-52.)
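The repeat-twice generator matrix described above can be constructed and checked in code (an illustrative sketch; the variable names are invented and the matrix is reconstructed from the properties stated in the text):

```python
# Sketch of GREPEAT2: an 18 x 9 generator matrix with one 1 per row and
# two 1s per column, so that each of 9 input bits is repeated twice.

n_info = 9
G_repeat2 = [[1 if col == row // 2 else 0 for col in range(n_info)]
             for row in range(2 * n_info)]

def encode(G, u):
    # Matrix-vector product over GF(2).
    return [sum(row[j] * u[j] for j in range(len(u))) % 2 for row in G]

u = [int(c) for c in "101010101"]
codeword = "".join(str(b) for b in encode(G_repeat2, u))
assert codeword == "110011001100110011"   # matches the example in the text

# Density: 18 of the 162 entries are 1s, i.e., 11.1%.
ones = sum(sum(row) for row in G_repeat2)
assert ones == 18 and round(100 * ones / (18 * 9), 1) == 11.1
```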
`
`17
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
Permutation is another linear transformation that is represented by a low-density

generator matrix. For example, the matrix below is a permutation matrix that will

output bits in a different order than bits are input:

[GSHUFFLE: an 8 × 8 permutation matrix]
`Permutation matrices, like GSHUFFLE above, have exactly one 1 per row and one 1
`
`per column. The order of the output bits is determined by the position of each of
`
`these 1s within its row/column. The matrix above, for example, would transform
`
`input bits i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3.
`
`Permutation matrices are square, with dimensions n×n, because the input sequence
`
`and the output sequence have the same length. Because there is only one 1 per
`
`row/column of permutation matrices, the density of 1s is 1/n (GSHUFFLE, shown
`
`above, has a density of 1/8, or 12.5%). In the context of linear codes, which often
`
`operate on blocks of more than a thousand bits at a time, a permutation matrix
`
might comprise 0.1% 1s or fewer and, as further explained in the Davis

Declaration, is therefore a very low-density generator matrix (LDGM). (Ex.
`
`1106, ¶¶53-54.)
`
`18
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`3. Tanner Graphs
`Another widespread mathematical representation of error-correcting codes is
`
`the “Tanner Graph.” Tanner graphs are bipartite graphs wherein nodes can be
`
`divided into two groups. Every edge connects a node in one group to a node in the
`
`other (i.e., no two nodes in the same group are connected by an edge). A bipartite
`
graph is shown below:

[Figure: a bipartite graph]
`
`
`A Tanner graph includes one group of nodes called variable nodes that correspond
`
`to the information and parity bits, and a second group of nodes called check nodes
`
`that represent constraints that parity and information bits must satisfy. In particular,
`
`when a set of variable nodes are connected to a particular check node, it means that
`
`the information and parity bits corresponding to those variable nodes must sum to
`
`0. (Ex. 1106, ¶¶55-56.)
`
`These two mathematical descriptions of linear codes – one using matrices,
`
`one using Tanner graphs – are two different ways of describing the same thing, in
`
`19
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`much the same way that “0.5” and “½” are two different ways of describing the
`
`same number. Every generator matrix corresponds to a Tanner graph, and vice
`
`versa. (Ex. 1106, ¶57.)
`
F. Irregularity
`
`Irregular LDPC codes were first introduced in a 1997 paper by Luby. See
`
`Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, 1997 (Ex. 1111). The
`
`paper showed that irregular codes perform better than regular codes on certain
`
`types of noisy channels. (Ex. 1106, ¶58.)
`
`Regular codes are codes in which each information bit contributes to the
`
`same number of parity bits, while irregular codes are codes in which different
`
`information bits or groups of information bits contribute to different numbers of
`
`parity bits. This is consistent with the Board’s construction of “irregular” in prior
`
`proceedings. (See, e.g., IPR2015-00060, Paper 18, p. 12.) Irregularity can also be
`
`defined in terms of Tanner graphs. A regular code is one with a Tanner graph in
`
`which each variable node corresponding to an information bit is connected to the
`
`same number of check nodes. Irregular codes are those with Tanner graphs in
`
`which some variable nodes corresponding to information bits are connected to
`
`more check nodes than others. These two formulations of irregularity are
`
`alternative (and well-known) ways of describing the same concept. (Ex. 1106,
`
`¶59.)
`
`20
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`After Luby’s initial paper describing irregularity, the same team of
`
`researchers published a second paper, expanding the theory of irregularity in error-
`
`correcting codes. (See Luby, M. et al., “Analysis of Low Density Codes and
`
`Improved Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in
`
`1998; Ex. 1104.) At the time, both of these Luby papers were widely read by
`
`coding theorists, and gave rise to a great deal of research into irregular error-
`
`correcting codes. (Ex. 1106, ¶60.)
`
`For example, the 1998 Luby paper was the first reference cited in a paper by
`
`Frey and MacKay titled “Irregular Turbocodes,” presented at the 1999 Allerton
`
`Conference on Communications, Control, and Computing. (Ex. 1102.) The second
`
`author, MacKay, was the same researcher who had rediscovered LDPC codes in
`
`1995. In this paper, the authors applied the concept of irregularity to turbocodes by
`
`explaining how to construct irregular turbocodes in which some information bits
`
`connect to more check nodes than others. The experimental results presented in the
`
`Frey paper demonstrated that these irregular turbocodes perform better than the
`
`regular turbocodes that were known in the art. (See Ex. 1102.) (Ex. 1106, ¶61.)
`
`By May 18, 2000, the claimed priority date of the ’710 patent, it was
`
common knowledge that the performance of error-correcting codes could be
`
`improved by adding irregularity. (Ex. 1106, ¶62.)
`
`21
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`V. THE ’710 PATENT
`A. Claims
`
`The ’710 patent includes 33 claims, of which claims 1, 11, 15, and 25 are
`
`independent. Independent claims 1 and 11 are directed to methods of encoding a
`
`signal that include “first encoding” and “second encoding” steps. Independent
`
`claim 15 is directed to a “coder” for encoding bits that includes a “first coder” and
`
`a “second coder.” Claim 25 is directed to a “coding system” that also encodes bits
`
`using a first and second coder, and further includes a decoder for decoding the
`
`encoded bits. (Ex. 1101, 7:15-8:64.) Claims 1-8, 10-17, and 19-33 are challenged
`
`in this Petition. (Ex. 1106, ¶96.)
`
`B.
`
`Summary of the Specification
`
`The specification of the ’710 patent is generally directed to irregular RA
`
`codes (or “IRA” codes). Figure 2 of the specification, reproduced below, shows the
`
`structure of an IRA encoder:
`
`
`
`22
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`Explaining this figure, the patent describes encoding data using an outer
`
coder 202 connected to an inner coder 206 via an interleaver 204 (labeled “P”). (Ex.
`
`1101, 2:33-40.) (Ex. 1106, ¶¶97-98.)
`
`Outer coder 202 receives a block of information bits and duplicates each of
`
`the bits in the block a given number of times, producing a sequence of repeated
`
`bits at its output. (Ex. 1101, 2:50-52.) The outer coder repeats bits irregularly – i.e.,
`
`it outputs more duplicates of some information bits than others. (Id., 2:48-50.) (Ex.
`
`1106, ¶99.)
`
`The repeated bits are passed to an interleaver 204, where they are scrambled.
`
`(Ex. 1101, 3:18-22.) The scrambled bits are then passed to the inner coder 206,
`
`where they are accumulated to form parity bits. (Id., 2:65-67, 2:33-38.) According
`
`to the specification:
`
`
`Ex. 1101, 3:2-10
`
`
`
`23
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`The specification teaches both systematic and non-systematic codes. In a
`
`systematic code, the encoder outputs a copy of the information bits in addition to
`
`the parity bits output by inner coder 206 (the systematic output is represented in
`
Fig. 2 as an arrow running toward the right along the top of the figure). (Ex. 1106,

¶¶100-101.)
`
`C. Level of Ordinary Skill in the Art
`
`A person of ordinary skill in the art is a person with a Ph.D. in mathematics,
`
`electrical or computer engineering, or computer science with emphasis in signal
`
processing, communications, or coding, or a master’s degree in one of the above areas with

at least three years of work experience in this field at the time of the alleged
`
`invention. (Ex. 1106, ¶95.)
`
`VI. CLAIM CONSTRUCTION
`A claim in inter partes review is given the “broadest reasonable construction
`
`in light of the specification.” 37 C.F.R. § 42.100(b). See Cuozzo Speed
`
Technologies, LLC v. Lee, No. 15-446, 579 U.S. ____, slip op. at 13 (U.S. Jun. 20,
`
`2016). Any claim term which lacks a definition in the specification is also given a
`
`broad interpretation. In re ICON Health and Fitness, Inc., 496 F.3d 1374, 1379
`
(Fed. Cir. 2007).
`
`Should the Patent Owner, seeking to avoid the prior art, contend that the
`
`claims warrant a narrower construction, the appropriate course is for the Patent
`
`24
`
`
`
`U.S. Patent 7,116,710