`Petition for Inter Partes Review
`
`DOCKET NO.: 1033300-00287
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
PATENT: 7,116,710

INVENTORS: HUI JIN, AAMOD KHANDEKAR, ROBERT J. MCELIECE

FILED: MAY 18, 2001

ISSUED: OCTOBER 3, 2006

TITLE: SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES
`
`___________________________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________________________________________
`
`Apple Inc.
`Petitioner
`
`v.
`
`California Institute of Technology
`Patent Owner
`
`Case IPR2017-00219
`
`
`
`PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 7,116,710
`UNDER 35 U.S.C. § 312 AND 37 C.F.R. § 42.104
`
`
`
`
`
`
`TABLE OF CONTENTS
`
`I. Mandatory Notices ............................................................................................... 3
`A. Real Party-in-Interest .................................................................................... 3
`B. Related Matters ............................................................................................. 3
`C. Counsel .......................................................................................................... 4
`D. Service Information ....................................................................................... 4
`II. Certification of Grounds for Standing ................................................................. 4
`III. Overview of Challenge and Relief Requested ..................................................... 4
`A. Prior Art Patents and Printed Publications .................................................... 4
`B. Relief Requested ........................................................................................... 6
`IV. Overview of the Technology ................................................................................ 6
`A. Error-Correcting Codes in General ............................................................... 6
`B. Coding Rate ................................................................................................... 9
`C. Performance of Error-Correcting Codes ..................................................... 10
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes ....................... 11
`E. Mathematical Representations of Error-Correcting Codes ......................... 14
F. Irregularity ................................................................................................... 19
`V. The ’710 Patent .................................................................................................. 21
`A. Claims ......................................................................................................... 21
`B. Summary of the Specification ..................................................................... 21
`C. Level of Ordinary Skill in the Art ............................................................... 23
`VI. Claim Construction ............................................................................................ 24
`A. “close to one” (Claims 1 and 3) .................................................................. 24
`VII. Overview Of Primary Prior Art References................................................ 25
`A. Divsalar ....................................................................................................... 25
`B. Luby ............................................................................................................ 28
`C. Luby97 ........................................................................................................ 31
`D. Pfister .......................................................................................................... 32
`VIII. Grounds for Challenge ................................................................................ 33
`
`1
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`A. Ground 1: Claims 1-8 and 11-14 Are Obvious Over Divsalar in View of
`Luby ............................................................................................................ 34
`B. Ground 2: Claims 15-17, 19-22, and 24-33 Are Obvious Over Divsalar in
`View of Luby, and Further in View of Luby97 .......................................... 55
`C. Ground 3: Claim 10 Is Obvious Over Divsalar in View of Luby, and
`Further in View of Pfister ........................................................................... 69
`D. Ground 4: Claim 23 Is Obvious Over Divsalar, Luby, and Luby97, and
`Further in View of Pfister ........................................................................... 70
`IX. Conclusion .......................................................................................................... 72
`
`
`
`
`
`2
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`I. MANDATORY NOTICES
`A. Real Party-in-Interest
`
`Apple Inc. (“Apple” or “Petitioner”) and Broadcom Corp. are the real
`
`parties-in-interest.
`
`B. Related Matters
`
`U.S. Pat. No. 7,116,710 (the “’710 patent,” Ex. 1201) is assigned to the
`
California Institute of Technology (“Caltech” or “Patent Owner”). On May 26,
`
`2016, Caltech sued Apple, Broadcom Corp., and Avago Technologies, Ltd. in the
`
`U.S. District Court for the Central District of California, claiming that Apple
`
`products compliant with the 802.11n and 802.11ac wireless communication
`
`standards infringe the ’710 patent (and three others). On August 15, 2016, Caltech
`
`amended its complaint to assert patent infringement against Cypress
`
`Semiconductor Corp. See Amended Complaint, California Institute of Technology
`
`v. Broadcom, Ltd. et al. (Case 2:16-cv-03714), Docket No. 36. The ’710 patent
`
`was also asserted by Caltech against Hughes Communications Inc. in California
`
Institute of Technology v. Hughes Communs., Inc. (Case 2:13-cv-07245), and its
`
`claims were challenged in two petitions for inter partes review, IPR2015-00068
`
and IPR2015-00067. Patents claiming priority to the ’710 patent were challenged
`
`in IPR2015-00060, IPR2015-00061, IPR2015-00081, and IPR2015-00059.
`
`3
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`C. Counsel
`
`Lead Counsel:
`
`Richard Goldenberg (Registration No. 38,895)
`
`Backup Counsel: Brian M. Seeve (Registration No. 71,721)
`
D. Service Information
`
`Petitioner consents to electronic service.
`
`E-mail: richard.goldenberg@wilmerhale.com
`
`Post and Hand Delivery: WilmerHale, 60 State St., Boston MA 02109
`
`Telephone: 617-526-6548
`
`II. CERTIFICATION OF GROUNDS FOR STANDING
`Petitioner certifies pursuant to Rule 42.104(a) that the patent for which
`
`review is sought is available for inter partes review and that Petitioner is not
`
`barred or estopped from requesting inter partes review on the grounds in this
`
`Petition.
`
`III. OVERVIEW OF CHALLENGE AND RELIEF REQUESTED
`Pursuant to Rules 42.22(a)(1) and 42.104(b)(1)-(2), Petitioner challenges
`
`claims 1-8, 10-17, and 19-33 of the ’710 Patent (“the challenged claims”) and
`
`requests that each challenged claim be canceled.
`
A. Prior Art Patents and Printed Publications
`
`Petitioner relies upon the patents and printed publications listed in the Table
`
`of Exhibits, including:
`
`4
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
1. D. Divsalar, H. Jin, and R. J. McEliece, “Coding theorems for ‘turbo-

like’ codes,” Proc. 36th Allerton Conf. on Comm., Control and

Computing, Allerton, Illinois, pp. 201-10, March 1999 (“Divsalar”;
`
`Ex. 1203.)
`
`2. Luby, M. et al., “Analysis of Low Density Codes and Improved
`
`Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in
`
`1998 (“Luby”; Ex. 1204.)
`
`3. Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, pp. 150-
`
`159, published in 1997 (“Luby97”; Ex. 1211).
`
4. Pfister, H. and Siegel, P., “The Serial Concatenation of Rate-1 Codes
`
`Through Uniform Random Interleavers,” 37th Allerton Conf. on
`
`Comm., Control and Computing, Monticello, Illinois, published on or
`
`before September 24, 1999 (“Pfister”; Ex. 1205.)
`
`The ’710 patent claims priority to a provisional application filed on May 18,
`
`2000, and to U.S. application Ser. No. 09/922,852, filed on Aug. 18, 2000. The
`
`Petitioner reserves the right to dispute that the challenged claims are entitled to a
`
`priority date of May 18, 2000. However, even if the challenged claims are entitled
`
`to this priority date, the above references qualify as prior art under 35 U.S.C. §
`
`102.
`
`5
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`B. Relief Requested
`
`Petitioner requests that the Patent Trial and Appeal Board cancel the
`
`challenged claims because they are unpatentable under 35 U.S.C. §§ 102 and 103.
`
`This conclusion is further supported by the declaration of Professor James Davis,
`
`Ph.D. (“Davis Declaration,” Ex. 1206).
`
`IV. OVERVIEW OF THE TECHNOLOGY
`The ’710 patent relates to the field of channel coding and error-correcting
`
`codes. This section provides a brief introduction to channel coding and error-
`
`correcting codes, and highlights developments in the field relevant to the ’710
`
`patent. (Ex. 1206, ¶21.)
`
`A. Error-Correcting Codes in General
`
`Most digital electronics use bits to represent information. A bit is a binary
`
`unit of information that may be a 1 or 0. Any type of information can be
`
`represented digitally as a collection of bits. (Ex. 1206, ¶22.)
`
`When transmitting binary information over an analog communication
`
`channel, the data bits representing the information to be communicated
`
`(“information bits”) are converted into an analog signal. This process is called
`
`modulation. The transmitted signal is then received by a receiving device and
`
`converted back into binary form through demodulation.
`
`6
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`
`
`Modulation, Transmission, and Demodulation
`
`Transmission over physical channels is rarely 100% reliable. The transmitted
`
`signal can be corrupted during transmission by “noise” caused by, e.g.,
`
`obstructions in the signal path, interference from other signals, or
`
`electrical/magnetic disturbances. Noise can cause bits to “flip” during
`
`transmission, causing a bit transmitted as a 1 to be demodulated as 0, and vice
`
`versa. (Ex. 1206, ¶23-24.)
`
`Error-correcting codes were developed to mitigate transmission errors.
`
`Using the information bits, an error-correcting code generates “parity bits” that
`
`allow the receiver to verify that the information bits were transmitted correctly, and
`
`to correct transmission errors. (Ex. 1206, ¶25.)
`
`Bits are encoded by an encoder, which receives a sequence of information
`
`bits as input, generates parity bits according to a particular encoding algorithm, and
`
`outputs a sequence of encoded bits (or data elements) called a codeword. The
`
`7
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`codeword produced by the encoder is then modulated and transmitted as an analog
`
`signal. (Ex. 1206, ¶26.)
`
`At the receiver, the signal is received and demodulated, and the codeword is
`
`passed to the decoder, which uses a decoding algorithm to recover the original
`
`information bits.
`
`
`
`Encoding and Decoding
`
`Error-correcting codes work by adding redundant information to the original
`
`message. Due to redundancy, the information represented by a given information
`
`bit is spread across multiple bits of the codeword. Thus, if a codeword bit is flipped
`
`during transmission, the original information bit can still be recovered from the
`
`other codeword bits. (Ex. 1206, ¶27-28.)
`
`As a simple example, consider an encoding scheme, called “repeat-three,”
`
`that outputs three copies of each information bit. In this scheme, the information
`
`bits “1 0 1” would be encoded as “111 000 111.” Upon receipt, the decoder
`
`converts instances of “111” into “1” and instances of “000” into “0” to produce the
`
`decoded bits “1 0 1,” which match the original information bits. (Ex. 1206, ¶29.)
`
`8
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`Suppose a bit is flipped during transmission, changing “000” to “010.” The
`
`decoder will be able to detect a transmission error, because “010” is not a valid
`
`“repeat-three” codeword. Using a “majority vote” rule, the decoder can infer that
`
`the original information bit was a 0, correcting the transmission error. Thus, due to
`
`the redundancy incorporated into the codeword, no information was lost. (Ex.
`
`1206, ¶30.)
`
`Error-correcting codes may be either systematic or non-systematic. In a
`
`systematic code, parity bits and original information bits are included in the
`
`codeword. In a non-systematic code, the encoded data includes only parity bits.
`
`Systematic and non-systematic codes had been known for decades before May 18,
`
`2000, the claimed priority date of the ’710 patent. (Ex. 1206, ¶31-32.)
`
`B. Coding Rate
`
`Many error-correcting codes encode information bits in groups, or blocks of
`
fixed length. An encoder receives a k-bit block of information bits as input, and
`
`produces a corresponding n-bit codeword. The ratio k/n is called the rate of the
`
`code. Because the codeword generally includes redundant information, n is
`
`generally greater than k, and the rate k/n of an error-correcting code is generally
`
`less than one. (Ex. 1206, ¶33.)
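The rate computation described above can be sketched as follows (an illustrative sketch only):

```python
def code_rate(k, n):
    # Rate of a block code that maps k information bits to an n-bit codeword.
    return k / n

# The repeat-three code maps each information bit to three codeword bits.
assert code_rate(1, 3) == 1 / 3
# A rate "close to one" implies little added redundancy, e.g. k=99, n=100.
assert code_rate(99, 100) == 0.99
```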
`
`9
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`Performance of Error-Correcting Codes
`
`C.
`
`One metric used to assess the performance of an error-correcting code is its
`
bit-error rate (BER). The BER is defined as the number of corrupted information

bits divided by the total number of information bits during a particular interval. For example,
`
`if a decoder outputs one thousand bits and ten are corrupted (i.e., they differ from
`
`the original information bits), then the BER of the code during that interval is (10
`
`bit errors) / (1000 total bits) = 0.01 or 1%. (Ex. 1206, ¶35.)
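The BER calculation in the example above can be sketched in Python (illustrative only):

```python
def bit_error_rate(sent, received):
    # Fraction of decoded bits that differ from the original information bits.
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# Ten corrupted bits out of one thousand gives a BER of 1%.
sent = [0] * 1000
received = [1] * 10 + [0] * 990
assert bit_error_rate(sent, received) == 0.01
```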
`
`The BER of a transmission depends on the amount of noise present in the
`
`communication channel and the strength of the transmitted signal (i.e., the power
`
`used to transmit the modulated waveform). An increase in noise tends to increase
`
`the error rate and an increase in signal strength tends to decrease the error rate. The
`
`ratio of the signal strength to the noise, called the “signal-to-noise ratio,” is often
`
`used to characterize the channel over which the encoded signal is transmitted. The
`
`signal-to-noise ratio can be expressed mathematically as Eb/N0, in which Eb is the
`
`amount of energy used to transmit each bit of the signal, and N0 is the density of
`
`the noise on the channel. The BER of an error-correcting code is often measured
`
`for multiple values of Eb/N0 to determine how the code performs under various
`
`channel conditions. (Ex. 1206, ¶36.)
`
`10
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes
`
`In 1963, Robert Gallager described a set of error-correcting codes called
`
`Low Density Parity Check (“LDPC”) codes. Gallager described how LDPC codes
`
`provide one method of generating parity bits from information bits using a matrix
`
`populated with mostly 0s and relatively few 1s, as explained below. (See Gallager,
`
`R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press, 1963; Ex. 1207.)
`
`(Ex. 1206, ¶38.)
`
`Gallager’s work was largely ignored over the following decades, as
`
`researchers continued to discover other algorithms for calculating parity bits. These
`
`algorithms included, for example, convolutional encoding with Viterbi decoding
`
`and cyclic code encoding with bounded distance decoding. In many cases, these
`
`new codes could be decoded using low-complexity decoding algorithms. (Ex.
`
`1206, ¶39.)
`
`In 1993, researchers discovered “turbocodes,” a class of error-correcting
`
`codes capable of transmitting information at a rate close to the so-called “Shannon
`
`Limit” – the maximum rate at which information can be transmitted over a
`
`channel. See Berrou et al., “Near Shannon Limit Error-Correcting Coding and
`
Decoding: Turbo Codes,” ICC ’93, Technical Program, Conference Record 1064,
`
`Geneva 1993 (Ex. 1208.) (Ex. 1206, ¶40.)
`
`11
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`As explained further in the Davis Declaration, the main drawback of
`
`convolutional codes is that they do not perform well over channels in which errors
`
`are clustered tightly together. Turbocodes overcome this deficiency by encoding
`
`the input bits twice. The input bits are fed to a first convolutional encoder in their
`
`normal order to produce a first set of parity bits. The same input bits are also
`
`reordered by an interleaver (i.e., a component that shuffles the input bits and
`
`outputs them in a different order) and then encoded by a second convolutional
`
`encoder to produce a second set of parity bits. The two sets of parity bits together
`
`with the information bits form a single codeword. Using a turbocode, a small
`
`number of errors will not result in loss of information unless the errors happen to
`
`fall close together in both the original data stream and in the permuted data stream,
`
`which is unlikely. (Ex. 1206, ¶41.)
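The parallel structure described above can be sketched as follows. This is a structural illustration only: the convolutional encoders are replaced here by a toy running-XOR parity generator, an assumption for brevity, not the recursive convolutional encoder used in actual turbocodes:

```python
def toy_parity(bits):
    # Stand-in for a convolutional encoder: running XOR of the input.
    # (A real turbocode uses recursive systematic convolutional codes.)
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def turbo_encode(bits, interleaver):
    # Parallel concatenation: encode the bits in natural order, and again
    # after shuffling them with the interleaver.
    parity1 = toy_parity(bits)
    parity2 = toy_parity([bits[i] for i in interleaver])
    # The codeword is the information bits plus both parity sequences.
    return bits + parity1 + parity2

interleaver = [3, 0, 2, 1]            # a fixed permutation of four positions
codeword = turbo_encode([1, 0, 1, 1], interleaver)
assert len(codeword) == 12            # 4 info bits + 2 x 4 parity bits
```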
`
`In 1995, David J. C. MacKay rediscovered Gallager’s work from 1963
`
`relating to low-density parity-check (LDPC) codes and demonstrated that they
`
`have performance comparable to that of turbocodes. See MacKay, D. J. C, and
`
`Neal, R. M. “Near Shannon Limit Performance of Low Density Parity Check
`
`Codes,” Electronics Letters, vol. 32, pp. 1645-46, 1996 (Ex. 1209.) This
`
`rediscovery was met with wide acclaim. Turbocodes and LDPC codes have some
`
`common characteristics: both codes use permutations to spread out redundancy,
`
`and both use iterative decoding algorithms. (Ex. 1206, ¶42)
`
`12
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`In 1995 and 1996, researchers began to explore “concatenated”
`
`convolutional codes. See Benedetto, S. et al., Serial Concatenation of Block and
`
`Convolutional Codes, 32.10 Electronics Letters 887-8, 1996 (Ex. 1210.) While
`
`turbocodes use two convolutional coders connected in parallel, concatenated
`
`convolutional codes use two convolutional coders connected in series: the
`
`information bits are encoded by a first encoder, the output of the first encoder is
`
`interleaved, and the interleaved sequence is encoded by a second encoder. In such
`
`codes, the first and second encoders are often called the “outer coder” and the
`
`“inner coder,” respectively. (Ex. 1206, ¶43.)
`
`In 1998, researchers developed “repeat-accumulate,” or “RA codes,” by
`
`simplifying the principles underlying turbocodes. (See Ex. 1203.) In RA codes,
`
`information bits are first passed to a repeater that repeats (i.e., duplicates) them and
`
outputs a stream of repeated bits (the encoder described above in the “repeat-three”
`
`coding scheme is one example of a repeater). The repeated bits are then optionally
`
`reordered by an interleaver and passed to an “accumulator” to form parity bits. (Ex.
`
`1206, ¶44.)
`
`13
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`In accumulation, each input bit is added to the previous input bits to produce
`
`a sequence of running sums, each of which represents the sum of all input bits yet
`
`received. More formally, if an accumulator receives a sequence of input bits i1, i2,
`
i3, … in, it will produce output bits o1, o2, o3, … on, such that:

o1 = i1, o2 = i1 ⊕ i2, o3 = i1 ⊕ i2 ⊕ i3, …, on = i1 ⊕ i2 ⊕ … ⊕ in,

where the symbol ⊕ denotes modulo-2 addition.1 Accumulators can be
`
`implemented simply, allowing accumulate codes to be encoded quickly and
`
`cheaply. (Ex. 1206, ¶45.)
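The accumulator described above, and its use in a repeat-accumulate encoder, can be sketched in a few lines of Python (illustrative only; the function names and the tiny interleaver are assumptions for this example):

```python
def accumulate(bits):
    # Each output bit o_j is the running modulo-2 sum (XOR) of all
    # input bits received so far: o_j = i1 XOR i2 XOR ... XOR ij.
    out, running = [], 0
    for b in bits:
        running ^= b
        out.append(running)
    return out

def ra_encode(bits, q, interleaver):
    # Repeat-accumulate sketch: repeat each bit q times, reorder with
    # the interleaver, then accumulate to form the parity bits.
    repeated = [b for b in bits for _ in range(q)]
    permuted = [repeated[i] for i in interleaver]
    return accumulate(permuted)

assert accumulate([1, 0, 1, 1]) == [1, 1, 0, 1]
# Repeat [1, 0] three times each, then use an identity interleaver:
assert ra_encode([1, 0], 3, list(range(6))) == [1, 0, 1, 1, 1, 1]
```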
`
`E. Mathematical Representations of Error-Correcting Codes
`
`1. Linear Transformations
`Coding theorists often describe error-correcting codes in linear-algebraic
`
`terms. For instance, a k-bit block of information bits is a k-dimensional vector of
`
`bits, and an n-bit codeword is an n-dimensional vector of bits (where an “n
`
`
`
` 1
`
` Modulo-2 addition (also called “XOR”) is an addition operation that is performed
`
`on bits, defined as follows: 11=0, 10=1, 01=1, and 00=0.
`
`14
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`dimensional vector” of bits is a sequence of n bits). The encoding process, which
`
`converts blocks of information bits into codewords, is a linear transformation that
`
`maps k-dimensional bit vectors to n-dimensional bit vectors. This transformation is
`
`represented by an n × k matrix G called a generator matrix. For a vector of
`
information bits u, the corresponding codeword x is given by:

x = Gu

The n-dimensional vectors that can be written in the form Gu are valid codewords.
`
`Most n-dimensional vectors are not valid codewords. It is this property that allows
`
`a decoder to determine when there has been an error during transmission. This
`
`determination is made using an (n - k) × n matrix H, called a parity check matrix.
`
`Using a parity check matrix, a given vector x is a valid codeword if and only if Hx
`
`= 0. As explained in the Davis Declaration, if the parity check shows that Hx does
`
`not equal 0, then the decoder determines there has been a transmission error (much
`
`like the “010” example in the “repeat three” code above). In practice, parity-check
`
`matrices often have hundreds or thousands of rows, each of which represents an
`
`15
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`equation of the form xa + xb + … + xz = 0. These equations are called parity-check
`
`equations. (Ex. 1206, ¶47-50.)
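The two matrix operations described above, encoding via x = Gu and checking via Hx = 0, can be illustrated with the repeat-three code over GF(2). This is an illustrative sketch; the particular G and H below are assumed for the example:

```python
def matvec_mod2(M, v):
    # Multiply matrix M by vector v with all arithmetic modulo 2 (GF(2)).
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

# Repeat-three as a linear code: G maps one information bit to three
# codeword bits; H encodes the parity checks x1+x2=0 and x2+x3=0.
G = [[1], [1], [1]]              # n x k = 3 x 1 generator matrix
H = [[1, 1, 0], [0, 1, 1]]       # (n-k) x n = 2 x 3 parity-check matrix

codeword = matvec_mod2(G, [1])                 # x = Gu
assert codeword == [1, 1, 1]
assert matvec_mod2(H, codeword) == [0, 0]      # Hx = 0: valid codeword
assert matvec_mod2(H, [0, 1, 0]) != [0, 0]     # "010" fails the parity check
```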
`
`2. Repeating and Permuting as Examples of LDGM Codes
`As explained above, the encoding process can be represented using a matrix
`
`called a generator matrix. This is true of all linear codes, including the “repeat,”
`
`“interleave,” and “accumulate” phases of the “repeat-accumulate” codes discussed
`
`above. More specifically, the “repeat” and “interleave” phases can be represented
`
`using generator matrices whose entries are mostly 0s, with a relatively low
`
`proportion of 1s. Such generator matrices are called “low-density generator
`
`matrices” (LDGMs). The following is a generator matrix that can be used to repeat
`
each information bit twice:

[GREPEAT2: an 18 × 9 matrix in which column j contains 1s in rows 2j-1 and 2j and 0s elsewhere, so that each input bit appears twice in the output]
`Using the generator matrix above, encoding a sequence of input bits “101010101”
`
`would result in the encoded sequence “110011001100110011” (the number of
`
`16
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`times each input bit is repeated is determined by the number of “1s” appearing in
`
`the column corresponding to that input bit). In the 18×9 matrix above, 11.1% of
`
`the entries are 1s, and the rest are 0s, making GREPEAT2 a low-density generator
`
`matrix (LDGM). (Ex. 1206, ¶51-52.)
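The repeat-twice encoding described above can be reproduced by an explicit matrix multiplication (an illustrative sketch; the matrix is constructed here rather than quoted from the record):

```python
def matvec_mod2(M, v):
    # Matrix-vector product over GF(2).
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

k = 9
# GREPEAT2: an 18 x 9 matrix whose column j has 1s in rows 2j-1 and 2j
# (0-indexed: rows 2j and 2j+1), so each information bit appears twice.
G = [[1 if col == row // 2 else 0 for col in range(k)] for row in range(2 * k)]

u = [int(c) for c in "101010101"]
x = matvec_mod2(G, u)
assert "".join(map(str, x)) == "110011001100110011"
# Density of 1s: 18 ones out of 18 * 9 = 162 entries, about 11.1%.
assert sum(map(sum, G)) == 18
```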
`
`
`
`Permutation is another linear transform represented by a low-density
`
`generator matrix. For example, the matrix below is a permutation matrix that will
`
output bits in a different order than input:

[GSHUFFLE: an 8 × 8 permutation matrix with a single 1 in each row and each column, arranged to map input bits i1, i2, i3, i4, i5, i6, i7, i8 to the output order i2, i1, i7, i5, i4, i8, i6, i3]
`Permutation matrices, like GSHUFFLE above, have exactly one 1 per row and one 1
`
`per column. The order of the output bits is determined by the position of each 1
`
`within its row/column. The matrix above, for example, would transform input bits
`
`i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3. Permutation
`
`matrices are square, with dimensions n×n, because the input sequence and the
`
`output sequence have the same length. Because there is only one 1 per row/column
`
`of permutation matrices, the density of 1s is 1/n (GSHUFFLE has a density of 1/8, or
`
`12.5%). In the context of linear codes, which often operate on blocks of more than
`
`17
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`a thousand bits, a permutation matrix might comprise 0.1% 1s or fewer, and, as
`
further explained in the Davis Declaration, is therefore a very low-density

generator matrix (LDGM). (Ex. 1206, ¶53-54.)
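The permutation just described can be checked concretely (an illustrative sketch; the matrix is constructed to reproduce the output order given above):

```python
def apply_matrix(M, v):
    # Plain matrix-vector product (entries of M are 0/1).
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# GSHUFFLE: the 1 in row r sits in the column of the input bit that becomes
# output bit r, reproducing the order i2, i1, i7, i5, i4, i8, i6, i3.
order = [2, 1, 7, 5, 4, 8, 6, 3]      # 1-based positions of the source bits
G = [[1 if col == p - 1 else 0 for col in range(8)] for p in order]

# Feeding in positions 1..8 returns them in the permuted order.
assert apply_matrix(G, [1, 2, 3, 4, 5, 6, 7, 8]) == [2, 1, 7, 5, 4, 8, 6, 3]
assert all(sum(row) == 1 for row in G)           # exactly one 1 per row
assert all(sum(col) == 1 for col in zip(*G))     # exactly one 1 per column
```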
`
`3. Tanner Graphs
`Another widespread mathematical representation of error-correcting codes is
`
`the “Tanner graph.” Tanner graphs are bipartite graphs wherein nodes can be
`
`divided into two groups. Every edge connects a node in one group to a node in the
`
`other (i.e., no two nodes in the same group are connected by an edge). A bipartite
`
graph is shown below:

[Figure: a bipartite graph, with every edge running between the two groups of nodes]
`A Tanner graph includes one group of nodes called variable nodes that correspond
`
`to the information and parity bits, and a second group of nodes called check nodes
`
`that represent constraints that parity and information bits must satisfy. In particular,
`
`when a set of variable nodes are connected to a particular check node, it means that
`
`18
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`the information and parity bits corresponding to those variable nodes must sum to
`
`0. (Ex. 1206, ¶55-56.)
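The check-node constraint described above can be sketched directly (a toy Tanner graph, assumed for illustration; it encodes the repeat-three checks x1+x2=0 and x2+x3=0):

```python
# A Tanner graph as a mapping from each check node to the variable nodes it
# touches; a word is valid when each such group of bits XORs to zero.
checks = {"c1": [0, 1], "c2": [1, 2]}

def satisfies(bits, checks):
    # Every check node's neighboring bits must sum to 0 modulo 2.
    return all(sum(bits[v] for v in vs) % 2 == 0 for vs in checks.values())

assert satisfies([1, 1, 1], checks)       # "111" satisfies both checks
assert not satisfies([0, 1, 0], checks)   # "010" violates both checks
```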
`
`These two mathematical descriptions of linear codes – one using matrices,
`
`one using Tanner graphs – are two different ways of describing the same thing, in
`
much the same way that “0.5” and “½” describe the same number. Every generator
`
`matrix corresponds to a Tanner graph, and vice versa. (Ex. 1206, ¶57.)
`
F. Irregularity
`
`Irregular LDPC codes were first introduced in a 1997 paper by Luby. See
`
`Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, 1997 (Ex. 1211). The
`
`paper showed that irregular codes perform better than regular codes on certain
`
`types of noisy channels. (Ex. 1206, ¶58.)
`
`In regular codes, each information bit contributes to the same number of
`
`parity bits, while in irregular codes different information bits or groups
`
`of information bits contribute to different numbers of parity bits. This is consistent
`
`with the Board’s construction of “irregular” in prior proceedings. (See, e.g.,
`
`IPR2015-00060, Paper 18, p. 12.) Irregularity can also be defined in terms of
`
`Tanner graphs. A regular code has a Tanner graph in which each variable node
`
`corresponding to an information bit is connected to the same number of check
`
`nodes. Irregular codes have Tanner graphs in which some variable nodes
`
`corresponding to information bits are connected to more check nodes than
`
`19
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`others. These two formulations of irregularity are different (and well-known) ways
`
`of describing the same concept. (Ex. 1206, ¶59.)
`
`After Luby’s initial paper describing irregularity, the same team of
`
`researchers published a second paper, expanding the theory of irregularity in error-
`
`correcting codes. (See Luby, M. et al., “Analysis of Low Density Codes and
`
`Improved Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in
`
`1998; Ex. 1204.) At the time, both of these Luby papers were widely read by
`
`coding theorists, and gave rise to a great deal of research into irregular error-
`
`correcting codes. (Ex. 1206, ¶60.)
`
`For example, the 1998 Luby paper was the first reference cited in a paper by
`
`Frey and MacKay titled “Irregular Turbocodes,” presented at the 1999 Allerton
`
`Conference on Communications, Control, and Computing. (Ex. 1202.) The second
`
`author, MacKay, was the same researcher who had rediscovered LDPC codes in
`
1995. In “Irregular Turbocodes,” the authors applied the concept of irregularity to
`
`turbocodes by explaining how to construct irregular turbocodes in which some
`
`information bits connect to more check nodes than others. The experimental results
`
`presented in the paper demonstrated that these irregular turbocodes perform better
`
`than regular turbocodes known in the art. (See Ex. 1202.) (Ex. 1206, ¶61.)
`
`20
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`By May 18, 2000, the claimed priority date of the ’710 patent, it was
`
common knowledge that the performance of an error-correcting code could be
`
`improved by adding irregularity. (Ex. 1206, ¶62.)
`
`V. THE ’710 PATENT
`A. Claims
`
`The ’710 patent includes 33 claims, of which claims 1, 11, 15, and 25 are
`
`independent. Independent claims 1 and 11 are directed to methods of encoding a
`
`signal that include “first encoding” and “second encoding” steps. Independent
`
`claim 15 is directed to a “coder” for encoding bits that includes a “first coder” and
`
`a “second coder.” Claim 25 is directed to a “coding system” that also encodes bits
`
`using a first and second coder, and further includes a decoder for decoding the
`
`encoded bits. (Ex. 1201, 7:15-8:64.) Claims 1-8, 10-17, and 19-33 are challenged
`
`in this Petition. (Ex. 1206, ¶96.)
`
B. Summary of the Specification
`
`The specification of the ’710 patent is generally directed to irregular RA
`
`codes (or “IRA” codes). Figure 2 of the specification, below, shows the structure
`
`of an IRA encoder:
`
`21
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`
`Explaining this figure, the patent describes encoding data using an outer
`
`coder 202 connected to an inner coder 206 via an interleaver 204 (labeled “P”) (Ex.
`
`1201, 2:33-40.) (Ex. 1206, ¶97-98.)
`
`Outer coder 202 receives a block of information bits and duplicates each bit
`
`a given number of times, producing a sequence of repeated bits at its output. (Ex.
`
`1201, 2:50-52.) The outer coder repeats bits irregularly – i.e., it outputs more
`
`duplicates of some information bits than others. (Id., 2:48-50.) (Ex. 1206, ¶99.)
`
`The repeated bits are passed to an interleaver 204, where they are scrambled.
`
`(Ex. 1201, 3:18-22.) The scrambled bits are then passed to the inner coder 206,
`
`where they are accumulated to form parity bits. (Id., 2:65-67, 2:33-38.) According
`
`to the specification:
`
`22
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`
`Ex. 1201, 3:2-10
`
`
`
`The specification describes systematic and non-systematic codes. In a
`
`systematic code, the encoder outputs a copy of the information bits in addition to
`
`the parity bits output by inner coder 206 (the systematic output is represented in
`
`Fig. 2. as an arrow running toward the right along the top of the figure). (Ex. 1206,
`
`¶100-101.)
`
`C. Level of Ordinary Skill in the Art
`
`A person of ordinary skill in the art is a person with a Ph.D. in mathematics,
`
`electrical or computer engineering, or computer science with emphasis in signal
`
`processing, communications, or coding, or a master’s degree in the above area with
`
`at least three years of work experience in this field at the time of the alleged
`
`invention. (Ex. 1206, ¶95.)
`
`23
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`
`VI. CLAIM CONSTRUCTION
`A claim in inter partes review is given the “broadest reasonable construction
`
`in light of the specification.” 37 C.F.R. § 42.100(b). See Cuozzo Speed
`
Technologies, LLC v. Lee, No. 15-446, 579 U.S. ____, slip op. at 13 (U.S. Jun. 20,
`
`2016). Any claim term which lacks a definition in the specification is also given a
`
`broad interpretation. In re ICON Health and Fitness, Inc., 496 F.3d 1374, 1379
`
(Fed. Cir. 2007).
`
Should the Patent Owner, seeking to avoid the prior art, contend that the
`
`claims warrant a narrower construction, the appropriate course is for the Patent
`
`Owner to seek to amend the claims to correspond expressly to its contentions in
`
`this proceeding. (See 77 Fed. Reg. 48764, Aug. 14, 2012.)
`
`Consistent with this standard, this section proposes constructions of terms
`
`and provides support for these proposed constructions. Terms not included in this
`
`section have their broadest reasonable meaning in light of the specification as
`
`commonly understood by those of ordinary skill.
`
A. “close to one” (Claims 1 and 3)
`
`Under the broadest reasonable construction standard, the term “close to one”
`
`should be construed to mean “within 50% of one.” This construction is consistent
`
`with the specification of the ’710 patent, which explains that the inner code 210
`
`“can have a rate that is close to one, e.g., within 50%, more preferably 10% and
`
`24
`
`
`
`U.S. Patent 7,116,710
`Petition for Inter Partes Review
`perhaps even more preferably within 1% of 1.” (Ex. 1201, 2:62-64, emphasis
`
`added.) (Ex. 1206, ¶102-103.)
`
`This is also consistent with the prosecution history of the ’710 patent, in
`
`which Patent Owner attempted to overcome a prior art rejection by replacing “a
`
rate close to one” with “a rate within 50% of one” in issued claim 15. (Ex. 1214,
`
`pp. 7-8.) This suggests that Patent Owner agrees that “close to one” encompasses
`
`rates “within 50% of one” (indeed, that this amendment was made to narrow the
`
`claim suggests Patent Owner believes the broadest reasonable interpretation of
`
`“close to one” is even broader). (Ex. 1206, ¶104.)
`
`This is further consistent with the understanding of a person of ordinary
`
`skill, who would have understood that “close to one,”