`Petition for Inter Partes Review
`
`DOCKET NO.: 1033300-00287
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
PATENT: 7,916,781

INVENTORS: JIN HUI, AAMOD KHANDEKAR, ROBERT J. McELIECE

FILED: JUNE 30, 2008

ISSUED: MARCH 29, 2011

TITLE: SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES
`
`
`
`___________________________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________________________________________
`
`Apple Inc.
`Petitioner
`
`v.
`
`California Institute of Technology
`Patent Owner
`
`Case IPR2017-00423
`
`
`
`PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 7,916,781
`UNDER 35 U.S.C. § 312 AND 37 C.F.R. § 42.104
`
`
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`
`TABLE OF CONTENTS
`
`I. Mandatory Notices ............................................................................................... 1
`A. Real Party-in-Interest .................................................................................... 1
`B. Related Matters ............................................................................................. 1
`C. Counsel .......................................................................................................... 1
`D. Service Information ....................................................................................... 2
`II. Certification of Grounds for Standing ................................................................. 2
`III. Overview of Challenge and Relief Requested ..................................................... 2
`A. Prior Art Patents and Printed Publications .................................................... 2
`B. Relief Requested ........................................................................................... 3
`IV. Overview of the Technology ................................................................................ 4
`A. Error-Correcting Codes in General ............................................................... 4
`B. Coding Rate ................................................................................................... 7
`C. Performance of Error-Correcting Codes ....................................................... 7
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes ......................... 8
`E. Mathematical Representations of Error-Correcting Codes ......................... 12
F. Irregularity ................................................................................................... 17
`V. The ’781 Patent .................................................................................................. 19
`A. Claims ......................................................................................................... 19
`B. Summary of the Specification ..................................................................... 20
`C. Level of Ordinary Skill in the Art ............................................................... 22
`D. Summary of the Prosecution History .......................................................... 22
`VI. Claim Construction ............................................................................................ 23
`A. “codeword” (all claims) .............................................................................. 23
`VII. Overview of Primary Prior Art References ................................................ 24
`A. Ping.............................................................................................................. 24
`B. MacKay ....................................................................................................... 29
`C. Coombes ...................................................................................................... 30
`VIII. Grounds for Challenge ................................................................................ 31
`
`
`
`
`
`
`A. Ground 1: Claims 13-15 and 17-22 Are Obvious Over Ping in View of
`MacKay ....................................................................................................... 31
`B. Ground 2: Claim 16 Is Obvious Over Ping in View of MacKay, and
`Further in View of Coombes ....................................................................... 48
`IX. Conclusion .......................................................................................................... 51
`
`
`I. MANDATORY NOTICES
`
`A. Real Party-in-Interest
`
`Apple Inc. (“Apple” or “Petitioner”) and Broadcom Corp. are the real
`
`parties-in-interest.
`
`B. Related Matters
`
`U.S. Pat. No. 7,916,781 (the “’781 patent,” Ex. 1101) is assigned to the
`
California Institute of Technology (“Caltech” or “Patent Owner”). On May 26,
`
`2016, Caltech sued Apple, Broadcom Corp., and Avago Technologies, Ltd. in the
`
`U.S. District Court for the Central District of California, claiming that Apple
`
`products compliant with the 802.11n and 802.11ac wireless communication
`
`standards infringe the ’781 patent (and three others). On August 15, 2016, Caltech
`
`amended its complaint to assert patent infringement against Cypress
`
`Semiconductor Corp. See Amended Complaint, California Institute of Technology
`
`v. Broadcom, Ltd. et al. (Case 2:16-cv-03714), Docket No. 36. The ’781 patent
`
`was also asserted by Caltech against Hughes Communications Inc. in California
`
Institute of Technology v. Hughes Communs., Inc. (Case 2:13-cv-07245), and its
`
`claims were challenged in one petition for inter partes review, IPR2015-00059.
`
`Patents in the priority chain of the ’781 patent were challenged in IPR2015-00068,
`
IPR2015-00067, IPR2015-00060, IPR2015-00061, and IPR2015-00081.
`
`C. Counsel
`
`
`
`1
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
Lead Counsel: Richard Goldenberg (Registration No. 38,895)

Backup Counsel: Brian M. Seeve (Registration No. 71,721)
`
D. Service Information
`
`E-mail: richard.goldenberg@wilmerhale.com
`
`Post and Hand Delivery: WilmerHale, 60 State St., Boston MA 02109
`
`Telephone: 617-526-6548
`
`II. CERTIFICATION OF GROUNDS FOR STANDING
`
`Petitioner certifies pursuant to Rule 42.104(a) that the patent for which
`
`review is sought is available for inter partes review and that Petitioner is not
`
`barred or estopped from requesting an inter partes review challenging the patent
`
`claims on the grounds identified in this Petition.
`
`III. OVERVIEW OF CHALLENGE AND RELIEF REQUESTED
`
`Pursuant to Rules 42.22(a)(1) and 42.104(b)(1)-(2), Petitioner challenges
`
`claims 13-22 of the ’781 Patent (“the challenged claims”) and requests that each
`
`challenged claim be canceled.1
`
A. Prior Art Patents and Printed Publications
`
`Petitioner relies upon the patents and printed publications listed in the Table
`
`of Exhibits, including:
`
`
`1 Petitioner has filed a second petition challenging claims 3-12 and 19-21 of the
`
’781 patent, which is also dated December 12, 2016.
`
`
`
`2
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`1. L. Ping, W. K. Leung, N. Phamdo, “Low Density Parity Check Codes
`
`with Semi-random Parity Check Matrix.” Electron. Letters, Vol. 35,
`
`No. 1, pp. 38-39, January 7, 1999 (“Ping”; Ex. 1103.)
`
`2. D. J. C. MacKay, S. T. Wilson, and M. C. Davey, “Comparison of
`
`constructions of irregular Gallager codes,” IEEE Trans. Commun.,
`
`Vol. 47, No. 10, pp. 1449-54, October 1999 (“MacKay”; Ex. 1102.)
`
`3. U.S. Patent No. 4,271,520 (1981) (“Coombes”; Ex. 1118.)
`
`The ’781 patent claims priority to a provisional application filed on May 18,
`
`2000, and to U.S. application Ser. No. 09/922,852, filed on Aug. 18, 2000. The
`
`Petitioner reserves the right to dispute that the challenged claims are entitled to a
`
`priority date of May 18, 2000. However, even if the challenged claims are entitled
`
`to this priority date, the above references qualify as prior art under 35 U.S.C. §
`
`102.
`
`B. Relief Requested
`
`Petitioner requests that the Patent Trial and Appeal Board cancel the
`
`challenged claims because they are unpatentable under 35 U.S.C. §§ 102 and 103.
`
`This conclusion is further supported by the declaration of Professor James Davis,
`
`Ph.D. (“Davis Declaration,” Ex. 1104).
`
`
`
`3
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`
`IV. OVERVIEW OF THE TECHNOLOGY
`
`The ’781 patent relates to the field of channel coding and error-correcting
`
`codes. This section provides a brief introduction to channel coding and error-
`
`correcting codes, and highlights developments in the field relevant to the ’781
`
`patent. (Ex. 1104, ¶21.)
`
`A. Error-Correcting Codes in General
`
`Digital electronics use bits to represent information. A bit is a binary unit of
`
`information that may be a 1 or 0. Any type of information can be represented
`
`digitally as a collection of bits. (Ex. 1104, ¶22.)
`
`When transmitting binary information over an analog communication
`
`channel, the data bits representing the information to be communicated
`
`(“information bits”) are converted into an analog signal. This process is called
`
`modulation. The transmitted signal is then received by a receiving device and
`
`converted back into binary form through demodulation. (Ex. 1104, ¶23.)
`
[Figure: Modulation, Transmission, and Demodulation]
`
`
`
`
`
`4
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Transmission over physical channels is rarely 100% reliable. The
`
`transmitted signal can be corrupted during transmission by “noise” caused by, e.g.,
`
`obstructions in the signal path, interference from other signals, or
`
`electrical/magnetic disturbances. Noise can cause bits to “flip” during
`
`transmission, causing a bit transmitted as a 1 to be demodulated as 0, and vice
`
`versa. (Ex. 1104, ¶24.)
`
`Error-correcting codes were developed to mitigate transmission errors.
`
`Using the information bits, an error-correcting code generates “parity bits” that
`
`allow the receiver to verify that the information bits were transmitted correctly, and
`
`to correct transmission errors. (Ex. 1104, ¶25.)
`
`Bits are encoded by an encoder, which receives a sequence of information
`
`bits as input, generates parity bits according to a particular encoding algorithm, and
`
`outputs a sequence of encoded bits (or data elements) called a codeword. The
`
`codeword produced by the encoder is then modulated and transmitted as an analog
`
`signal. (Ex. 1104, ¶26.)
`
`At the receiver, the signal is received and demodulated, and the codeword is
`
`passed to the decoder, which uses a decoding algorithm to recover the original
`
`information bits. (Ex. 1104, ¶27.)
`
`
`
`5
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`
[Figure: Encoding and Decoding]
`
`
`
`Error-correcting codes work by adding redundant information to the original
`
`message. Due to redundancy, the information represented by a given information
`
`bit is spread across multiple bits of the codeword. Thus, if a codeword bit is flipped
`
`during transmission, the original information bit can still be recovered from the
`
`other codeword bits. (Ex. 1104, ¶28.)
`
`As a simple example, consider an encoding scheme, called “repeat-three,”
`
`that outputs three copies of each information bit. In this scheme, the information
`
`bits “1 0 1” would be encoded as “111 000 111.” Upon receipt, the decoder
`
`converts instances of “111” into “1” and instances of “000” into “0” to produce the
`
`decoded bits “1 0 1,” which match the original information bits. (Ex. 1104, ¶29.)
`
`Suppose a bit is flipped during transmission, changing “000” to “010.” The
`
`decoder will be able to detect a transmission error, because “010” is not a valid
`
`“repeat-three” codeword. Using a “majority vote” rule, the decoder can infer that
`
`the original information bit was a 0, correcting the transmission error. Thus, due to
`
`the redundancy incorporated into the codeword, no information was lost. (Ex.
`
`1104, ¶30.)
`
`
`
`6
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Error-correcting codes may be either systematic or non-systematic. (Ex.
`
`1120, p. 12, “Also, a [binary convolutional encoder] can be systematic or non-
`
`systematic.”; id. pp. 12-13, Figures 2.1 and 2.2, captions showing both systematic
`
`and non-systematic encoders.) In a systematic code, parity bits and original
`
`information bits are included in the codeword. (Ex. 1120, p. 14, “For example, a
`
`systematic encoder is one for which the encoder input (the data) forms a substring
`
`of the output (the codeword).”; Ex. 1121, pp. 6, 229.) In a non-systematic code, the
`
`encoded data includes only parity bits. (Id.) Systematic and non-systematic codes
`
`had been known for decades before May 18, 2000, the claimed priority date of the
`
`ʼ781 patent. (Ex. 1104, ¶¶31-32.)
`
`B. Coding Rate
`
`Many error-correcting codes encode information bits in groups, or blocks of
`
`fixed length n. An encoder receives a k-bit block of information bits as input, and
`
`produces a corresponding n-bit codeword. The ratio k/n is called the rate of the
`
`code. Because the codeword generally includes redundant information, n is
`
`generally greater than k, and the rate k/n of an error-correcting code is generally
`
`less than one. (Ex. 1104, ¶33.)
`
C. Performance of Error-Correcting Codes
`
`One metric used to assess the performance of an error-correcting code is its
`
bit-error rate (BER). BER is defined as the number of corrupted information bits
`
`
`
`7
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`divided by the total information bits during a particular interval. For example, if a
`
`decoder outputs one thousand bits, and ten of those bits are corrupted (i.e., they
`
`differ from the original information bits), then the BER of the code during that
`
`interval is (10 bit errors) / (1000 total bits) = 0.01 or 1%. (Ex. 1104, ¶35.)
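The BER arithmetic above can be expressed as a short Python sketch (illustrative only; the function name is hypothetical):

```python
def bit_error_rate(sent, received):
    """Fraction of decoded bits that differ from the original information bits."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# 10 corrupted bits out of 1000 decoded bits gives a BER of 0.01 (1%)
sent = [0] * 1000
received = [1] * 10 + [0] * 990
ber = bit_error_rate(sent, received)
```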
`
`The BER of a transmission depends on the amount of noise present in the
`
`communication channel and the strength of the transmitted signal (i.e., the power
`
`used to transmit the modulated waveform). An increase in noise tends to increase
`
`the error rate and an increase in signal strength tends to decrease the error rate. The
`
`ratio of the signal strength to the noise, called the “signal-to-noise ratio,” is often
`
`used to characterize the channel over which the encoded signal is transmitted. The
`
`signal-to-noise ratio can be expressed mathematically as Eb/N0, in which Eb is the
`
`amount of energy used to transmit each bit of the signal, and N0 is the density of
`
`the noise on the channel. The BER of an error-correcting code is often measured
`
`for multiple values of Eb/N0 to determine how the code performs under various
`
`channel conditions. (Ex. 1104, ¶36.)
`
`D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes
`
`In 1963, Robert Gallager described a set of error-correcting codes called
`
`Low Density Parity Check (“LDPC”) codes. Gallager described how LDPC codes
`
`provide one method of generating parity bits from information bits using a matrix
`
`populated with mostly 0s and relatively few 1s, as explained below. (See Gallager,
`
`
`
`8
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press, 1963; Ex. 1105.)
`
`(Ex. 1104, ¶38.)
`
`Gallager’s work was largely ignored over the following decades, as
`
`researchers continued to discover other algorithms for calculating parity bits. These
`
`algorithms included, for example, convolutional encoding with Viterbi decoding
`
`and cyclic code encoding with bounded distance decoding. In many cases, these
`
`new codes could be decoded using low-complexity decoding algorithms. (Ex.
`
`1104, ¶39.)
`
`In 1993, researchers discovered “turbocodes,” a class of error-correcting
`
`codes capable of transmitting information at a rate close to the so-called “Shannon
`
`Limit” – the maximum rate at which information can be transmitted over a
`
`channel. See Berrou et al., “Near Shannon Limit Error-Correcting Coding and
`
Decoding: Turbo Codes,” ICC ’93, Technical Program, Conference Record 1064,
`
`Geneva 1993 (Ex. 1106.) (Ex. 1104, ¶40.)
`
`As explained further in the Davis Declaration, the main drawback of
`
`convolutional codes is that they do not perform well over channels in which errors
`
`are clustered tightly together. Turbocodes overcome this deficiency by encoding
`
`the input bits twice. The input bits are fed to a first convolutional encoder in their
`
`normal order to produce a first set of parity bits. The same input bits are also
`
`reordered by an interleaver (i.e., a component that shuffles the input bits and
`
`
`
`9
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`outputs them in a different order) and then encoded by a second convolutional
`
`encoder to produce a second set of parity bits. The two sets of parity bits together
`
`with the information bits form a single codeword. Using a turbocode, a small
`
`number of errors will not result in loss of information unless the errors happen to
`
`fall close together in both the original data stream and in the permuted data stream,
`
`which is unlikely. (Ex. 1104, ¶41.)
`
`In 1995, David J. C. MacKay rediscovered Gallager’s work from 1963
`
`relating to low-density parity-check (LDPC) codes, and demonstrated that they
`
`have performance comparable to that of turbocodes. See MacKay, D. J. C, and
`
`Neal, R. M. “Near Shannon Limit Performance of Low Density Parity Check
`
`Codes,” Electronics Letters, vol. 32, pp. 1645-46, 1996 (Ex. 1116.) This
`
`rediscovery was met with wide acclaim. Turbocodes and LDPC codes have some
`
`common characteristics: both codes use pseudo-random permutations to spread out
`
`redundancy, and both use iterative decoding algorithms. (Ex. 1104, ¶42.)
`
`In 1995 and 1996, researchers began to explore “concatenated”
`
`convolutional codes. See Benedetto, S. et al., Serial Concatenation of Block and
`
`Convolutional Codes, 32.10 Electronics Letters 887-88, 1996 (Ex. 1107.) While
`
`turbo codes use two convolutional coders connected in parallel, concatenated
`
`convolutional codes use two convolutional coders connected in series: the
`
`information bits are encoded by a first encoder, the output of the first encoder is
`
`
`
`10
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`interleaved, and the interleaved sequence is encoded by a second encoder. In such
`
`codes, the first and second encoders are often called the “outer coder” and the
`
`“inner coder,” respectively. (Ex. 1104, ¶43.)
`
`In 1998, researchers developed “repeat-accumulate,” or “RA codes,” by
`
`simplifying the principles underlying turbocodes. (See Ex. 1103). In RA codes, the
`
`information bits are first passed to a repeater that repeats (i.e., duplicates) them and
`
`outputs a stream of repeated bits (the encoder described above in the “repeat three”
`
`coding scheme is one example of a repeater). The repeated bits are then optionally
`
`reordered by an interleaver, and passed to an “accumulator” to form parity bits.
`
`(Ex. 1104, ¶44.)
`
`In accumulation, each input bit is added to the previous input bits to produce
`
`a sequence of running sums, each of which represents the sum of all input bits yet
`
received. More formally, if an accumulator receives a sequence of input bits i1, i2,

i3, … in, it will produce output bits o1, o2, o3, … on, such that:

o1 = i1
o2 = i1 ⊕ i2
o3 = i1 ⊕ i2 ⊕ i3
…
on = i1 ⊕ i2 ⊕ … ⊕ in

The symbol ⊕ denotes modulo-2 addition.2 Accumulators can be
`
`implemented simply, allowing accumulate codes to be encoded quickly and
`
`cheaply. (Ex. 1104, ¶45.)
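The accumulation just described amounts to a running XOR over the input bits, which can be sketched in Python (illustrative only; the function name is hypothetical):

```python
def accumulate(bits):
    """Running mod-2 sum: o_k = i_1 XOR i_2 XOR ... XOR i_k."""
    out, running = [], 0
    for b in bits:
        running ^= b  # XOR is addition modulo 2
        out.append(running)
    return out

# Each output bit is the mod-2 sum of all input bits received so far
sums = accumulate([1, 0, 1, 1])
```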
`
`E. Mathematical Representations of Error-Correcting Codes
`
`
`1.
`
`Linear Transformations
`
`Coding theorists often describe error-correcting codes in linear-algebraic
`
`terms. For instance, a k-bit block of information bits is a k-dimensional vector of
`
`bits and an n-bit codeword is an n-dimensional vector of bits (where a “k-
`
`dimensional vector of bits” is a sequence of k bits). The encoding process, which
`
`converts blocks of information bits into codewords, is a linear transformation that
`
`maps k-dimensional bit vectors to n-dimensional bit vectors. This transformation is
`
`represented by an n × k matrix G called a generator matrix. For a vector of
`
information bits u, the corresponding codeword x is given by:

x = Gu
`
`
2 Modulo-2 addition (also called “XOR”) is an addition operation that is performed

on bits, defined as follows: 1⊕1=0, 1⊕0=1, 0⊕1=1, and 0⊕0=0.
`
`
`
`12
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`
`
`The n-dimensional vectors that can be written in the form Gu are valid
`
`codewords. (Ex. 1104, ¶46.)
`
`Most n-dimensional vectors are not valid codewords. It is this property that
`
`allows a decoder to determine when there has been an error during transmission.
`
`This determination is made using an (n - k) × n matrix H, called a parity-check
`
`matrix. Using a parity-check matrix, a given vector x is a valid codeword if and
`
`only if Hx = 0. As explained in the Davis Declaration, if the parity check shows
`
`that Hx does not equal 0, then the decoder determines there has been a
`
`transmission error (much like the “010” example in the “repeat three” code above).
`
`In practice, parity-check matrices often have hundreds or thousands of rows, each
`
`of which represents an equation of the form xa + xb + … + xz = 0. These equations
`
`are called parity-check equations. (Ex. 1104, ¶47.)
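As an illustrative Python sketch (hypothetical names; using the repeat-three code as the concrete instance, with n = 3 and k = 1), encoding with a generator matrix G and checking with a parity-check matrix H might look like:

```python
def mat_vec_mod2(M, v):
    """Multiply matrix M by vector v with all arithmetic modulo 2."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

G = [[1], [1], [1]]            # 3x1 generator matrix: x = Gu repeats u three times
H = [[1, 1, 0], [1, 0, 1]]     # 2x3 parity-check matrix: Hx = 0 for valid codewords

x = mat_vec_mod2(G, [1])       # encode u = [1] into the codeword [1, 1, 1]

valid = mat_vec_mod2(H, x)             # [0, 0]: the parity check passes
invalid = mat_vec_mod2(H, [0, 1, 0])   # nonzero: "010" is not a codeword,
                                       # so a transmission error is detected
```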
`
`
`2.
`
`Repeating and Permuting as Examples of LDGM Codes
`
`As explained above, the encoding process can be represented using a matrix
`
`called a generator matrix. This is true of all linear codes, including the “repeat,”
`
`
`
`13
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`“interleave,” and “accumulate” phases of the “repeat-accumulate” codes discussed
`
`above. More specifically, the “repeat” and “interleave” phases can be represented
`
`using generator matrices whose entries are mostly 0s, with a relatively low
`
`proportion of 1s. Such generator matrices are called “low-density generator
`
`matrices” (LDGMs). The following is a generator matrix that can be used to repeat
`
each information bit twice (Ex. 1104, ¶50):

GREPEAT2 = [an 18×9 matrix in which column j has 1s in rows 2j-1 and 2j, and 0s elsewhere]
`
`
`
`Using the generator matrix above, encoding a sequence of input bits
`
`“101010101” would result in the encoded sequence “110011001100110011” (the
`
`number of times each input bit is repeated is determined by the number of “1s”
`
`appearing in the column corresponding to that input bit). In the 18×9 matrix above,
`
`11.1% of the entries are 1s, and the rest are 0s, making GREPEAT2 a low-density
`
`generator matrix (LDGM). (Ex. 1104, ¶51.)
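The repeat-twice LDGM described above can be constructed and exercised in Python (an illustrative sketch; the names are hypothetical):

```python
def repeat2_generator(k):
    """Build the 2k x k generator matrix that repeats each input bit twice:
    column j has 1s in rows 2j and 2j+1 (0-indexed), 0s elsewhere."""
    G = [[0] * k for _ in range(2 * k)]
    for j in range(k):
        G[2 * j][j] = 1
        G[2 * j + 1][j] = 1
    return G

def encode(G, u):
    """Mod-2 matrix-vector product x = Gu."""
    return [sum(g * x for g, x in zip(row, u)) % 2 for row in G]

G = repeat2_generator(9)
bits = [int(c) for c in "101010101"]
out = "".join(str(b) for b in encode(G, bits))

# 18 ones out of 18 * 9 = 162 entries, i.e. about 11.1% -- a low-density matrix
density = sum(map(sum, G)) / (18 * 9)
```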
`
`
`
`14
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Permutation is another linear transform represented by a low-density
`
`generator matrix. For example, the matrix below is a permutation matrix that will
`
output bits in a different order than the input bits (Ex. 1104, ¶52):

GSHUFFLE =
0 1 0 0 0 0 0 0
1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 0 0 0 1 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 1
0 0 0 0 0 1 0 0
0 0 1 0 0 0 0 0
`
`
`Permutation matrices, like GSHUFFLE above, have exactly one 1 per row and
`
`one 1 per column. The order of the output bits is determined by the position of
`
`each 1 within its row/column. The matrix above, for example, would transform
`
`input bits i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3.
`
`Permutation matrices are square, with dimensions n×n, because the input sequence
`
`and the output sequence have the same length. Because there is only one 1 per
`
`row/column of permutation matrices, the density of 1s is 1/n (GSHUFFLE has a
`
`density of 1/8, or 12.5%). In the context of linear codes, which often operate on
`
blocks of more than a thousand bits, a permutation matrix might comprise 0.1% 1s

or fewer and, as further explained in the Davis Declaration, is therefore a very

low-density generator matrix (LDGM). (Ex. 1104, ¶53.)
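The permutation just described can be illustrated with a short Python sketch (hypothetical names; the matrix is fully determined by the output order i2, i1, i7, i5, i4, i8, i6, i3 given above):

```python
# order[k] names the input bit (1-indexed) that appears in output position k+1
order = [2, 1, 7, 5, 4, 8, 6, 3]

# Permutation matrix: row k has its single 1 in column order[k] - 1
G_shuffle = [[1 if c == order[k] - 1 else 0 for c in range(8)]
             for k in range(8)]

# Multiplying by an input vector reorders its entries; labeling the
# inputs 1..8 makes the reordering visible.
inputs = [1, 2, 3, 4, 5, 6, 7, 8]
reordered = [sum(g * x for g, x in zip(row, inputs)) for row in G_shuffle]

# Exactly one 1 per row and per column, so the density of 1s is 1/8 (12.5%)
density = sum(map(sum, G_shuffle)) / 64
```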
`
`
`3.
`
`Tanner Graphs
`
`
`
`15
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Another widespread mathematical representation of error-correcting codes is
`
`the “Tanner graph.” Tanner graphs are bipartite graphs wherein nodes can be
`
`divided into two groups. Every edge connects a node in one group to a node in the
`
`other (i.e., no two nodes in the same group are connected by an edge). A bipartite
`
graph is shown below (Ex. 1104, ¶54):

[Figure: a bipartite graph, with every edge connecting a node in one group to a node in the other]
`
`
`A Tanner graph includes one group of nodes called variable nodes that
`
`correspond to the information and parity bits, and a second group of nodes called
`
`check nodes that represent constraints that parity and information bits must satisfy.
`
`In particular, when a set of variable nodes are connected to a particular check node,
`
`it means that the information and parity bits corresponding to those variable nodes
`
`must sum to 0. (Ex. 1104, ¶55.)
`
`These two mathematical descriptions of linear codes – one using matrices,
`
`one using Tanner graphs – are alternative ways of describing the same thing, in
`
`
`
`16
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
much the same way that “0.5” and “½” describe the same number. Every generator
`
`matrix corresponds to a Tanner graph, and vice versa. (Ex. 1104, ¶56.)
`
F. Irregularity
`
`Irregular LDPC codes were first introduced in a 1997 paper by Luby et al.
`
`See Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, 1997 (Ex. 1108).
`
`The paper showed that irregular codes perform better than regular codes on certain
`
`types of noisy channels. (Ex. 1104, ¶57.)
`
`In regular codes, each information bit contributes to the same number of
`
`parity bits, while in irregular codes different information bits or groups of
`
`information bits contribute to different numbers of parity bits.3 This is consistent
`
`with the Board’s construction of “irregular” in prior proceedings. (See, e.g.,
`
`IPR2015-00060, Paper 18, p. 12.) Irregularity can also be defined in terms of
`
`Tanner graphs. A regular code has a Tanner graph in which each variable node
`
`corresponding to an information bit is connected to the same number of check
`
`nodes. Irregular codes have Tanner graphs in which some variable nodes
`
`corresponding to information bits are connected to more check nodes than others.
`
`
`3 An information bit contributes to a parity bit if the column in the generator matrix
`
`corresponding to the information bit has a 1 in the row corresponding to the parity
`
`bit.
`
`
`
`17
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`These two formulations of irregularity are alternative (and well-known) ways of
`
`describing the same concept. (Ex. 1104, ¶58.)
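The generator-matrix formulation of irregularity can be illustrated in Python (a sketch with hypothetical names and toy matrices, applying the footnote's definition of "contributes to"):

```python
def column_weights(G):
    """Number of parity bits each information bit contributes to:
    the count of 1s in that bit's column of the generator matrix."""
    return [sum(col) for col in zip(*G)]

def is_irregular(G):
    """Irregular if different information bits contribute to
    different numbers of parity bits."""
    return len(set(column_weights(G))) > 1

G_regular = [[1, 0], [1, 0], [0, 1], [0, 1]]    # each bit repeated twice
G_irregular = [[1, 0], [1, 0], [1, 0], [0, 1]]  # bit 1 thrice, bit 2 once
```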
`
`After Luby’s initial paper describing irregularity, the same team of
`
`researchers published a second paper, expanding the theory of irregularity in error-
`
`correcting codes. (See Luby, M. et al., “Analysis of Low Density Codes and
`
`Improved Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in
`
`1998; Ex. 1109). At the time, both of these papers were widely read by coding
`
`theorists, and gave rise to a great deal of research into irregular error-correcting
`
`codes. (Ex. 1104, ¶59.)
`
`For example, the 1998 Luby paper was the first reference cited in a paper by
`
`Frey and MacKay titled “Irregular Turbocodes,” presented at the 1999 Allerton
`
`Conference on Communications, Control, and Computing. (See Ex. 1110.) The
`
`second author, MacKay, was the same researcher who had rediscovered LDPC
`
`codes in 1995. In “Irregular Turbocodes,” the authors applied the concept of
`
`irregularity to turbo codes by explaining how to construct irregular turbo codes in
`
`which some information bits connect to more check nodes than others. The
`
`experimental results presented in the paper demonstrated that these irregular
`
`turbocodes perform better than regular turbocodes known in the art. (See Ex. 1110,
`
`p. 7.) (Ex. 1104, ¶60.)
`
`
`
`18
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`By May 18, 2000, the claimed priority date of the ’781 patent, it was
`
`common knowledge that the performance of any type of error-correcting code
`
`could be improved by adding irregularity. (Ex. 1104, ¶61.)
`
`V. THE ’781 PATENT
`
`A. Claims
`
`The ’781 patent includes 22 claims, of which claims 1, 13, 19, 20, and 21 are
`
`independent. Claims 13-22 are challenged in this Petition. Claims 1 and 2 are not
`
`challenged here because the Board already invalidated them in IPR2015-00059.
`
`(Ex. 1111, p. 43.) Nonetheless, the manner in which the prior art invalidates claims
`
`1 and 2 is explained herein to demonstrate why claims that depend from claims 1
`
`and 2 are also invalid.
`
`Independent claim 1 is directed to a two-step process for encoding a signal,
`
`where the first encoding step involves a linear transform operation and the second
`
`involves an accumulation operation. Independent claims 13 and 19 are directed to
`
`methods of encoding a signal that generate codewords by summing information
`
`bits and accumulating the resulting sums. Independent claims 20 and 21 are
`
`directed to methods that involve the summation (Σ) of information bits and the
`
`mod-2 summing of parity bits with those summations to generate a portion of an
`
`encoded signal. (Ex. 1104, ¶79.)
`
`
`
`19
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Although the specification and some of the claims of the ’781 patent relate
`
`to irregular codes, claims 1-8, 10-12, and 19-21 do not. These claims cover regular
`
`codes and are met by Ping’s teaching of regular codes. The Board reached this
`
`same conclusion for claims 1, 19, 20, and 21 in IPR2015-00059. (Ex. 1014, p. 13,
`
`“However, we do not read claim 1 as being limited to irregular repeat coding.”; id.,
`
`p. 14, “Claim 1 does not require an irregular repeat (the only independent claim
`
`requiring irregular repeat is claim 13),” emphasis added.)
`
B. Summary of the Specification
`
`The specification of the ’781 patent is generally directed to irregular RA
`
`codes (or “IRA” codes). Figure 2 of the specification, below, shows the structure
`
`of an IRA encoder (Ex. 1104, ¶80):
`
[Figure: Ex. 1101, Fig. 2]
`
`
`
`Explaining this figure, the patent describes encoding data using an outer
`
`coder 202 connected to an inner coder 206 via an interleaver 204 (labeled “P”) (Ex.
`
`1101, 2:39-46.) (Ex. 1104, ¶81.)
`
`
`
`20
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Outer coder 202 receives a block of information bits and duplicates each bit
`
`a given number of times, producing a sequence of repeated bits at its output. (Ex.
`
`1101, 2:57-64.) The outer coder repeats bits irregularly – i.e., it outputs more
`
`duplicates of some information bits than others. (Id.) (Ex. 1104, ¶82.)
`
`The repeated bits are passed to an interleaver 204, where they are scrambled.
`
`(Ex. 1101, 3:29-33.) The scrambled bits are then passed to the inner coder 206,
`
`where they are accumulated to form parity bits. (Id., 3:3-23.) According to the
`
`specification (Ex. 1104, ¶83):
`
[Quotation: Ex. 1101 at 3:7-23]
`
`
`
`The specification describes systematic and non-systematic codes. In a
`
`systematic code, the encoder outputs a copy of the information bits in addition to
`
`the parity bits output by inner coder 206 (the systematic output is represented in
`
`Fig. 2. as an arrow running toward the right along the top of the figure). (Ex. 1104,
`
`¶84.)
`
`
`
`21
`
`
`
`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`
`C. Level of Ordinary Skill in the Art
`
`A person of ordinary skill in the art is a person with a Ph.D. in mathematics,
`
`electrical or computer engineering, or computer science with emphasis in signal
`
`processing, communications, or coding, or a master’s degree in the above area with
`
`at least three years of work experience in this field at the time of the alleged
`
`invention. (Ex. 1104, ¶77.)
`
D. Summary of the Prosecution History
`
`On October 28, 2010, the patent examiner issued a first office action
`
`allowing some claims but rejecting claims 13-17 and 20 as anticipated by U.S.
`
Patent 5,181,207 (to Chapman, et al.) and requiring applicants to clarify the term
`
`“irregular,” as it appeared in claims 9 and 23.
`
`In a January 27, 2011 response, in order to overcome the examiner’s
`
`rejection, the applicant canceled claim 21 and incorporated its subject matter into
`
`independent claim 13. As amended, claim 13 requires that “the information bits
`
`appear in a variable number of subsets.” (Ex. 1112, Response dated Jan. 27, 2011,
`
`p. 4.)
`
`In accompanying remarks, applicants disagreed with the examiner’s
`
`statement that the term “irregular” was unclear, stating that “[i]t is believed that the
`
`meaning of the term ‘irregular’ in the claims is clear and is well known in the art of
`
`comp