U.S. Patent 7,916,781
Petition for Inter Partes Review

DOCKET NO.: 1033300-00287

UNITED STATES PATENT AND TRADEMARK OFFICE

PATENT:     7,916,781
INVENTORS:  JIN HUI, AAMOD KHANDEKAR, ROBERT J. McELIECE
FILED:      JUNE 30, 2008
ISSUED:     MARCH 29, 2011
TITLE:      SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

___________________________________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________________________________

Apple Inc.
Petitioner

v.

California Institute of Technology
Patent Owner

Case IPR2017-00297

PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 7,916,781
UNDER 35 U.S.C. § 312 AND 37 C.F.R. § 42.104

TABLE OF CONTENTS

I. Mandatory Notices ............................................................................................... 1
   A. Real Party-in-Interest .................................................................................... 1
   B. Related Matters ............................................................................................. 1
   C. Counsel .......................................................................................................... 1
   D. Service Information ....................................................................................... 2
II. Certification of Grounds for Standing ................................................................. 2
III. Overview of Challenge and Relief Requested ..................................................... 2
   A. Prior Art Patents and Printed Publications .................................................... 2
   B. Relief Requested ........................................................................................... 3
IV. Overview of the Technology ................................................................................ 4
   A. Error-Correcting Codes in General ............................................................... 4
   B. Coding Rate ................................................................................................... 7
   C. Performance of Error-Correcting Codes ....................................................... 7
   D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes ......................... 8
   E. Mathematical Representations of Error-Correcting Codes ......................... 12
   F. Irregularity ................................................................................................... 17
V. The ’781 Patent .................................................................................................. 19
   A. Claims ......................................................................................................... 19
   B. Summary of the Specification ..................................................................... 19
   C. Level of Ordinary Skill in the Art ............................................................... 21
   D. Summary of the Prosecution History .......................................................... 21
VI. Claim Construction ............................................................................................ 22
   A. “codeword” (all claims) .............................................................................. 23
   B. “linear transformation” (Claim 1) ............................................................... 23
VII. Overview of Primary Prior Art References ...................................................... 24
   A. Ping ............................................................................................................. 24
   B. MacKay ....................................................................................................... 29
   C. Divsalar ....................................................................................................... 31
   D. Coombes ...................................................................................................... 33
VIII. Grounds for Challenge ..................................................................................... 34
   A. Ground 1: Claims 1-3, 5-8, 10, and 12 Are Obvious Over Ping in View of Divsalar ....................................................................................................... 34
   B. Ground 2: Claims 19-21 Are Anticipated by Ping ..................................... 57
   C. Ground 3: Claim 9 Is Obvious Over Ping in View of Divsalar, and Further in View of MacKay ..................................................................................... 60
   D. Ground 4: Claims 4 and 11 Are Obvious Over Ping in View of Divsalar, and Further in View of Coombes ................................................................ 66
IX. Conclusion .......................................................................................................... 70

I. MANDATORY NOTICES

A. Real Party-in-Interest

Apple Inc. (“Apple” or “Petitioner”) and Broadcom Corp. are the real parties-in-interest.

B. Related Matters

U.S. Pat. No. 7,916,781 (the “’781 patent,” Ex. 1001) is assigned to the California Institute of Technology (“Caltech” or “Patent Owner”). On May 26, 2016, Caltech sued Apple, Broadcom Corp., and Avago Technologies, Ltd. in the U.S. District Court for the Central District of California, claiming that Apple products compliant with the 802.11n and 802.11ac wireless communication standards infringe the ’781 patent (and three others). On August 15, 2016, Caltech amended its complaint to assert patent infringement against Cypress Semiconductor Corp. See Amended Complaint, California Institute of Technology v. Broadcom, Ltd. et al. (Case 2:16-cv-03714), Docket No. 36. The ’781 patent was also asserted by Caltech against Hughes Communications Inc. in California Institute of Technology v. Hughes Communs., Inc. (Case 2:13-cv-07245), and its claims were challenged in one petition for inter partes review, IPR2015-00059. Patents in the priority chain of the ’781 patent were challenged in IPR2015-00068, IPR2015-00067, IPR2015-00060, IPR2015-00061, and IPR2015-00081.

C. Counsel

Lead Counsel: Richard Goldenberg (Registration No. 38,895)

Backup Counsel: Brian M. Seeve (Registration No. 71,721)

D. Service Information

E-mail: richard.goldenberg@wilmerhale.com

Post and Hand Delivery: WilmerHale, 60 State St., Boston MA 02109

Telephone: 617-526-6548

II. CERTIFICATION OF GROUNDS FOR STANDING

Petitioner certifies pursuant to Rule 42.104(a) that the patent for which review is sought is available for inter partes review and that Petitioner is not barred or estopped from requesting an inter partes review challenging the patent claims on the grounds identified in this Petition.

III. OVERVIEW OF CHALLENGE AND RELIEF REQUESTED

Pursuant to Rules 42.22(a)(1) and 42.104(b)(1)-(2), Petitioner challenges claims 3-12 and 19-21 of the ’781 Patent (“the challenged claims”) and requests that each challenged claim be canceled.1

1 Petitioner has filed a second petition challenging claims 13-22 of the ʼ781 patent, which is also dated December 12, 2016.

A. Prior Art Patents and Printed Publications

Petitioner relies upon the patents and printed publications listed in the Table of Exhibits, including:

1. L. Ping, W. K. Leung, N. Phamdo, “Low Density Parity Check Codes with Semi-random Parity Check Matrix,” Electron. Letters, Vol. 35, No. 1, pp. 38-39, January 7, 1999 (“Ping”; Ex. 1003).

2. D. J. C. MacKay, S. T. Wilson, and M. C. Davey, “Comparison of constructions of irregular Gallager codes,” IEEE Trans. Commun., Vol. 47, No. 10, pp. 1449-54, October 1999 (“MacKay”; Ex. 1002).

3. D. Divsalar, H. Jin, and R. J. McEliece, “Coding theorems for ‘turbo-like’ codes,” Proc. 36th Allerton Conf. on Comm., Control and Computing, Allerton, Illinois, pp. 201-10, September 1998 (“Divsalar”; Ex. 1017).

4. U.S. Patent No. 4,271,520 (1981) (“Coombes”; Ex. 1018).

The ’781 patent claims priority to a provisional application filed on May 18, 2000, and to U.S. application Ser. No. 09/922,852, filed on Aug. 18, 2000. The Petitioner reserves the right to dispute that the challenged claims are entitled to a priority date of May 18, 2000. However, even if the challenged claims are entitled to this priority date, the above references qualify as prior art under 35 U.S.C. § 102.

B. Relief Requested

Petitioner requests that the Patent Trial and Appeal Board cancel the challenged claims because they are unpatentable under 35 U.S.C. §§ 102 and 103. This conclusion is further supported by the declaration of Professor James Davis, Ph.D. (“Davis Declaration,” Ex. 1004).

`
`IV. OVERVIEW OF THE TECHNOLOGY
`
`The ’781 patent relates to the field of channel coding and error-correcting
`
`codes. This section provides an introduction to channel coding and error-correcting
`
`codes, highlighting developments relevant to the ’781 patent. (Ex. 1004, ¶21.)
`
`A. Error-Correcting Codes in General
`
`Digital electronics use bits to represent information. A bit is a binary unit of
`
`information that may be a 1 or 0. Any type of information can be represented
`
`digitally with bits. (Ex. 1004, ¶22.)
`
`When transmitting binary information over an analog communication
`
`channel, the data bits representing the information (“information bits”) are
`
`converted into an analog signal. This process is called modulation. The transmitted
`
`signal is then received by a device and converted back into binary form through
`
`demodulation. (Ex. 1004, ¶23.)
`
`
`
`4
`
`
`
`

`

`U.S. Patent 7,916,781
`Petition for Inter Partes Review
`Modulation, Transmission, and Demodulation
`
Transmission over physical channels is rarely 100% reliable. The transmitted signal can be corrupted during transmission by “noise” caused by, e.g., obstructions in the signal path, interference from other signals, or electrical/magnetic disturbances. Noise can cause bits to “flip” during transmission, causing a bit transmitted as a 1 to be demodulated as a 0, and vice versa. (Ex. 1004, ¶24.)

Error-correcting codes were developed to mitigate transmission errors. Using the information bits, an error-correcting code generates “parity bits” that allow the receiver to verify that the information bits were transmitted correctly, and to correct transmission errors. (Ex. 1004, ¶25.)

Bits are encoded by an encoder, which receives a sequence of information bits as input, generates parity bits with an encoding algorithm, and outputs a sequence of encoded bits called a codeword. The codeword produced by the encoder is then modulated and transmitted as an analog signal. (Ex. 1004, ¶26.)

At the receiver, the signal is demodulated, and the codeword is passed to the decoder, which uses a decoding algorithm to recover the original information bits. (Ex. 1004, ¶27.)

[Figure: Encoding and Decoding]

Error-correcting codes work by adding redundant information to the original message. Due to redundancy, the information represented by a given information bit is spread across multiple bits of the codeword. Thus, if a codeword bit is flipped during transmission, the original information bit can still be recovered from the other codeword bits. (Ex. 1004, ¶28.)

As a simple example, consider an encoding scheme, called “repeat-three,” that outputs three copies of each information bit. In this scheme, the information bits “1 0 1” would be encoded as “111 000 111.” Upon receipt, the decoder converts instances of “111” into “1” and instances of “000” into “0” to produce the decoded bits “1 0 1,” which match the original information bits. (Ex. 1004, ¶29.)

Suppose a bit is flipped during transmission, changing “000” to “010.” The decoder can detect a transmission error, because “010” is not a valid “repeat-three” codeword. Using a “majority vote” rule, the decoder can infer that the original information bit was a 0, correcting the transmission error. Thus, due to the redundancy of the codeword, no information was lost. (Ex. 1004, ¶30.)

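For illustration only, the “repeat-three” scheme described above can be sketched in a few lines of Python; the function names and test values below are illustrative and are not taken from the record.

    # Minimal sketch of the "repeat-three" code: each bit is sent three times,
    # and the decoder takes a majority vote over each group of three received bits.
    def encode_repeat3(bits):
        return [b for b in bits for _ in range(3)]

    def decode_repeat3(received):
        decoded = []
        for i in range(0, len(received), 3):
            group = received[i:i + 3]
            decoded.append(1 if sum(group) >= 2 else 0)
        return decoded

    codeword = encode_repeat3([1, 0, 1])           # [1,1,1, 0,0,0, 1,1,1]
    corrupted = list(codeword)
    corrupted[4] = 1                               # one flipped bit: "000" becomes "010"
    assert decode_repeat3(corrupted) == [1, 0, 1]  # the majority vote corrects the error
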
Error-correcting codes are either systematic or non-systematic. (Ex. 1020, p. 12, “Also, a [binary convolutional encoder] can be systematic or non-systematic.”; id. pp. 12-13, Figures 2.1 and 2.2, showing systematic and non-systematic encoders.) In a systematic code, parity bits and information bits are included in the codeword. (Ex. 1020, p. 14, “a systematic encoder is one for which the encoder input (the data) forms a substring of the output (the codeword)”; Ex. 1021, pp. 6, 229.) In a non-systematic code, the encoded data includes only parity bits. (Id.) Systematic and non-systematic codes had been known for decades before May 18, 2000, the claimed priority date of the ʼ781 patent. (Ex. 1004, ¶¶31-32.)

B. Coding Rate

Many error-correcting codes encode information bits in fixed-length groups, or blocks. An encoder receives a k-bit block of information bits as input and produces a corresponding n-bit codeword. The ratio k/n is called the rate of the code. Because the codeword generally includes redundant information, n is generally greater than k, and the rate k/n of an error-correcting code is generally less than one. (Ex. 1004, ¶33.)

C. Performance of Error-Correcting Codes

One metric used to assess the performance of an error-correcting code is its bit-error rate (BER). BER is the number of corrupted information bits divided by the total number of information bits during a particular interval. For example, if a decoder outputs one thousand bits, and ten of those bits are corrupted (i.e., they differ from the original information bits), then the BER of the code during that interval is (10 bit errors) / (1000 total bits) = 0.01, or 1%. (Ex. 1004, ¶¶34-35.)

The BER of a transmission depends on the amount of noise in the communication channel and the strength of the transmitted signal (i.e., the power used to transmit the signal). An increase in noise tends to increase the error rate, and an increase in signal strength tends to decrease the error rate. The ratio of signal strength to noise, called the “signal-to-noise ratio,” is often used to characterize the channel over which the encoded signal is transmitted. The signal-to-noise ratio can be expressed mathematically as Eb/N0, in which Eb is the amount of energy used to transmit each bit of the signal, and N0 is the density of the noise. The BER of an error-correcting code is often measured for multiple values of Eb/N0 to determine how the code performs under various channel conditions. (Ex. 1004, ¶36.)

D. LDPC Codes, Turbocodes, and Repeat-Accumulate Codes

In 1963, Robert Gallager described a set of error-correcting codes called Low Density Parity Check (“LDPC”) codes. Gallager described how LDPC codes provide one method of generating parity bits from information bits using a matrix populated with mostly 0s and relatively few 1s, as explained below. (See Gallager, R., Low-Density Parity-Check Codes, Monograph, M.I.T. Press, 1963; Ex. 1005.) (Ex. 1004, ¶38.)

Gallager’s work was largely ignored over the following decades, as researchers discovered other algorithms for calculating parity bits. These algorithms included, for example, convolutional encoding with Viterbi decoding and cyclic code encoding with bounded distance decoding. In many cases, these new codes could be decoded using low-complexity decoding algorithms. (Ex. 1004, ¶39.)

In 1993, researchers discovered “turbocodes,” a class of error-correcting codes capable of transmitting information at a rate close to the so-called “Shannon Limit” – the maximum rate at which information can be transmitted over a channel. See Berrou et al., “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes,” ICC ’93, Technical Program, Conference Record 1064, Geneva 1993 (Ex. 1006.) (Ex. 1004, ¶40.)

As explained in the Davis Declaration, the main drawback of convolutional codes is that they do not perform well over channels in which errors are clustered tightly together. Turbocodes overcome this deficiency by encoding the input bits twice. The input bits are fed to a first convolutional encoder in their normal order to produce a first set of parity bits. The same input bits are also reordered by an interleaver (i.e., a component that shuffles the input bits and outputs them in a different order) and then encoded by a second convolutional encoder to produce a second set of parity bits. The two sets of parity bits, together with the information bits, form a single codeword. Using a turbocode, a small number of errors will not result in loss of information unless the errors happen to fall close together in both the original data stream and in the permuted data stream, which is unlikely. (Ex. 1004, ¶41.)

In 1995, David J. C. MacKay rediscovered Gallager’s work from 1963 relating to low-density parity-check (LDPC) codes, and demonstrated that LDPC codes have performance comparable to that of turbocodes. See MacKay, D. J. C., and Neal, R. M., “Near Shannon Limit Performance of Low Density Parity Check Codes,” Electronics Letters, vol. 32, pp. 1645-46, 1996 (Ex. 1016.) This rediscovery was met with wide acclaim. Turbocodes and LDPC codes have common characteristics: both codes use pseudo-random permutations to spread out redundancy, and both use iterative decoding algorithms. (Ex. 1004, ¶42.)

In 1995 and 1996, researchers began exploring “concatenated” convolutional codes. See Benedetto, S. et al., Serial Concatenation of Block and Convolutional Codes, 32.10 Electronics Letters 887-88, 1996 (Ex. 1007.) While turbocodes use two convolutional coders connected in parallel, concatenated convolutional codes use two convolutional coders connected in series: the information bits are encoded by a first encoder, the output of the first encoder is interleaved, and the interleaved sequence is encoded by a second encoder. In such codes, the first and second encoders are often called the “outer coder” and the “inner coder,” respectively. (Ex. 1004, ¶43.)

In 1998, researchers developed “repeat-accumulate,” or “RA,” codes by simplifying the principles underlying turbocodes. (See Ex. 1003.) In RA codes, the information bits are first passed to a repeater that repeats (i.e., duplicates) them and outputs a stream of repeated bits (the encoder described above in the “repeat-three” coding scheme is one example of a repeater). The repeated bits are then optionally reordered by an interleaver and passed to an “accumulator” to form parity bits. (Ex. 1004, ¶44.)

In accumulation, each input bit is added to the previous input bits to produce a sequence of running sums, each of which represents the sum of all input bits yet received. More formally, if an accumulator receives a sequence of input bits i1, i2, i3, … in, it will produce output bits o1, o2, o3, … on, such that:

o1 = i1,  o2 = i1 ⊕ i2,  o3 = i1 ⊕ i2 ⊕ i3,  …,  on = i1 ⊕ i2 ⊕ … ⊕ in

The ⊕ symbol denotes modulo-2 addition.2 Accumulators can be implemented simply, allowing accumulate codes to be encoded quickly and cheaply. (Ex. 1004, ¶45.)

2 Modulo-2 addition (also called “XOR”) is an addition operation that is performed on bits, defined as follows: 1 ⊕ 1 = 0, 1 ⊕ 0 = 1, 0 ⊕ 1 = 1, and 0 ⊕ 0 = 0.

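For illustration only, the repeat, interleave, and accumulate steps described above can be sketched in Python as follows; the repetition factor and permutation used are arbitrary examples chosen for this sketch, not values taken from any reference of record.

    # Minimal repeat-accumulate (RA) encoding sketch: repeat, interleave, accumulate.
    def repeat(bits, q):
        # Output q copies of each information bit.
        return [b for b in bits for _ in range(q)]

    def interleave(bits, permutation):
        # Reorder the bits according to a permutation of their indices.
        return [bits[p] for p in permutation]

    def accumulate(bits):
        # Running modulo-2 sum: each output bit is the XOR of all input bits so far.
        out, running = [], 0
        for b in bits:
            running ^= b
            out.append(running)
        return out

    info = [1, 0, 1]
    repeated = repeat(info, 3)                    # [1,1,1, 0,0,0, 1,1,1]
    permutation = [0, 3, 6, 1, 4, 7, 2, 5, 8]     # example permutation only
    parity = accumulate(interleave(repeated, permutation))
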
E. Mathematical Representations of Error-Correcting Codes

1. Linear Transformations

Coding theorists often describe error-correcting codes in linear-algebraic terms. For instance, a k-bit block of information bits is a k-dimensional vector of bits, and an n-bit codeword is an n-dimensional vector of bits (a “k-dimensional vector of bits” is a sequence of k bits). The encoding process, which converts blocks of information bits into codewords, is a linear transformation that maps k-dimensional bit vectors to n-dimensional bit vectors. This transformation is represented by an n × k matrix G called a generator matrix. For a vector of information bits u, the corresponding codeword x is given by:

x = Gu

The n-dimensional vectors that can be written in the form Gu are valid codewords. (Ex. 1004, ¶46.)

Most n-dimensional vectors are not valid codewords. It is this property that allows a decoder to determine when there has been an error during transmission. This determination is made using an (n - k) × n matrix H, called a parity-check matrix. Using a parity-check matrix, a vector x is a valid codeword if and only if Hx = 0. As explained in the Davis Declaration, if the parity check shows that Hx does not equal 0, then the decoder detects a transmission error (like the “010” example in the “repeat-three” code above). In practice, parity-check matrices often have hundreds or thousands of rows, each of which represents an equation of the form xa + xb + … + xz = 0. These equations are called parity-check equations. (Ex. 1004, ¶¶47-48.)

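For illustration only, the relationship between a generator matrix, a parity-check matrix, and the test Hx = 0 can be shown with a small example; the particular G and H below (for a repeat-three code with k = 2 and n = 6) are constructed for this sketch and are not taken from the record.

    import numpy as np

    # Generator matrix G (n x k = 6 x 2) for a repeat-three code, and a matching
    # parity-check matrix H ((n - k) x n = 4 x 6).  All arithmetic is modulo 2.
    G = np.array([[1, 0], [1, 0], [1, 0],
                  [0, 1], [0, 1], [0, 1]])
    H = np.array([[1, 1, 0, 0, 0, 0],
                  [0, 1, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 0],
                  [0, 0, 0, 0, 1, 1]])

    u = np.array([1, 0])            # information bits
    x = (G @ u) % 2                 # codeword x = Gu (mod 2) -> [1 1 1 0 0 0]
    print((H @ x) % 2)              # [0 0 0 0]: a valid codeword satisfies Hx = 0

    x_err = x.copy()
    x_err[1] ^= 1                   # flip one bit during "transmission"
    print((H @ x_err) % 2)          # nonzero result reveals the transmission error
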
2. Repeating and Permuting as Examples of LDGM Codes

As explained above, the encoding process can be represented using a generator matrix. This is true of all linear codes, including the “repeat,” “interleave,” and “accumulate” phases of the “repeat-accumulate” codes discussed above. Specifically, the “repeat” and “interleave” phases can be represented using generator matrices whose entries are mostly 0s, with a relatively low proportion of 1s. Such generator matrices are called “low-density generator matrices” (LDGMs). The following is a generator matrix that repeats each information bit twice (Ex. 1004, ¶50):

[GREPEAT2: an 18 × 9 matrix in which each column contains exactly two 1s, located in the two rows that carry the two copies of the corresponding input bit, and 0s elsewhere]

Using this generator matrix, encoding a sequence of input bits “101010101” would result in the encoded sequence “110011001100110011” (the number of times each input bit is repeated is determined by the number of 1s in the column corresponding to that input bit). In the 18×9 matrix above, 11.1% of the entries are 1s, and the rest are 0s, making GREPEAT2 a low-density generator matrix (LDGM). (Ex. 1004, ¶51.)

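For illustration only, the repeat-twice generator matrix and the encoding result described above can be reproduced with the following sketch; the construction of the matrix is inferred from the description in the text.

    import numpy as np

    # 18 x 9 "repeat-twice" generator matrix: column j has 1s in the two rows that
    # carry the two copies of input bit j, and 0s elsewhere.
    k = 9
    G_repeat2 = np.zeros((2 * k, k), dtype=int)
    for j in range(k):
        G_repeat2[2 * j, j] = 1
        G_repeat2[2 * j + 1, j] = 1

    u = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1])     # input bits "101010101"
    x = (G_repeat2 @ u) % 2
    print("".join(map(str, x)))                   # 110011001100110011
    print(G_repeat2.sum() / G_repeat2.size)       # about 0.111, i.e., 11.1% of entries are 1s
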
Permutation is another linear transformation represented by a low-density generator matrix. For example, the matrix below is a permutation matrix that will output bits in a different order than the input bits (Ex. 1004, ¶52):

[GSHUFFLE: an 8 × 8 permutation matrix with exactly one 1 in each row and each column, arranged so that input bits i1, i2, i3, i4, i5, i6, i7, i8 are output in the order i2, i1, i7, i5, i4, i8, i6, i3]

Permutation matrices, like GSHUFFLE above, have exactly one 1 per row and one 1 per column. The order of the output bits is determined by the position of each 1 within its row/column. The matrix above, for example, would transform input bits i1, i2, i3, i4, i5, i6, i7, i8 into the output sequence i2, i1, i7, i5, i4, i8, i6, i3. Permutation matrices are square, with dimensions n×n, because the input sequence and the output sequence have the same length. With only one 1 per row/column, the density of 1s in a permutation matrix is 1/n (GSHUFFLE has a density of 1/8, or 12.5%). Linear codes often operate on blocks of more than a thousand bits, so a permutation matrix might comprise 0.1% 1s or fewer; as explained in the Davis Declaration, permutation matrices are therefore very low-density generator matrices (LDGMs). (Ex. 1004, ¶53.)

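For illustration only, a permutation matrix producing the output order described above can be constructed as follows; the matrix is reconstructed from the stated output order and may differ in layout from the figure omitted above.

    import numpy as np

    # 8 x 8 permutation matrix with one 1 per row and per column, arranged so that
    # inputs i1..i8 are output in the order i2, i1, i7, i5, i4, i8, i6, i3.
    order = [2, 1, 7, 5, 4, 8, 6, 3]              # 1-indexed positions, as in the text
    P = np.zeros((8, 8), dtype=int)
    for row, src in enumerate(order):
        P[row, src - 1] = 1

    inputs = np.array([10, 20, 30, 40, 50, 60, 70, 80])   # stand-ins for i1..i8
    print(P @ inputs)                             # [20 10 70 50 40 80 60 30]
    print(P.sum() / P.size)                       # 0.125, i.e., a density of 1/8
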
3. Tanner Graphs

Another widespread mathematical representation of error-correcting codes is the “Tanner graph.” Tanner graphs are bipartite graphs, meaning that their nodes can be divided into two groups such that every edge connects a node in one group to a node in the other (i.e., no two nodes in the same group are connected by an edge). A bipartite graph is shown below (Ex. 1004, ¶54):

[Figure: a bipartite graph, with every edge connecting a node in one group to a node in the other group]

A Tanner graph includes one group of nodes, called variable nodes, that correspond to the information and parity bits, and a second group of nodes, called check nodes, that represent constraints that the parity and information bits must satisfy. In particular, when a set of variable nodes is connected to a particular check node, the information and parity bits corresponding to those variable nodes must sum to 0. (Ex. 1004, ¶55.)

These two mathematical descriptions of linear codes – one using matrices, one using Tanner graphs – are alternative ways of describing the same thing, in much the same way that “0.5” and “½” describe the same number. Every generator matrix corresponds to a Tanner graph, and vice versa. (Ex. 1004, ¶56.)

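For illustration only, the correspondence between a matrix description and a Tanner graph can be sketched by treating each row of a parity-check matrix as a check node, each column as a variable node, and each 1 as an edge; the matrix H below is the illustrative parity-check matrix used earlier in this section, not one taken from the record.

    import numpy as np

    H = np.array([[1, 1, 0, 0, 0, 0],
                  [0, 1, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 0],
                  [0, 0, 0, 0, 1, 1]])

    # Each 1 in H becomes an edge between a check node (row) and a variable node (column).
    edges = [(f"check_{r}", f"var_{c}")
             for r in range(H.shape[0])
             for c in range(H.shape[1])
             if H[r, c] == 1]
    print(edges)   # each check node connects the variable nodes whose bits must sum to 0 (mod 2)
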
F. Irregularity

Irregular LDPC codes were first introduced in a 1997 paper by Luby et al. See Luby, M. et al., “Practical Loss-Resilient Codes,” STOC ’97, 1997 (Ex. 1008). The paper showed that irregular codes perform better than regular codes on certain types of noisy channels. (Ex. 1004, ¶57.)

In regular codes, each information bit contributes to the same number of parity bits, while in irregular codes different information bits or groups of information bits contribute to different numbers of parity bits.3 This is consistent with the Board’s construction of “irregular” in prior proceedings. (See, e.g., IPR2015-00060, Paper 18, p. 12.) Irregularity can also be defined in terms of Tanner graphs. A regular code has a Tanner graph in which each variable node corresponding to an information bit is connected to the same number of check nodes. Irregular codes have Tanner graphs in which some variable nodes corresponding to information bits are connected to more check nodes than others. These two formulations of irregularity are alternative (and well-known) ways of describing the same concept. (Ex. 1004, ¶58.)

3 An information bit contributes to a parity bit if the column in the generator matrix corresponding to the information bit has a 1 in the row corresponding to the parity bit.

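For illustration only, the first formulation of irregularity can be checked mechanically by counting, for each information bit, the number of 1s in its column of the generator matrix (the definition given in footnote 3); the small matrices below are constructed for this sketch and are not taken from any reference of record.

    import numpy as np

    def is_irregular(G):
        # A code is irregular if different information bits (columns) contribute
        # to different numbers of parity bits (1s per column).
        column_weights = G.sum(axis=0)
        return len(set(column_weights.tolist())) > 1

    G_regular = np.array([[1, 0, 1],
                          [1, 1, 0],
                          [0, 1, 1]])      # every column has weight 2
    G_irregular = np.array([[1, 0, 1],
                            [1, 1, 1],
                            [0, 0, 1]])    # column weights 2, 1, 3
    print(is_irregular(G_regular))         # False
    print(is_irregular(G_irregular))       # True
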
After Luby’s initial paper describing irregularity, the same team of researchers published a second paper, expanding the theory of irregularity in error-correcting codes. (See Luby, M. et al., “Analysis of Low Density Codes and Improved Designs Using Irregular Graphs,” STOC ’98, pp. 249-59, published in 1998; Ex. 1009.) At the time, both of these papers were widely read by coding theorists, and motivated extensive research into irregularity. (Ex. 1004, ¶59.)

For example, the 1998 Luby paper was the first reference cited in a paper by Frey and MacKay titled “Irregular Turbocodes,” presented at the 1999 Allerton Conference on Communications, Control, and Computing. (See Ex. 1010.) The second author, MacKay, was the same researcher who had rediscovered LDPC codes in 1995. In “Irregular Turbocodes,” the authors applied the concept of irregularity to turbocodes by explaining how to construct irregular turbocodes in which some information bits connect to more check nodes than others. The experimental results presented in the paper demonstrated that these irregular turbocodes perform better than regular turbocodes known in the art. (See Ex. 1010, p. 7.) (Ex. 1004, ¶60.)

By May 18, 2000, the claimed priority date of the ’781 patent, it was common knowledge that the performance of any type of error-correcting code could be improved with irregularity. (Ex. 1004, ¶61.)

V. THE ’781 PATENT

A. Claims

The ’781 patent includes 22 claims, of which claims 1, 13, 19, 20, and 21 are independent. Claims 3-12 and 19-21 are challenged in this Petition. Claims 1 and 2 are not challenged because the Board invalidated them in IPR2015-00059. (Ex. 1011, p. 43.) Nonetheless, this Petition explains why claims 1 and 2 are invalid in order to demonstrate why claims that depend on claims 1 and 2 are also invalid.

Independent claim 1 is directed to a two-step process for encoding a signal, where the first encoding step involves a linear transform operation and the second involves an accumulation operation. Independent claims 13 and 19 are directed to methods of encoding a signal that generate codewords by summing information bits and accumulating the resulting sums. Independent claims 20 and 21 are directed to methods that involve the summation (Σ) of information bits and the mod-2 summing of parity bits with those summations to generate a portion of an encoded signal. (Ex. 1004, ¶86.)

B. Summary of the Specification

The specification of the ’781 patent is generally directed to irregular RA codes (or “IRA” codes). Figure 2 of the specification, below, shows the structure of an IRA encoder (Ex. 1004, ¶87):

[Figure: Ex. 1001, Fig. 2]

Explaining this figure, the patent describes encoding data using an outer coder 202 connected to an inner coder 206 via an interleaver 204 (labeled “P”). (Ex. 1001, 2:39-46.) (Ex. 1004, ¶88.)

Outer coder 202 receives a block of information bits and duplicates each bit a given number of times, producing a sequence of repeated bits at its output. (Ex. 1001, 2:57-64.) The outer coder repeats bits irregularly – i.e., it outputs more duplicates of some information bits than others. (Id.) (Ex. 1004, ¶89.)

The repeated bits are passed to an interleaver 204, where they are scrambled. (Ex. 1001, 3:29-33.) The scrambled bits are then passed to the inner coder 206, where they are accumulated to form parity bits. (Id., 3:3-23.) According to the specification (Ex. 1004, ¶90):

[Block quotation of Ex. 1001 at 3:7-23]

The specification describes systematic and non-systematic codes. In a systematic code, the encoder outputs a copy of the information bits in addition to the parity bits output by inner coder 206 (the systematic output is represented in Fig. 2 as an arrow running toward the right along the top of the figure). (Ex. 1004, ¶91.)

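For illustration only, the irregular repeat, interleave, and accumulate structure described above can be sketched as follows; the repetition profile, permutation, and input bits are arbitrary values chosen for this sketch and are not taken from the ’781 patent.

    # Minimal irregular repeat-accumulate (IRA) encoding sketch with a systematic output.
    def irregular_repeat(bits, repeats):
        # Repeat each information bit a (possibly different) number of times.
        return [b for b, r in zip(bits, repeats) for _ in range(r)]

    def accumulate(bits):
        # Running modulo-2 sum over the scrambled, repeated bits.
        out, running = [], 0
        for b in bits:
            running ^= b
            out.append(running)
        return out

    info = [1, 0, 1, 1]
    repeats = [2, 3, 2, 3]                          # irregular: some bits repeated more than others
    repeated = irregular_repeat(info, repeats)      # 10 repeated bits
    permutation = [0, 5, 2, 7, 4, 9, 6, 1, 8, 3]    # example interleaver permutation
    scrambled = [repeated[p] for p in permutation]
    parity = accumulate(scrambled)
    codeword = info + parity                        # systematic: information bits plus parity bits
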
C. Level of Ordinary Skill in the Art

A person of ordinary skill in the art is a person with a Ph.D. in mathematics, electrical or computer engineering, or computer science with an emphasis in signal processing, communications, or coding, or a master’s degree in one of the above areas with at least three years of work experience in this field at the time of the alleged invention. (Ex. 1004, ¶84.)

D. Summary of the Prosecution History

On October 28, 2010, the patent examiner issued a first office action allowing some claims but rejecting claims 13-17 and 20 as anticipated by U.S. Patent No. 5,181,207 and requiring the applicants to clarify the term “irregular” in claims 9 and 23.

In a January 27, 2011 response, in order to overcome the rejection, the applicants canceled claim 21 and incorporated its subject matter into independent claim 13. As amended, claim 13 requires that “the information bits appear in a variable number of subsets.” (Ex. 1012, Response dated Jan. 27, 2011, p. 4.)

In accompanying remarks, the applicants disagreed with the examiner’s statement that the term “irregular” was unclear, stating that “[i]t is believed that the meaning of the term ‘irregular’ in the claims is clear and is well known in the art of computer coding technology.” (Id., p. 7, emphasis added.) However, to overcome the rejection, the applicants amended claims 9 and 23 to remove the word “irregular,” replacing it with the requirement that the information bits appear “in a variable number of subsets.” (Id., pp. 3, 6.)

VI. CLAIM CONSTRUCTION

A claim
