`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
In Re: U.S. Patent 7,116,710

Inventor: Jin, Hui, et al.

Filed: May 18, 2001

Issued: October 3, 2006

Attorney Docket No. 082944.0102

IPR No. Unassigned

Assignee: California Institute of Technology

Title: Serial Concatenation of Interleaved Convolutional Codes Forming
Turbo-Like Codes
`
`
`
`Mail Stop PATENT BOARD
`Patent Trial and Appeal Board
`U.S. Patent and Trademark Office
`P.O. Box 1450
`Alexandria, Virginia 22313-1450
`
`Submitted Electronically via the Patent Review Processing System
`
`
`
`DECLARATION OF HENRY D. PFISTER
`
`
`
`Active 17009673.6
`
`Hughes, Exh. 1010, p. 1
`
`
`
`TABLE OF CONTENTS
`
`Page
`
`I. Background and Qualifications ............................................................... 1
`II. Legal Understanding ............................................................................... 3
`A. Anticipation ..................................................................................... 3
`B. Obviousness ..................................................................................... 4
`III. Background on the Relevant Technology ............................................... 7
`A. Review of Linear Error-Correcting Codes ...................................... 7
`B. Background on Turbo Codes and Modern Coding Theory .......... 11
`IV. The Patents at Issue ............................................................................... 18
`A. The ‘710 Patent ............................................................................. 18
`V. The Prior Art ......................................................................................... 22
`A. Prior Art Considered ..................................................................... 22
`VI. Construction of Claims ......................................................................... 25
`A. Level of Ordinary Skill in the Art ................................................. 26
`B. “Repeating” Terms ........................................................................ 27
`C. “Irregularly” .................................................................................. 28
`D. “Interleaving” / “Interleaver” / “Scramble” .................................. 28
E.
“Rate close to one” ........................................................ 29
F.
“Stream” ........................................................................ 29
VII. Invalidity ............................................................................... 31
A.
‘710 Patent, Claim 1: Frey & MacKay – Anticipation ................. 31
‘710 Patent Claim 1 ..................................................... 31
B.
‘710 Patent Claim 1: Frey & MacKay – Obviousness .................. 38
‘710 Patent Claim 3 ..................................................... 42
‘710 Patent Claim 4 ..................................................... 44
‘710 Patent Claim 5 ..................................................... 46
‘710 Patent Claim 6 ..................................................... 47
‘710 Patent Claim 15 ................................................... 49
‘710 Patent Claim 16 ................................................... 58
‘710 Patent Claim 20 ................................................... 60
‘710 Patent Claim 21 ................................................... 64
‘710 Patent Claim 22 ................................................... 65
C.
‘710 Patent Claim 1: Divsalar in Combination with Luby ‘909
Patent – Obviousness ..................................................... 66
‘710 Patent Claim 3 ..................................................... 79
‘710 Patent Claim 4 ..................................................... 80
‘710 Patent Claim 5 ..................................................... 81
‘710 Patent Claim 6 ..................................................... 81
‘710 Patent Claim 15 ................................................... 84
‘710 Patent Claim 16 ................................................... 94
‘710 Patent Claim 20 ................................................... 97
‘710 Patent Claim 21 ................................................. 101
‘710 Patent Claim 22 ................................................. 102
`
`
`
`I, Henry D. Pfister, declare as follows:
`
`1.
`
`I make this declaration based upon my own personal knowledge
`
`and, if called upon to testify, would testify competently to the matters
`
`contained herein.
`
`2.
`
`I have been asked to provide technical assistance in the inter
`
`partes review of U.S. Patent No. 7,116,710 ("the ’710 Patent”). This
`
`declaration is a statement of my opinions on issues related to the
`
`patentability of claims 1, 3, 4, 5, 6, 15, 16, 20, 21, and 22 of the ‘710 Patent.
`
`I have also reviewed and analyzed U.S. Patents 8,284,833 (“‘833 patent”),
`
`7,421,032 (“’032 patent”) and 7,916,781(“’781 patent”).
`
`I.
`
`Background and Qualifications
`
`3. My qualifications are stated more fully in my curriculum vitae.
`
`Here I provide a brief summary of my qualifications. I was an active
`
`contributor and collaborator in the community that included the inventors
`
and subject matter of the ‘710, ‘032, ‘781, and ‘833 patents before and after
`
`May 2000. From 1997-2003, I was a graduate student at the University of
`
`California, San Diego (UCSD) studying electrical engineering. During
`
`1998-1999, I took my first sequence of graduate courses on error-correcting
`
`codes and my course project for the 1999 winter quarter was on repeat-
`
accumulate codes. This work led to my first conference paper, which was

entitled “The serial concatenation of rate-1 codes through uniform random

interleavers”; it was presented at the Allerton conference in 1999 and may be

referred to herein as “Pfister.”1 This paper generalized the idea of repeat-
`
`accumulate codes to include arbitrary outer codes and multiple inner
`
`accumulate codes. I continued my studies and eventually became a
`
`professor. Now, I am an associate professor in the Department of Electrical
`
`and Computer Engineering at Duke University. Before this appointment, I
`
`taught in the Department of Electrical and Computer Engineering of Texas
`
`A&M University, College Station from 2006-2014. I received my Ph.D. in
`
`electrical engineering from the University of California, San Diego (UCSD)
`
`in 2003 and then spent two years in R&D at Qualcomm, Inc., and one year
`
`as a post-doctoral researcher at the Swiss Federal Institute of Technology,
`
`Lausanne (EPFL). I am a co-author on more than 60 peer-reviewed
`
`publications on information theory, error-correcting codes, and wireless
`
`communication including the IEEE Communications Society 2007 best
`
`paper in Signal Processing and Coding for Data Storage. During my career,
`
`
`1 H. D. Pfister and P. H. Siegel, “The serial concatenation of rate-1 codes
`
`through uniform random interleavers.” Proc. 37th Allerton Conf. on Comm.,
`
`Control and Computing, Monticello, Illinois, pp. 260-269, Sep. 1999, Ex.
`
`1057.
`
`
`
`
`I have been the principal investigator (or co-PI) on seven funded research
`
`grants receiving, in total, more than two million dollars in funding. Five of
`
`these grants have come from the National Science Foundation (NSF)
`
`including the NSF Career Award in 2008. Finally, I am currently an
`
`associate editor for the IEEE Transactions on Information Theory and I
`
`regularly serve as a reviewer of grant proposals submitted to the National
`
`Science Foundation.
`
`II.
`
`Legal Understanding
`
`4. My opinions are also informed by my understanding of the
`
`relevant law. I understand that the patentability analysis is conducted on a
`
`claim-by-claim and element-by-element basis and that there are several
`
`possible reasons that a patent claim may be found to be unpatentable.
`
`5.
`
`I understand that earlier publications and patents may act to
`
`render a patent unpatentable for one of two reasons: (1) anticipation, and (2)
`
`obviousness.
`
`A. Anticipation
`
`6.
`
`First, I understand that a single prior art reference, article,
`
`patent or publication “anticipates” a claim if each and every element of the
`
`claim is disclosed in that prior art. I further understand that, where a claim
`
`element is not explicitly disclosed in a prior art reference, the reference may
`
`
`
`
`nonetheless anticipate a claim if the missing claim element is necessarily
`
`present in the apparatus or a natural result of the method disclosed—i.e. the
`
`missing element is “inherent.”
`
`B. Obviousness
`
`7.
`
`Second, I understand that the prior art may render a patent
`
`claim "obvious." I understand that two or more prior art references, articles,
`
`patents or publications that each disclose fewer than all elements of a patent
`
`claim may nevertheless be combined to render a patent claim obvious if the
`
`combination of the prior art collectively discloses all elements of the claim
`
`and one of ordinary skill in the art at the time would have been motivated to
`
`combine the prior art. I understand that this motivation to combine need not
`
`be explicit in any of the prior art, but may be inferred from the knowledge of
`
`one of ordinary skill in the art at the time the patent was filed. I also
`
`understand that one of ordinary skill in the art is not an automaton, but is a
`
`person having ordinary creativity. I further understand that one or more
`
`prior art references, articles, patents or publications that disclose fewer than
`
`all of the elements of a patent claim may render a patent claim obvious if
`
`including the missing element would have been obvious to one of skill in the
`
`art (e.g., the missing element represents only an insubstantial difference over
`
`the prior art or a reconfiguration of a known system).
`
`
`
`
`8.
`
`Under the doctrine of obviousness, a claim may be invalid if the
`
`differences between the invention and the prior art are such that the subject
`
`matter as a whole would have been obvious at the time the invention was
`
`made to a person having ordinary skill in the art to which the subject matter
`
`pertains.
`
`9.
`
`I understand that obviousness is based on the scope and content
`
`of the prior art, the differences between the prior art and the claim, the level
`
`of ordinary skill in the art, and secondary indicia of obviousness and non-
`
`obviousness to the extent they exist.
`
`10.
`
`I understand that secondary indicia of both obviousness and
`
`non-obviousness should be considered when evaluating whether a claimed
`
`invention would have been obvious to one of ordinary skill at the time of
`
`invention. These secondary indicia of non-obviousness may include, for
`
`example:
`
`• a long felt but unmet need in the prior art that was satisfied by the
`
`claimed invention;
`
`• commercial success of processes claimed by the patent;
`
`• unexpected results achieved by the invention;
`
`• praise of the invention by others skilled in the art;
`
`• the taking of licenses under the patent by others; and
`
`
`
`
`• deliberate copying of the invention.
`
`11.
`
I also understand that there are secondary indicia of the
`
`obviousness of an alleged invention.
`
`12.
`
`I understand that there must be a relationship between any such
`
`secondary indicia and the claimed invention. I further understand that if the
`
`claimed invention produced expected results that this is also a secondary
`
`consideration supporting an obviousness determination.
`
`13.
`
It is also my understanding that there are additional

considerations that may be used as further guidance as to when the above

factors will result in a finding that a claim is obvious, including the

following:
`
`• the claimed invention is simply a combination of prior art elements
`
`according to known methods to yield predictable results;
`
`• the claimed invention is a simple substitution of one known element
`
`for another to obtain predictable results;
`
`• the claimed invention uses known techniques to improve similar
`
`devices or methods in the same way;
`
`• the claimed invention applies a known technique to a known device or
`
`method that is ready for improvement to yield predictable results;
`
`• the claimed invention would have been "obvious to try" choosing
`
`
`
`
`from a finite number of identified, predictable solutions, with a
`
`reasonable expectation of success;
`
`• there is known work in one field of endeavor that may prompt
`
`variations of it for use in either the same field or a different one based
`
`on design incentives or other market forces if the variations would
`
`have been predictable to one of ordinary skill in the art;
`
`• there existed at the time of invention a known problem for which there
`
`was an obvious solution encompassed by the patent's claims; and
`
`• there is some teaching, suggestion, or motivation in the prior art that
`
`would have led one of ordinary skill to modify the prior art reference
`
`or to combine prior art reference teachings to arrive at the claimed
`
`invention.
`
`14. Finally, I understand that a claim may be deemed invalid for
`
`obviousness in light of a single prior art reference, without the need to
`
`combine references, if the elements of the claim that are not found in the
`
`reference can be supplied by the knowledge or common sense of one of
`
`ordinary skill in the relevant art.
`
`III.
`
`Background on the Relevant Technology
`
`A.
`
` Review of Linear Error-Correcting Codes
`
`15. Channel coding is the practice of adding redundancy to a
`
`
`
`
`message or repeating it in order to make it more robust against noise. It is
`
`used because noise and errors are essentially unavoidable in many systems
`
`(e.g., wireless communications and magnetic storage). Coding allows one to
`
`reduce the rate of information transfer in exchange for increased reliability
`
`and usually provides large gains in overall system efficiency. Channel
`
`coding works by first designing a set of codewords that can be reliably
`
`transmitted over a noisy channel and then associating each codeword with a
`
`message.2
`
16. A length-n binary code is simply a subset of the 2^n binary

vectors of length n. Each vector in this subset is called a codeword. A
`
`binary linear code is a binary code where the modulo-2 sum of any two
`
`codewords is also a codeword. In practice, most systems use linear codes
`
`for two reasons: simplicity and performance. First, linear codes are much
`
`simpler to describe, encode, and analyze. Second, there are very few cases
`
`where non-linear codes are better than linear codes. So, there is no real price
`
`to be paid for this simplicity.
`
`
`2 In the District Court Action (identified below), the Judge construed
`
`“codeword” to mean “a discrete encoded sequence of data elements.”
`
`Corrected Claim Construction Order (D.I. 105), Ex 1024, at 9-10.
`
`Statements about “codewords” herein are made in view of that construction.
`
`
`
`
`17. Linear codes can be described using linear algebra. In
`
`particular, they are defined by matrices and vectors of elements that can be
`
`added, subtracted, multiplied, and divided. A set of numbers, which obey all
`
`the standard rules of arithmetic, is an algebraic object known as a field. For
`
`example, real and complex numbers are both fields. The binary alphabet
`
`{0,1} with standard multiplication and modulo-2 addition is also a field
`
`(more specifically, a finite field). Either a generator matrix or parity-check
`
`matrix can be used to define a linear code.
`
18. The generator matrix G of a binary linear code is a k × n binary

matrix whose row space equals the code. In particular, any codeword can be

written as the modulo-2 sum of some rows in G and each row of G is a

codeword. If G has full rank, then it naturally defines an encoder from k

information bits to n code bits. In this case, we say that the code has length

n, dimension k, and rate R = k/n. For an information vector

(u_1, u_2, …, u_k), the codeword (x_1, x_2, …, x_n) is defined by the modulo-2

sum x_j = Σ_i G_{i,j} u_i.
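For illustration only, the encoding rule above can be sketched in Python. The generator matrix below is a hypothetical 2 × 4 example constructed for this sketch; it is not taken from the patents or prior art.

```python
# Hypothetical generator matrix G over GF(2): k = 2, n = 4, rate R = k/n = 1/2.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]

def encode(u, G):
    """Code bit x_j is the modulo-2 sum of G[i][j] * u[i] over all rows i."""
    n = len(G[0])
    return [sum(G[i][j] * u[i] for i in range(len(G))) % 2 for j in range(n)]

# Each row of G is itself a codeword, and any modulo-2 sum of rows is too.
print(encode([1, 1], G))  # -> [1, 1, 1, 0]
```

Note that encoding is just a matrix-vector product carried out modulo 2, which is why linear codes are simple to describe and encode.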
`
19. The parity-check matrix H of a binary linear code is an m × n

binary matrix where each row defines a parity-check equation satisfied by all

codewords. For example, let x_j be the j-th bit of a codeword and assume

that the first row of H is all-zero except for ones in positions a, b, c, and d.

Then, the first row represents a parity-check equation, x_a + x_b + x_c + x_d =

0 modulo 2, that all codewords must satisfy. In fact, all codewords in the

code must satisfy all of the parity-check equations defined by the parity-

check matrix. If the code has dimension k and H is full rank, then m = n −

k and R = 1 − m/n.
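A minimal sketch of checking the parity-check equations, again with a hypothetical matrix chosen for illustration (m = 2, n = 4):

```python
# Hypothetical parity-check matrix H: each row is one parity-check
# equation that every codeword must satisfy modulo 2.
H = [[1, 0, 1, 0],
     [1, 1, 0, 1]]

def satisfies_checks(x, H):
    """True iff every row h of H gives sum_j h[j]*x[j] = 0 (mod 2)."""
    return all(sum(h[j] * x[j] for j in range(len(x))) % 2 == 0 for h in H)

print(satisfies_checks([1, 0, 1, 1], H))  # -> True  (a valid codeword)
print(satisfies_checks([1, 0, 0, 0], H))  # -> False (violates the first row)
```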
`
`20.
`
`In general, the ideal decoding process for a code consists of
`
`finding the most-likely codeword given the observed channel outputs.
`
`Suppose the noisy channel has binary outputs and, during transmission, it
`
introduces an error in each position independently with probability p < 1/2.
`
`Then, the ideal decoding process consists of finding the codeword that most
`
`closely matches the received sequence.
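The ideal decoding process described above can be sketched as a nearest-codeword search: with independent bit flips of probability p < 1/2, the most likely codeword is the one at smallest Hamming distance from the received word. The four-codeword code below is a hypothetical example for this sketch.

```python
# Hypothetical code with four codewords of length 4.
codewords = [[0, 0, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 0]]

def hamming(a, b):
    """Number of positions in which a and b differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def decode(r, codewords):
    """Return the codeword closest (in Hamming distance) to r."""
    return min(codewords, key=lambda c: hamming(r, c))

# Received word is one bit flip away from [1, 0, 1, 1].
print(decode([0, 0, 1, 1], codewords))  # -> [1, 0, 1, 1]
```

This brute-force search is exponential in the code dimension, which is why practical systems use structured codes with efficient decoders instead.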
`
21. A rate-b/c binary convolutional encoder is a finite-state

mapping (or transducer) that, during each time step, accepts b input bits and

produces c output bits. The internal state of the encoder is also updated

during each step. If c > b, then the encoder can introduce structured

redundancy in the output stream and a suitably designed decoder (e.g., a

Viterbi decoder) can exploit this redundancy to correct errors in that bit

stream.
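As a sketch of this idea, the toy encoder below is a rate-1/2 convolutional encoder (one input bit in, two output bits out per time step) with a one-bit state; the generator choice (output the bit itself and the modulo-2 sum with the previous input) is a hypothetical example, not one from the references.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with a one-bit state (previous input).

    Per input bit u it emits [u, u + previous_input mod 2], i.e. the
    generators 1 and 1 + D in polynomial notation.
    """
    state = 0          # previous input bit; internal state of the encoder
    out = []
    for u in bits:
        out += [u, (u + state) % 2]   # c = 2 output bits per input bit
        state = u                      # state update each time step
    return out

print(conv_encode([1, 0, 1]))  # -> [1, 1, 0, 1, 1, 1]
```

Because each output depends only on the current and previous input, the redundancy is local, which is the limitation of convolutional codes discussed below in connection with turbo codes.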
`
22. Since a convolutional encoder continuously receives inputs and

generates outputs, it defines a code whose codewords are infinitely long. In

practice, convolutional codes are truncated or terminated to generate

codewords of finite length. If the sum of any two input sequences is

encoded into the sum of their associated output sequences, and the zero-

padded right shift by b of an input sequence generates the zero-padded right

shift by c of its associated output sequence, then the convolutional code is

linear. Also, the block code formed by truncating (or terminating) a linear

convolutional code is also linear.
`
`B.
`
`Background on Turbo Codes and Modern Coding Theory
`
`23. The introduction of turbo codes in 1993 by Berrou, Glavieux,
`
`and Thitimajshima started a revolution in coding theory that initiated great
`
`advances in error-correcting codes. 3 Turbo codes were a new class of error-
`
`correcting codes formed by a parallel concatenation of two convolutional
`
`encoders through a pseudo-random interleaver (or permutation).
`
`24. The main drawback of convolutional codes is that they only
`
`produce local redundancy in the output stream. Thus, they cannot handle
`
`error patterns with a small number of errors that fall very close together in
`
`the data stream. Turbo codes overcome this deficiency by encoding the
`
input bits twice. First, the input bits are fed to a convolutional encoder in
`
`
`3 C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon Limit Error
`
`Correcting Coding and Decoding.” (1993), Ex. 1061.
`
`Active 17009673.6
`
`11
`
`Hughes, Exh. 1010, p. 14
`
`
`
their normal order. Then, the input bits are reordered by a pseudo-random
`
`interleaver and encoded again by a convolutional encoder. With this setup, a
`
`small number of errors will cause a decoding error only if they are close
`
`together in both the original data stream and the permuted data stream.
`
`Since this is much less likely, the performance is improved.
`
`25.
`
In 1995, David J. C. MacKay rediscovered Robert Gallager’s

long-forgotten 1963 low-density parity-check (LDPC) codes and

demonstrated their outstanding performance.4 He also observed that turbo
`
`codes have much in common with LDPC codes. Like turbo codes, the
`
`construction of LDPC codes is based on pseudo-random permutations and
`
`the decoding is based on an iterative process. In fact, it was shown by
`
`McEliece, MacKay, and Cheng in 1998 that turbo decoding and LDPC
`
`decoding can both be seen as an instance of the same unified principle.5
`
`26. LDPC codes are most easily understood in terms of their parity-
`
check matrix and its associated Tanner graph. The parity-check matrix H
`
`
`4 MacKay and Neal, “Near Shannon Limit Performance of Low Density
`
`Parity Check Codes”, Ex. 1062
`
5 McEliece, MacKay, and Cheng, “Turbo Decoding as an Instance of Pearl’s

‘Belief Propagation’ Algorithm,” IEEE Journal on Selected Areas in

Communications, Vol. 16, No. 2, Feb. 1998, Ex. 1033.
`
`
`
`
was described earlier and the Tanner graph associated with an m × n parity-

check matrix is a bipartite graph with n variable nodes, m check nodes, and

an edge connecting the i-th variable node to the j-th check node if and only

if H_{j,i} = 1. The standard iterative decoder for LDPC codes works by

passing messages along these edges.
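The correspondence between a parity-check matrix and its Tanner graph can be sketched directly; the small matrix below is a hypothetical example, and the variable-node degrees it yields illustrate the kind of (ir)regularity discussed in the following paragraphs.

```python
# Hypothetical m x n parity-check matrix (m = 2 checks, n = 4 variables).
H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]

m, n = len(H), len(H[0])

# Tanner graph: edge (check j, variable i) exactly when H[j][i] = 1.
edges = [(j, i) for j in range(m) for i in range(n) if H[j][i] == 1]

# Degree of each variable node = number of checks it participates in.
var_deg = [sum(row[i] for row in H) for i in range(n)]

print(edges)    # -> [(0, 0), (0, 1), (0, 3), (1, 1), (1, 2), (1, 3)]
print(var_deg)  # -> [1, 2, 1, 2]  (an irregular degree profile)
```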
`
`27.
`
`In 1997, Gallager’s LDPC codes were modified to have Tanner
`
`graphs with an irregular degree structure and were shown to achieve near-
`
`Shannon performance on the binary erasure channel (BEC) by Luby,
`
`Mitzenmacher, Shokrollahi, Spielman, and Stemann. 6 This work also
`
`showed that irregular low-density generator-matrix (LDGM) codes benefit
`
`from irregularity on the BEC. A little later, MacKay and others (e.g.,
`
`Richardson and Urbanke) were also experimenting with irregular LDPC
`
`codes on general binary channels. Their results showed that irregularity
`
`provided valuable performance gains in practice.
`
28. Richardson, Urbanke, and Shokrollahi extended the

optimization of irregular LDPC codes in Luby, Mitzenmacher, Shokrollahi,

Spielman, and Stemann’s work to general channels in a paper that was

released on the Internet on April 5, 1999, and submitted to the IEEE

Transactions on
`
6 Luby, Mitzenmacher, Shokrollahi, Spielman, and Stemann, “Practical

loss-resilient codes,” Ex. 1028.
`
`
`
`
`Information Theory in the same year. I received an invitation to download a
`
copy of the submitted paper by email from a publicly-accessible website
`
`on April 5, 1999.7 I downloaded a copy of the paper on the same day.8 I
`
`was aware that the authors sent emails to others in the field to distribute the
`
`“Design of Provably Good Low-Density Parity Check Codes” paper.
`
`Around the time of this email, this was a conventional way to make others in
`
`the field aware of new publications. The authors stated that “[i]n the
`
`present paper we present result indicating the remarkable performance that
`
`can be achieved by properly chosen irregular codes.”9 This led to codes that
`
`approach capacity in practice. 10 Thus, it was well known by 1999 that
`
`making LDPC code structures “irregular” was beneficial. In 1999, this
`
`irregular structure was extended and generalized to all turbo codes in
`
`“Irregular Turbocodes” by Frey and MacKay (“Frey & MacKay”), by
`
`
7 Ex. 1066 (Email from Paul Siegel to me and others forwarding April 5, 1999

email from Tom Richardson, Amin Shokrollahi, and Ruediger Urbanke

inviting download of “Design of provably good low-density parity check

codes.”)
`
`8 A copy of the paper that I downloaded on April 5, 1999 is Ex. 1029.
`
`9 Ex. 1029, at 4.
`
`10 Ex. 1029, at 5 and Fig. 2.
`
`
`
`
encoding irregularly repeated symbols.11 This paper was presented at the

1999 Allerton Conference on Communications, Control and Computing.

The Allerton Conference is generally regarded as one of the main

conferences in the field of information theory and communications and

generally occurs in September. In 1999, the conference occurred from

September 22-24, 1999, with the paper being published on the authors’

websites in October of 1999. The proceedings were published later.
`
`29.
`
`In 1996, Benedetto and Montorsi provided one explanation for
`
`the excellent performance of turbo codes by showing that, on average, turbo
`
`codes have good distance properties.12 In particular, they computed the
`
`average weight enumerator (averaged over all possible interleavers) of a
`
`turbo code and showed that the bit-error rate due to low-weight codewords
`
`decreases as the block length increases. This led them to introduce the
`
`interleaver gain as a way to estimate the performance of parallel
`
`concatenated codes. Later, with Divsalar and Pollara, this approach was
`
`
`11 Frey & MacKay, Ex 1012.
`
`12 Ex. 1063.
`
`
`
`
`extended to serial concatenated codes.13 Together, these two works define
`
the interleaver gain exponent (IGE), which is a numerical parameter that
`
`estimates the rate at which the bit (or block) error rate decreases as the block
`
`length increases. The IGE conjecture, which was defined formally in
`
`“Coding Theorems for Turbo-like Codes” by Divsalar, Jin, and McEliece
`
`(“the Divsalar reference”)14, asserts that these estimates are mathematically
`
`correct.15
`
`30.
`
`It is worth noting that in Section VI of Benedetto & Montorsi,
`
`one example focuses on turbo codes constructed with rate-1 accumulators.16
`
`While this predates the introduction of repeat-accumulate codes, it is not
`
`particularly surprising. This is because the rate-1 accumulator is actually a
`
`standard element in the simplest non-trivial rate-1/2 recursive systematic
`
`convolutional code. Thus, the “accumulate” code is really a new name for
`
a well-known mapping.
`
`
`13 S. Benedetto , D. Divsalar , G. Montorsi , F. Pollara, “Serial
`
`Concatenation of Interleaved Codes: Performance Analysis, Design, and
`
`Iterative Decoding” (1996), Ex. 1032.
`
`14 Divsalar, Ex. 1011
`
`15 Divsalar, Ex. 1011, at 3-5.
`
`16 Benedetto & Montorsi, Ex. 1063, at 422-426.
`
`
`
`
`31. The Divsalar reference had both theoretical and practical
`
`relevance here. Theoretically, the Divsalar reference showed the IGE
`
`conjecture applied to the special case of repeat-accumulate codes. This gave
`
`researchers confidence that the performance of turbo codes was now
`
understood, and it also suggested that the more general performance estimates

based on the IGE are also correct. Practically, the Divsalar reference
`
`demonstrated that the recursive rate-1 “accumulate” convolutional encoder
`
`could be used to construct useful codes with very simple decoders. Very
`
`soon afterwards, multiple researchers began experimenting with codes based
`
`on accumulators.17
`
`32. By the end of 1999, it was commonly known within the field
`
`
`17 L. Ping, W. K. Leung, N. Phamdo, “Low Density Parity Check Codes
`
`with Semi-random Parity Check Matrix.” Electron. Letters, Vol. 35, No. 1,
`
`pp. 38-39, Jan. 7th, 1999 (“Ping”), Ex 1014; H. D. Pfister and P. H. Siegel,
`
`“The serial concatenation of rate-1 codes through uniform random
`
`interleavers.” Proc. 37th Allerton Conf. on Comm., Control and Computing,
`
`Monticello, Illinois, pp. 260-269, Sep. 1999 (“Pfister”), Ex. 1057; Viterbi
`
`and Viterbi, “New results on serial concatenated and accumulated-
`
`convolutional turbo code performance” in Annales Des Télécommunications
`
`1999, Ex. 1031.
`
`
`
`
`that adding irregularity to any code structure could improve its iterative-
`
`decoding performance. In the same year, it was also shown in "Low density
`
`parity check codes with semi-random parity check matrix" by Ping, Leung,
`
`and Phamdo ("Ping") 18 that the accumulate structure could be used to
`
`construct LDPC codes with linear-time encoders. A very similar structure
`
`with linear-time encoding was also proposed in “Constructing Low-Density
`
`Parity-Check Codes with Circulant Matrices” by Bond, Hui, and Schmidt. 19
`
`Combining these constructions with irregular LDPC codes naturally leads to
`
codes that are expected to perform well and have low-complexity encoding.
`
`IV.
`
`The Patents at Issue
`
`A. The ‘710 Patent
`
`33.
`
`I have reviewed the ‘710 patent, which is entitled “Serial
`
`Concatenation of Interleaved Convolutional Codes Forming Turbo-Like
`
`Codes.” I understand that the application that later issued as the ‘710 patent
`
`was filed on May 18, 2001 and then issued on October 3, 2006. The ‘710
`
`patent claims priority to a provisional application filed on May 18, 2000.
`
Based on a certificate of correction issued July 22, 2008, the ‘710 patent
`
`states that it is a continuation-in-part of U.S. Application Serial No.
`
`
`18 Ex. 1014.
`
`19 Ex. 1030.
`
`
`
`
`09/922,852, which was filed on August 18, 2000.
`
`34. The ‘710 patent is directed to irregular repeat accumulate
`
`codes and largely follows the repeat accumulate (“RA”) codes discussed
`
`above. As discussed, the coder used by the authors of the Divsalar reference
`
`was a repeat accumulate (RA) coder. 20 The RA coder of the Divsalar
`
`reference had the following structure, as shown in Figure 3 of the paper:
`
`
`
35. The Divsalar reference explains the operation of its RA code:

An information block of length N is repeated q times, scrambled by an

interleaver of size qN, and then encoded by a rate 1 accumulator. The

accumulator can be viewed as a truncated rate-1 recursive

convolutional encoder with transfer function 1/(1 + D), but we prefer

to think of it as a block code whose input block [x1, . . . , xn] and

output block [y1, . . . , yn] are related by the formula
`
`20 Divsalar, Ex. 1011, at 5.
`
`
`
`
`
`
Divsalar, Ex. 1011, at 5.
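The repeat–interleave–accumulate pipeline quoted above can be sketched as follows. The running modulo-2 sum implements the rate-1 accumulator; the specific interleaver permutation and parameters below are toy values chosen for illustration, not ones from the Divsalar reference.

```python
def repeat(bits, q):
    """Repeat each of the N input bits q times (regular repetition)."""
    return [b for b in bits for _ in range(q)]

def accumulate(x):
    """Rate-1 accumulator: output bit i is x_1 + ... + x_i (mod 2)."""
    out, s = [], 0
    for xi in x:
        s = (s + xi) % 2   # running modulo-2 sum
        out.append(s)
    return out

def ra_encode(bits, q, perm):
    """Repeat q times, scramble by the interleaver perm, then accumulate."""
    x = repeat(bits, q)
    x = [x[p] for p in perm]      # interleaver of size qN
    return accumulate(x)

# N = 2 input bits, q = 2, and a toy size-4 interleaver.
print(ra_encode([1, 0], 2, [2, 0, 3, 1]))  # -> [0, 1, 1, 0]
```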
`36. The disclosure of the ‘710 patent and the claims are merely
`
`directed to an irregular version of the code of the Divsalar reference.
`
`Specifically, where the repeater of the Divsalar reference has a constant rate
`
`1/q repetition, in the ‘710 patent, the rate of repetition will simply be
`
`different for different input bits instead of the same. This is described in
`
`greater detail with respect to the ‘710 patent’s description of Fig. 2 of the
`
`‘710 patent.
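The difference between the constant-rate repetition of the Divsalar reference and the irregular repetition just described can be sketched in one line; the degree profile below is a hypothetical example for illustration only.

```python
def irregular_repeat(bits, degs):
    """Repeat input bit i degs[i] times, so the repetition rate
    differs across input bits (the irregular case)."""
    return [b for b, d in zip(bits, degs) for _ in range(d)]

# Hypothetical degree profile: bit 0 repeated twice, bit 1 three times, bit 2 twice.
print(irregular_repeat([1, 0, 1], [2, 3, 2]))  # -> [1, 1, 0, 0, 0, 1, 1]
```

Setting every entry of the degree profile to the same value q recovers the regular repetition of the Divsalar reference as a special case.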
`
`37. Figure 2 of the ‘710 patent shows a coder of the form:
`
`38.
`
` The “outer coder” 202 is described at 2:41-56 as:
`
`The outer coder 202 receives the uncoded data. The data may be
`
`
`
`
`partitioned into blocks of fixed size, say k bits. The outer coder may
`
`be an (n,k) binary linear block coder, where n>k. The coder accepts as
`
`input a block u of k data bits and produces an output block v of n data
`
`bits. The mathematical relatio