UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

In Re: U.S. Patent No. 7,916,781
Attorney Docket No. 082944.0102
Inventor: Jin, Hui, et al.
Filed: June 30, 2008
Issued: March 29, 2011
IPR No.: Unassigned
Assignee: California Institute of Technology
Title: Serial Concatenation of Interleaved Convolutional Codes Forming Turbo-Like Codes

Mail Stop PATENT BOARD
Patent Trial and Appeal Board
U.S. Patent and Trademark Office
P.O. Box 1450
Alexandria, Virginia 22313-1450

Submitted Electronically via the Patent Review Processing System

DECLARATION OF HENRY D. PFISTER
`

`
`Table of Contents
`
`I. Background and Qualifications ............................................................... 1
`II. Legal Understanding ................................................................................ 3
`A. Anticipation ........................................................................................ 3
`B. Obviousness ....................................................................................... 4
`III. Background on the Relevant Technology ................................................ 8
`A. Review of Linear Error-Correcting Codes ........................................ 8
`B. Background on Turbo Codes and Modern Coding Theory ............. 11
`IV. The Patents at Issue ................................................................................ 19
`A. The ‘781 Patent ................................................................................ 19
`V. The Prior Art .......................................................................................... 23
`A. Prior Art Considered ........................................................................ 23
`VI. Construction of Claims .......................................................................... 26
`A. Level of Ordinary Skill in the Art .................................................... 27
`B. “Codeword” ..................................................................................... 28
`C. Summing Terms ............................................................................... 29
`VII. Invalidity ................................................................................................ 31
`A. Anticipation/Obviousness by Ping................................................... 31
`‘781 Patent Claim 1 ......................................................................... 31
`‘781 Patent Claim 2 ......................................................................... 39
`‘781 Patent Claim 3 ......................................................................... 42
`‘781 Patent Claim 4 ......................................................................... 44
`‘781 Patent Claim 5 ......................................................................... 48
`‘781 Patent Claim 6 ......................................................................... 49
`‘781 Patent Claim 7 ......................................................................... 51
`‘781 Patent Claim 13 ....................................................................... 53
`‘781 Patent Claim 14 ....................................................................... 65
`‘781 Patent Claim 15 ....................................................................... 67
`‘781 Patent Claim 16 ....................................................................... 70
`
`
`
`
`

`
`‘781 Patent Claim 19 ....................................................................... 72
`B. Anticipation by Divsalar .................................................................. 78
`‘781 Patent Claim 1 ......................................................................... 78
`‘781 Patent Claim 2 ......................................................................... 82
`
`
`

`
`I, Henry D. Pfister, declare as follows:
`
`1.
`
`I make this declaration based upon my own personal knowledge
`
`and, if called upon to testify, would testify competently to the matters
`
`contained herein.
`
`2.
`
`I have been asked to provide technical assistance in the inter
`
`partes review of U.S. Patent No. 7,916,781 ("the ’781 Patent").1 I have also
`
`reviewed and analyzed U.S. Patent Nos. 7,116,710 ("the ’710 Patent”);2
`
`7,421,032 (“’032 patent”);3 and 8,284,833 ("the ’833 Patent").4
`
`3.
`
`This declaration is a statement of my opinions on issues related
`
`to the patentability of claims 1, 2, 3, 4, 5, 6, 7, 13, 14, 15, 16, and 19 of the
`
`‘781 Patent.
`
`I.
`
`Background and Qualifications
`
`4. My qualifications are stated more fully in my curriculum vitae.
`
`Here I provide a brief summary of my qualifications. I was an active
`
`contributor and collaborator in the community that included the inventors
`
`
`
`1 Ex. 1006.
`
`2 Ex. 1001.
`
`3 Ex. 1004.
`
`4 Ex. 1008.
`
`
`
`
`

`
`and subject matter of the ‘710; ‘032; ‘781; and ‘833 patents before and after
`
`May 2000. From 1997-2003, I was a graduate student at the University of
`
`California, San Diego (UCSD) studying electrical engineering. During
`
`1998-1999, I took my first sequence of courses on error-correcting codes
`
`and my course project for the 1999 winter quarter was on repeat-accumulate
`
codes. This work led to my first conference paper, entitled “The serial concatenation of rate-1 codes through uniform random interleavers,” which was presented at the Allerton conference in 1999 and may be referred to herein as “Pfister & Siegel.”5 This paper generalized the idea of repeat-
`
`accumulate codes to include arbitrary outer codes and multiple inner
`
`accumulate codes. I continued my studies and eventually became a
`
`professor. Now, I am an associate professor in the Department of Electrical
`
`and Computer Engineering at Duke University. Before this appointment, I
`
`taught in the Department of Electrical and Computer Engineering of Texas
`
`A&M University, College Station from 2006-2014. I received my Ph.D. in
`
`electrical engineering from the University of California, San Diego (UCSD)
`
`in 2003 and then spent two years in R&D at Qualcomm, Inc., and one year
`
`as a post-doctoral researcher at the Swiss Federal Institute of Technology,
`
`Lausanne (EPFL). I am a co-author on more than 60 peer-reviewed
`
`
`
`5 Ex. 1057.
`
`
`
`
`

`
`publications on information theory, error-correcting codes, and wireless
`
`communication including the IEEE Communications Society 2007 best
`
`paper in Signal Processing and Coding for Data Storage. During my career,
`
`I have been the principal investigator (or co-PI) on seven funded research
`
`grants receiving, in total, more than two million dollars in funding. Five of
`
`these grants have come from the National Science Foundation (NSF)
`
`including the NSF Career Award in 2008. Finally, I am currently an
`
`associate editor for the IEEE Transactions on Information Theory and I
`
`regularly serve as a reviewer of grant proposals submitted to the National
`
`Science Foundation.
`
`II. Legal Understanding
`
`5. My opinions are also informed by my understanding of the
`
`relevant law. I understand that the patentability analysis is conducted on a
`
`claim-by-claim basis and that there are several possible reasons that a patent
`
`claim may be found to be unpatentable.
`
`6.
`
`I understand that earlier publications and patents may act to
`
`render a patent unpatentable for one of two reasons: (1) anticipation, and (2)
`
`obviousness.
`
`A. Anticipation
`
`7.
`
`First, I understand that a single prior art reference, article,
`
`
`
`
`

`
`patent, or publication “anticipates” a claim if each and every element of the
`
`claim is disclosed in that prior art. I further understand that, where a claim
`
`element is not explicitly disclosed in a prior art reference, the reference may
`
`nonetheless anticipate a claim if the missing claim element is necessarily
`
`present in the apparatus or a natural result of the method disclosed—i.e. the
`
`missing element is “inherent.”
`
`B. Obviousness
`
`8.
`
`Second, I understand that the prior art may render a patent
`
`claim "obvious." I understand that two or more prior art references, articles,
`
`patents, or publications that each disclose fewer than all elements of a patent
`
`claim may nevertheless be combined to render a patent claim obvious if the
`
`combination of the prior art collectively discloses all elements of the claim
`
`and one of ordinary skill in the art at the time would have been motivated to
`
`combine the prior art. I understand that this motivation to combine need not
`
`be explicit in any of the prior art, but may be inferred from the knowledge of
`
`one of ordinary skill in the art at the time the patent was filed. I also
`
`understand that one of ordinary skill in the art is not an automaton, but is a
`
`person having ordinary creativity. I further understand that one or more
`
`prior art references, articles, patents, or publications that disclose fewer than
`
`all of the elements of a patent claim may render a patent claim obvious if
`
`
`
`
`

`
`including the missing element would have been obvious to one of skill in the
`
`art (e.g., the missing element represents only an insubstantial difference over
`
`the prior art or a reconfiguration of a known system).
`
`9.
`
`Under the doctrine of obviousness, a claim may be invalid if the
`
`differences between the invention and the prior art are such that the subject
`
`matter as a whole would have been obvious at the time the invention was
`
`made to a person having ordinary skill in the art to which the subject matter
`
`pertains.
`
`10.
`
`I understand that obviousness is based on the scope and content
`
`of the prior art, the differences between the prior art and the claim, the level
`
`of ordinary skill in the art, and secondary indicia of obviousness and non-
`
`obviousness to the extent they exist.
`
`11.
`
`I understand that secondary indicia of both obviousness and
`
`non-obviousness should be considered when evaluating whether a claimed
`
`invention would have been obvious to one of ordinary skill at the time of
`
`invention. These secondary indicia of non-obviousness may include, for
`
`example:
`
`• a long felt but unmet need in the prior art that was satisfied by the
`
`claimed invention;
`
`• commercial success of processes claimed by the patent;
`
`
`
`
`

`
`• unexpected results achieved by the invention;
`
`• praise of the invention by others skilled in the art;
`
`• the taking of licenses under the patent by others; and
`
`• deliberate copying of the invention.
`
`12.
`
I also understand that there are secondary indicia of the
`
`obviousness of an alleged invention.
`
`13.
`
`I understand that there must be a relationship between any such
`
`secondary indicia and the claimed invention. I further understand that if the
`
claimed invention produced expected results, this is also a secondary
`
`consideration supporting an obviousness determination.
`
14. It is also my understanding that there are additional considerations that may be used as further guidance as to when the above factors will result in a finding that a claim is obvious, including the following:
`
`• the claimed invention is simply a combination of prior art elements
`
`according to known methods to yield predictable results;
`
`• the claimed invention is a simple substitution of one known element
`
`for another to obtain predictable results;
`
`• the claimed invention uses known techniques to improve similar
`
`devices or methods in the same way;
`
`
`
`
`

`
`• the claimed invention applies a known technique to a known device or
`
`method that is ready for improvement to yield predictable results;
`
`• the claimed invention would have been "obvious to try" choosing
`
`from a finite number of identified, predictable solutions, with a
`
`reasonable expectation of success;
`
`• there is known work in one field of endeavor that may prompt
`
`variations of it for use in either the same field or a different one based
`
`on design incentives or other market forces if the variations would
`
`have been predictable to one of ordinary skill in the art;
`
`• there existed at the time of invention a known problem for which there
`
`was an obvious solution encompassed by the patent's claims; and
`
`• there is some teaching, suggestion, or motivation in the prior art that
`
`would have led one of ordinary skill to modify the prior art reference
`
`or to combine prior art reference teachings to arrive at the claimed
`
`invention.
`
`15. Finally, I understand that a claim may be deemed invalid for
`
`obviousness in light of a single prior art reference, without the need to
`
`combine references, if the elements of the claim that are not found in the
`
`reference can be supplied by the knowledge or common sense of one of
`
`ordinary skill in the relevant art.
`
`
`
`
`

`
`III. Background on the Relevant Technology
`
`A. Review of Linear Error-Correcting Codes
`
`16. Channel coding is the practice of adding redundancy to a
`
`message or repeating it in order to make it more robust against noise. It is
`
`used because noise and errors are essentially unavoidable in many systems
`
`(e.g., wireless communications and magnetic storage). Coding allows one to
`
`reduce the rate of information transfer in exchange for increased reliability
`
`and usually provides large gains in overall system efficiency. Channel
`
`coding works by first designing a set of codewords that can be reliably
`
`transmitted over a noisy channel and then associating each codeword with a
`
`message.6
`
17. A length-n binary code is simply a subset of the 2^n binary vectors of length n. Each vector in this subset is called a codeword. A
`
`binary linear code is a binary code where the modulo-2 sum of any two
`
`codewords is also a codeword. In practice, most systems use linear codes
`
`
`
`6 In the District Court Action (identified below), the Judge construed
`
`“codeword” to mean “a discrete encoded sequence of data elements.”
`
`Corrected Claim Construction Order (D.I. 105), Ex 1024, at 9-10.
`
`Statements about “codewords” herein are made in view of that construction.
`
`
`
`
`

`
`for two reasons: simplicity and performance. First, linear codes are much
`
`simpler to describe, encode, and analyze. Second, there are very few cases
`
`where non-linear codes are better than linear codes. So, there is no real price
`
`to be paid for this simplicity.
`
`18. Linear codes can be described using linear algebra. In
`
`particular, they are defined by matrices and vectors of elements that can be
`
added, subtracted, multiplied, and divided. A set of numbers that obeys all the standard rules of arithmetic is an algebraic object known as a field. For
`
`example, real and complex numbers are both fields. The binary alphabet
`
`{0,1} with standard multiplication and modulo-2 addition is also a field
`
`(more specifically, a finite field). Either a generator matrix or parity-check
`
`matrix can be used to define a linear code.
`
19. The generator matrix G of a binary linear code is a k × n binary matrix whose row space equals the code. In particular, any codeword can be written as the modulo-2 sum of some rows in G and each row of G is a codeword. If G has full rank, then it naturally defines an encoder from k information bits to n code bits. In this case, we say that the code has length n, dimension k, and rate R = k/n. For an information vector (u_1, u_2, … , u_k), the codeword (x_1, x_2, … , x_n) is defined by the modulo-2 sum x_j = Σ_i u_i G_{i,j}.
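
For illustration only, the following short Python sketch shows this generator-matrix encoding operation; the particular (6, 3) matrix G and the message are hypothetical values chosen for this example and are not taken from the ‘781 patent or any cited reference.

    import numpy as np

    # Hypothetical generator matrix G for a small (n = 6, k = 3) binary linear code.
    # Each row of G is itself a codeword, and the row space (mod 2) is the code.
    G = np.array([[1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])

    u = np.array([1, 0, 1])      # information vector (u_1, u_2, u_3)
    x = u.dot(G) % 2             # codeword: x_j = sum_i u_i * G_{i,j} (mod 2)
    print(x)                     # -> [1 0 1 1 0 1]; the code rate is R = k/n = 1/2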
`
20. The parity-check matrix H of a binary linear code is an m × n binary matrix where each row defines a parity-check equation satisfied by all codewords. For example, let x_j be the j-th bit of a codeword and assume that the first row of H is all-zero except for ones in positions a, b, c, and d. Then, the first row represents a parity-check equation, x_a + x_b + x_c + x_d = 0 modulo 2, that all codewords must satisfy. In fact, all codewords in the code must satisfy all of the parity-check equations defined by the parity-check matrix. If the code has dimension k and H is full rank, then m = n − k and R = 1 − m/n.
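
Continuing the same illustration, a parity-check matrix consistent with the hypothetical generator matrix above can be used to verify a codeword; again, the specific matrix is my own example, not one from the record.

    import numpy as np

    # Hypothetical parity-check matrix H for the (6, 3) example code above.
    # Because G has the systematic form [I | P], one valid choice is H = [P^T | I].
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [1, 0, 1, 0, 1, 0],
                  [0, 1, 1, 0, 0, 1]])

    x = np.array([1, 0, 1, 1, 0, 1])   # the codeword computed in the previous sketch
    syndrome = H.dot(x) % 2            # every codeword satisfies H x = 0 (mod 2)
    print(syndrome)                    # -> [0 0 0]; here m = 3, n = 6, so R = 1 - m/n = 1/2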
`
`21.
`
`In general, the ideal decoding process for a code consists of
`
`finding the most-likely codeword given the observed channel outputs.
`
`Suppose the noisy channel has binary outputs and, during transmission, it
`
introduces an error in each position independently with probability p < 1/2.
`
`Then, the ideal decoding process consists of finding the codeword that most
`
`closely matches the received sequence.
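
As a purely illustrative example of this ideal decoding rule, the sketch below does a brute-force search over all codewords of the small hypothetical code used above and returns the message whose codeword is closest in Hamming distance to the received word.

    import itertools
    import numpy as np

    # Brute-force nearest-codeword decoding for a binary symmetric channel with
    # crossover probability p < 1/2: the most likely codeword is the one closest
    # in Hamming distance to the received word. G is the hypothetical (6, 3)
    # generator matrix from the earlier sketch.
    G = np.array([[1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])

    def decode(received):
        best_u, best_dist = None, None
        for u in itertools.product([0, 1], repeat=G.shape[0]):   # all 2^k messages
            x = np.array(u).dot(G) % 2
            dist = int(np.sum(x != received))                    # Hamming distance
            if best_dist is None or dist < best_dist:
                best_u, best_dist = u, dist
        return best_u, best_dist

    received = np.array([1, 0, 1, 1, 1, 1])   # codeword [1 0 1 1 0 1] with one bit flipped
    print(decode(received))                    # -> ((1, 0, 1), 1); the single error is corrected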
`
22. A rate-k/n binary convolutional encoder is a finite-state mapping (or transducer) that, during each time step, accepts k input bits and produces n output bits. The internal state of the encoder is also updated during each step. If n > k, then the encoder can introduce structured redundancy in the output stream and a suitably designed decoder (e.g., a Viterbi decoder) can exploit this redundancy to correct errors in that bit stream.
`
`23. Since a convolutional encoder continuously receives inputs and
`
`generates outputs, it defines a code whose codewords are infinitely long. In
`
`practice, convolutional codes are truncated or terminated to generate
`
`codewords of finite length. If the sum of any two input sequences is
`
`encoded into the sum of their associated output sequences and the zero-
`
padded right shift by k of an input sequence generates the zero-padded right shift by n of its associated output sequence, then the convolutional code is
`
`linear. Also, the block code formed by truncating (or terminating) a linear
`
`convolutional code is also linear.
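
The following is a minimal sketch of one such finite-state mapping, a rate-1/2 feedforward convolutional encoder with two memory elements; the particular taps are a common textbook choice (7, 5 in octal) used here only as an example, not an encoder taken from any reference in this proceeding.

    # Minimal rate-1/2 binary convolutional encoder: at each time step it accepts
    # one input bit and emits two output bits that depend on the current input and
    # on the two most recent past inputs (the internal state).
    def conv_encode(bits):
        s1 = s2 = 0                      # internal state: the two previous input bits
        out = []
        for b in bits + [0, 0]:          # two trailing zeros terminate the encoder state
            out.append(b ^ s1 ^ s2)      # first output bit: generator 1 + D + D^2
            out.append(b ^ s2)           # second output bit: generator 1 + D^2
            s1, s2 = b, s1               # shift the state
        return out

    print(conv_encode([1, 0, 1, 1]))     # each input bit produces two output bits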
`
`B. Background on Turbo Codes and Modern Coding Theory
`
`24. The introduction of turbo codes in 1993 by Berrou, Glavieux,
`
`and Thitimajshima started a revolution in coding theory that initiated great
`
`advances in error-correcting codes. 7 Turbo codes were a new class of error-
`
`
`
`7 C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon Limit Error
`
`Correcting Coding and Decoding.” (1993), Ex. 1061.
`
`
`
`
`

`
`correcting codes formed by a parallel concatenation of two convolutional
`
`encoders through a pseudo-random interleaver (or permutation).
`
`25. The main drawback of convolutional codes is that they only
`
`produce local redundancy in the output stream. Thus, they cannot handle
`
`error patterns with a small number of errors that fall very close together in
`
`the data stream. Turbo codes overcome this deficiency by encoding the
`
input bits twice. First, the input bits are fed to a convolutional encoder in their normal order. Then, the input bits are reordered by a pseudo-random interleaver and encoded again by a convolutional encoder. With this setup, a
`
`small number of errors will cause a decoding error only if they are close
`
`together in both the original data stream and the permuted data stream.
`
`Since this is much less likely, the performance is improved.
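
As a structural sketch only (real turbo codes use recursive systematic encoders and typically puncture the parity bits), the following reuses the hypothetical conv_encode function from the sketch above to show the parallel-concatenation idea: the same bits are encoded once in natural order and once after a pseudo-random permutation.

    import random

    # Parallel concatenation through a pseudo-random interleaver: the information
    # bits are encoded twice, once in their natural order and once after a fixed
    # pseudo-random permutation. Uses the conv_encode sketch defined earlier; the
    # interleaver seed and block length are arbitrary illustrative choices.
    def turbo_encode(bits, seed=0):
        perm = list(range(len(bits)))
        random.Random(seed).shuffle(perm)        # pseudo-random interleaver
        interleaved = [bits[i] for i in perm]
        parity1 = conv_encode(bits)              # encoder 1: natural order
        parity2 = conv_encode(interleaved)       # encoder 2: permuted order
        return bits + parity1 + parity2          # systematic bits plus both parity streams

    print(turbo_encode([1, 0, 1, 1, 0, 1, 0, 0]))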
`
26. In 1995, David J. C. MacKay rediscovered Robert Gallager’s long-forgotten 1963 low-density parity-check (LDPC) codes and demonstrated their outstanding performance.8 He also observed that turbo
`
`codes have much in common with LDPC codes. Like turbo codes, the
`
`construction of LDPC codes is based on pseudo-random permutations and
`
`
`
`8 MacKay and Neal, “Near Shannon Limit Performance of Low Density
`
`Parity Check Codes”, Ex. 1062
`
`
`
`
`

`
`the decoding is based on an iterative process. In fact, it was shown by
`
`McEliece, MacKay, and Cheng in 1998 that turbo decoding and LDPC
`
`decoding can both be seen as an instance of the same unified principle.9
`
27. LDPC codes are most easily understood in terms of their parity-check matrix and its associated Tanner graph. The parity-check matrix H was described earlier, and the Tanner graph associated with an m × n parity-check matrix is a bipartite graph with n variable nodes, m check nodes, and an edge connecting the i-th variable node to the j-th check node if and only if H_{j,i} = 1. The standard iterative decoder for LDPC codes works by passing messages along these edges.
`
`28.
`
`In 1997, Gallager’s LDPC codes were modified to have Tanner
`
`graphs with an irregular degree structure and were shown to achieve near-
`
`Shannon performance on the binary erasure channel (BEC) by Luby,
`
`Mitzenmacher, Shokrollahi, Spielman, and Stemann. 10 This work also
`
`
`
9 McEliece, MacKay, and Cheng, “Turbo Decoding as an Instance of Pearl’s ‘Belief Propagation’ Algorithm,” IEEE Journal on Selected Areas in Communications, Vol. 16, No. 2, Feb. 1998, Ex. 1033.
`
`10 Luby, Mitzenmacher, Shokrollahi, Spielman, and Stemann, “Practical
`
loss-resilient codes,” Ex. 1028.
`
`
`
`
`

`
`showed that irregular low-density generator-matrix (LDGM) codes benefit
`
`from irregularity on the BEC. A little later, MacKay and others (e.g.,
`
`Richardson and Urbanke) were also experimenting with irregular LDPC
`
`codes on general binary channels. Their results showed that irregularity
`
`provided valuable performance gains in practice.
`
29. Richardson, Urbanke, and Shokrollahi extended the optimization of irregular LDPC codes in Luby, Mitzenmacher, Shokrollahi, Spielman, and Stemann’s work to general channels in a paper that was released on
`
`the Internet on April 5th 1999 and submitted to the IEEE Transactions on
`
`Information Theory in the same year.11 I received an invitation to download a
`
copy of the submitted paper by email from a publicly-accessible website on April 5,
`
`1999.12 I downloaded a copy of the paper on the same day.13 I was aware that the
`
`
`
`11 Richardson, Shokrollahi, and Urbanke, “Design of Provably Good Low-
`
`Density Parity Check Codes,” Ex. 1029, at 4.
`
12 Ex. 1066 (Email from Paul Siegel to me and others forwarding April 5, 1999 email from Tom Richardson, Amin Shokrollahi, and Ruediger Urbanke inviting download of “Design of provably good low-density parity check
`
`codes.”)
`
`13 A copy of the paper that I downloaded on April 5, 1999 is Ex. 1029.
`
`
`
`
`

`
`authors sent emails to others in the field to distribute the “Design of
`
`Provably Good Low-Density Parity Check Codes” paper. Around the time
`
`of this email, this was a conventional way to make others in the field aware
`
`of new publications. The authors stated that “[i]n the present paper we
`
`present result indicating the remarkable performance that can be achieved by
`
`properly chosen irregular codes.”14 This led to codes that approach capacity
`
`in practice.15 Thus, it was well known by 1999 that making LDPC code
`
`structures “irregular” was beneficial. In 1999, this irregular structure was
`
`extended and generalized to all turbo codes in “Irregular Turbocodes” by
`
`Frey and MacKay (“Frey & MacKay”), by encoding irregularly repeated
`
symbols.16 This paper was presented at the 1999 Allerton
`
`Conference on Communications, Control and Computing. The Allerton
`
`Conference is generally regarded as one of the main conferences in the field
`
`of information theory and communications and generally occurs in
`
`September. In 1999, the conference occurred from September 22-24, 1999
`
with the paper being published on the authors’ websites in October of 1999.
`
`The proceedings were published later.
`
`
`
`14 Ex. 1029, at 4.
`
`15 Ex. 1029, at 5 and Fig. 2.
`
`16 Frey & MacKay, Ex 1012.
`
`
`
`
`

`
`30.
`
`In 1996, Benedetto and Montorsi provided one explanation for
`
`the excellent performance of turbo codes by showing that, on average, turbo
`
`codes have good distance properties.17 In particular, they computed the
`
`average weight enumerator (averaged over all possible interleavers) of a
`
`turbo code and showed that the bit-error rate due to low-weight codewords
`
`decreases as the block length increases. This led them to introduce the
`
`interleaver gain as a way to estimate the performance of parallel
`
`concatenated codes. Later, with Divsalar and Pollara, this approach was
`
`extended to serial concatenated codes.18 Together, these two works define
`
the interleaver gain exponent (IGE), which is a numerical parameter that
`
`estimates the rate at which the bit (or block) error rate decreases as the block
`
`length increases. The IGE conjecture, which was defined formally in
`
`“Coding Theorems for Turbo-like Codes” by Divsalar, Jin, and McEliece
`
`(“the Divsalar reference”)19, asserts that these estimates are mathematically
`
`
`
`17 Ex. 1063.
`
18 S. Benedetto, D. Divsalar, G. Montorsi, F. Pollara, “Serial
`
`Concatenation of Interleaved Codes: Performance Analysis, Design, and
`
`Iterative Decoding” (1996), Ex. 1032.
`
`19 Divsalar, Ex. 1011
`
`
`
`
`

`
`correct.20
`
`31.
`
`It is worth noting that in Section VI of Benedetto & Montorsi,
`
`one example focuses on turbo codes constructed with rate-1 accumulators.21
`
`While this predates the introduction of repeat-accumulate codes, it is not
`
`particularly surprising. This is because the rate-1 accumulator is actually a
`
`standard element in the simplest non-trivial rate-1/2 recursive systematic
`
`convolutional code. Thus, the “accumulate” code is really a new name for
`
a well-known mapping.
`
`32. The Divsalar reference had both theoretical and practical
`
`relevance here. Theoretically, the Divsalar reference showed the IGE
`
`conjecture applied to the special case of repeat-accumulate codes. This gave
`
`researchers confidence that the performance of turbo codes was now
`
understood, and it also suggested that the more general performance estimates based on the IGE are also correct. Practically, the Divsalar reference
`
`demonstrated that the recursive rate-1 “accumulate” convolutional encoder
`
`could be used to construct useful codes with very simple decoders. Very
`
`soon afterwards, multiple researchers began experimenting with codes based
`
`
`
`20 Divsalar, Ex. 1011, at 3-5.
`
`21 Benedetto & Montorsi, Ex. 1063, at 422-426.
`
`
`
`
`

`
`on accumulators.22
`
`33. By the end of 1999, it was commonly known within the field
`
`that adding irregularity to any code structure could improve its iterative-
`
`decoding performance. In the same year, it was also shown in "Low density
`
`parity check codes with semi-random parity check matrix" by Ping, Leung,
`
`and Phamdo ("Ping") 23 that the accumulate structure could be used to
`
`construct LDPC codes with linear-time encoders. A very similar structure
`
`with linear-time encoding was also proposed in “Constructing Low-Density
`
`
`
`22 L. Ping, W. K. Leung, N. Phamdo, “Low Density Parity Check Codes
`
`with Semi-random Parity Check Matrix.” Electron. Letters, Vol. 35, No. 1,
`
`pp. 38-39, Jan. 7th, 1999 (“Ping”), Ex 1014; H. D. Pfister and P. H. Siegel,
`
`“The serial concatenation of rate-1 codes through uniform random
`
`interleavers.” Proc. 37th Allerton Conf. on Comm., Control and Computing,
`
`Monticello, Illinois, pp. 260-269, Sep. 1999 (“Pfister”), Ex. 1057; Viterbi
`
`and Viterbi, “New results on serial concatenated and accumulated-
`
`convolutional turbo code performance” in Annales Des Télécommunications
`
`1999, Ex. 1031.
`
`23 Ex. 1014.
`
`
`
`
`

`
`Parity-Check Codes with Circulant Matrices” by Bond, Hui, and Schmidt. 24
`
`Combining these constructions with irregular LDPC codes naturally leads to
`
`codes that are expected to perform well and have low-complexity encoding.
`
`IV. The Patents at Issue
`
`A. The ‘781 Patent
`
`34.
`
`I have reviewed the ‘781 patent, which is entitled “Serial
`
`Concatenation of Interleaved Convolutional Codes Forming Turbo-Like
`
`Codes.” I understand that the application that later issued as the ‘781 patent
`
`was filed on June 30, 2008. I understand that the ‘781 patent claims priority
`
to U.S. Patent No. 7,421,032 (the ‘032 patent), which was filed on October 3, 2006. The ‘032 patent, in turn, claims priority to U.S. Patent No. 7,116,710 (the ‘710 patent). I understand that the application that later issued as the ‘710 patent was filed on May 18, 2001 and then issued on October 3, 2006. The ‘710 patent claims priority to a provisional application filed on May 18, 2000. I further understand that the ‘710 patent is a continuation-in-part of U.S. Patent No. 7,089,477 (the ‘477 patent), which was filed on August 18, 2000.
`
`
`
`24 Ex. 1019.
`
`
`
`
`

`
`35. The ‘781 patent is directed to Irregular Repeat Accumulate
`
`codes. I have discussed the Divsalar reference above. The coder used by
`
`the authors of the Divsalar reference was a repeat accumulate (RA) coder.25
`
`The RA coder of the Divsalar reference had the following structure, as
`
`shown in Figure 3 of the paper:
`
`
`
`36. The Divsalar reference explains the operation of its RA code:
`
`An information block of length N is repeated q times, scrambled by an
`
`interleaver of size qN, and then encoded by a rate 1 accumulator. The
`
accumulator can be viewed as a truncated rate-1 recursive convolutional encoder with transfer function 1/(1+D), but we prefer
`
`to think of it as a block code whose input block [x1, . . . , xn] and
`
`output block [y1, . . . , yn] are related by the formula
`
`
`
`25 Divsalar, Ex. 1011, at 5.
`
`
`
`
`

`
`Divsalar, Ex 1011, 5
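
For illustration, the quoted structure can be sketched in a few lines of Python; the repetition factor q, interleaver seed, and block length below are arbitrary choices of mine, and the accumulator is written as the usual running modulo-2 sum.

    import itertools
    import random

    # Repeat-accumulate (RA) encoder following the structure quoted above: repeat
    # each information bit q times, permute the qN repeated bits with a pseudo-
    # random interleaver, then pass them through a rate-1 accumulator whose i-th
    # output is the modulo-2 sum of the first i inputs.
    def ra_encode(info_bits, q=3, seed=0):
        repeated = [b for b in info_bits for _ in range(q)]    # repeat each bit q times
        perm = list(range(len(repeated)))
        random.Random(seed).shuffle(perm)                      # interleaver of size q*N
        interleaved = [repeated[i] for i in perm]
        return list(itertools.accumulate(interleaved, lambda a, b: a ^ b))   # accumulator

    print(ra_encode([1, 0, 1, 1], q=3))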
`
`
`
`37. The specification of the ‘781 patent discusses an irregular
`
`version of the code of the Divsalar reference. Specifically, whereas the
`
`repeater of the Divsalar reference has a constant rate 1/q repetition, in the
`
‘781 patent, the rate of repetition may be different for different input blocks. This is described in greater detail in the ‘781 patent’s description of Fig. 2.
`
`38. Figure 2 of the ‘781 patent shows a coder of the form:
`
`39.
`
` The “outer coder” 202 is described at 2:48-65 as:
`
`
`
`
`

`
`The outer coder 202 receives the uncoded data. The data
`
`may be partitioned into blocks of fixed size, say k bits.
`
`The outer coder may be an (n,k) binary linear block
`
`coder, where n>k. The coder accepts as input a block u
`
`of k data bits and produces an output block v of n data
`
`bits. The mathematical relationship between u and v is
`
`v=T0u, where T0 is an n×k matrix, and the rate of the
`coder is k/n.
`
`The rate of the coder may be irregular, that is, the value
`
`of T0 is not constant, and may differ for sub-blocks of
`bits in the data block. In an embodiment, the outer coder
`
`202 is a repeater that repeats the k bits in a block a
`
`number of times q to produce a block with n bits, where
`
`n=qk. Since the repeater has an irregular output,
`
`different bits in the block may be repeated a different
`
`number of times. For example, a fraction of the bits in
`
`the block may be repeated two times, a fraction of bits
`
`may be repeated three times, and the remainder of bits
`
`may be repeated four times. These fractions define a
`
`degree sequence, or degree profile, of the code.
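
For illustration of the irregular repetition described in this passage, the sketch below repeats each input bit according to a degree profile; the specific profile (some bits repeated two, three, or four times) is a hypothetical example and is not a profile taken from the ‘781 patent.

    # Irregular repetition driven by a degree profile: each information bit is
    # repeated a number of times given by its entry in the profile, so different
    # bits in the block are repeated different numbers of times.
    def irregular_repeat(info_bits, degrees):
        assert len(info_bits) == len(degrees)
        out = []
        for bit, d in zip(info_bits, degrees):
            out.extend([bit] * d)            # repeat this particular bit d times
        return out

    info = [1, 0, 1, 1, 0, 0]
    degrees = [2, 2, 3, 3, 4, 4]             # hypothetical degree profile
    print(irregular_repeat(info, degrees))   # produces n = sum(degrees) = 18 repeated bits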
`40. Simply, the outer coder 202
