`
`
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`_____________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`_____________________________
`
`HUGHES NETWORK SYSTEMS, LLC and
`HUGHES COMMUNICATIONS, INC.,
`Petitioner,
`
`v.
`
`CALIFORNIA INSTITUTE OF TECHNOLOGY,
`Patent Owner.
`_____________________________
`
`Case IPR2015-00059
`Patent 7,916,781
`_____________________________
`
`
`
`DECLARATION OF SOLOMON W. GOLOMB, PH.D. IN SUPPORT OF
`PATENT OWNER’S RESPONSE TO INTER PARTES REVIEW OF
`U.S. PATENT NO. 7,916,781
`
`CALTECH - EXHIBIT 2024
`
`
`
`
`
TABLE OF CONTENTS

I.    QUALIFICATIONS ........................................................................................ 1

II.   SCOPE OF WORK.......................................................................................... 4

III.  LEGAL STANDARDS ................................................................................... 4

IV.   LEVEL OF ORDINARY SKILL AND RELEVANT TIME ......................... 5

V.    CLAIM CONSTRUCTION ............................................................................ 5

VI.   CLAIMS 1 AND 2 ARE NOT ANTICIPATED BY DIVSALAR ............... 21

VII.  CONCLUDING STATEMENTS .................................................................. 27
`
`
`
`
`
`I, Solomon W. Golomb, declare as follows:
`
I. QUALIFICATIONS
`
`1. My name is Solomon Golomb. I am a Distinguished and University
`
`Professor of Electrical Engineering and Mathematics at the University of Southern
`
`California (“USC”). I received my B.A. in Mathematics from Johns Hopkins
`
`University in 1951 and my M.S. in Mathematics from Harvard University in 1953.
`
`I received my Ph.D. in Mathematics from Harvard University in 1957.
`
`2.
`
`For over sixty years I have conducted research in the field of
`
`communication systems and signals. Following completion of my Ph.D. thesis, I
`
`worked at NASA’s Jet Propulsion Laboratory (“JPL”) in Pasadena, California from
`
`1956 to 1963, initially as a Senior Research Engineer and eventually as a Deputy
`
`Section Chief of the Telecommunications Research Section.1 At JPL, I was one of
`
`the leaders of their space communications efforts and I played a key role in
`
`formulating the design of deep-space communications for subsequent lunar and
`
`planetary explorations.
`
`3.
`
`In 1963 I joined USC as a professor. Throughout the past five
`
`decades I have taught courses and conducted research in Electrical Engineering
`
`and Mathematics. My areas of research have included communication systems and
`
`signals, including space-communications technology and radar and sonar signals,
`
`
`
`1 NASA’s Jet Propulsion Laboratory (JPL) is a division of the California
`
`Institute of Technology.
`
`-1-
`
`
`
`
`
`as well as coding for reliability, security, data compression, and synchronization. I
`
`am recognized for my work with shift register sequences, or pseudorandom
`
`sequences, which have extensive applications in radar, space communications,
`
`cryptography, and now cell phone communications. I also developed what came to
`
`be known as Golomb Coding, a lossless data compression method using codes I
`
`developed in the 1960s. My efforts in these fields have helped make USC a center
`
`for communications research.
`
`4.
`
`From 1995 to 1998 I was the Director of Technology for the
`
`Annenberg Center of Communication at USC. I received the USC Presidential
`
`Medallion in 1985 and was awarded the title of University Professor in 1993. In
`
`1999 I was appointed as the first holder of the Andrew and Erna Viterbi Chair in
`
`Communications for the USC Viterbi School of Engineering. In 2008 I was named
`
`a USC Distinguished University Professor.
`
`5.
`
`I am the author or co-author of a number of books on communications
`
`and coding theory, including Digital Communications with Space Applications
`
`(originally published in 1964, revised edition published in 1982), Shift Register
`
`Sequences (originally published in 1967, revised edition published in 1982), Basic
`
`Concepts in Information Theory and Coding (1994), and Signal Design for Good
`
`Correlation: For Wireless Communication, Cryptography, and Radar (2005). In
`
`1987 I contributed a chapter on error correcting codes to the TIME-LIFE book
`
`Understanding Computers: Memory and Storage.
`
`6.
`
`In addition to these books, I have authored or co-authored hundreds of
`
`other publications on communications systems and signals, coding theory, and
`
`-2-
`
`
`
`
`
`mathematics, including works specifically directed to error correcting codes. I
`
`have presented lectures regarding error correcting codes at academic conferences
`
`and symposiums, including the National Computer Conference and the
`
`Symposium on Applications of Algebra to Error-Correcting Codes.
`
`7.
`
`In addition to my posts at USC, I have received numerous awards and
`
`recognition for my contributions to the field of communication systems and
`
`signals. In 2013, I received the National Medal of Science, the highest honor
`
`bestowed by the United States for scientific innovation, from President Obama for
`
`my advances in mathematics and communications. I was elected to the National
`
`Academy of Engineering in 1976 and to the National Academy of Sciences in
`
`2003. I was named a Fellow of the Institute of Electrical and Electronics
`
`Engineers (“IEEE”) in 1982 and subsequently received the IEEE Shannon Award
`
`of the Information Theory Society. In 2000 I received the Richard W. Hamming
`
`Medal of the IEEE. The Hamming Medal is named after a pioneer in the
`
`development of computer science and error correcting codes in particular, and is
`
`awarded for exceptional contributions to information sciences, systems, and
`
`technology. I have been a Foreign Member of the Russian Academy of Natural
`
`Science since 1994. I was appointed a Fellow of the American Mathematical
`
`Society in 2012 and a Fellow of the Society for Industrial and Applied
`
`Mathematics in 2013.
`
`8.
`
`A copy of my curriculum vitae is attached as Exhibit 2029.
`
`-3-
`
`
`
`
`
II. SCOPE OF WORK
`
`9.
`
`I understand that a petition (Paper 1) was filed with the United States
`
`Patent and Trademark Office for inter partes review of U.S. Patent No. 7,916,781
`
`(“’781 patent,” Ex. 1005).
`
`10.
`
`I further understand that the Patent Trial and Appeal Board (“PTAB”
`
`or the “Board”) has decided to institute inter partes review (Paper 18) of claims 1
`
`and 2 of the ’781 patent under 35 U.S.C. § 102 based on Divsalar (Ex. 1011).
`
`11.
`
`I have been specifically asked to provide my expert opinions on the
`
`validity of claims 1 and 2 of the ’781 patent. In connection with this analysis, I
`
`have reviewed the ’781 patent as well as the Divsalar reference. I have also
`
reviewed and considered the Board’s Decision on Institution of Inter Partes Review,

and may cite to it in this declaration.
`
`III. LEGAL STANDARDS
`
`12.
`
`I understand that a claim is not patentable under 35 U.S.C. §102, for
`
`lack of novelty, if each and every element of the claim is found, either expressly or
`
`inherently described, in a single prior art reference. I understand that such a claim
`
`can be considered “anticipated” by the prior art reference.
`
`13.
`
`I understand that to establish anticipation by inherency, the inherent
`
`teaching must necessarily result from the description in the prior art reference. The
`
`mere possibility that a certain thing may result from a given set of circumstances is
`
`not sufficient to demonstrate inherency.
`
`-4-
`
`
`
`
`
`14.
`
`I understand that the claims of a patent are analyzed from the
`
`perspective of a “person of ordinary skill in the art” at the time the invention was
`
`made.
`
`IV. LEVEL OF ORDINARY SKILL AND RELEVANT TIME
`
`15.
`
`I understand that the application that led to the ’781 patent was filed
`
`on June 30, 2008. I understand that the ’781 patent claims priority to provisional
`
`application No. 60/205,095, filed on May 18, 2000.
`
`16.
`
`I have been advised that “a person of ordinary skill in the relevant
`
`field” is a hypothetical person to whom one could assign a routine task with
`
`reasonable confidence that the task would be successfully carried out. I have been
`
`advised that the relevant timeframe is prior to May 18, 2000.
`
`17. By virtue of my education, experience, and training, I am familiar
`
`with the level of skill in the art of the ’781 patent prior to May 18, 2000.
`
`18.
`
`In my opinion, a person of ordinary skill in the relevant field prior to
`
`May 18, 2000 would include someone who had, through education or practical
`
`experience, the equivalent of a master’s degree in electrical engineering or
`
`mathematics, and three to four years of work or research experience in the field of
`
`error control coding, or a Ph.D. in electrical engineering or mathematics, with at
`
`least two years of experience in error control coding.
`
`V. CLAIM CONSTRUCTION
`
`19.
`
`I have been advised that, in the present proceeding, the ’781 patent
`
`claims are to be given their broadest reasonable interpretation in view of the
`
`specification. I also understand that, at the same time, absent some reason to the
`
`-5-
`
`
`
`
`
`contrary, claim terms are typically given their ordinary and accustomed meaning as
`
`would be understood by one of ordinary skill in the art. I further understand that
`
`the terminology in the specification and the claims need not be the same. I have
`
`followed these principles in my analysis throughout this declaration. I discuss
`
`some terms below and what I understand as constructions of these terms.
`A.
`“first encoding operation being a linear transform operation that
`generates L transformed bits”
`
`20. Claim 1 of the ’781 patent recites the term “the first encoding
`
`operation being a linear transform operation that generates L transformed bits.” A
`
`person of ordinary skill would look to the specification of the ’781 patent to
`
`understand what this term means.
`
`21. The specification consistently refers to the invention as comprising
`
`two main aspects: an “outer coder” and an “inner coder.” For example, the
`
`Abstract begins by stating that “[a] serial concatenated coder includes an outer
`
`coder and an inner coder.” Ex. 1005 at Abstract. Similarly, the Summary section
`
`of the ’781 patent describes the claimed coding system as made up of the same two
`
`“outer coder” and “inner coder” components. Ex. 1005 at 1:63-2:10. In the coding
`
`system described in the ’781 patent, it is clear that the signal to be encoded is first
`
`operated on by the outer coder, whose output is then further processed by the inner
`
`coder. For example, both the Abstract and Summary of the invention state that the
`
`repeated and scrambled bits generated by the outer coder are then input to the inner
`
`coder. Accordingly, a person of ordinary skill in the art would understand that the
`
`“first encoding operation . . .” recited in claim 1 refers to the “outer coder”
`
`-6-
`
`
`
`
`
`discussed throughout the specification, while the “second encoding operation . . .”
`
`discussed below corresponds to the “inner coder.”
`
`22. A person of ordinary skill, upon reading the specification, would
`
`understand that the outer coder must include irregular repetition of input bits.
`
`Indeed, this is an essential aspect of the invention described in the ’781 patent, as
`
`the specification consistently explains. For example, the Abstract contains just
`
`three sentences. As mentioned above, the first sentence explains that the invention
`
`“includes an outer coder and an inner coder.” Id. at Abstract. The second sentence
`
`describes the outer coder while the last sentence describes the inner coder. Id.
`
`With respect to the outer coder, the Abstract explains that irregular repetition is
`
`one of two key functions it performs, along with bit scrambling: “[t]he outer coder
`
`irregularly repeats bits in a data block according to a degree profile and scrambles
`
`the repeated bits.” Id.
`
`23. The Summary of the invention provides a bit more detail regarding
`
`the repetition and scrambling operations performed by the outer coder, and
`
`similarly shows that irregular repetition is an essential aspect. It explains that
`
`“[t]he data block is apportioned into two or more sub-blocks, and bits in different
`
`sub-blocks are repeated a different number of times according to a selected degree
`
`profile.” Id. at 1:67-2:2. Nothing in the Summary mentions regular
`
repetition (such as repeating all bits the same number of times) or would lead a
`
`person of ordinary skill in the art to believe that irregular repetition is an optional
`
`aspect of the invention. Rather, it is clear that irregular repetition is a central
`
`feature of the outer coder and of the invention itself.
`
`-7-
`
`
`
`
`
`24. The remaining portion of the specification supports the conclusion
`
`that the “first encoding operation” must utilize irregular repetition. In fact, the
`
`invention is referred to throughout the specification as an “Irregular Repeat and
`
`Accumulate” (IRA) coder. For example:
`• “FIG. 3 is a Tanner graph for an irregular repeat and accumulate
`
`(IRA) coder.” (2:24-25);
`• “FIG. 4 is a schematic diagram of an IRA coder according to an
`
`embodiment.” (2:26-27);
`• “The serial concatenation of the interleaved irregular repeat code and
`
`the accumulate code produces an irregular repeat and accumulate
`
`(IRA) code.” (3:34-36);
`• “Two types of IRA codes are represented in FIG. 3, a nonsystematic
`
`version and a systematic version.” (4:25-26);
`• “The IRA code may be represented by an alternate notation” (4:50-
`
`51);
`• “‘Belief propagation’ on the Tanner Graph realization may be used to
`
`decode IRA codes.” (5:13-14); and
`• “IRA codes may be implemented in a variety of channels, including
`
`memoryless channels, such as the BEC, BSC, and AWGN, as well as
`
`channels having a non-binary input, non-symmetric and fading
`
`channels, and/or channels with memory.” (7:4-18).
`
`25.
`
`In light of the patentee’s decision to refer to the invention as an
`
`Irregular Repeat and Accumulate code, a person of ordinary skill in the art would
`
`-8-
`
`
`
`
`
`understand that irregular repetition is a core component of the invention and would
`
`not interpret the claims in a way that did not require irregular repetition.
`
`26. Moreover, the described embodiments, Tanner graph representation,
`
`and discussion of decoding the information after it is encoded all consistently refer
`
`to the invention as utilizing irregular repetition in the outer encoder. For example,
`
`the Detailed Description describes two primary embodiments of the coding system:
`
`one where the outer coder includes a repeater and an interleaver (discussed at 2:39-
`
`3:33) and an alternative one where those two elements are replaced by a low-
`
`density generator matrix (discussed at 3:62-4:3). Each of these is discussed purely
`
`in terms of requiring irregular repetition.
`
`27. With respect to the repeater/interleaver embodiment, depicted in
`
`Figure 2, the Summary states that the repeater repeats bits irregularly, explaining
`
`that the outer coder in this embodiment utilizes “a repeater with a variable rate.”
`
`Id. at 2:3-4. The specification also states that the repeater in this embodiment
`
`“may be irregular, that is, the value of T0 is not constant, and may differ for sub-
`
`blocks of bits in the data block.” (2:54-56). The specification goes on to explain
`
`the irregular repetition operation in more detail:
`Since the repeater has an irregular output, different bits in the block
`may be repeated a different number of times. For example, a fraction
`of the bits in the block may be repeated two times, a fraction of bits
`may be repeated three times, and the remainder of bits may be
`repeated four times. These fractions define a degree sequence, or
degree profile, of the code.
`
`Id. at 2:58-64.
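The degree-profile repetition quoted above can be sketched in a few lines of code. The sketch below is my own illustration, not code from the ’781 patent; the function name and the (fraction, repeats) representation of a degree profile are my assumptions:

```python
# Illustrative sketch (mine, not from the '781 patent): irregular repetition
# of a data block according to a degree profile, where each fraction of the
# bits is repeated a different number of times.

def irregular_repeat(bits, degree_profile):
    """Repeat sub-blocks of `bits` per a degree profile.

    degree_profile: list of (fraction, repeats) pairs; fractions sum to 1.
    """
    out = []
    start = 0
    n = len(bits)
    for fraction, repeats in degree_profile:
        end = start + round(fraction * n)
        for b in bits[start:end]:          # each bit in this sub-block
            out.extend([b] * repeats)      # is repeated `repeats` times
        start = end
    return out

# Example matching the quoted passage: one fraction of the block repeated
# twice, the remainder three times.
print(irregular_repeat([1, 0, 1, 1], [(0.5, 2), (0.5, 3)]))
# [1, 1, 0, 0, 1, 1, 1, 1, 1, 1]
```

Because the two sub-blocks are repeated different numbers of times, the output length depends on the profile, which is what makes the repeater a “repeater with a variable rate.”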
`
`-9-
`
`
`
`
`
`28. While the discussion of the repeater/interleaver embodiment does
`
`mention that the repeater “may” be irregular, this would not indicate to a person of
`
`ordinary skill in the art that the invention claimed in the ’781 patent encompasses a
`
`repeater lacking irregular repetition. Taking the specification as a whole into
`
`consideration, it is clear that irregular repetition is a required aspect. Moreover, I
`
`note that the block quote above states that a repeater with an irregular output
`
`“may” repeat different bits in the block a different number of times. The
`
`specification uses “may” to indicate that variable numbers of repeats are possible,
`
for example that two repeats may be possible, three repeats may be possible, etc.
`
`It does not appear to me that the specification uses “may” to indicate that the
`
`irregular repetition aspect of the described invention itself is optional.
`
`29. Turning to the embodiment that replaces the repeater and interleaver
`
`with a low-density generator matrix, the specification again describes irregular
`
`repetition as required, stating “[i]n an alternate embodiment, the outer coder 202
`
`may be a low-density generator matrix (LDGM) coder that performs an irregular
`
`repeat of the k bits in the block, as shown in FIG. 4.” Id. at 3:62-64. In the Brief
`
`Description of the Drawings, the specification refers to this Figure 4 as “a
`
`schematic diagram of an IRA coder.” Id. at 2:26-27. The specification does not
`
`describe this invention as including, or even possibly including, regular repetition
`
`instead of irregular repetition.
`
`30.
`
`In addition to the two primary embodiments, the specification
`
`provides a Tanner graph, seen in Figure 3, in which the “IRA code is . . .
`
`represented as a set of parity checks . . . in a bipartite graph.” Id. at 3:34-40. A
`
`-10-
`
`
`
`
`
`person of ordinary skill in the art would be familiar with Tanner graphs and their
`
`use to depict the constraints that define a class of error correction codes. The ’781
`
`patent presents a single Tanner graph for its invention, and that graph is for an
`
`irregular repeat code where fractions of the overall number of information bits are
`
`repeated different times:
`FIG. 3 shows a Tanner graph 300 of an IRA code with parameters (f1,
`. . . , fj; a) where fi≥0, ∑ifi=1 and “a” is a positive integer. . . . Each
`information node 302 is connected to a number of check nodes 304.
`The fraction of information nodes connected to exactly i check nodes
`is fi. For example, in the Tanner graph 300, each of the f2 information
`nodes are connected to two check nodes, corresponding to a repeat of
`q=2, and each of the f3 information nodes are connected to three check
`nodes, corresponding to q=3.
`
`Id. at 3:40-55.
`
`31. This irregular repetition is further seen in Figure 3 itself, which I have
`
`annotated below:
`
`-11-
`
`
`
`
`
`
`As can be seen in this example, the figure depicts many example subgroups of
`
`input bits corresponding to information nodes, called “variable nodes.” One such
`
`example subgroup is designated as the fraction f2, in which the input bits are
`
`repeated twice, such that each of the information nodes in the fraction f2 set are
`
`connected to two check nodes. In another of the example subgroups, designated as
`
`the fraction f3, each of the input bits corresponding to the information nodes in the
`
`fraction set are repeated three times, such that each of those information nodes is
`
`connected to three check nodes. This optionally continues on through additional
`
`-12-
`
`
`
`
`
`fractions of increased repetitions, as represented by the fraction fj whose nodes are
`
`connected to “j” check nodes. The ’781 patent contains no other Tanner graphs,
`
`nor does the description of Figure 3 mention utilizing regular repetition, which is
`
`unsurprising given that the Figure presents “a Tanner graph 300 of an IRA code.”
`
`Id. at 3:40-41.
`
`32. The ’781 patent also refers exclusively to irregular repetition when
`
`discussing decoding the output of the invention. The decoding discussion appears
`
`at 5:13-7:4. The introduction to this section of the specification plainly states that
`
`it pertains to irregular repetition codes: “‘Belief propagation’ on the Tanner graph
`
`realization may be used to decode IRA codes.” The specification then goes on to
`
`explain this belief propagation decoding method, without ever mentioning the
`
`prospect of decoding a code utilizing regular repetition.
`
`33. Similarly, the specification mentions an “iterative decoding” in the
`
`context of a number of various “memoryless channel models,” including “a binary
`
`erasure channel (BEC); a binary symmetric channel (BSC); and an additive white
`
`Gaussian noise (AWGN) channel.” Id. at 5:61-65. The specification never
`
`indicates the invention encompasses regular repetition when utilized with respect
`
`to any of these channels, and in fact reiterates the irregular repetition aspect
`
`regardless of channel: “IRA codes may be implemented in a variety of channels,
`
`including memoryless channels, such as the BEC, BSC, and AWGN, as well as
`
`channels having non-binary input, nonsymmetric and fading channels, and/or
`
`channels with memory.” Id. at 7:14-18.
`
`-13-
`
`
`
`
`
`34. While the specification refers throughout to the irregular repetition
`
`component of the invention, it mentions regular repetition just once, stating at
`
`4:44-49 that “regular repeat and accumulate (RA) codes can be considered
`
`nonsystematic IRA codes with a=1 and exactly one fi equal to say fq=1, and the rest
`
`zero, in which case Rnsys simplifies to R=1/q.” A person of ordinary skill in the art
`
`would not take this to be an indication that the invention described in the patent
`
`extends to outer coders utilizing regular repetition. Instead, this passage serves to
`
`highlight and explain the differences between IRA codes and a similar—but
`
`significantly different—code lacking irregularity. This passage explains a
`
`fundamental distinction between IRA and RA codes, by pointing out that
`
`eliminating all information node fractions but one would necessarily result in a
`
`code that repeats all information bits the same number of times (i.e., exactly one fi
`
equal to 1).
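The rate simplification quoted in this passage can be checked numerically. The sketch below is my own illustration (the function name and inputs are assumptions, not from the patent), using the nonsystematic rate formula Rnsys = a / Σi(i·fi), which is consistent with the quoted special case:

```python
# Illustrative sketch (mine, not from the '781 patent). Assuming the
# nonsystematic IRA rate formula R_nsys = a / sum_i(i * f_i), the quoted
# RA special case (a = 1, exactly one f_q equal to 1) simplifies to R = 1/q.

def rate_nonsystematic(fractions, a):
    """fractions: dict mapping repeat count i -> fraction f_i (sums to 1)."""
    return a / sum(i * f for i, f in fractions.items())

# RA special case: a = 1 and f_3 = 1 (every bit repeated q = 3 times)
print(rate_nonsystematic({3: 1.0}, 1))        # 0.333...  (i.e., R = 1/q)

# A genuinely irregular profile with a = 2, by contrast:
print(rate_nonsystematic({2: 0.5, 3: 0.5}, 2))  # 0.8
```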
`
`35.
`
`In sum, the ’781 patent specification consistently refers to the
`
`invention as an Irregular Repeat and Accumulate (IRA) code and, consistent with
`
`its name, describes the invention in a way that makes it clear to a person of
`
`ordinary skill in the art that irregular repetition is a necessary core component of
`
`the coding scheme. Specifically, the outer coder is described exclusively in terms
`
`of utilizing irregular repetition. A person of ordinary skill in the art would not
`
`interpret the claims in a way that did not include this essential component of the
`
`invention. Accordingly, it is my opinion that the term “the first encoding operation
`
`being a linear transform operation that generates L transformed bits” should be
`
`-14-
`
`
`
`
`
`construed to require an encoding operation that irregularly repeats bits in a data
`
`block and scrambles the repeated bits.
`B.
`“the second encoding operation including an accumulation
`operation in which the L transformed bits generated by the first
`encoding operation are accumulated”
`
`36. Claim 1 of the ’781 patent recites the term “the second encoding
`
`operation including an accumulation operation in which the L transformed bits
`
`generated by the first encoding operation are accumulated.” A person of ordinary
`
`skill would look to the specification of the ’781 patent to understand what this term
`
`means.
`
`37. As discussed above (see ¶¶ 21-22), the specification consistently
`
`refers to the invention as being comprised of an outer coder and an inner coder.
`
`While the outer coder carries out the irregular repeat and bit scrambling functions
`
`described in the specification, the Summary of the invention explains that “[t]he
`
`repeated and scrambled bits are input to an inner coder that has a rate substantially
`
`close to one.” Id. at 2:7-8. The Summary further describes the inner coder by
`
`stating that it “may include one or more accumulators that perform recursive
`
`modulo two addition operations on the input bit stream.” Id. at 2:8-10. This
`
`description is consistent with the claim limitation requirement that the “L
`
`transformed bits” generated by the first encoding operation (i.e., outer coder) be
`
`“accumulated.” Accordingly, a person of ordinary skill would understand that this
`
`limitation corresponds to the inner coder described in the specification.
`
`-15-
`
`
`
`
`
`38. The specification explains that the inner coder “is an accumulator,
`
`which produces outputs that are the modulo two (mod-2) partial sums of its
`
inputs.” Id. at 3:3-5. It further explains that “[a]n advantage of this system is that
`
`only mod-2 addition is necessary for the accumulator. The accumulator may be
`
`embodied using only XOR gates, which may simplify the design.” Id. at 3:25-28.
`
`A person of ordinary skill would understand, therefore, that the specific
`
`accumulator design the specification discloses is a key element of the invention.
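The mod-2 accumulator described in the passage quoted above admits a very short sketch. This is my own illustration (the names are mine, not the patent’s); it shows the outputs as the running XOR partial sums of the input bit stream:

```python
# Illustrative sketch (mine, not from the '781 patent): an accumulator whose
# outputs are the modulo-two (mod-2) partial sums of its input bit stream.

def accumulate(bits):
    out, acc = [], 0
    for b in bits:
        acc ^= b          # recursive mod-2 addition: a single XOR per bit
        out.append(acc)
    return out

print(accumulate([1, 1, 0, 1]))   # [1, 0, 0, 1]
```

Each output bit depends only on the previous output and the current input, which is why, as the specification notes, the accumulator can be embodied using only XOR gates.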
`
`39. As with the outer coder, Figure 3 and the corresponding description
`
`provides an example of the inner coder/accumulator aspect of the IRA code
`
`invention. As I discussed above, the specification states that “FIG. 3 shows a
`
`Tanner graph 300 of an IRA code with parameters (f1, . . . , fj; a) where fi≥0, ∑ifi=1
`
`and ‘a’ is a positive integer.” Id. at 3:40-42. The accumulation operation is
`
`represented by the “check nodes” depicted in Figure 3. These check nodes are
`
`“connected between the information nodes and parity nodes” (3:46-48) and,
`
`notably, each check node is connected to multiple information nodes:
`Each check node 304 is connected to exactly “a” information nodes
`302. In FIG. 3, a=3. These connections can be made in many ways,
`as indicated by the arbitrary permutation of the ra edges in joining
`information nodes 302 and check nodes 304 in permutation block 310.
`These connections correspond to the scrambling performed by the
`interleaver 204.
`
`3:56-61. In other words, the accumulation operation disclosed in the representative
`
`example in Figure 3 involves an accumulation of the values of, for example, three
`
`information nodes at each respective check node.
`
`-16-
`
`
`
`
`
`40. This can be further seen in Figure 3 itself, which I have further
`
`annotated below:
`
`
`41. As can be seen in this example, the first step of the accumulation
`
`operation involves a mod-2 or exclusive-OR addition of the values of the input bits
`
`corresponding to the information nodes connected to check node v1 (represented by
`
`the lines on the left side of the filled circle labeled v1), resulting in an output bit
`
`corresponding to parity node x1. Note that while no lines are drawn directly
`
`between the information nodes and check nodes due to the “arbitrary permutation
`
`of the ra edges in joining information nodes 302 and check nodes 304,” (3:58-60)
`
`provided by the “random permutation” element, the check nodes are in fact
`
`-17-
`
`
`
`
`
`connected to “a” information nodes, depicted here in this example as a=3. I also
`
note that the ellipses between the edges connected to the check nodes indicate to a

person of ordinary skill that other values of “a” are possible, as discussed below

with respect to Table 1 of the ’781 patent.
`
`42. The second step of the accumulation operation performs an addition
`
`of the earlier output x1 and the values of the input bits corresponding to the
`
`additional information nodes connected to check node v2. This results in an output
`
`bit corresponding to parity node x2. The accumulation operation continues on in
`
`this fashion, each time adding the values of the bits corresponding to the
`
`information nodes connected to a particular check node to the previously generated
`
`output value.
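The step-by-step accumulation just described can be sketched as follows. This is my own illustration (the function name and the layout of the permuted input are assumptions); each parity bit is the mod-2 sum of the previously generated parity bit and the “a” information-bit values connected to the corresponding check node:

```python
# Illustrative sketch (mine, not from the '781 patent): accumulation in which
# each parity bit x_r is the mod-2 sum of the previous parity bit and the "a"
# permuted information-bit values at check node v_r (a = 3 in Figure 3).

def ira_accumulate(permuted_bits, a):
    parity, prev = [], 0
    for r in range(0, len(permuted_bits), a):
        x = prev                            # previously generated output value
        for b in permuted_bits[r:r + a]:    # the a bits at this check node
            x ^= b
        parity.append(x)
        prev = x
    return parity

print(ira_accumulate([1, 0, 1, 1, 1, 0], 3))   # [0, 0]
```

With a = 1 this collapses to the simple accumulator of an RA code, where each parity bit is the sum of the prior parity bit and a single information bit.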
`
`43. Aside from Figure 3 and its description, the specification elsewhere
`
`describes the accumulation operation as involving addition of multiple information
`
`bit values in each accumulation step. For example, Table 1 “shows degree profiles
`
`that have been found to produce good results for an AWGN channel model” of the
`
`IRA code (id. at 6:41-42):
`
`-18-
`
`
`
`
`
`
`44. The specification explains that Table 1 illustrates degree profiles
`
`involving connections between two, three, and four information nodes and a check
`
`node: “Table 1 shows degree profiles yielding codes of rate approximately 1/3 for
`
`the AWGN channel and with a=2, 3, 4.” Id. at 6:62-63. Importantly, the
`
`specification goes on to state that “[a]s the parameter ‘a’ is increased, the
`
`performance improves. For example, for a=4, the best code found has an iterative
`
`decoding threshold of Eb/No=-0.371 dB, which is only 0.12 dB above the Shannon
`
`limit.” Id. at 7:1-4.
`
`45. The specification’s disclosure that performance of the IRA code
`
`improves as the value of “a” increases demonstrates the benefit of adding multiple
`
`information bits at each step of the accumulation operation. Increasing the value
`
`of “a,” and thereby increasing the number of information bits involved in each
`
`addition operation during the accumulation, improves the distribution of potential
`
`errors in a way that makes it much more likely the error correction will work. In
`
`-19-
`
`
`
`
`
`practice, it is more likely that errors will occur in adjacent bits rather than spread
`
`evenly throughout the signal. The combination of the random permutation and
`
`multi-bit accumulation disclosed in the specification increases the likelihood that
`
`those errors will be detected.
`
`46. Not only does the specification state that higher values of “a” are
`
`preferable to lower values, but Table 1 does not even present a=1 as an option. In
`
`fact, the ’781 patent does not discuss an IRA code embodiment utilizing a=1
`
`anywhere in the specification. The only instance where the ’781 patent
`
`specification discusses a code utilizing a=1 is when it distinguishes the IRA code
`
`accumulation operation as described with respect to Figure 3 from an accumulation
`
`operation used in regular repeat and accumulate codes. An RA code would not
`
`utilize connections to multiple information nodes at each check node. Rather,
`
`“regular repeat and accumulate (RA) codes can be considered nonsystematic IRA
`
`codes with a=1 and exactly one fi equal to say fq=1, and the rest zero, in which case
`
`Rnsys simplifies to R=1/q.” Id. at 4:46-49. Having the parity bits used for error
`
correction randomly scattered throughout a block of data helps to ensure that a
`
`block or cluster of errors will be successfully detected and corrected. In other
`
`words, because in an RA code “a=1,” such a code differs from the IRA code
`
`described in the ’781 patent because each parity bit is merely the sum of the prior
`
`parity bit and a single information bit, rather than multiple information bits as in
`
`the IRA code.
`
`47. The specification consistently describes the IRA code invention as
`
`including an accumulation operation involving connections between multiple
`
`-20-
`
`
`
`
`
`information nodes and a single check node. It distinguishes this from an RA code,
`
`where the accumulation operation involves a single connection to an information
`
`node for each check node. Accordingly, a person of ordinary skill would
`
`understand the term “the second encoding operation including an accumulation
`
`operation in which the L transformed bits generated by the first encoding operation
`
`are accumulated,” in view of the specification, to require addition of a previously
`
`generated parity bit and more than one input bit in order to generate a second parity
`
`bit.
`
`VI. CLAIMS 1 AND 2 ARE NOT ANTICIPATED BY DIVSALAR
`
`48. As explained below, it is my opinion that neither claim 1 nor claim 2
`
`of the ’781 patent is anticipated by Divsalar because Divsalar does not disclose
`
`each and every feature of claim 1. In par