UNITED STATES PATENT AND TRADEMARK OFFICE
_____________________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
_____________________________


APPLE INC.,
Petitioner,

v.

CALIFORNIA INSTITUTE OF TECHNOLOGY,
Patent Owner.
_____________________________

Case IPR2017-00700
Patent No. 7,421,032
_____________________________


DECLARATION OF DR. R. MICHAEL TANNER


CALTECH - EXHIBIT 2001
Apple Inc. v. California Institute of Technology
IPR2017-00700

I, Robert Michael Tanner, declare as follows:

I. QUALIFICATIONS

1. I am Chief Academic Counsel at the Association of Public and Land-grant Universities (APLU), in which position I have served since my retirement in 2015. I had previously served as APLU’s Vice President for Academic Affairs and Chief Academic Officer since 2011.

2. Prior to my work at APLU, I taught as a professor of Computer Science and Electrical Engineering at the University of Illinois at Chicago and at the University of California, Santa Cruz. I also served in Vice Chancellor positions at both institutions, as well as other administrative roles. I have also been a visiting professor at institutions including the California Institute of Technology (“Caltech”), the Massachusetts Institute of Technology, and Stanford University, and I worked in the 1980s as a consultant to companies in the disk and telecommunication industries.

3. In 1981, I authored a paper titled “A Recursive Approach to Low Complexity Codes,” in which I proposed a new method of graphically representing the constraint equations governing the codeword elements in sparse check matrix codes. This graphical representation, called a “Tanner graph,” is named after me. My contributions to the field of coding have led me to be recognized as the founder of the subfield “codes on graphs.”

4. My professional research interests have concentrated on topics involving information and communication theory, and on the theory of algorithms and computational complexity. Significant areas on which I have focused have included: a) development of highly efficient error-correcting codes that are amenable to decoding with ultra-concurrent iterative algorithms; and b) the theoretical and algorithmic reconciliation of block and convolutional codes.

5. I received a Ph.D. in Electrical Engineering with Specialization in Information Theory from Stanford in 1971. I also hold a Master’s Degree from Stanford in Electrical Engineering and a Bachelor’s Degree from Stanford in Electrical Engineering, which I received in 1967 and 1966, respectively.

6. A copy of my Curriculum Vitae, provided as Exhibit 2002, contains further details on my education, experience, publications, patents, and other qualifications to render an expert opinion in this matter.

II. SCOPE OF WORK

7. I was asked by counsel for Caltech to review U.S. Patent No. 7,421,032 (“the ’032 patent”) and related applications, including Provisional Application No. 60/205,095 (“the ’095 provisional application”). I receive $400 per hour for my services. No part of my compensation is dependent on my opinions or on the outcome of this proceeding.

8. I also reviewed the papers “Comparison of constructions of irregular Gallager codes,” by David J. C. MacKay et al. (“MacKay,” Ex. 1002), “Coding Theorems for ‘Turbo-Like’ Codes” by Dariush Divsalar et al. (“Divsalar,” Ex. 1017), and “Low Density Parity Check Codes” by Li Ping et al. (“Ping,” Ex. 1003).

9. I was asked to provide my understanding of whether “irregularity” as discussed in MacKay indicates that the underlying code includes irregularity with respect to message bits (or “information bits”), specifically. In my opinion, it does not.

10. MacKay describes “irregular” LDPC codes, but does not thereby disclose encoding with irregular repetition as in the ’032 patent claims. Ping provides an example that illustrates this point. As described in further detail below, Ping discloses a coding scheme with regular information bit column weights, corresponding to information bits with regular degrees in a Tanner graph, yet nonetheless has nonuniform column weights overall, due to the parity bit columns of its parity check matrix. Thus, Ping’s code could be represented as an “irregular” Tanner graph as MacKay defines the term, yet fails to include irregular repetition of message bits as required by the claims of the ’032 patent.

III. OVERVIEW

11. The ’032 patent discloses a serially-concatenated interleaved convolutional code with an outer code and an inner code. Ex. 1001, Title, Abstract. The ’032 patent describes embodiments in which the outer code irregularly repeats bits in a data block, which are then scrambled by a permuter. Id. at Abstract, 1:60-2:3. Alternatively, the outer coder may be a low-density generator matrix encoder. Id.

12. The specification of the ’032 patent identifies the outer coder as irregular, meaning that it could be a repeater that repeats different bits in a block a different number of times. See id. at 2:50-60.

13. The scrambled, or interleaved, bits are then subjected to a second encoding, which includes one or more accumulators that perform modulo two addition. Id. at 2:4-12, 2:66-3:20.

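For illustration only, the following Python sketch models the kind of encoder chain described in paragraphs 11-13: bits are repeated irregularly, permuted, and then accumulated with modulo-two addition. The block length, repetition profile, and permutation below are my own assumptions chosen to make the example concrete; they are not taken from the ’032 patent.

    import random

    def irregular_repeat(bits, degrees):
        # Repeat bits[i] degrees[i] times: different bits repeated different numbers of times.
        out = []
        for b, d in zip(bits, degrees):
            out.extend([b] * d)
        return out

    def accumulate(bits):
        # Running modulo-two sum: y1 = x1, yi = y(i-1) XOR xi.
        out, acc = [], 0
        for x in bits:
            acc ^= x
            out.append(acc)
        return out

    message = [1, 0, 1, 1, 0]
    degrees = [2, 3, 2, 3, 2]                  # assumed irregular repetition profile
    repeated = irregular_repeat(message, degrees)
    order = list(range(len(repeated)))
    random.Random(0).shuffle(order)            # fixed seed so the permutation is reproducible
    interleaved = [repeated[i] for i in order]
    parity = accumulate(interleaved)
    print(parity)
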
14. Modulo two addition is an exclusive-or operation, often called XOR, and denoted with the symbol ⊕. The modulo-two addition of two bits is shown below:

    A   B   A⊕B
    0   0    0
    0   1    1
    1   0    1
    1   1    0

15. As indicated in the chart above, the modulo-two addition of a value with itself is 0.

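As a small illustration only, the chart above can be reproduced in Python, where the ^ operator performs modulo-two addition on single bits:

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, a ^ b)    # reproduces the chart above
    print(0 ^ 0, 1 ^ 1)           # a bit added to itself is always 0
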
16. The ’032 patent provides an exemplary accumulator as a block coder whose input block [x1, x2, …, xn] and output block [y1, …, yn] are related by the following formula:

y1 = x1
y2 = x1⊕x2
y3 = x1⊕x2⊕x3
. . .

17. This relationship can be simplified as follows:

y1 = x1
y2 = y1⊕x2
y3 = y2⊕x3
. . .

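For illustration only, the following sketch confirms that the two formulations in paragraphs 16 and 17 produce the same output block; the input block is an arbitrary assumption.

    def accumulate_direct(x):
        # Paragraph 16: yi = x1 XOR x2 XOR ... XOR xi.
        y = []
        for i in range(len(x)):
            v = 0
            for j in range(i + 1):
                v ^= x[j]
            y.append(v)
        return y

    def accumulate_recursive(x):
        # Paragraph 17: y1 = x1, yi = y(i-1) XOR xi.
        y, prev = [], 0
        for xi in x:
            prev ^= xi
            y.append(prev)
        return y

    x = [1, 0, 1, 1, 0, 1]                     # arbitrary assumed input block
    assert accumulate_direct(x) == accumulate_recursive(x)
    print(accumulate_direct(x))                # [1, 1, 0, 1, 1, 0]
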
18. A code may be represented as a set of parity checks, which may in turn be represented in a bipartite graph known as a Tanner graph, which I introduced as a graphical representation of the constraint equations governing codeword elements in sparse check matrix codes, including low density parity check (“LDPC”) codes.

19. A Tanner graph contains multiple sets of nodes, with message nodes shown as open circles and check nodes shown as filled circles. Fig. 3 of the ’032 patent includes a Tanner graph, which is reproduced below. In that Tanner graph embodiment, open circles on the left side represent information nodes, each corresponding to an information bit, filled circles near the middle represent check nodes, and open circles on the far right represent parity nodes, each corresponding to a parity bit. Each check node represents a constraint on the information and parity nodes to which it is connected, such that the modulo-two addition of the bit values of the information and parity nodes connected to a particular check node must equal zero.

Tanner Graph from Fig. 3 of the ’032 Patent

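For illustration only, the following sketch expresses the check-node constraint described in paragraph 19: the modulo-two sum of the bits connected to each check node must be zero. The small set of check-node connections and bit values below is an assumption chosen for illustration, not the graph of Fig. 3.

    checks = {
        "v1": ["u1", "u2", "x1"],              # check node -> connected codeword bits
        "v2": ["u2", "u3", "x1", "x2"],
    }
    bits = {"u1": 1, "u2": 1, "u3": 0, "x1": 0, "x2": 1}

    for check, neighbors in checks.items():
        total = 0
        for name in neighbors:
            total ^= bits[name]                # modulo-two addition of connected bit values
        print(check, "satisfied:", total == 0) # both constraints hold for these assumed values
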
20. A similar Tanner graph to that shown above is included in claims 11 and 18 of the ’032 patent. Claim 11 refers to the information bits of the Tanner graph as “message bits,” describing “an encoder configured to receive a collection of message bits and encode the message bits to generate a collection of parity bits in accordance with” a Tanner graph corresponding to the one reproduced above. Claim 18 is directed to a decoder using the same Tanner graph.

21. Each of the message (or information) bits (labeled u1 through uk above) and parity bits (labeled x1 through xr above) can be referred to as a “codeword bit.” In the above Tanner graph, each codeword bit has a node connected by one or more edges to one or more check nodes (labeled v1 through vr). Each codeword bit’s node has a “degree” corresponding to the number of edges connected to that node. For example, on the left side, the information nodes in the group labeled “f2” have degree 2, the information nodes in the group labeled “f3” have degree 3, and the information nodes in the group labeled “fj” have degree j. Likewise, the parity nodes each have degree 2, except for the last parity node, xr, which has degree 1.

22. Because the information nodes on the left have varying degrees, the Tanner graph above demonstrates what is referred to in the ’032 patent as “irregular repetition.” Furthermore, as discussed below, since not all codeword nodes (including both information and parity nodes) have the same degree, the code also exhibits “irregularity” as defined by the MacKay reference.

23. The MacKay reference describes “irregularity” of LDPC codes in terms of a parity check matrix. Ex. 1002 at 1449. In particular, MacKay defines “irregular codes” as codes “whose parity check matrices have nonuniform weight per column.” Id.

24. Tanner graphs and parity check matrices are two ways of representing the same thing. Each column of a parity check matrix corresponds to a codeword bit (either an information bit or a parity bit), and each row of a parity check matrix corresponds to a check node. An entry in a parity check matrix is a 1 if and only if there is an edge between the corresponding codeword node and check node in the code’s Tanner graph; otherwise, the entry is a 0. See Ex. 1003 at 38 (describing parity check matrices). Accordingly, the weight of a column in a parity check matrix corresponds to the degree (that is, the number of edges) of the corresponding information or parity node: if every node in the graph has the same degree, the graph is regular; otherwise, it is an irregular graph under MacKay’s definition.

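For illustration only, the correspondence just described can be checked mechanically: summing each column of a parity check matrix gives the degree of the corresponding codeword node, and the code is irregular in MacKay’s sense whenever those column weights are not all equal. The small matrix below is an arbitrary assumption, not a matrix from any exhibit.

    H = [                                      # rows = check nodes, columns = codeword bits
        [1, 1, 0, 1, 0],
        [0, 1, 1, 0, 1],
        [1, 0, 1, 0, 1],
    ]

    column_weights = [sum(row[j] for row in H) for j in range(len(H[0]))]
    print(column_weights)                      # [2, 2, 2, 1, 2]: the degree of each codeword node
    print(len(set(column_weights)) > 1)        # True: irregular in MacKay's sense
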
25. Accordingly, the term “irregularity” as used by MacKay refers to a Tanner graph in which some codeword nodes have different degrees than others. For example, a Tanner graph in which all information nodes had one degree while all parity nodes had a different degree would be “irregular,” as MacKay uses the term, even though it would still have “regular” repetition of information bits. Similarly, a Tanner graph in which all information nodes had one degree while different parity nodes had different degrees would also be “irregular,” even though it would still have “regular” repetition of information bits.

26. Thus, many codes with only regular repetition (i.e., information nodes of uniform degree) are also “irregular” codes as MacKay defines the term. As I discuss below, the “irregularity” in an irregular graph as MacKay describes it does not represent irregular repetition in a code.

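For illustration only, the following sketch shows a code that is “irregular” as MacKay defines the term while still having regular repetition of information bits. The matrix and the split of its columns into information and parity positions are my own assumptions chosen for illustration.

    H = [
        [1, 1, 0, 1, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [1, 0, 1, 0, 1, 1],
    ]
    k = 3                                      # assume the first k columns are information bits
    weights = [sum(row[j] for row in H) for j in range(len(H[0]))]
    info_weights, parity_weights = weights[:k], weights[k:]

    print(len(set(weights)) > 1)               # True: "irregular" under MacKay's definition
    print(len(set(info_weights)) > 1)          # False: information bits are repeated regularly
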
IV. PING’S PARITY CHECK MATRIX ALREADY HAS NONUNIFORM WEIGHT PER COLUMN

27. The code of Ping is an “irregular” code in the manner discussed in MacKay, in that it has nonuniform weight per column. Ping, however, does not disclose the irregular repetition recited in the ’032 patent claims.

28. Ping discloses a code with a parity check matrix H that is composed of two submatrices, Hp and Hd. H has the following form:

H = [Hp Hd].

Ex. 1003 at 38. Ping defines Hp and Hd in more detail. See Ex. 1003 at 38. Hd is a randomly generated matrix of ones and zeros in which each column has exactly t ones and each row has exactly kt/(n-k) ones, where k is the number of information bits and (n-k) is the number of parity bits. Id. Because Hd has t ones per column, Ping states that it has a “column weight of t.” Id. In particular, Ping discloses that t = 4. Id. at 39. Hd is thus a part of a parity check matrix, and that part has a uniform column weight of 4.

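For illustration only, the following sketch builds a small matrix with the weight profile Ping describes for Hd: every column has exactly t ones and every row has exactly kt/(n-k) ones. Ping’s Hd is randomly generated; the deterministic cyclic placement and the sizes k = 12 and n-k = 6 below are my own assumptions, used only to check the stated weights with t = 4.

    k, r, t = 12, 6, 4                         # r = n - k; requires t <= r and r dividing k*t

    Hd = [[0] * k for _ in range(r)]
    for j in range(k):                         # column j gets ones in t consecutive rows (mod r)
        for i in range(t):
            Hd[(j * t + i) % r][j] = 1

    print([sum(row[j] for row in Hd) for j in range(k)])   # column weights: all t = 4
    print([sum(row) for row in Hd])                        # row weights: all k*t/(n-k) = 8
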
29. Ping sets out Hp as follows:

    Hp = [ 1                0 ]
         [ 1  1               ]
         [    1  1            ]
         [       ⋱  ⋱        ]
         [ 0             1  1 ].

Id. at 38. The blank spaces in the upper right and lower left corners are filled with zeros. Hp has two ‘1s’ in the left n-k-1 columns, while the last column has one ‘1.’ This pattern occurs because Hp has zeros everywhere except the diagonal and the entries below the diagonal. Thus, each column has one ‘1’ on the diagonal and one ‘1’ below the diagonal; the last column does not have an entry below the diagonal, so it has just one ‘1’. Hp thus has column weights of 2 and 1, both of which are different from the column weight of Hd. This is illustrated below:

30. The parity check matrix H results from the combination of Hp and Hd. H has k columns with weight 4, one column with weight 1, and (n-k-1) columns with weight 2, as shown below:

In other words, Ping discloses a parity check matrix with different numbers of ones per column. Thus, Ping’s parity check matrix has “nonuniform weight per column” despite only having uniform weight per column in submatrix Hd.

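For illustration only, the following sketch combines a dual-diagonal Hp with an Hd of uniform column weight 4 and counts the column weights of H = [Hp Hd]. The sizes k = 12 and n-k = 6 and the deterministic stand-in for Ping’s randomly generated Hd are my own assumptions, not Ping’s actual matrices.

    k, r, t = 12, 6, 4                         # r = n - k

    # Hp: ones on the diagonal and immediately below it; the last column has a single one.
    Hp = [[1 if j == i or j == i - 1 else 0 for j in range(r)] for i in range(r)]

    # Stand-in for Ping's randomly generated Hd (column weight t, row weight k*t/r).
    Hd = [[0] * k for _ in range(r)]
    for j in range(k):
        for i in range(t):
            Hd[(j * t + i) % r][j] = 1

    H = [Hp[i] + Hd[i] for i in range(r)]      # H = [Hp Hd]
    col_weights = [sum(row[j] for row in H) for j in range(r + k)]
    print(col_weights)                         # [2, 2, 2, 2, 2, 1] followed by twelve 4s
    print(len(set(col_weights)) > 1)           # True: nonuniform weight per column
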
31. If one were to use the correspondence between Tanner graphs and parity check matrices described above (see ¶ 24), Ping’s parity check matrix could be drawn as an equivalent Tanner graph. Such a Tanner graph would be “irregular” in the sense of MacKay because different nodes would have different degrees. But such a graph would fail to include the irregular repetition of message bits shown in Fig. 3 of the ’032 patent. This is because “nonuniform weight per column” and “irregular repetition of information bits” are not the same thing.

V. PING’S CODE ALREADY EXHIBITS NONUNIFORM ROW WEIGHT

32. In addition to the nonuniform column weight discussed in the previous section, Ping’s parity check matrix H also exhibits nonuniform row weight.

33. Ping describes Hd as having a uniform row weight of kt/(n-k). Because each 1 in Hd corresponds to an edge between an information node and a check node in a Tanner graph, the uniform row weight of Hd indicates that in a Tanner graph of Ping’s code, the number of information bits input into the check nodes (vi) is constant (each check node connects to kt/(n-k) information bits).

34. However, the same is not true of Hp. The first row of Hp has only a single 1, whereas each subsequent row has two 1s. Accordingly, the complete parity check matrix H has nonuniform row weight.

35. The nonuniform row weight of Ping’s parity check matrix H can clearly be seen by counting the number of 1s in each row. The row weight of Hd is given by Ping: each row will have a weight of kt/(n-k). Putting that together with the row weights of Hp discussed above, we see that the row weights of H will be kt/(n-k)+1 (for the first row) and kt/(n-k)+2 (for subsequent rows), as illustrated below:

The weight of the first row differs by 1 from the weights of the subsequent rows; accordingly, Ping’s parity check matrix has nonuniform row weight.

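For illustration only, the row weights can be counted in the same way, using the assumed sizes from the sketches above (k = 12, n-k = 6, t = 4, so kt/(n-k) = 8):

    k, r, t = 12, 6, 4                         # r = n - k, so k*t/(n-k) = 8

    Hp = [[1 if j == i or j == i - 1 else 0 for j in range(r)] for i in range(r)]
    Hd = [[0] * k for _ in range(r)]
    for j in range(k):
        for i in range(t):
            Hd[(j * t + i) % r][j] = 1
    H = [Hp[i] + Hd[i] for i in range(r)]

    row_weights = [sum(row) for row in H]
    print(row_weights)                         # [9, 10, 10, 10, 10, 10] = [8 + 1, 8 + 2, ...]
    print(len(set(row_weights)) > 1)           # True: nonuniform row weight
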
36. This variable row weight corresponds to MacKay’s disclosure of a parity check matrix with “nonuniform row weight.” Ex. 1002 at 1449. Accordingly, Ping’s parity check matrix already possesses the property of nonuniform row weight, as discussed by MacKay, despite having a constant number of information bit inputs into the check nodes (vi) of the corresponding Tanner graph.

VI. CONCLUDING STATEMENTS

37. In signing this declaration, I understand that the declaration will be filed as evidence in a contested case before the Patent Trial and Appeal Board of the United States Patent and Trademark Office. I acknowledge that I may be subject to cross-examination in this case and that cross-examination will take place within the United States. If cross-examination is required of me, I will appear for cross-examination within the United States during the time allotted for cross-examination.

38. I declare that all statements made herein of my knowledge are true, and that all statements made on information and belief are believed to be true, and that these statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both, under Section 1001 of Title 18 of the United States Code.

Dated: May 8, 2017

By: / R. Michael Tanner /
R. Michael Tanner, Ph.D.