Trials@uspto.gov
571-272-7822

Paper 53
Entered: March 1, 2022

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

INTEL CORPORATION and XILINX, INC.,¹
Petitioner,

v.

FG SRC LLC,
Patent Owner.

IPR2020-01449
Patent 7,149,867 B2

Before KALYAN K. DESHPANDE, GREGG I. ANDERSON, and
KARA L. SZPONDOWSKI, Administrative Patent Judges.

SZPONDOWSKI, Administrative Patent Judge.

JUDGMENT
Final Written Decision
Determining All Challenged Claims Unpatentable
Denying Patent Owner’s Motion to Amend
35 U.S.C. § 318(a)

¹ Xilinx, Inc. filed a motion for joinder and a petition in IPR2021-00633, which were granted, and, therefore, Xilinx, Inc. has been joined as petitioner in this proceeding.
I. INTRODUCTION
We instituted an inter partes review of claims 1–19 of U.S. Patent 7,149,867 B2 (Ex. 1001, “the ’867 patent”), in response to a Petition (Paper 1, “Pet.”) filed by Intel Corporation (“Petitioner”). Paper 13 (“Dec.”). During the trial, FG SRC LLC (“Patent Owner”) filed a Response (Paper 34, “PO Resp.”), Petitioner filed a Reply (Paper 40, “Reply”), and Patent Owner filed a Sur-reply (Paper 44, “Sur-reply”).

Patent Owner also filed a Motion to Amend the claims of the ’867 patent. Paper 26. After considering Petitioner’s Opposition to the Motion to Amend (Paper 36), we issued Preliminary Guidance on Patent Owner’s Motion (Paper 38). Patent Owner subsequently filed a Revised Motion to Amend the claims of the ’867 patent that includes proposed substitute claims 20–38. Paper 41 (“Mot. Amend”). Petitioner opposed Patent Owner’s Revised Motion to Amend (Paper 45, “Opp. Amend”), Patent Owner replied (Paper 49, “Reply Amend”), and Petitioner filed a Sur-reply (Paper 50, “Sur-reply Amend”).

An oral hearing was held on January 6, 2022, and a copy of the transcript was entered into the record. Paper 52 (“Tr.”).

We have jurisdiction under 35 U.S.C. § 6. This Decision is a Final Written Decision under 35 U.S.C. § 318(a) as to the patentability of the claims on which we instituted trial. Based on the complete record, Petitioner has shown, by a preponderance of the evidence, that claims 1–19 of the ’867 patent are unpatentable. We also deny Patent Owner’s Revised Motion to Amend, because Patent Owner has not met its burden in asserting that proposed substitute claims 20–38 have written description support in the original application that issued as the ’867 patent.
II. BACKGROUND

A. Real Parties in Interest

Petitioner identifies Intel Corporation as the sole real party in interest. Pet. 2. Patent Owner identifies FG SRC LLC as the sole real party in interest. Paper 4, 2.
B. Related Matters

The parties advise that the ’867 patent is the subject of the following district court litigations:

FG SRC LLC v. Intel Corporation, 6:20-cv-00315-ADA (W.D. Tex.), filed April 24, 2020 (“the co-pending district court litigation”);

FG SRC LLC v. Xilinx, Inc., 1:20-cv-00601-LPS (D. Del.), filed April 30, 2020; and

SRC Labs, LLC et al. v. Amazon Web Services, Inc., et al., 2:18-cv-00317-JLR (W.D. Wash.), filed February 26, 2018.

Pet. 2; Paper 4, 2. Petitioner also advises that the ’867 patent was the subject of IPR2019-00103 (institution denied on May 10, 2019). Pet. 2.
C. The ’867 Patent (Ex. 1001)

The ’867 patent issued from Application No. 10/869,200 filed June 16, 2004, and claims the benefit of Provisional Application No. 60/479,339, filed June 18, 2003. Ex. 1001, codes (21), (22), (60). The ’867 patent is titled “System and Method of Enhancing Efficiency and Utilization of Memory Bandwidth in Reconfigurable Hardware” and is generally directed to “enhancing the efficiency and utilization of memory bandwidth in reconfigurable hardware” and “implementing explicit memory hierarchies in reconfigurable processors that make efficient use of off-board, on-board, on-chip storage and available algorithm locality.” Id. at code (57), 1:15–24.
According to the ’867 patent, there was a growing need to develop improved memory hierarchies that limited overhead of a memory hierarchy without also reducing bandwidth efficiency and utilization. Ex. 1001, 3:57–60. The ’867 patent describes a system including a memory hierarchy and a reconfigurable processor that includes a data prefetch unit. Id. at 4:4–10, 5:60–62, 6:9–13, 7:34–48. The ’867 patent states that a “Reconfigurable Processor” is “a computing device that contains reconfigurable components such as FPGAs [(field programmable gate arrays)] and can, through reconfiguration, instantiate an algorithm as hardware.” Id. at 5:26–29. The ’867 patent states that a “Data prefetch Unit” is “a functional unit [a set of logic that performs a specific operation] that moves data between members of a memory hierarchy [a collection of memories],” where such “movement may be as simple as a copy, or as complex as an indirect indexed strided copy into a unit stride memory.” Id. at 5:34–43.
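The two extremes named in that definition can be pictured with a minimal software sketch (assumptions: each memory member is modeled as a plain C array, and the function names are invented for illustration; the patent defines the unit as configurable hardware, not as these routines):

    #include <stddef.h>

    /* Illustrative sketch only: each "memory" is modeled as a C array, and
     * these names are hypothetical, not drawn from the '867 patent. */

    /* Simplest movement in the definition: a straight copy from one member
     * of the memory hierarchy to another. */
    static void prefetch_copy(double *dst, const double *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }

    /* More complex movement: an indirect, indexed, strided copy. Elements
     * are located through an index array, spaced by a fixed stride in the
     * source, and written contiguously into the destination. */
    static void prefetch_indexed_strided(double *dst, const double *src,
                                         const size_t *index, size_t stride,
                                         size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[index[i] * stride];  /* gather into unit-stride dst */
    }

In the second routine the destination is traversed with unit stride even though the source accesses are scattered, which is the sense in which the definition speaks of a copy “into a unit stride memory.”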
Figure 1 of the ’867 patent, reproduced below, shows a reconfigurable processor (RP) 100 of the claimed invention. Id. at 4:38–40.
Figure 1 depicts a reconfigurable processor (RP) 100. Id. at 4:38–40.
Figure 1 depicts reconfigurable processor 100, which “may be implemented using field programmable gate arrays (FPGAs) or other reconfigurable logic devices, that can be configured and reconfigured to contain functional units and interconnecting circuits, and a memory hierarchy comprising on-board memory banks 104, on-chip block RAM 106, registers wires, and a connection 108 to external memory.” Id. at 6:5–11. In addition, “[o]n-chip reconfigurable components 102 create memory structures such as registers, FIFOs, wires and arrays using block RAM.” Id. at 6:11–14. “Dual-ported memory 106 is shared between on-chip reconfigurable components 102.” Id. at 6:14–15. “The reconfigurable processor 100 also implements user-defined computational logic . . . constructed by programming an FPGA to implement a particular interconnection of computational functional units.” Id. at 6:15–19. “In a particular implementation, a number of RPs 100 are implemented within a memory subsystem of a conventional computer, such as on devices that are physically installed in dual inline memory module (DIMM) sockets of a computer.” Id. at 6:19–23. “In this manner the RPs 100 can be accessed by memory operations and so coexist well with a more conventional hardware platform.” Id. at 6:23–25. The ’867 patent explains that “[u]nlike conventional static hardware platforms . . . the memory hierarchy provided in a RP 100 is reconfigurable” and “through the use of data access units and associated memory hierarchy components, computational demands and memory bandwidth can be matched.” Id. at 7:17–22.

One or more data prefetch units are used to improve the memory hierarchy and bandwidth efficiency and utilization. Id. at 3:58–60, 8:62–65. Fig. 4 of the ’867 patent, reproduced below, depicts a logic block 300 with an addition of a data prefetch unit 401. Id. at 4:44–46.
Figure 4 illustrates a logic block 300 (a block composed of computational functional units capable of taking data and producing results with each clock pulse) with the addition of a data prefetch unit 401. Id. at 7:6–8, 7:34–35.
Logic block 300 includes computational functional units (computational logic) 301, 302, and 303, a control, and data access functional units 403 that present data to computational logic 301, 302, and 303. Id. at 7:25–48, Fig. 4. Data prefetch unit 401 moves data from one member of the memory hierarchy 305 to another 308 (a block RAM memory). Id. at 7:34–37, Fig. 4. Data prefetch unit 401 operates “independently of other functional units 301, 302, and 303 and can therefore operate prior to, in parallel with, or after computational logic.” Id. at 7:37–40. In addition, data prefetch unit 401 may be “operated independently of logic block 300 that uses prefetched data.” Id. at 7:45–48. Data prefetch unit 401 deposits data into the memory hierarchy, where computational logic 301, 302, and 303 can access it through data access units. Id. at 7:42–44.
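As a rough software analogy (not the patent’s hardware, and using invented names and a made-up block size), the decoupling described above can be pictured as a staging loop in which the fetch of the next block is issued before the computation on the current block, so the two activities can overlap:

    #include <stddef.h>
    #include <string.h>

    #define BLOCK 256  /* hypothetical size of the block-RAM staging buffer */

    /* Stand-ins for a memory-hierarchy member and the computational logic. */
    static void fetch_block(double *buf, const double *mem, size_t blk)
    {
        memcpy(buf, &mem[blk * BLOCK], BLOCK * sizeof(double));
    }

    static double compute_block(const double *buf)
    {
        double acc = 0.0;
        for (size_t i = 0; i < BLOCK; i++)
            acc += buf[i];
        return acc;
    }

    /* Double-buffered loop: block b+1 is staged while block b is consumed,
     * mirroring a prefetch unit that runs ahead of the logic block. */
    static double process(const double *mem, size_t nblocks)
    {
        double ping[BLOCK], pong[BLOCK];
        double *cur = ping, *next = pong;
        double total = 0.0;

        if (nblocks == 0)
            return total;
        fetch_block(cur, mem, 0);               /* prefetch runs ahead */
        for (size_t b = 0; b < nblocks; b++) {
            if (b + 1 < nblocks)
                fetch_block(next, mem, b + 1);  /* stage the next block */
            total += compute_block(cur);        /* consume the current one */
            double *tmp = cur; cur = next; next = tmp;
        }
        return total;
    }

In hardware the two stages would genuinely run concurrently; the sequential loop above only preserves the data flow being described.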
The ’867 patent explains:

    An important feature of the present invention is that many types of data prefetch units can be defined so that the prefetch hardware can be configured to conform to the needs of the algorithms currently implemented by the computational logic. The specific characteristics of the prefetch can be matched with the needs of the computational logic and the format and location of data in the memory hierarchy.

Id. at 7:49–55. The ’867 patent provides examples of configuring a data prefetch unit depending on the needs of the computational logic. Id. at 7:52–62, 8:3–21, Figs. 9A–9B.
D. Illustrative Claims

Among the challenged claims, claims 1, 9, and 13 are independent. Independent claims 1, 9, and 13 are reproduced below, with brackets noting Petitioner’s identifiers.

    1. [preamble] A reconfigurable processor that instantiates an algorithm as hardware comprising:
    [1(a)] a first memory having a first characteristic memory bandwidth and/or memory utilization; and
    [1(b)] a data prefetch unit coupled to the first memory, [1(c)] wherein the data prefetch unit retrieves only computational data required by the algorithm from a second memory of second characteristic memory bandwidth and/or memory utilization and places the retrieved computational data in the first memory [1(d)] wherein the data prefetch unit operates independent of and in parallel with logic blocks using the computional [sic] data, and [1(e)] wherein at least the first memory and data prefetch unit are configured to conform to needs of the algorithm, and [1(f)] the data prefetch unit is configured to match format and location of data in the second memory.
    9. [preamble] A reconfigurable hardware system, comprising:
    [9(a)] a common memory; and
    [9(b)] one or more reconfigurable processors that can instantiate an algorithm as hardware coupled to the common memory, [9(c)] wherein at least one of the reconfigurable processors includes a data prefetch unit to read and write only data required for computations by the algorithm between the data prefetch unit and the common memory [9(d)] wherein the data prefetch unit operates independent of and in parallel with logic blocks using the computational data, and [9(e)] wherein the data prefetch unit is configured to conform to needs of the algorithm and [9(f)] match format and location of data in the common memory.

    13. [preamble] A method of transferring data comprising:
    [13(a)] transferring data between a memory and a data prefetch unit in a reconfigurable processor; and
    [13(b)] transferring the data between a computational unit and a data access unit, [13(c)] wherein the computational unit and the data access unit, and the data prefetch unit are configured to conform to needs of an algorithm implemented on the computational unit and transfer only data necessary for computations by the computational unit, and [13(d)] wherein the prefetch unit operates independent of and in parallel with the computational unit.

Ex. 1001, 12:39–54; 13:13–26; 14:1–11.
E. Evidence

Petitioner relies on the following references (see Pet. 4–5).

Reference  Exhibit  Patent/Printed Publication
Zhang      1003     Xingbin Zhang et al., Architectural Adaptation of Application-Specific Locality Optimizations, published in the Proceedings of the International Conference on Computer Design - VLSI in Computers and Processors (IEEE, October 12–15, 1997), 150–156
Gupta      1004     Rajesh Gupta, Architectural Adaptation in AMRM Machines, Proceedings of the IEEE Computer Society Workshop on VLSI 2000 (IEEE, April 27–28, 2000), 75–79
Chien      1005     Andrew A. Chien et al., MORPH: A System Architecture for Robust High Performance Using Customization (An NSF 100 TeraOps Point Design Study), Proceedings of Frontiers ’96 – The Sixth Symposium on the Frontiers of Massively Parallel Computing (IEEE, October 27–31, 1996), 336–345
In addition, Petitioner relies on the Declarations of Rajesh K. Gupta, Ph.D. (Exs. 1010, 1030), Declarations of Jacob Robert Munford (Exs. 1012, 1031), Declaration of Gordon MacPherson (Ex. 1027), Declaration of Eileen D. McCarrier (Ex. 1028), Declaration of Austin Schnell (Ex. 1029), and Declaration of Dr. Stanley Shanfield (Ex. 1006).

Patent Owner relies on the Declaration of Dr. William Mangione-Smith (Ex. 2028) and Declaration of Ryan Kastner, Ph.D. (Ex. 2010).

Deposition transcripts have been entered into the record for Dr. Gupta (Ex. 1039), Mr. MacPherson (Ex. 1040), Dr. Shanfield (Exs. 1043, 2029), and Dr. Mangione-Smith (Ex. 1044).
F. Prior Art and Asserted Grounds

Petitioner asserts that claims 1–19 are unpatentable on the following grounds (Pet. 5):

Claims Challenged    35 U.S.C. §²    References
1, 2, 4–8, 13–19     103             Zhang, Gupta
3, 9–12              103             Zhang, Gupta, Chien

² The Leahy-Smith America Invents Act (“AIA”), Pub. L. No. 112-29, 125 Stat. 284 (2011), amended 35 U.S.C. § 103, effective March 16, 2013. Because the application from which the ’867 patent issued was filed before this date, the pre-AIA version of § 103 applies.
III. ANALYSIS

A. Legal Standards
A claim is unpatentable under 35 U.S.C. § 103(a) if “the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains.” KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). The question of obviousness is resolved on the basis of underlying factual determinations, including: (1) the scope and content of the prior art; (2) any differences between the claimed subject matter and the prior art; (3) the level of skill in the art; and (4) objective evidence of nonobviousness, i.e., secondary considerations. See Graham v. John Deere Co., 383 U.S. 1, 17–18 (1966).

A patent claim “is not proved obvious merely by demonstrating that each of its elements was, independently, known in the prior art.” KSR, 550 U.S. at 418. An obviousness determination requires finding “both ‘that a skilled artisan would have been motivated to combine the teachings of the prior art references to achieve the claimed invention, and that the skilled artisan would have had a reasonable expectation of success in doing so.’” Intelligent Bio-Sys., Inc. v. Illumina Cambridge Ltd., 821 F.3d 1359, 1367–68 (Fed. Cir. 2016) (citation omitted); see KSR, 550 U.S. at 418. Further, an assertion of obviousness “cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.” KSR, 550 U.S. at 418; In re NuVasive, Inc., 842 F.3d 1376, 1383 (Fed. Cir. 2016) (a finding of a motivation to combine “must be supported by a ‘reasoned explanation’” (citation omitted)).
B. Level of Ordinary Skill in the Art

Petitioner asserts a person of ordinary skill in the art “would have had an undergraduate degree in electrical engineering or related field with at least three years of experience in computer processor architecture and FPGAs, a master’s degree with two or more years of experience in those fields, or an equivalent combination of education and experience.” Pet. 14 (citing Ex. 1006 ¶ 67).

Patent Owner does not dispute Petitioner’s proposed level of skill. See generally PO Resp.

We find Petitioner’s proposal is consistent with the level of ordinary skill in the art reflected by the prior art of record, and, therefore, adopt Petitioner’s proposed level of ordinary skill in the art for purposes of this Decision. See Okajima v. Bourdeau, 261 F.3d 1350, 1355 (Fed. Cir. 2001).
C. Claim Construction

We construe each claim “in accordance with the ordinary and customary meaning of such claim as understood by one of ordinary skill in the art and the prosecution history pertaining to the patent,” the same standard used to construe the claim in a civil action. 37 C.F.R. § 42.100(b) (2020).

1. Agreed Constructions
The parties agree to the construction of the following terms:

Claim term          Proposed Construction
Reconfigurable      a computing device that contains reconfigurable components
Processor           such as FPGAs and can, through reconfiguration, instantiate
                    an algorithm as hardware
Data Prefetch Unit  a functional unit that moves data between members of a
                    memory hierarchy. The movement may be as simple as a copy,
                    or as complex as an indirect indexed strided copy into a
                    unit stride memory
Data Access Unit    a functional unit that accesses a component of a memory
                    hierarchy, and delivers data directly to the computational
                    logic
Functional Unit³    a set of logic that performs a specific operation. The
                    operation may for example be arithmetic, logical, control,
                    or data movement. Functional units are used as building
                    blocks of reconfigurable logic
Memory Hierarchy⁴   a collection of memories

Pet. 14–16 (citing Ex. 1001; Ex. 1006); PO Resp. 29–30. We adopt these agreed constructions for purposes of this Decision.

³ Petitioner states that although not directly recited in the claims, this term is used in the ’867 patent’s definition of “data prefetch unit” and “data access unit.” Pet. 15; Ex. 1001, 5:40–46.

⁴ Petitioner states although not directly recited in the claims, this term is used in the ’867 patent’s definition of “data prefetch unit” and “data access unit.” Pet. 15; Ex. 1001, 5:40–46.
Patent Owner also states that in the co-pending district court litigation, the parties have agreed to the following constructions:

Claim term                     Agreed Construction
(Preamble) A reconfigurable    Preamble is limiting
processor that instantiates
an algorithm as hardware
Common Memory                  an external memory shared by processors in a
                               multiprocessor system
Computational Unit             a functional unit of a reconfigurable processor
                               that performs a computation
the Data Prefetch Unit         the data prefetch unit receives the results of
Receives Processed Data        the algorithm
Configured To Conform to       configured in reconfigurable logic to conform
Needs of The Algorithm         to the needs of the algorithm
Reconfigurable Logic           reconfigurable logic is composed of an
                               interconnection of functional units, control,
                               and storage that implements an algorithm and
                               can be loaded into a Reconfigurable Processor

PO Resp. 30. We further adopt these agreed constructions for purposes of this Decision.
2. Patent Owner’s Proposed Constructions

Patent Owner proposes construction for two terms.
a) “retrieves only computational data required by the algorithm from a second memory . . . and places the retrieved computational data in the first memory” (limitation 1(c))
Patent Owner argues that this limitation should be construed as “retrieves from a second memory that computational data which is required by the algorithm and no other computational data … and places the retrieved computational data in the first memory.” PO Resp. 31–32 (emphasis added).
Patent Owner contends that “[t]he plain meaning requires that no superfluous computational data is transferred to the first memory.” Id. at 31. In support of its proposed construction, Patent Owner identifies column 9, lines 1–5 of the ’867 patent, which states “an important feature of the present invention is the ability to implement various kinds or styles of prefetch units to meet the needs of a particular algorithm being implemented by computational elements.” Id. Patent Owner also relies on column 9, lines 8–10, which states “in most cases the function being implemented by components 301 would change and therefore alter the decision as to which prefetch strategy is most appropriate.” Id. at 31–32. Patent Owner also relies on column 9, lines 35–40, which states “[g]ains are made by delivering only requested data from transfer buffer 1305 (not the remainder of a data block as in cache line oriented systems) by eliminating the need to transfer an index array either to the processor or to the memory controller.” Id. at 32.
Petitioner contends that Patent Owner does not rely on this construction to overcome the prior art, and Patent Owner’s expert does not rely on (or even recite) this construction. Reply 2 (citing PO Resp. 40–44; Ex. 1044, 28:18–29:5, 32:10–14, 107:19–108:7). Petitioner also argues that Patent Owner’s proposed construction is unsupported by intrinsic evidence. Id. at 3–4.
Patent Owner has not explained the impact of this proposed claim construction on its arguments supporting patentability of the claims. We disagree with Patent Owner that its proposed construction is the “plain and ordinary meaning,” because the claim language recites “retrieves only computational data required by the algorithm from a second memory,” and does not require the additional limitation “and no other computational data.” Under these circumstances, we decline to rewrite the claim language to include additional words that are not present in the claim, e.g., “and no other computational data.” See Hoganas AB v. Dresser Indus., Inc., 9 F.3d 948, 950 (Fed. Cir. 1993) (quoting E.I. Du Pont de Nemours & Co. v. Phillips Petroleum Co., 849 F.2d 1430, 1433 (Fed. Cir. 1988)) (“It is improper for a court to add ‘extraneous’ limitations to a claim, that is, limitations added ‘wholly apart from any need to interpret what the patentee meant by particular words or phrases in the claim.’”).
Moreover, in light of the parties’ arguments, we determine that we need not expressly construe this term. See Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co., 868 F.3d 1013, 1017 (Fed. Cir. 2017) (“[W]e need only construe terms ‘that are in controversy, and only to the extent necessary to resolve the controversy.’” (quoting Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999))). We agree with Petitioner that Patent Owner does not rely on this construction to overcome the prior art; for example, Dr. Mangione-Smith testified that he had not been given or applied any particular claim construction. See Ex. 1044, 28:18–32:14; 107:19–109:4; see PO Resp. 40–44; Ex. 2028 ¶ 74.
b) “read and write only data required for computations by the algorithm between the data prefetch unit and the common memory” (claim 9)
Patent Owner proposes that this term should be construed as “read, using the data prefetch unit, only data required for computations by the algorithm from common memory and write, using the data prefetch unit, only data required for computations by the algorithm.” PO Resp. 33. In support of its construction, Patent Owner states that “[t]his term should be accorded its plain and ordinary meaning.” Id.
Petitioner contends that Patent Owner does not rely on this construction to overcome the prior art, and Patent Owner’s expert does not rely on (or even recite) this construction. Reply 2 (citing PO Resp. 40–44; Ex. 1044, 28:18–29:5, 32:10–14, 107:19–108:7). Petitioner also argues that Patent Owner’s proposed construction is unsupported by intrinsic evidence. Id. at 3–4.
Patent Owner has not explained the impact of this proposed claim construction on its arguments supporting patentability of the claims. Moreover, Patent Owner does not explain why we should adopt this proposed construction, or provide any supporting intrinsic evidence. Moreover, given that Patent Owner relies on the same arguments for claim 9 as it presented for claim 1 (which does not recite this limitation), we agree with Petitioner that Patent Owner does not rely on this construction in order to overcome the prior art. See PO Resp. 53–55; see, e.g., Ex. 1044, 28:18–32:14 (Dr. Mangione-Smith testifying that he had not been given or applied any particular claim construction). Accordingly, we determine that we need not expressly construe this term to resolve the parties’ dispute.
D. The Asserted Prior Art References

1. Zhang (Ex. 1003)
Zhang is a paper published by the Institute of Electrical and Electronics Engineers, Inc. (hereafter “IEEE”) as part of the Proceedings of the International Conference on Computer Design - VLSI in Computers and Processors. Ex. 1003, 1–2, 4.⁵ Zhang describes “a machine architecture that integrates programmable logic into key components of the system with the goal of customizing architectural mechanisms and policies to match an application,” using application-specific hardware assists. Id. at 12 (Abstract). Zhang “demonstrate[s] that application-specific hardware assists and policies can provide substantial improvements in performance on a per application basis.” Id. Zhang’s architecture “integrates small blocks of programmable logic into key elements of a baseline architecture, including processing elements, components of the memory hierarchy, and the scalable interconnect, to provide architectural adaptation—the customization of architectural mechanisms and policies to match an application.” Id. at 13. Zhang explains that architectural adaptation provides mechanisms for application-specific hardware assists to overcome rigid architectural choices that do not work well across different applications, as “integration of programmable logic with memory components enables application-specific locality optimizations.” Id. at 13–14.

⁵ Citations to Exhibit 1003 are to the page numbering provided by Petitioner.
Zhang’s architecture is depicted in Figure 2 below:
Figure 2 of Zhang depicts programmable logic integrated with CPU, cache, network interface, and memory.
Zhang presents “two case studies of architectural adaption using application-specific knowledge to enhance latency tolerance and efficiently utilize network bisection on multiprocessors.” Ex. 1003, 14. The first case study uses architectural adaptation for prefetching and exploits application access pattern information. Id. at 15. Figure 4 of Zhang, reproduced below, depicts a prefetcher implementation for Zhang’s first case study, using programmable logic integrated with the L1 cache. Id.
Figure 4 shows a prefetcher implementation using programmable logic integrated with the L1 cache.
The prefetcher in Figure 4 requires two pieces of application-specific information: address ranges and memory layout of the target data structures. Id. at 15. The address range, which is application dependent, is needed to indicate memory bounds where prefetching is likely to be useful. Id. Prefetching can be enabled or disabled, and is triggered only by read misses. Id. Once the prefetcher is enabled, it determines what and when to prefetch by checking virtual addresses of cache lookups to check whether a matrix element is being accessed. Id. In one example, records spanning multiple cache lines are targeted to “prefetch[] all fields of a matrix element structure whenever some field of the element is accessed.” Id. Because each matrix element (which is padded to 64 bytes) spans two cache lines (in a cache with cache line size of 32 bytes), the prefetcher generates an additional L2 cache lookup address from the given physical address that prefetches the other cache line not yet referenced. Id. In a second example, particular pointer fields (those likely to be traversed when their parent structures are accessed) are targeted. Id. at 15–16. For example, in a sparse matrix-vector multiply, the record pointed to by the nextRow field is accessed close in time with the current matrix element. Id. A prefetcher generates an additional address after the initial cache miss is satisfied using the nextRow pointer value embedded in the data returned by the L2 cache. Id. at 16.
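The address arithmetic behind the two prefetch triggers just described can be sketched in a few lines of C (a loose illustration, not Zhang’s hardware; the record layout and every name other than nextRow are hypothetical, and the 32-byte line and 64-byte element sizes are the ones given above):

    #include <stdint.h>

    #define LINE 32u   /* cache line size in Zhang's example */
    #define ELEM 64u   /* matrix element padded to 64 bytes, i.e. two lines */

    /* Hypothetical matrix-element record; only nextRow is named in the
     * summary above, the remaining fields are placeholders. On typical
     * ABIs the explicit pad brings the record to 64 bytes. */
    struct elem {
        double       value;
        struct elem *nextRow;   /* chased for the second prefetch trigger */
        uint32_t     column;
        char         pad[ELEM - sizeof(double) - sizeof(struct elem *)
                         - sizeof(uint32_t)];
    };

    /* Prefetching is considered only inside application-supplied bounds. */
    static int in_range(uintptr_t addr, uintptr_t lo, uintptr_t hi)
    {
        return addr >= lo && addr < hi;
    }

    /* Trigger 1: a read miss inside the bounds on one half of a 64-byte
     * element also fetches the element's other 32-byte line; returns 0
     * when no prefetch should be issued. */
    static uintptr_t companion_line(uintptr_t miss_addr,
                                    uintptr_t lo, uintptr_t hi)
    {
        if (!in_range(miss_addr, lo, hi))
            return 0;
        uintptr_t line = miss_addr & ~(uintptr_t)(LINE - 1);  /* line base */
        uintptr_t elem = miss_addr & ~(uintptr_t)(ELEM - 1);  /* elem base */
        return (line == elem) ? elem + LINE : elem;  /* unreferenced half */
    }

    /* Trigger 2: once the missed data returns, the embedded nextRow pointer
     * supplies the address of the next record to prefetch. */
    static uintptr_t next_row_target(const struct elem *returned)
    {
        return (uintptr_t)returned->nextRow;
    }

How a real design would fold the bounds check and enable/disable control into the trigger logic is left open here; the sketch only captures the address generation the passage describes.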
Zhang’s second case study uses a sparse matrix-matrix multiply routine to show architectural adaptation that improves data reuse and reduces data traffic between the memory unit and the processor. Id. at 15–16. “The architectural customization aims to send only used fields of matrix elements during a given computation to reduce bandwidth requirement using dynamic scatter and gather.” Id. at 16. “The two main ideas are prefetching of whole rows or columns using pointer chasing in the memory module and packing/gathering of only the used fields of the matrix element structure.” Id. Figure 5 of Zhang, reproduced below, illustrates an architecture including a cache and a main memory module, and containing two units of logic, an address translation logic and a gather logic.
Figure 5 shows a scatter and gather logic using two units of logic, an address translation logic and a gather logic. Id. at 16.
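The packing/gathering idea summarized above can be pictured with a small C sketch (an illustration only, not Zhang’s logic; the record layout and field names are hypothetical, and the “used” fields are assumed to be the value and column index consumed by the multiply):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical full matrix-element record as it sits in main memory;
     * a given computation may touch only a couple of these fields. */
    struct elem {
        double       value;
        uint32_t     col;
        struct elem *nextInRow;   /* chased to walk a whole row */
        /* ...other fields not used by the multiply... */
    };

    /* Packed form carrying only the fields the multiply actually consumes. */
    struct packed {
        double   value;
        uint32_t col;
    };

    /* Memory-side gather: chase the row's pointer chain and pack only the
     * used fields, so the processor sees a dense stream of small records
     * instead of whole elements and their unused fields. */
    static size_t gather_row(const struct elem *head, struct packed *out,
                             size_t max)
    {
        size_t n = 0;
        for (const struct elem *e = head; e != NULL && n < max;
             e = e->nextInRow) {
            out[n].value = e->value;
            out[n].col   = e->col;
            n++;
        }
        return n;   /* number of packed entries produced */
    }

The bandwidth saving is simply the ratio of the packed record size to the full element size, which is the effect the passage attributes to the dynamic scatter and gather.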
2. Gupta (Ex. 1004)
Gupta is a paper published by IEEE as part of the Proceedings of the IEEE Computer Society Workshop on VLSI 2000. Ex. 1004, 1, 3, 6.⁶ Gupta describes an Adaptive Memory Reconfiguration Management (AMRM) prototype architecture that implements a board-level prototype designed to “simulate a range of memory hierarchies for applications running on a host processor” and “supports configurability of [a] cache memory via an on-board FPGA-based memory controller.” Id. at 8–10. The goals of Gupta’s AMRM prototype are that “it be adaptable to many different memory hierarchy architectures,” “be useful for running real time program execution or even memory simulations,” and demonstrate “a specific mechanism for latency management . . . to provide significant performance boost for the class of applications characterized by frequent accesses to linked data structures scattered in the physical memory.” Id. at 9–10.

⁶ Citations to Exhibit 1004 are to the page numbering provided by Petitioner.
Figure 1 of Gupta, reproduced below, illustrates an AMRM prototype board.

Figure 1 illustrates the main components of an AMRM prototype board.
As shown in Figure 1, an AMRM prototype board includes a general 3-level memory hierarchy plus support for an AMRM ASIC chip implementing architectural assists within a CPU-L1 datapath. Id. at 10. The FPGAs on the board contain controllers for the SRAM, DRAM and L1 cache. Id. The AMRM chip is positioned between the L1 cache and the rest of the system and can be accessed in parallel with the L2 cache. Id. at 11. The AMRM chip can thus accept and supply data coming from or going to the L1 cache. Id. The AMRM chip may contain a write buffer or a prefetch unit to access L2, and also has access to the memory interface and can prefetch from memory. Id.
3. Chien (Ex. 1005)
Chien is a paper published by the IEEE Computer Society Press as part of the Proceedings of Frontiers ’96–The Sixth Symposium on the Frontiers of Massively Parallel Computing. Ex. 1005, 1–2, 5.⁷ Chien describes a design and architecture of a MultiprocessOr with Reconfigurable Parallel Hardware (MORPH) that “uses reconfigurable logic blocks integrated with the system core to control policies, interactions, and interconnections” and has “configurability [th