IN THE UNITED STATES DISTRICT COURT
FOR THE WESTERN DISTRICT OF TEXAS
AUSTIN DIVISION

FG SRC LLC,
          Plaintiff,

v.

INTEL CORPORATION,
          Defendant.

Case No. 1:20-cv-00834-ADA
JURY TRIAL DEMANDED

PLAINTIFF FG SRC LLC’S OPENING CLAIM CONSTRUCTION BRIEF
TABLE OF CONTENTS

I. GENERAL TECHNICAL BACKGROUND
   A. Processor Types
   B. Memory Hierarchies
   C. Prefetching
II. LEVEL OF ORDINARY SKILL IN THE ART
III. AGREED TERMS
IV. DISPUTED TERMS
   A. “retrieves only computational data required by the algorithm from a second memory… and places the retrieved computational data in the first memory”
   B. “read and write only data required for computations by the algorithm between the data prefetch unit and the common memory”
   C. “operates independent of and in parallel with logic blocks using the [computational data / computional [sic] data]”
V. CONCLUSION
TABLE OF AUTHORITIES

CASES:

Comark Commc’ns, Inc. v. Harris Corp., 156 F.3d 1182 (Fed. Cir. 1998)

Electro Med. Sys., S.A. v. Cooper Life Scis., Inc., 34 F.3d 1048 (Fed. Cir. 1994)

SciMed Life Sys. v. Advanced Cardiovascular Sys., 242 F.3d 1337 (Fed. Cir. 2001)

Super Interconnect Tech. LLC v. Huawei Device Co. Ltd., No. 2:18-CV-462-JRG-RSP, 2020 WL 60145 (E.D. Tex. Jan. 6, 2020)

Thorner v. Sony Computer Entm’t Am. LLC, 669 F.3d 1362 (Fed. Cir. 2012)
INDEX OF EXHIBITS

Exhibit A: U.S. Patent No. 7,149,867
Exhibit B: Declaration of Ryan Kastner, Ph.D., dated November 17, 2020, referred to herein as “Kastner Dec.”
Exhibit C: Excerpt from Ryan Kastner, Ph.D., et al., Parallel Programming for FPGAs 18 (2020), available at http://kastner.ucsd.edu/hlsbook/.
Plaintiff FG SRC LLC (“SRC”) submits its opening claim construction brief, which includes proper constructions and related argument for the disputed terms of U.S. Patent No. 7,149,867 (“’867 patent”).

I. GENERAL TECHNICAL BACKGROUND

A. Processor Types
The ’867 patent relates to the use of reconfigurable processors, such as Field Programmable Gate Arrays (“FPGAs”). Ex. A 1:16-24, 5:26-29. An FPGA is an integrated circuit that contains an array of programmable logic blocks and memory elements connected via programmable interconnect. Kastner Dec. ¶ 14. A user can program an FPGA to perform a specific function by configuring the logic blocks and interconnect. Id. This enables the user to create a hardware-accelerated implementation of an algorithm by programming the FPGA in a manner that efficiently executes the algorithm. Id. In other words, with a reconfigurable processor such as an FPGA, the hardware adapts to the algorithm.

This can be contrasted with implementing the algorithm with software on a CPU or microprocessor. Id. ¶ 15. A CPU executes the algorithm by performing a sequence of instructions (e.g., arithmetic, logical, memory (load/store)) that implement the algorithm. Id. A different algorithm can be implemented on the CPU by changing the instructions. Id. The CPU is flexible; it can implement almost any algorithm. Id. Because the CPU hardware is fixed, however, it cannot be customized to the algorithm the way an FPGA implementation can. Id. Such customizations allow FPGA implementations to be orders of magnitude more efficient than implementing the same algorithm as software on a CPU. Id.
In addition to FPGAs and CPUs, Application-Specific Integrated Circuits (“ASICs”) can also be used to execute algorithms. Id. ¶ 16. ASICs use custom logic and are manufactured specifically to perform one application. Id. Because an ASIC is purpose-built for one application, it is very efficient. Id. However, since the customizations are hard-coded in the integrated circuit during manufacturing, an ASIC cannot be repurposed for another application. Id. FPGAs, on the other hand, provide a great deal more flexibility and can be used in any number of applications. Id. Thus, as shown in the figure below, FPGAs provide an appealing middle ground between CPUs and ASICs. Id.

[Figure: FPGAs as a middle ground between CPUs and ASICs]

An FPGA can be configured by providing it with a bitstream, which describes how the configurable logic present in the FPGA should be programmed in order to execute a particular algorithm. Id. ¶ 17.
B. Memory Hierarchies

The ’867 patent describes and claims moving data between members of a “memory hierarchy,” which the parties agree is “a collection of memories.” For example, Claim 1 requires that a data prefetch unit “retrieves only computational data required by the algorithm from a second memory… and places the retrieved computational data in the first memory.” An understanding of memory hierarchies is useful as this is the first disputed term briefed herein. As described in more detail in § IV.A, Intel’s construction is overly restrictive regarding information that can be read from the second memory.
By way of background, computing systems including CPUs, FPGAs, and ASICs typically employ a memory hierarchy, which combines different types of memories in an attempt to ensure that data required for computation is immediately available when it is needed. Id. ¶ 18. There is a general trade-off between memory size and bandwidth. Id. In general, larger memories have lower bandwidth, i.e., they can store a lot of data but the rate at which they can transfer this data (bits/second) is low. Id. Smaller memories have much higher bandwidth. Id. Thus, memory systems commonly use hierarchies of progressively faster (higher bandwidth) but smaller memories. Id. Indeed, the patent describes this concept with respect to traditional—rather than reconfigurable—processors:

    One approach to improving bandwidth efficiency and utilization in memory hierarchies has been to develop ever more powerful processor caches. These caches are high-speed memories (typically SRAM) in close proximity to the microprocessor that try to keep copies of instructions and data the microprocessor may soon need. The microprocessor can store and retrieve data from the cache at a much higher rate than from a slower, more distant main memory.

Ex. A 1:51-55 (emphasis added). The patent additionally states that, for such a traditional implementation, “small caches are typically much faster than larger caches, but store less data . . . .” Id. 3:10-11. Thus, memory systems commonly use hierarchies of progressively faster but smaller memories.
[Figure: memory hierarchy, reproduced from Parallel Programming for FPGAs (Ex. C)]

The above figure, from Dr. Kastner’s book Parallel Programming for FPGAs, further describes this. Kastner Dec. ¶ 19; Ex. C. It shows that external memory, e.g., Dynamic Random Access Memory (“DRAM”), may be quite large, and may in fact be several gigabytes, with a bandwidth of gigabytes per second. Kastner Dec. ¶ 19. On-chip memories like block RAMs (“BRAMs”) can provide terabytes per second of total bandwidth, but such memories have significantly less storage capability. Id. And flip-flops (“FFs”) have even more bandwidth but lower storage capability. Id.
Data located in larger external memory has limited bandwidth. Id. ¶ 20. This can become the bottleneck for the computation on the reconfigurable processor in cases where the computational unit is stalling (not performing any useful execution) while waiting for the data to be retrieved from the external memory. Id. Instead of accessing the external memory each time data is needed, portions of the memory that are actively being worked on can be copied to on-chip memories (e.g., into BRAMs or FFs). Id. On-chip memory bandwidth is significantly higher and thus can provide substantial overall speedups in executing the algorithm. CPUs, ASICs, and FPGAs are all subject to the performance impact of distant or slow memory. Kastner Dec. ¶ 20.
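For illustration only, this on-chip copying pattern can be sketched in a few lines of C. The function name, the 256-element buffer size, and the doubling computation below are hypothetical assumptions and appear nowhere in the patent or the record; the small local array simply stands in for a fast on-chip memory such as a BRAM:

    #include <stddef.h>

    #define TILE 256  /* size of the on-chip working set (hypothetical) */

    /* Process a large array residing in slow external memory by copying
       one tile at a time into a small, fast local buffer, computing out
       of the buffer, and writing the results back. */
    void scale_in_tiles(const double *external, double *out, size_t n)
    {
        double local[TILE];  /* stands in for fast on-chip memory */
        for (size_t base = 0; base < n; base += TILE) {
            size_t len = (n - base < TILE) ? (n - base) : TILE;
            for (size_t i = 0; i < len; i++)   /* one bulk copy in */
                local[i] = external[base + i];
            for (size_t i = 0; i < len; i++)   /* compute from fast memory */
                out[base + i] = 2.0 * local[i];
        }
    }

In this sketch each element is fetched from the slow external memory only once; every subsequent reference is served by the fast local buffer.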
The ’867 patent discusses memory throughout the claims and specification. For example, Claim 1 recites moving data from a “second memory” to a “first memory” within a memory hierarchy. This is akin to the concepts described above, in which data can be moved from slower, larger memory (e.g., a second memory) to quicker, smaller memory (e.g., a first memory). Kastner Dec. ¶¶ 20-21.
C. Prefetching

Prefetching is a key concept in the patent as every asserted claim requires a “data prefetch unit,” which prefetches data from a second or common memory. This term is particularly important for the third disputed term briefed herein, which requires that the data prefetch unit “operates independent of and in parallel with logic blocks using the [computational data / computional [sic] data].” As described in detail in § IV.C herein, Intel’s construction runs counter to the very purpose of the claimed invention, which is to prefetch data so that it is available when it is needed in order to reduce latency. This concept is also important to understanding why Intel’s construction of the first two terms is erroneous: in order for a data prefetch unit to operate, it must be configured to know in advance what data it is prefetching, and Intel’s construction ignores this basic tenet.
A simple (unoptimized) memory system would have a processor that requests data when it is required for computation. Kastner Dec. ¶ 22. This can be problematic, especially if the data resides in off-chip memory, which has a large latency, i.e., a large number of cycles (e.g., hundreds or more) to retrieve the data. Id. This requires the computational unit to stall or wait while the data is being loaded. Id.
A more efficient memory system employs techniques to transfer data from slower memory into the faster memory closer to the processor that requires the data. Id. ¶ 23. The patent provides “[t]wo measures of the gap between the [processor] and memory hierarchy are bandwidth efficiency and bandwidth utilization.” ’867 patent 1:34-36. The patent further states that “[b]andwidth efficiency refers to the percentage of contributory data transferred between two points. Contributory data is data that actually participates in the recipients processing.” Id. 5:51-54. It additionally states that “[b]andwidth utilization refers to the amount of memory bandwidth that is utilized during a calculation. Maximum bandwidth utilization occurs when all available memory bandwidth is utilized.” Id. 1:39-43. If optimized well, the memory system will provide the necessary data as required by the processor and dictated by the algorithm. Kastner Dec. ¶ 23. And it will optimize the bandwidth utilization and/or bandwidth efficiency as it will transfer only the data required by the algorithm, i.e., it would not transfer data into memory that is never subsequently used for computation. Id. There are different ways of optimizing a memory system for microprocessors, including caching and prefetching.
Caching takes advantage of the fact that data requests typically exhibit spatial and temporal locality. Id. ¶ 24. To exploit spatial locality, caching will transfer the currently requested data and additional data that is stored nearby the requested data. Id. Caches attempt to exploit temporal locality by keeping that data in on-chip (first) memory even after it is used (in hopes that it will be used again in the near future). Id. Caching is a common optimization technique for CPUs. Id. Different levels of cache (L0, L1, L2, …) exist depending on the number of processors and the size of the on-chip memory. Id.
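For illustration only, both kinds of locality can be seen in a single loop; the C function and data below are hypothetical and are not drawn from the patent or the record:

    #include <stddef.h>

    /* Sequential accesses to a[i] and b[i] exhibit spatial locality: a
       cache line fetched for one element also brings in its neighbors.
       The reuse of bias and sum on every iteration exhibits temporal
       locality: recently used values are used again soon. */
    double dot_with_bias(const double *a, const double *b, size_t n, double bias)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += (a[i] + bias) * b[i];
        return sum;
    }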
Reconfigurable processors can use caching, but often they leverage more customized memory hierarchies and optimizations tailored more towards the algorithm being executed. Id. ¶ 25. The key concepts and ideas in the ’867 patent relate to algorithm-specific memory optimizations for reconfigurable processors. Id.
Prefetching initiates a request for data before that data is required. In an ideal case, the prefetched data arrives no later than when it is required. Id. ¶ 26. Generally speaking, there are two ways of prefetching data: (1) dynamically and (2) statically. Id. Dynamic prefetching attempts to guess what future data is required by looking at past data access requests. Id. For example, a dynamic prefetch unit may see a request for some data and prefetch the next N data elements located spatially nearby the initial data (with the hope that the algorithm will request this data in the future). Id. Static prefetching techniques insert explicit prefetch instructions into the computer system, e.g., a compiler will analyze the algorithm and insert prefetch data fetches before the data is computed upon. Id. There are many types of prefetching techniques, and customizing the prefetching technique to the algorithm can provide significant overall performance benefits. Id.
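For illustration only, a statically inserted software prefetch might look like the following C sketch. __builtin_prefetch is a GCC/Clang compiler intrinsic, and the lookahead distance of eight elements is an arbitrary assumption rather than anything drawn from the patent:

    #include <stddef.h>

    /* Static prefetching: because the access pattern (a linear scan) is
       known in advance, a prefetch for a[i + 8] is issued while a[i] is
       being consumed, so the data arrives before it is needed. */
    double sum_with_prefetch(const double *a, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + 8 < n)
                __builtin_prefetch(&a[i + 8]);  /* request data early */
            sum += a[i];  /* ideally resident by the time it is read */
        }
        return sum;
    }

A dynamic prefetcher, by contrast, would infer this same next-N pattern in hardware by observing the address stream, with no explicit prefetch statements in the program.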
The ’867 patent specifically discusses and claims a data prefetch unit of a reconfigurable processor. E.g., Ex. A Claims 1, 9. The patent describes a “data prefetch unit” as a specialized functional unit on a reconfigurable processor that initiates “a data transfer in advance of the requirement for data by computational logic.” Ex. A 8:1-2.

This data prefetch unit specifically seeks to reduce the overhead involved in prefetching data by avoiding transferring unnecessary data between memories, i.e., the prefetch unit copies only the data which are to be used in upcoming computations. E.g., Ex. A Claim 1. The patent is clear that the data prefetch unit moves computational data between two memories in a memory hierarchy. E.g., Ex. A Claim 1. The data prefetch unit “conforms to the needs of the algorithm” to improve the performance of the reconfigurable processor and the overall computing system.
II. LEVEL OF ORDINARY SKILL IN THE ART

A person of ordinary skill in the art (“POSITA”) at the time of the filing of the ’867 patent would typically have at least an MS Degree in Computer Engineering, Computer Science, or Electrical Engineering, or equivalent work experience, along with at least three years of experience related specifically to computer architecture, hardware design, and reconfigurable processors. Kastner Dec. ¶ 13. In addition, a POSITA would be familiar with hardware description languages and design tools and methodologies used to program a reconfigurable processor. Id.
III. AGREED TERMS

The parties agreed that the following terms have the following meanings:

Term: “reconfigurable processor”
Asserted Claims: 1, 3, 4, 9, 11
Agreed Construction: A computing device that contains reconfigurable components such as FPGAs and can, through reconfiguration, instantiate an algorithm as hardware.

Term: Preamble “A reconfigurable processor that instantiates an algorithm as hardware”
Asserted Claims: 1
Agreed Construction: Preamble is limiting.

Term: “data prefetch unit”
Asserted Claims: 1, 3, 4, 9
Agreed Construction: A functional unit that moves data between members of a memory hierarchy. The movement may be as simple as a copy, or as complex as an indirect indexed strided copy into a unit stride memory.

Term: “functional unit”
Asserted Claims: Term is used in agreed constructions.
Agreed Construction: A set of logic that performs a specific operation. The operation may for example be arithmetic, logical, control, or data movement. Functional units are used as building blocks of reconfigurable logic.

Term: “memory hierarchy”
Asserted Claims: Term is used in agreed constructions.
Agreed Construction: A collection of memories.

Term: “common memory”
Asserted Claims: 9
Agreed Construction: An external memory shared by processors in a multiprocessor system.

Term: “computational unit”
Asserted Claims: 11, 12
Agreed Construction: A functional unit of a reconfigurable processor that performs a computation.

Term: “the data prefetch unit receives processed data”
Asserted Claims: 3
Agreed Construction: The data prefetch unit receives the results of the algorithm.

Term: “data access unit”
Asserted Claims: 11, 12
Agreed Construction: A functional unit that accesses a component of a memory hierarchy, and delivers data directly to computational logic.

Term: “configured to conform to the needs of the algorithm”
Asserted Claims: 1, 9
Agreed Construction: Configured in reconfigurable logic to conform to the needs of the algorithm.

Term: “reconfigurable logic”
Asserted Claims: Term is used in agreed constructions.
Agreed Construction: Reconfigurable logic is composed of an interconnection of functional units, control, and storage that implements an algorithm and can be loaded into a Reconfigurable Processor.

IV. DISPUTED TERMS
A. “retrieves only computational data required by the algorithm from a second memory… and places the retrieved computational data in the first memory”

Asserted Claims: 1

SRC’s Construction: Retrieves from a second memory that computational data which is required by the algorithm and no other computational data … and places the retrieved computational data in the first memory.

Intel’s Construction: Retrieves the data input to the algorithm implemented in the computational logic and no other data or instruction from a second memory … and places the retrieved data in the first memory.
The ’867 patent describes a data prefetch unit, which is configured to retrieve the computational data required by an algorithm from a second memory and place it in a first memory so that it is available when needed. E.g., Ex. A Claim 1, Figs. 5-7. SRC’s construction provides clarity to this term, while Intel’s construction introduces the new, unnecessary, and undefined term “instruction,” and could be read to exclude configuring a data prefetch unit with data from a second memory so that it knows the information it needs to retrieve data for a given algorithm.

As an initial matter, an understanding of the term “computational data” is helpful in construing this term. The term “computational data” is not explicitly defined in the patent, but the patent provides several examples of it. For example, the patent states “Figure 2 shows computational logic as might be loaded into a reconfigurable processor.” Ex. A 4:40-41. As described below, computational logic utilizes computational data.
[Figure 2 of the ’867 patent]

Figure 2 computes two results, A+B and A+B-(B*C), from three input variables or operands, A, B, and C. Id. 6:60-63. Thus, the values of the input variables A, B, and C represent the computational data referenced in the claim. Kastner Dec. ¶ 32.
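For illustration only, the computation the brief attributes to Figure 2 can be expressed in a few lines of C; the operand values chosen below are arbitrary examples, and nothing in this sketch comes from the patent itself:

    #include <stdio.h>

    int main(void)
    {
        double A = 2.0, B = 3.0, C = 4.0;  /* the computational data */
        double r1 = A + B;                 /* first result:  A+B */
        double r2 = r1 - (B * C);          /* second result: A+B-(B*C) */
        printf("A+B = %g, A+B-(B*C) = %g\n", r1, r2);  /* prints 5 and -7 */
        return 0;
    }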
The claim recites a data prefetch unit which retrieves computational data from a second memory and places computational data in a first memory. Claim 1 itself requires that the two memories each have a different memory bandwidth and/or memory utilization. As Dr. Kastner explains, typically the farther memory resides from the computational elements, the larger and/or slower the memory is, while closer memory is smaller and faster. Kastner Dec. ¶¶ 19-21. A substantive description of memory types appears in § I.B and is incorporated by reference herein.

The two memory types are illustrated in Figures 5 and 6, which show that data prefetch units 501 and 601 retrieve computational data from an external (second) memory, and place computational data into memory banks A, B, and C (first memory) so that they can be used by computational functional units 301 of logic block 300.
[Figures 5 and 6 of the ’867 patent]
The parties’ constructions of this term differ significantly. SRC’s construction requires that, of the computational data residing in the second memory (for example, A, B, C, D, E, and F), only the computational data that is actually used by the algorithm (i.e., A, B, and C) is moved to the first memory.

Intel’s construction attempts to incorporate this concept through its limitation that “data input to the algorithm” is retrieved and placed into the first memory, but Intel’s construction improperly adds an additional limitation that is found nowhere in the claim term itself, as it excludes any “instruction from a second memory.”
Intel’s addition of this term adds ambiguity to the scope of this claim. While the patent discusses “instructions,” it does so with respect to microprocessors, not reconfigurable processors. Examples follow:

    Over the past 30 years, microprocessors have enjoyed annual performance gains averaging about 50% per year. Most of the gains can be attributed to higher processor clock speeds, more memory bandwidth and increasing utilization of instruction level parallelism (ILP) at execution time. Ex. A 1:26-30.

    These caches are high-speed memories (typically SRAM) in close proximity to the microprocessor that try to keep copies of instructions and data the microprocessor may soon need. Id. 1:53-56.

    In the Intel Pentium III [micro]processor for example, more than half of the 10 million transistors are dedicated to instruction cache, branch prediction, out-of-order execution and superscalar logic. Id. 3:48-51.

As Dr. Kastner explains, the term “instruction” is vague as used for reconfigurable processors:

    Conventionally, I think of an “instruction” in the context of a CPU as statements describing how the CPU should compute (e.g., which operation to perform, what registers to use, etc.). This makes the choice to include the term “instruction” in Intel’s construction unusual since the ’867 patent is directed towards reconfigurable processors and not CPUs. The term “instruction” is not well defined when referring to a reconfigurable processor, e.g., a reconfigurable processor is not typically thought of to have an Instruction Set Architecture (ISA) like a CPU.

Kastner Dec. ¶ 36.
The danger of Intel’s construction is that it ignores the fact that a data prefetch unit must know in advance what data the algorithm needs in order to make prefetching possible. See Kastner Dec. ¶ 34. For convenience, this information will be referred to herein as “Configuration Information.”

A comparison can be made to a letter being mailed: the letter must be put in an envelope so that the mail carrier knows where to deliver it. Id. The address is not part of the letter but is needed to accomplish the goal of sending it. Id.

As an example of Configuration Information, the data prefetch unit must know in advance that an algorithm, such as that shown in Figure 2, uses only computational data A, B, and C, and, referencing our previous example, other data in the second memory would not be required. This can be further illustrated using Figure 4, which is shown in § IV.C herein. Assume, for simplicity, that Memory Bank A stores computational data A, Memory Bank B stores computational data B, and so on for C, D, E, and F. Logic block 300 uses only computational data A, B, and C. It does not use F, which may instead be passed to other computational logic.

Figure 8 shows an example of a memory block, 800, in which only a small quantity of the data stored in that block is needed for computational logic, namely shaded elements 801. Ex. A 7:42-51.
[Figure 8 of the ’867 patent]
Similarly, Figures 9A through 12A each show “situations when a subset of stored data is required for computation.” Id. 8:52-55. The data required by the algorithm is shaded in each of these figures.

[Figures 9A through 12A of the ’867 patent]

In each of these examples, the prefetch units are able to “meet the needs of a particular algorithm being implemented by computational elements . . .” without passing computational data that is not needed. Id. 9:1-5. They are only able to do so because they are configured to know which information is necessary.
The necessity of configuring the data prefetch unit is explicitly required by the language of the claim itself, as the “data prefetch unit” is “configured to conform to the needs of the algorithm . . . .” And the parties agree that the term “configured to conform to the needs of the algorithm” means “configured in reconfigurable logic to conform to the needs of the algorithm.”

Intel’s construction could be read to exclude configuration of the data prefetch unit with Configuration Information stored in a second memory. But the Configuration Information must originate somewhere. Kastner Dec. ¶ 35. As Dr. Kastner explains: “One logical design choice would be to place it in a memory and there is nothing in the patent in my opinion that states that such information could not be stored in the second memory.” Kastner Dec. ¶ 35. A claim scope that excludes this possibility should not be read into the claim absent a “clear and unmistakable disclaimer,” which is not present here. Thorner v. Sony Computer Entm’t Am. LLC, 669 F.3d 1362, 1366-67 (Fed. Cir. 2012).
This Court should adopt Plaintiff’s construction of this term, which is in accordance with the specification and purpose of the invention, rather than Intel’s, which ignores the fundamental fact that the data prefetch unit must know in advance what data it is prefetching, appears to exclude obtaining that information from a second memory, and adds ambiguity rather than clarity to the scope of the claim term.

B. “read and write only data required for computations by the algorithm between the data prefetch unit and the common memory”
Asserted Claims: 1

SRC’s Construction: Read, using the data prefetch unit, only data required for computations by the algorithm from common memory and write, using the data prefetch unit, only data required for computations by the algorithm.

Intel’s Construction: Reads and writes the data input to the algorithm implemented in the computational logic and no other data or instruction between the data prefetch unit and the common memory.
As with the prior term, Intel’s construction of this term could readily be read to exclude the transfer of Configuration Information to the data prefetch unit. For that reason alone, it should be rejected. It also includes the term “instruction,” which does not have a plain and ordinary meaning with respect to reconfigurable processors. Kastner Dec. ¶ 36.

This term differs from the prior term in that, instead of reciting all “computational data required by the algorithm,” it recites all “data required for computations.” Data required for computations could (and should) be read to permit inclusion of Configuration Information. Kastner Dec. ¶ 38. In other words, “computational data” as used in the prior claim is a subset of “data required for computations.” The data prefetch unit needs to be provided with: (1) Configuration Information so that it knows what data the algorithm needs, and (2) data values, such as A, B, and C. Without this Configuration Information, the data prefetch unit cannot perform its job of prefetching data.
Thus, the data prefetch unit is able to retrieve “only data required for computations by the algorithm.” Once the data prefetch unit knows the data needed by the algorithm, it can write that data to a destination, where it can be processed by computational logic, such as that shown in Figure 6. This is also reflected in SRC’s construction, which recites that the data prefetch unit “write, using the data prefetch unit, only data required for computations by the algorithm.”

C. “operates independent of and in parallel with logic blocks using the [computational data / computional [sic] data]”
Asserted Claims: 1, 9

SRC’s Construction: The term “computional” in Claim 1 should be construed as “computational.” Otherwise, this term has its plain and ordinary meaning and need not be construed.

Intel’s Construction: Can initiate and carry out its operations each of prior to, in parallel with, or after the requirement for the data input to the computational logic.
This term should be afforded its plain and ordinary meaning with the exception of fixing a clear typographical error: the term “computional” should read as “computational.” A person of ordinary skill in the art would readily understand the concepts of “independent” and “in parallel with” as they are used in the context of reconfigurable processors to convey the design principle of configuring a portion of hardware to execute an operation in parallel with other hardware and without requiring its intervention. See id. ¶ 42. Dependencies necessitate synchronization, which stalls execution and would confound the parallel operation recited by the claims. See id. ¶ 43. Because this terminology describes well-understood design principles which are consistent with both the field of the invention and conventional terminology that would be understood by a juror, no construction, beyond correcting the term “computional,” is needed.
Intel’s proposed construction should be rejected because it is overly narrow and imports limitations from the specification which would restrict the invention to an incomplete embodiment rather than retaining the meaning evident from the language of the claim. See SciMed Life Sys. v. Advanced Cardiovascular Sys., 242 F.3d 1337, 1340 (Fed. Cir. 2001) (“one of the cardinal sins of patent law [is] reading a limitation from the written description into the claims”); Super Interconnect Tech. LLC v. Huawei Device Co. Ltd., No. 2:18-CV-462-JRG-RSP, 2020 WL 60145, at *3 (E.D. Tex. Jan. 6, 2020) (“particular embodiments appearing in the specification will not be read into the claims when the claim language is broader than the embodiments.”) (citing Electro Med. Sys., S.A. v. Cooper Life Scis., Inc., 34 F.3d 1048, 1054 (Fed. Cir. 1994)); see also Comark Commc’ns, Inc. v. Harris Corp., 156 F.3d 1182 (Fed. Cir. 1998).
