`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`ARM Ltd. and ARM, Inc.,
`
`Petitioners
`
`v.
`
`Advanced Micro Devices, Inc. and ATI Technologies ULC,
`
`Patent Owner
`
`U.S. Patent 7,633,506 Issue Date: December 15, 2009
`Title: Parallel Pipeline Graphics System
`
`CASE: IPR2018-01149
`
`DECLARATION OF DR. HANSPETER PFISTER, Ph.D.
`
`IN SUPPORT OF PETITION FOR INTER PARTES REVIEW
`
`I, Dr. Hanspeter Pfister, Ph.D., declare as follows:
`
1. I have been retained by counsel for Petitioners, ARM Ltd. and ARM, Inc.,
`
`as an expert in this inter partes review to examine whether claims 1-9 of U.S. Patent No.
`
`7,633,506 are patentable over certain prior art. With the exception of the identification
`
`of the party retaining my services, this declaration is identical to my declaration that was
`
`filed in IPR2018-00102.
`
2. I have personal knowledge of all the facts set forth herein, and if called to
`
`testify at any hearing in this inter partes review, I would competently testify and verify
`
the testimony contained herein.
`
3. I am being compensated at my hourly rate of $600. I am also being
`
`reimbursed for out-of-pocket expenses. My compensation does not depend in any way
`
`on the outcome of this proceeding or the particular opinions I express, or the testimony I
`
`give.
`
4. I expect to be available for deposition and to testify at the evidentiary
`
`hearing in this inter partes review to the extent required.
`
5. This declaration contains my conclusions and a summary of my analysis
`
`including a summary of my conclusions; an overview of my qualifications as an expert;
`
`an overview of the scope and terms of my engagement for this declaration; an overview
`
`of the materials I have considered in arriving at my conclusions; an overview of the
`
`terminology and legal principles that I applied in my analysis; an overview of the
`
`technical background of the subject matter; an overview of the Patent in Suit; an analysis
`
`of the level of ordinary skill in the art related to the Patent in Suit; an analysis of the
`
`asserted references; and a patentability analysis of the challenged claims.
`
6. This declaration is based on information currently available to me. I intend
`
`to continue my investigation and study, which may include a review of documents and
`
`information that may yet be produced, as well as deposition testimony from depositions
`
`for which transcripts are not yet available or that may yet be taken in this review.
`
`Therefore, I expressly reserve the right to expand or modify my opinions as my
`
`investigation and study continue, and to supplement my opinions in response to any
`
`additional information that becomes available to me, any matters raised by Patent Owner
`
`and/or other opinions provided by Patent Owner’s expert(s), or in light of any relevant
`
`orders from the Patent Trial and Appeal Board or other authoritative body.
`
I. SUMMARY OF OPINION
`
7. It is my opinion that claims 1-9 of the ’506 Patent are rendered obvious by
`
the prior art references discussed below. I explain my analysis and the basis for

my opinion in detail below.
`
`II. EXPERT QUALIFICATIONS AND PRIOR TESTIMONY
`
`8. My curriculum vitae is attached hereto as Attachment A, which provides an
`
`accurate identification of my relevant background and experience in multithreaded
`
`graphics processors. For the past 25 years, I have focused on computer graphics
`
`
`
`
`processing in both industrial and academic settings. I have designed cutting-edge
`
graphics processing systems, including the world’s first real-time volume rendering
`
`graphics card. I have pioneered research into various aspects of computer graphics and
`
`received academic awards and industry recognition for my work in computer graphics.
`
9. In 1991, I received my M.Sc. in Electrical Engineering from the Swiss
`
`Federal Institute of Technology (ETH) Zurich, Switzerland. I then proceeded to study
`
`computer science at the State University of New York at Stony Brook, Stony Brook,
`
NY, where I earned my M.S. in Computer Science in 1994 and my Ph.D. in Computer
`
`Science in 1996.
`
10. I am the An Wang Professor of Computer Science at the Harvard John A.
`
`Paulson School of Engineering and Applied Sciences and an affiliate faculty member of
`
`the Center for Brain Science. My research in visual computing lies at the intersection of
`
`visualization, computer graphics, and computer vision. It spans a wide range of topics,
`
`including bio-medical visualization, image and video analysis, 3D fabrication, and data
`
`science. From 2013 to 2017 I was director of the Institute for Applied Computational
`
`Science. For most of this work we are leveraging the power of graphics processing units
`
`(“GPUs”) for computer graphics, high-throughput computing, and visualization. I am
`
currently advising and co-advising seven Ph.D. students, six post-doctoral fellows, and
`
one research scientist. I have also supervised numerous research and thesis projects and
`
`student interns.
`
`
`
`
`11. Before joining Harvard, I worked for over a decade at Mitsubishi Electric
`
`Research Laboratories where I was Associate Director and Senior Research Scientist. I
`
`was the chief architect of Mitsubishi Electric's VolumePro, the world’s first PC graphics
`
`card for real-time visualization of volume data. The technology was subsequently
`
`acquired by TeraRecon and is still in wide commercial use in medicine, biology,
`
`engineering, and oil and gas exploration. VolumePro received several technical awards,
`
including the Mitsubishi Electric President's Award in 2000. Since then I have developed
`
`several new methods for high-quality and interactive volume visualization on GPUs. I
`
`am the recipient of the 2010 IEEE Visualization Technical Achievement award in
`
`recognition of my “seminal technical achievements in real-time volume rendering.”
`
12. I am a pioneer in point-based computer graphics, a subfield of computer
`
`graphics that deals with modeling and rendering of point-sampled (e.g., laser-scanned)
`
`objects. I am among the first to introduce data-driven approaches for complex real-world
`
`objects to computer graphics, including human faces. And I developed new camera and
`
display technologies, including the world’s first end-to-end real-time 3D TV system
`
`with auto-stereoscopic display. Currently, I am collaborating with Harvard scientists on
`
`novel analysis and visualization approaches in computational science, including
`
`neuroscience, genomics, systems biology, astronomy, and medicine. I am co-inventor of
`
over 50 US patents and co-author on more than 160 peer-reviewed publications,
`
`including over 25 ACM SIGGRAPH papers, the premier forum in Computer Graphics.
`
`
`
`
13. From 1999 to 2007, I built a rich academic program in computer graphics at
`
`the Harvard Extension School through the development of several new courses,
`
`including CSCI E-234 “Introduction to Computer Graphics” (1999-2007), CSCI E-235
`
`“Advanced Computer Graphics” (2002), CSCI E-236 “Advanced Topics in Computer
`
`Graphics” (2004-2006), and INDR E-399 “Independent Study in Advanced Computer
`
`Graphics” (2003). After joining Harvard in 2007, I introduced several new
`
`undergraduate and graduate courses that are also offered to the general public through
`
`the Harvard Extension School, including CS171 “Visualization” (2007-present), CS175
`
`“Introduction to Computer Graphics” (2010), CS264 “Massively Parallel Computing”
`
`(2009-2011), CS205 “Introduction to Computational Science” (2010-2012), and CS109
`
“Data Science” (2013-present).
`
14. Between 2009 and 2016, in collaboration with Prof. Alan Aspuru-Guzik of
`
`the Chemistry and Chemical Biology faculty at Harvard, I was the co-PI (Principal
`
`Investigator) of Harvard's CUDA Center of Excellence (CCOE). The CCOE supported
`
`“outstanding research taking place in Massively Parallel Programming and Computing”
`
`using GPUs.
`
15. I have served on the papers committees of all major visualization and
`
`graphics conferences, including ACM SIGGRAPH, IEEE Visualization, EuroVis,
`
`Eurographics, Pacific Graphics, and many others. I have been the co-organizer of
`
`various international workshops, symposia, and conferences in computer graphics and
`
`
`
`
`visualization, including general conference chair of IEEE Visualization 2002 and
`
`technical papers chair of ACM SIGGRAPH 2012. I previously chaired and am currently
`
`a director of the IEEE Visualization and Graphics Technical Committee. I served on the
`
`editorial board of the IEEE Transactions on Visualization and Computer Graphics and
`
the ACM Transactions on Graphics, the top two journals in the field. I am co-editor of the
`
`first textbook on Point-Based Computer Graphics, published by Elsevier in 2007, and of
`
`the 2006 NIH/NSF Visualization Research Challenges Report. I am a senior member of
`
`the IEEE Computer Society, and a member of ACM, ACM SIGGRAPH, and the
`
`Eurographics Association. I received the IEEE Golden Core Award, the IEEE
`
`Meritorious Service Award, and the Petra T. Shattuck Excellence in Teaching Award.
`
16. I have been working as an independent consultant since 2007, specializing
`
`in computer graphics, visualization, data science, technical due diligence reviews, and in
`
the provision of expert witness services, particularly in patent infringement matters. My clients
`
`range from domestic start-ups to international Fortune 500 companies, and include
`
`Adobe, Disney Research, Novartis, as well as NVIDIA.
`
III. MATERIALS REVIEWED
`
17. I have reviewed the following materials in forming my opinions:

• U.S. Patent No. 7,633,506 (the “’506 Patent”)

• File History of U.S. Patent No. 7,633,506 (“File History”)

• “Reality Engine Graphics” by Kurt Akeley (“Akeley”)

• U.S. Patent No. 5,808,690 (“Rich”)

• U.S. Patent No. 7,102,646 (“Rubinstein”)

• U.S. Patent No. 6,697,063 (“Zhu ’063”)

• U.S. Patent No. 6,856,320 (“Zhu ’320”)

• U.S. Patent Application Publication No. US 2003/0076320A1 (“Collodi”)

• U.S. Patent No. 6,646,639 (“Greene”)

• U.S. Patent No. 6,809,732 (“Zatz”)

• April 26, 2000 press release describing the Nvidia GeForce 2 graphics chip

• January 5, 2001 article describing the Nvidia GeForce 3 graphics chip
`
`
`
`
18. I have tried my best to list all the materials I considered and reviewed. I
`
`may have accidentally left some materials off the above list that I cited in my analysis
`
`below. Such materials, if any, should be included in the above list of materials I
`
`considered and reviewed. I reserve the right to revise this list.
`
IV. UNDERSTANDING OF THE LAW
`
19. I am not a legal expert. In forming my opinions, I have been informed of and
`
`applied the legal standards as follows. I understand that the legal standards relating to
`
anticipation and obviousness apply to the ’506 Patent based on its effective filing date of
`
`November 27, 2002.
`
20. I understand that, during an inter partes review proceeding, patent claims are
`
`given their “broadest reasonable interpretation in light of the specification of the patent.” It
`
`is my further understanding that the prosecution history is relevant to determining the
`
`correct construction of claim terms and that extrinsic evidence also may be relevant to
`
`establish the meaning of terms to the extent it is consistent with the specification and
`
`prosecution history.
`
21. I understand that anticipation of a claim requires that each and every
`
`limitation recited in a claim must be disclosed either expressly or inherently in a single
`
`prior art reference. I further understand that, to be considered anticipatory, a written prior
`
`art reference must be enabling and describe the applicant’s claimed invention sufficiently to
`
`have placed it in possession of a person of ordinary skill in the field of the invention. Thus,
`
`
`
`
`I further understand that the disclosure of prior art is considered from the perspective of
`
`one of ordinary skill in the art.
`
22. I understand that a patent claim is “obvious” and therefore invalid when the
`
`differences between the claimed subject matter and the prior art are such that the subject
`
`matter as a whole would have been obvious at the time the invention was made to a
`
`person having ordinary skill in the art to which the subject matter pertains.
`
`
`
`
`
`
`
`
23. In making an obviousness determination, I understand that there are several
`
`factors to consider: (1) the scope and content of the prior art; (2) the level of ordinary
`
`skill in the art at the time the invention was made; (3) the differences between the
`
claimed invention and the prior art, if any; and (4) objective considerations, if any exist,
`
`such as any commercial success, copying, prior failure by others, licenses, longstanding
`
`need, and unexpected results.
`
24. I understand that prior art used to show that a claimed invention is obvious
`
`must be “analogous art.” Prior art is analogous art to the claimed invention if (1) it is
`
`from the same field of endeavor as the claimed invention (even if it addresses a different
`
`problem) or (2) the reference is reasonably pertinent to the problem faced by the inventor
`
`(even if it is not from the same field of endeavor as the claimed invention).
`
25. I understand that it is improper to engage in hindsight when trying to
`
`determine the obviousness of a patent claim. I understand that the obviousness inquiry
`
`must be conducted from the standpoint of a person of ordinary skill in the field at the
`
`time the claimed invention was made. What is known today, and what is learned from
`
`the teachings and disclosures of the patent itself containing the claim under analysis,
`
`should not be considered.
`
26. I have been informed that various “secondary considerations” (sometimes
`
`referred to as objective indicia of non-obviousness) may support a determination of non-
`
`obviousness and that such secondary considerations must be considered as part of an
`
`
`
`
`obviousness analysis. I have been informed that secondary considerations of
`
`nonobviousness may include:
`
• commercial success of a product due to the merits of the claimed invention;

• a long-felt need for the solution provided by the claimed invention;

• unsuccessful attempts by others to find the solution provided by the claimed invention;

• copying of the claimed invention by others;

• unexpected and superior results from the claimed invention; and

• acceptance by others of the claimed invention as shown by praise from others in the field or from the licensing of the claimed invention.
`
`
`
27. I understand that in order for such “secondary considerations” evidence to
`
`be relevant to the obviousness inquiry, there must be a relationship or “nexus” between
`
`the advantages and features of the claimed invention and the evidence of secondary
`
`considerations.
`
28. I understand that if a claim is not obvious, then the claims that
`
`depend from it are not obvious.
`
V. TECHNOLOGY BACKGROUND
`
`29. Graphics processing is an important part of any computer system, and has
`
`been for the past several decades. The purpose of a graphics processor is to generate
`
`complex shapes and structures to be displayed on a screen. In order to accomplish that
`
`purpose, a graphics processor converts a 3D object or scene (comprised of points in 3D
`
`
`
`
`space called “vertices” that make up shapes called “primitives”) into a 2D image to be
`
`displayed on a computer screen (comprised of “pixels”). Generally, 3D graphics
`
`processing starts with creating a mathematical model of each object. The model is then
`
`processed through a series of steps, referred to as a “graphics processing pipeline,” that
`
`render the scene as a 2D image on a display:
`
[Figure: a graphics processing pipeline]
`
`
`
30. In most cases, 3D objects are conceptualized as a series of primitives (e.g.,
`
`triangles) that cover the surface of an object, such as a teapot:
`
[Figure: a teapot modeled as triangle primitives]
31. Each point of the primitive is called a “vertex” and each vertex has certain
`
`
`
`properties, which are represented as data. For example, a vertex includes not just its
`
`location, but may also include other information, such as the color of the object and its
`
`material properties (e.g., whether it is reflective). A vertex processor performs the steps
`
`in the graphics pipeline that transform these vertices from 3D space into 2D space and
`
`determines how lighting and other conditions in the 3D scene impact the color of the
`
`
`
`
`vertices. The ’506 Patent refers to these operations on vertices as “front-end” operations.
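
For purposes of illustration only, the simplified Python sketch below shows the kind of “front-end” vertex operation described above: projecting a 3D vertex into 2D screen coordinates. It is a hypothetical example of my own; the focal length, screen dimensions, and function names are assumptions and are not taken from the ’506 Patent or from any reference discussed in this declaration.

    # Hypothetical sketch of a "front-end" vertex operation: a simple pinhole
    # projection of a 3D point into 2D pixel coordinates.

    def project_vertex(x, y, z, focal_length=1.0, width=640, height=480):
        """Map a 3D point to 2D pixel coordinates (illustrative only)."""
        # Perspective divide: points farther from the camera (larger z) land
        # closer to the center of the screen.
        sx = (x * focal_length) / z
        sy = (y * focal_length) / z
        # Convert normalized coordinates (roughly -1..1) to pixel coordinates.
        px = int((sx + 1.0) * 0.5 * width)
        py = int((1.0 - (sy + 1.0) * 0.5) * height)
        return px, py

    # Example: a vertex half a unit right, a quarter unit up, two units away.
    print(project_vertex(0.5, 0.25, 2.0))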
`
`32. After the “front-end” processing, a step called rasterization determines what
`
`pixels on the 2D screen are covered by each primitive. At least one “fragment” is
`
`generated for each pixel on the screen (as a result, the terms “fragment” and “pixel” are
`
`sometimes used interchangeably).
`
`33. Rasterization commonly includes the step of “scan conversion,” which
`
`involves stepping through the geometry of the primitives to determine which pixels are
`
`covered.
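
Again for illustration only, the following sketch shows one common way scan conversion can be described: stepping over candidate pixels and testing which pixel centers fall inside a screen-space triangle. The edge-function test and the names used here are simplifications of my own, not the method of the ’506 Patent or of any particular prior art reference.

    # Hypothetical sketch of scan conversion: step through candidate pixels and
    # test which ones are covered by a screen-space triangle.

    def edge(ax, ay, bx, by, px, py):
        # Signed-area test: the sign tells which side of edge a->b the point is on.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def covered_pixels(v0, v1, v2):
        """Yield (x, y) for each pixel covered by triangle v0-v1-v2 (illustrative)."""
        xs = [v[0] for v in (v0, v1, v2)]
        ys = [v[1] for v in (v0, v1, v2)]
        for y in range(int(min(ys)), int(max(ys)) + 1):        # bounding box
            for x in range(int(min(xs)), int(max(xs)) + 1):
                w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
                w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
                w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    yield x, y   # at least one fragment is generated for this pixel

    print(list(covered_pixels((0, 0), (4, 0), (0, 4))))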
`
`
`
`
`
`34. A pixel processor then performs additional operations on the fragments or
`
`pixel data, which includes determining the color of each pixel. These operations are
`
`commonly called “pixel shading” operations and may involve lighting, texture and bump
`
`mapping, translucency and other phenomena. All of this information is gathered
`
`together through merging or blending of pixels for the final image to be displayed on a
`
`screen.
`
`
`
`
`
`
`
`
VI. THE ’506 PATENT
`
`
`
35. The ’506 Patent discloses and claims a graphics processing system that
`
`includes a front-end and back-end. Ex. 1001 at Abstract and Claim 1. The front-end
`
`receives instructions and outputs primitives or combinations of primitives (e.g. triangles,
`
`parallelograms, etc.) (i.e., geometry). Id. at Claim 1. The back-end receives the
`
`primitives and processes them into a final image comprised of colored pixels. Id. For
`
`various embodiments, the claimed invention also includes one or more of the following
`
`features:
`
`36. Back-end with Parallel Pipelines: The ’506 Patent discloses and claims a
`
`system with a back-end comprised of multiple parallel pipelines. Id. These parallel
`
`pipelines each process a different portion of the screen in parallel. Id.
`
`
`
`
`
`
`Ex. 1001 (’506 Patent) at Figure 3 (parallel pipelines annotated).
`
`
`
`
`Ex. 1001 (’506 Patent) at Figure 5 (parallel pipelines annotated).
`
`37. Unified Shader: To help process the primitives, each pipeline contains a
`
`
`
`“unified shader.” Id. at Claim 1. The ’506 Patent defines “unified shader” to mean a
`
`shading unit that performs both pixel color shading and texture address shading. A
`
`passage describing the unified shader of the ’506 Patent is provided below:
`
`
`
`
`A unified shader reads in rasterized texture addresses and colors, and
`
`applies a programmed sequence of instructions. A unified shader is so
`
`named because the functions of a traditional color shader and a
`
`traditional texture address shader are combined into a single unified
`
`shader. The unified shader performs both color shading and texture
`
`address shading. The conventional distinction between shading
`
`operations (i.e., color texture map and coordinate texture map or color
`
`shading operation and texture address operation) is not handled by the
`
`use of separate shaders. In this way, any operation, be it for color
`
`shading or texture shading, may loop back into the shader and be
`
`combined with any other operation.
`
`Ex. 1001 (’506 Patent) at 6:49-52 (emphasis added).
`
38. To be clear, the ’506 Patent does not use the term “unified shader” in the
`
`way it is commonly used in the industry today. Today, a person of ordinary skill would
`
`understand the term “unified shader” to mean a unit that performs computations on both
`
`vertex (geometry) data and pixel data. Or, as the ’506 Patent would put it, a shader that
`
performs computations on both “front-end” and “back-end” data. The ’506 Patent,
`
`however, clearly does not use the term in this way, and, instead, gives the term a
`
`definition unique to the patent. Id. I apply the ’506 Patent’s definition of “unified
`
`shader” for purposes of the opinions in this declaration.
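
To illustrate the concept as the ’506 Patent defines it, the hypothetical sketch below shows a single programmable unit that executes both texture-address operations and color-shading operations in one instruction loop, so that the result of either kind of operation can feed back into the other. The instruction names, register layout, and example program are assumptions of mine and do not represent the patent’s actual implementation.

    # Hypothetical sketch of a "unified shader" in the ’506 Patent's sense: one
    # programmable unit running both texture-address and color-shading instructions.

    def unified_shader(program, fragment, sample_texture):
        registers = {"color": fragment["color"], "texcoord": fragment["texcoord"]}
        for op, *args in program:
            if op == "scale_tex_addr":            # texture-address shading
                registers["texcoord"] = tuple(c * args[0] for c in registers["texcoord"])
            elif op == "sample":                  # texel loops back into the shader
                registers["texel"] = sample_texture(registers["texcoord"])
            elif op == "modulate_color":          # color shading
                registers["color"] = tuple(
                    c * t for c, t in zip(registers["color"], registers["texel"]))
        return registers["color"]

    # Hypothetical usage: one program freely mixes both kinds of operations.
    program = [("scale_tex_addr", 2.0), ("sample",), ("modulate_color",)]
    checker = lambda tc: (1.0, 1.0, 1.0) if (int(tc[0]) + int(tc[1])) % 2 == 0 else (0.2, 0.2, 0.2)
    fragment = {"color": (0.9, 0.5, 0.1), "texcoord": (1.0, 2.0)}
    print(unified_shader(program, fragment, checker))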
`
39. Figure 5 of the ’506 Patent, identifying the unified shader in green, is
`
`provided below:
`
`
`
`
`
`
`
`
`Ex. 1001 (’506 Patent) at Figure 5 (annotating Unified Shader)
`
`
`
40. Tiling and Setup Unit: The system disclosed and claimed by the ’506 Patent
`
`also includes a Setup Unit that assists with “Tiling.” Tiling is the process of breaking
`
`
`
`
`the screen into “tiles” and processing each tile separately. Tiling is a well-known
`
`practice in the industry and was well-known at the time the ’506 Patent was filed. It was
`
often combined with parallel processing in order to process multiple pieces of a final
`
`image at the same time in order to increase efficiency. To assist in such a process, the
`
`’506 Patent discloses a “Setup Unit” that receives primitives/geometry from the front-
`
`end, determines which portion or “tile” of the screen the geometry is located in, and then
`
`directs the geometry to be processed by the pipeline responsible for that tile. Ex. 1001
`
(’506 Patent). Figure 5 from the ’506 Patent is reproduced below, identifying the
`
`Setup Unit in green:
`
`
`
`
`
`
`
`
`Ex. 1001 (’506 Patent) at Figure 5 (annotating Setup Unit)
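
For illustration only, the sketch below shows one simple way a setup step can determine which tile or tiles a primitive overlaps and route that primitive to the parallel pipeline responsible for each tile. The tile size and the tile-to-pipeline assignment are assumptions for illustration, not the particular arrangement disclosed or claimed in the ’506 Patent.

    # Hypothetical sketch of tiling: the screen is divided into tiles, each tile is
    # owned by one back-end pipeline, and each primitive is routed to every
    # pipeline whose tile(s) its bounding box overlaps.

    TILE_SIZE = 16          # pixels per tile edge (assumed)
    NUM_PIPELINES = 4       # number of parallel back-end pipelines (assumed)

    def tiles_overlapped(bbox):
        """Yield (tile_x, tile_y) indices overlapped by a primitive's bounding box."""
        x0, y0, x1, y1 = bbox
        for ty in range(y0 // TILE_SIZE, y1 // TILE_SIZE + 1):
            for tx in range(x0 // TILE_SIZE, x1 // TILE_SIZE + 1):
                yield tx, ty

    def route_primitive(bbox):
        """Return the set of pipelines that must process this primitive."""
        return {(tx + ty) % NUM_PIPELINES for tx, ty in tiles_overlapped(bbox)}

    # A primitive spanning pixels (10, 10)-(40, 20) overlaps several tiles and is
    # therefore dispatched to more than one pipeline, which then work in parallel.
    print(route_primitive((10, 10, 40, 20)))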
`
`
`
41. Z-buffering: The ’506 Patent discloses that certain embodiments use z-
`
`buffering. Z-buffering is the process of comparing the depth values or “z values” of
`
`various objects—measured from the screen—to determine which objects are visible and
`
`which objects are not visible (because they appear behind other objects from the point of
`
`view of the screen). Z-buffering was a common feature for graphics processors at the
`
`time the ’506 Patent was filed. In order to facilitate z-buffering, the ’506 Patent
`
`
`
`
`discloses that the system contains a “z-buffer logic unit,” which a person of ordinary
`
skill in the art would understand to be a logic unit that performs z-buffering.
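
For illustration only, the following simplified sketch shows the basic depth comparison that z-buffering performs: the z value of each incoming fragment is compared against the value stored for that pixel, and the fragment is kept only if it is closer to the screen. The buffer dimensions and the convention that a smaller z value means closer are assumptions for illustration.

    # Hypothetical sketch of z-buffering: keep a fragment only if its depth is
    # closer than what is already stored in the z-buffer for that pixel.

    WIDTH, HEIGHT = 4, 4
    z_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]      # "infinitely far"
    color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def z_test_and_write(x, y, z, color):
        """Return True (and update the buffers) if the fragment at (x, y) is visible."""
        if z < z_buffer[y][x]:          # smaller z = closer to the screen (assumed)
            z_buffer[y][x] = z
            color_buffer[y][x] = color
            return True
        return False                    # hidden behind something already drawn

    print(z_test_and_write(1, 1, 5.0, (255, 0, 0)))   # True: nothing was there yet
    print(z_test_and_write(1, 1, 9.0, (0, 255, 0)))   # False: farther than depth 5.0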
`
`
`
`
`
`
`
`
42. The ’506 Patent discloses that the system may perform three different types
`
of z-buffering: hierarchical z-buffering, early z-buffering, and late z-buffering. Id. at
`
Claims 3-5. As explained in more detail in the context of claim construction, hierarchical

z-buffering involves visibility testing at a coarse level (e.g., an entire tile), i.e., coarser than

fragment-by-fragment or pixel-by-pixel; early z-buffering is z-buffering that occurs before

pixel shading; and late z-buffering is z-buffering that occurs after pixel shading. The

various types of z-buffering are identified in
`
`Figure 5 from the ’506 Patent:
`
`
`
`
`
`
`
`
`Ex. 1001 (’506 Patent) at Figure 5 (annotating z-interfaces)
`
A. Prosecution History of the ’506 Patent
`
43. The ’506 Patent’s application was filed on November 26, 2003. The
`
`application included 16 claims directed to “graphics chip[s]” comprising, among other
`
`things, the elements listed in the previous section. Ex. 1002 (’506 Prosecution History).
`
`
`
`
After several office actions, the Examiner allowed several of the claims the applicant had
`
`amended. Ex. 1002 at 39.
`
44. In allowing the claims of the ’506 Patent, the examiner recognized that
`
`several prior art references (including Zhu ’063) taught rendering pipeline systems that
`
`used screen space tiling and double z-buffering schemes that use scan/z engines as
`
`claimed by the ’506 patent. Ex. 1002 at 7/30/2009 Notice of Allowance. The examiner
`
`stated, however, that the reviewed references did not disclose parallel pipelines with
`
`unified shaders as claimed by the ’506 Patent.
`
`45. As explained below, using parallel pipelines and “unified shaders” (as
`
`defined by the ’506 Patent) to perform pixel calculations was well-known and obvious at
`
`the time the ’506 Patent was filed.
`
VII. LEVEL OF ORDINARY SKILL IN THE ART
`
`46. A person of ordinary skill in the field, at the time the ’506 patent was
`
`effectively filed, would have had at least a four-year degree in electrical engineering,
`
computer engineering, computer science, or a related field, and two years of relevant
`
`experience in the graphics processing field including developing, designing or
`
`programming hardware for graphics processing units.
`
VIII. PROPOSED CLAIM CONSTRUCTION
`
A. Z-Buffer Logic Unit
`
`47. Claims 3-5 require a “Z Buffer Logic Unit.” A person of ordinary skill in
`
`
`
`
`the art would understand that the broadest reasonable interpretation of a “Z-Buffer Logic
`
`Unit” is “a logic unit that facilitates visibility testing by comparing depth values.”
`
48. Z-buffering was well-known and commonplace in the art by the time the
`
`’506 Patent was filed. A person of ordinary skill in the art would understand that a “z-
`
`buffer” is memory used to store the “z” or “depth” information. They would further
`
`understand that a z-buffer logic unit is the logic that uses that “z” or “depth” information
`
`to perform visibility testing. This is consistent with the disclosure of the ’506 Patent,
`
`which states that “Z (depth) information is computed” and “passed to the Z buffer” such
`
`that the “Z values are compared against the values stored in the Z buffer at that
`
`location.” Ex. 1001 (’506 Patent) at 6:20-23. In other words, a person of ordinary skill in
`
`the art would understand that a z-buffer logic unit is a logic unit that facilitates z-
`
`buffering, i.e., visibility testing done by comparing depth values.
`
B. “Hierarchical Z-Interface”
`
`49. Claim 4 requires a “Hierarchical Z-Interface.” A person of ordinary skill in
`
`the art would understand that the broadest reasonable interpretation of a “hierarchical z-
`
interface” is “an interface with a z-buffer logic unit that provides for visibility
`
`testing at a coarse level, including, for example, for an entire tile or primitive.”
`
50. Hierarchical z-buffering was well-known and commonplace by the time
`
`the ’506 Patent was filed. A person of ordinary skill in the art would have understood
`
`that hierarchical z-buffering involves visibility testing at a coarse level, i.e., coarser than
`
`
`
`
`fragment-by-fragment / pixel-by-pixel. The ’506 Patent itself discloses that a
`
`hierarchical Z-interface is one that steps through geometry at a coarse level, including,
`
`for example, the entire portion of a geometry that contributes to a tile. Ex. 1001 (’506
`
`Patent) at 2:67-3:1 (“In one embodiment, each parallel pipeline comprises … a
`
`‘hierarchical-Z’ component to more precisely define the borders of the geometry.”). For
`
`example, the hierarchical z-buffer may compare the smallest z-value of such a geometry
`
(i.e., the closest point to the screen) with the largest z-value of the current tile (i.e., the
`
furthest point away). If the object being tested is further away than (i.e., behind) the rest of
`
`the objects currently being displayed in that tile, it can be discarded in total:
`
`A scan converter 540 works in conjunction with Hierarchical Z-
`
`interface of Z buffer logic 555 to step through the geometry (e.g.,
`
`triangle or parallelogram) within the bounds of the pipeline’s tile
`
`pattern. In one embodiment, initial stepping is performed at a coarse
`
`level. For each of the coarse level tiles, a minimum (i.e. closest) Z value
`
`is computed. This is compared with the farthest Z value for the tile
`
`stored in a hierarchical-Z buffer 550. If the compare fails, the tile is
`
`rejected.
`
`Ex. 1001 (’506 Patent) at 6:2-10 (emphasis added).
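
The coarse-level rejection described in the passage quoted above can be illustrated with the following simplified sketch of my own (it is not taken from the ’506 Patent): the nearest z value of the incoming geometry within a coarse tile is compared against the farthest z value already stored for that tile, and the geometry is rejected for the entire tile if it lies wholly behind what is already there.

    # Hypothetical sketch of a hierarchical-z test at a coarse tile level.

    hierarchical_z = {}   # coarse tile -> farthest z value currently stored for it

    def coarse_tile_test(tile, geometry_min_z):
        """Return False if the geometry can be rejected for this whole tile."""
        stored_max_z = hierarchical_z.get(tile, float("inf"))
        if geometry_min_z > stored_max_z:
            return False      # even its closest point is behind the tile's contents
        return True           # may be visible; proceed to fine, per-pixel testing

    hierarchical_z[(3, 2)] = 10.0          # tile already covered out to depth 10
    print(coarse_tile_test((3, 2), 25.0))  # False: rejected without per-pixel work
    print(coarse_tile_test((3, 2), 4.0))   # True: must be scan-converted finely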
`
51. The coarse level z-buffering done by the hierarchical z-interface is in
`
`contrast to the finer pixel or fragment level z-buffering done later. Id. at 6:16-18 (“The
`
`second section of the scan converter 540 works in conjunction with the Early Z-
`
`interface… to step through the coarse tile at a fine level.”)
`
`
`
`
C. “Early Z-interface” and “Late Z-interface”
`
52. The ’506 Patent also discloses and claims an “early z-interface” and a
`
“late z-interface.” These interfaces coincide with the concepts of “early z-buffering” and
`
`“late z-buffering,” which were both well-known and commonplace prior to the filing of
`
`the ’506 Patent.
`
53. Early z-buffering is simply z-buffering that is done prior to shading. This is
`
`efficient because the graphics processor doesn’t have to spend time shading primitives
`
`that will never be seen anyway (e.g., a tree that is behind a building). Accordingly, the
`
broadest reasonable interpretation of an “early z interface” is “an interface with a z
`
`buffer logic unit that provides for visibility testing prior to shading and texturing.”
`
54. Late z-buffering is simply z-buffering that is done after shading. In certain
`
`cases early z-buffering does not discard all of the fragments that will be obstructed and
`
`they are caught by the late z-buffering. Accordingly, the broadest reasonable
`
interpretation of a “late z interface” is “an interface with a z buffer logic unit that
`
`provides for visibility testing after shading and texturing.”
`
55. These well-understood concepts are described in the ’506 Patent:
`
`If the current Z buffering mode is set to “early,” each quad is passed to
`
`the Z buffer 555 where its Z values are compared against the values
`
`stored in the Z buffer at that location…. At this stage, those quads for
`
`which none of the covered pixels passed the Z compare test are
`
`discarded. The early Z functionality attempts to minimize the amount
`
`
`
`
`of work applied by the unified shader and texture unit to quads that are
`
`not visible.
`
`Ex. 1001 (’506 Patent) at 6:22-33 (emphasis added).
`
`
`[i]f the current Z buffering mode is set to “late,” the Z values for the
`
`quad are compared against the values stored in the Z buffer at that
`
`location…. Although early Z operation is preferred for performance
`
`reasons, in certain situations the unified shader might modify the
`
`contents of the coverage mask… and in these cases the z buffering
`
`mode will need to be set to “late”
`
Id. at 6:66-7:9 (describing operations after shading and texturing).
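
The difference between the two modes can be illustrated with the following simplified sketch of my own (not the ’506 Patent’s implementation): in “early” mode the depth test runs before the shader, so hidden fragments are never shaded, while in “late” mode the shader runs first and the depth test is applied to the shaded result.

    # Hypothetical sketch contrasting early and late z-buffering for one fragment.

    def process_fragment(frag, z_buffer, shade, mode="early"):
        key = (frag["x"], frag["y"])
        depth_ok = lambda: frag["z"] < z_buffer.get(key, float("inf"))

        if mode == "early":
            if not depth_ok():
                return None                 # discarded before any shading work
            color = shade(frag)
        else:                               # "late": shade first, test afterward
            color = shade(frag)             # shading happens even if later discarded
            if not depth_ok():
                return None
        z_buffer[key] = frag["z"]
        return color

    zbuf = {(0, 0): 3.0}                              # something already drawn at depth 3
    shade = lambda f: (f["z"], f["z"], f["z"])        # trivial stand-in shader
    print(process_fragment({"x": 0, "y": 0, "z": 7.0}, zbuf, shade, mode="early"))  # None
    print(process_fragment({"x": 0, "y": 0, "z": 1.0}, zbuf, shade, mode="late"))   # kept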
`
`
`
`Ex. 1001 (’506 Patent) at Figure 5 (disclosing
`early z-interface prior to shading and texturing and
`the late z-interface after shading and texturing)
`
`
`
IX. GROUND 1: RUBINSTEIN IN VIEW OF COLLODI (CLAIMS 1-9)
`
56. I have reviewed and am familiar with the following prior art references: (1)
`
U.S. Patent No. 7,102,646 (“Rubinstein”) (Ex. 1004) and (2) U.S. Patent Application
`
`
`
`
No. 10/037,992, which was later published as Publication No. US 2003/0076320A1
`
`(“Collodi”) (Ex. 1007).
`
57. It is my opinion that the combination of Rubinstein in view of Collodi
`
`renders claims 1-7 and 9 of the ’506 Patent obvious.
`
B. Rubinstein
`
58. U.S. Patent No. 7,102,646 (“Rubinstein”) was filed on July 9, 2014 as a
`
`continuation of Application No. 09/709,964, now U.S. Patent No. 6,856,320 (“Zhu
`
’320”). Zhu ’320 was filed on November 10, 2000 as a continuation-in-part of an
`
`application that issued as U.S. Patent No. 6,697,063 (“Zhu ’063”). Rubinstein
`
incorporates the disclosures of Zhu ’320 and Zhu ’063 by reference because the three
`
`patents relate to different features of the same inventive graphics processing chip. Ex.
`
`1004 (Rubinstein)