EXHIBIT 1008
`
`
`
`EXPERT REPORT OF DR. SETH TELLER IN SUPPORT OF
`PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 6,172,679
`
`
`
`
`TABLE OF CONTENTS
Expert Report of Dr. Seth Teller
I.   Introduction and Summary of Testimony
     A. Qualifications
     B. Other Matters
     C. Compensation
     D. Materials Reviewed
     E. Level of Ordinary Skill in the Art
II.  Overview/Tutorial Regarding Technology
III. The Challenged '679 Patent
     A. Background and General Description of the '679 Patent
     B. Challenged Claims
     C. Claim Construction
IV.  References that Anticipate the Asserted Claims
     A. Invalidity Standard
     B. Summary Opinion
     C. Salesin and Stolfi 1989 ("Salesin" reference)
        1. Overview
        2. Salesin anticipates all challenged Claims of the '679 Patent
           (a) Independent claim 1 and claims 4-5 depending therefrom
     D. Airey 1990 ("Airey" reference)
        1. Overview
        2. Airey anticipates all asserted Claims of the '679 Patent
           (a) Independent claim 1 and claims 4-5 depending therefrom
Conclusion
`
`
`
`
`
`Expert Report of Dr. Seth Teller
`
I. Introduction and Summary of Testimony

1. My name is Seth Teller. I have been retained in the above-referenced inter
`
`partes review proceeding by Intel Corporation ("Intel") to evaluate United States Patent No.
`
`6,172,679 ("the '679 patent") against certain references that predate the earliest possible priority
`
`date of the ‘679 patent, which I am informed is June 28, 1991. (Exh. 1001).1 I am informed that
`
`Intel's petition seeks review of claims 1, 4 and 5 of the '679 patent. As detailed in this report, it
`
`is my opinion that each of the asserted claims is anticipated or rendered obvious by prior art
`
`references that predate the ‘679 patent. If requested by the Patent Trial and Appeal Board
`
`(“PTAB”), I will testify at trial about my opinions expressed herein.
`
A. Qualifications

2. I have over twenty-five years of experience in the field of computer science,
`
`spanning a variety of positions in academia and industry. In addition to my studies at
`
`undergraduate, graduate, and postdoctoral levels, I have held industrial positions in groups for
`
`technology development, advanced technology development, and for research and development.
`
`For the past nineteen years I have held a faculty position in the Electrical Engineering and
`
`Computer Science Department at the Massachusetts Institute of Technology. In these positions I
`
`have designed and implemented, or supervised the design and implementation of, a number of
`
`computer graphics methods and systems intended for use in both academic and industrial
`
`settings.
`
`
1 All numerical exhibits cited herein are attached as exhibits to Intel's Petition for inter partes review of the '679 patent.
`
3. From 1981 to 1985, I was an undergraduate student at Wesleyan University in
`
`Middletown, Connecticut. My studies as a Physics major included several projects calling for
`
`graphical visualization of physical simulation data. During this period I gained experience with
`
`two-dimensional rendering and animation algorithms.
`
`4.
`
`From 1986 to 1987, I was a technical staff member at Camex, Inc., a Boston firm
`
`producing software and hardware to support high-quality phototypesetting. During this period I
`
`gained experience with three-dimensional rendering and hidden-surface elimination algorithms.
`
`5.
`
`From 1987 to 1992, I was a Ph.D. student in the Computer Science Division of
`
`the Electrical Engineering and Computer Science Department at the University of California,
`
`Berkeley. During this period I became an expert in geometric modeling and in 3-D visibility
`
`algorithms. In 1991, I co-authored with my research advisor Carlo Séquin the article, “Visibility
`
`Preprocessing for Interactive Walkthroughs,” for publication in the proceedings of SIGGRAPH
`
`(Special Interest Group on GRAPHics and Interactive Techniques conference) ’91, describing
`
`efficient methods for subdivision of viewpoint space, for visibility precomputation, and for real-
`
`time hidden surface elimination and visible surface identification. My Ph.D. thesis in Computer
`
`Science, published in 1992, elaborated on the methods described in that article and demonstrated
`
`applications of the methods to interactive rendering.
`
`6.
`
`From 1988 to 1992, I was also a part-time employee of Silicon Graphics, Inc.
`
(SGI), as a member of its Advanced Systems Division (1988) and its Research and Development Group (1988-1992). My work at SGI focused on the development of efficient methods for
`
`interactive specification and rendering of complex curved surfaces, and efficient methods for
`
`geometric computations involving complex datasets.
`
`
7. From 1992 to 1994, I held a series of post-doctoral positions at the Hebrew
`
`University of Jerusalem, and at Princeton University. My research during this period focused on
`
`the development of efficient methods for spatial partitioning, radiosity simulation, and
`
`interactive visualization within large-scale computer graphics environments.
`
`8.
`
`From 1994 to the present, I have been an Assistant Professor (1994-1998),
`
`Associate Professor without tenure (1998-2002), Associate Professor with tenure (2002-2007),
`
`and Professor (2007-present) of Computer Science and Engineering in the Electrical Engineering
`
`and Computer Science Department at the Massachusetts Institute of Technology (MIT). From
`
`1994 to 2003, I held a joint research appointment in the Laboratory for Computer Science (LCS)
`
`and Artificial Intelligence Laboratory (AI Lab) at MIT. In July 2003, those Laboratories merged
`
`to form the Computer Science and Artificial Intelligence Laboratory (CSAIL), of which I am
`
`now a member with the rank of Principal Investigator. My research at MIT since 1994 has
`
`included the development of efficient methods for spatial subdivision, visible surface
`
`identification, hidden surface elimination, occlusion culling and ray tracing.
`
`9.
`
` I have authored or co-authored dozens of publications in the area of computer
`
`graphics, which have appeared in a variety of publication venues including computer graphics
`
`workshops, conferences and journals. My publications and patents are listed on my curriculum
`
`vitae. (Exh. 1012).
`
`10.
`
`As a result of my background in the computer graphics industry and in academia,
`
`I have extensive experience pertaining to visible surface determination and hidden surface
`
`elimination techniques, including techniques that serve as a backdrop for the technology in the
`
`'679 patent. For instance, I have extensive experience developing data structures and algorithms
`
to support efficient and effective spatial subdivision, visibility preprocessing, and occlusion culling.
`
B. Other Matters

11. There are no other legal matters in which I have testified as an expert at trial or by
`
`deposition within the preceding four years.
`
C. Compensation

12. In connection with my work as an expert, I am being compensated at a rate of
`
`$900 per hour for consulting services including time spent testifying at any hearing that may be
`
`held. I am also reimbursed for reasonable and customary expenses associated with my work in
`
`this case. I receive no other forms of compensation related to this case. No portion of my
`
`compensation is dependent or otherwise contingent upon the results of this proceeding or the
`
`specifics of my testimony.
`
`D. Materials Reviewed
`
`13.
`
`In formulating my opinions in this case I have reviewed the '679 patent and its
`
`prosecution history. I have also reviewed U.S. Patents Nos. 5,914,721 ("the '721 patent") and
`
`6,618,047 B1 ("the '047 patent"), and their prosecution histories. (The '721 patent is the "parent"
`
`patent to the '679 patent and the '047 patent is the "child" patent to the '679 patent.) In addition
`
`to my expertise, I relied on the articles and other materials cited herein and attached as exhibits
`
`to the Petition.
`
`14.
`
`In connection with live testimony in this proceeding, should I be asked to provide
`
`it, I may use as exhibits various documents that refer to or relate to the matters contained within
`
this report, or which are derived from the results and analyses discussed in this report.
`
Additionally, I may create or assist in the creation of certain demonstrative exhibits to assist me in testifying.
`
`15.
`
`I am prepared to use any or all of the above-referenced documents, and
`
`supplemental charts, models, and other representations based on those documents, to support my
`
`live testimony in this proceeding regarding my opinions covering the '679 patent. If called upon
`
`to do so, I will offer live testimony regarding the opinions in this report.
`
E. Level of Ordinary Skill in the Art

16. I am told that in certain cases, the claims of a patent are reviewed from the point
`
`of view of a hypothetical person of ordinary skill in the art at the time the patent application was
`
`first filed. In my opinion, for the purposes of the '679 patent, a person of ordinary skill in the art,
`
`at the time the patent was filed in 1994 (or 1991 if that should be determined to be the priority
`
`date), would be one having a bachelor's or master's degree with a concentration in computer
`
`graphics from an accredited computer science program, and at least two years of experience
`
`working in a field related to three-dimensional computer graphics.
`
`II.
`
`Overview/Tutorial Regarding Technology
`
`17.
`
`Before discussing the patent-in-suit, it would be useful to give an overview of
`
`how images of artificial three-dimensional scenes are generated using computers. Generally
`
`speaking, computer graphics image generation or “scene rendering” involves at least five
`
`elements: a specification of a “geometric model” of the scene to be rendered; a specification of a
`
`“lighting model” specifying how illumination effects are to be simulated; one or more “synthetic
`
`viewpoints” from which the model is to be rendered; a “pixel frame buffer” specification
`
`describing the properties of the device on which the rendered image is intended to be displayed
`
`or stored; and a mechanism for assigning coordinates enabling the representation of the scene
`
objects, simulated viewpoint, and frame buffer in appropriate coordinate systems, transformation among coordinate systems, and discrete sampling of continuous objects as required to enable the
`
`above rendering elements.
`
`18.
`
`A geometric model generally consists of a specification of the shape, location,
`
`rigid-body orientation, and appearance of one or more objects or surfaces making up the scene of
`
`interest. Each object or surface is generally represented as a collection of geometric “primitives”
`
`or basic shapes, e.g. triangles, quadrilaterals, or other multi-sided polygons, themselves
`
`represented as collections of three-dimensional vertices. Each object or surface is optionally
`
`associated with one or more geometric transformations, for example scaling, translation, and
`
`rotation, which specify how the object is to be sized, positioned, and oriented in the scene.
`
`Figure 1: A scene object composed of quadrilaterals, shown with back-facing and hidden surfaces removed
`
`
`
`
`
`
`
`
`
`Figure 2: Representation of a shape at increasing level of detail (left to right), using triangles
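To make this concrete, the following sketch (my own illustration in C++; the type and field names are hypothetical and are not drawn from the '679 patent or any cited reference) shows one conventional way a primitive and its associated transformation might be represented:

    #include <vector>

    // Illustrative sketch only; names are hypothetical.
    struct Vertex3 { double x, y, z; };            // a three-dimensional vertex

    struct Triangle { Vertex3 v[3]; };             // a primitive represented by its vertices

    struct Transform {                             // simplified scale-and-translate transformation
        double  scale;                             // (a full transform would also include rotation)
        Vertex3 translate;
        Vertex3 apply(const Vertex3& p) const {
            return { p.x * scale + translate.x,
                     p.y * scale + translate.y,
                     p.z * scale + translate.z };
        }
    };

    struct SceneObject {                           // an object: its primitives plus its placement
        std::vector<Triangle> primitives;
        Transform placement;
    };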
`
`
`
`19.
`
`Although the most general scene modeling and rendering schemes encompass
`
`both opaque (i.e., reflective but non-transmissive) and semi-transparent (i.e., both reflective and
`
`transmissive) objects, for the purposes of the current discussion it is sufficient to consider only
`
`opaque objects. For a given simulated viewpoint, at most one opaque scene object or surface can
`
`be visible along any single line of sight.
`
`
`
`Figure 3: A simulated viewpoint (upper right) viewing a geometric scene model (lower left)
`
`20.
`
`A synthetic viewpoint is a description of, at minimum, a simulated viewer’s
`
`location. In practice, rendering a rectangular image from that viewpoint further requires
`
`
`
specification of a "view frustum," or truncated pyramid of positive volume with the viewpoint at its apex (Fig. 4). The view frustum has an orientation determined by the viewing direction. The
`
`view frustum defines a field of view determined by the left, right, bottom, and top clipping
`
`planes. Finally, the frustum has a depth of field determined by near and far clipping planes.
`
`
`
`Figure 4: The view frustum, with viewpoint at its apex, and six bounding planes
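As a further illustration (again my own sketch in C++, with hypothetical names), a view frustum of the kind shown in Figure 4 can be described by the viewer's location together with its six bounding planes:

    struct Plane { double a, b, c, d; };           // the plane a*x + b*y + c*z + d = 0

    struct ViewFrustum {
        double apexX, apexY, apexZ;                // the simulated viewer's location (the apex)
        Plane  left, right, bottom, top;           // define the field of view
        Plane  nearPlane, farPlane;                // define the depth of field
    };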
`
21. Next, each scene object is transformed into the coordinates of the view frustum
`
`(also called “camera coordinates”) and projected onto the image plane. For each three-
`
`dimensional vertex of the scene primitive or object to be displayed, the projection operation
`
`establishes the intersection, with the near clipping plane or image plane, of a ray originating at
`
`the viewpoint and passing through the vertex. The terms “image plane,” “screen plane” and
`
`“projection plane” are understood by those of ordinary skill in the art to signify the planar
`
`surface on which all scene primitives lie after projection.
`
`
`
`
`
`
`Figure 5: Projection of geometric primitives onto the image plane
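The projection operation described above can be sketched as follows (my own simplified illustration in C++, assuming camera coordinates with the viewpoint at the origin, the view direction along the positive z axis, and vertices lying in front of the near clipping plane; the names are hypothetical):

    struct CameraVertex { double x, y, z; };       // a vertex already transformed into camera coordinates

    struct ProjectedVertex { double sx, sy, depth; };

    // Intersect the ray from the viewpoint (the origin) through the vertex with
    // an image plane located at distance nearDist along the view direction.
    ProjectedVertex project(const CameraVertex& v, double nearDist) {
        ProjectedVertex p;
        p.sx    = nearDist * v.x / v.z;            // by similar triangles
        p.sy    = nearDist * v.y / v.z;
        p.depth = v.z;                             // depth is retained for later hidden-surface tests
        return p;
    }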
`
`
22. The stages of rendering up to and including projection are generally classified
`
`by those skilled in the art as “object space” computations, since they manipulate scene objects,
`
`and the view frustum, using a coordinate representation that has effectively unlimited precision,
`
`and they represent each scene primitive (e.g., triangle) as covering a finite, non-zero area in both
`
`the coordinate system of the scene model, and the projected coordinate system of the image
`
`plane. Any operation or computation that involves a scene primitive or view frustum represented
`
`at object precision is known to those skilled in the art as an “object space” operation or
`
`computation.
`
`23.
`
`A frame buffer is a memory organized into a discrete grid of “pixels” or
`
`picture elements, and connected to specialized hardware that rapidly reads out the contents of
`
`memory for display. Each pixel has a “color depth,” i.e., some number of memory bits dedicated
`
`to representing the light intensity to be displayed at the display location corresponding to that
`
pixel. For example, a frame buffer might devote 24 bits to representing eight bits of intensity information for each of the red, green, and blue image channels.
`
`
`
`Figure 6: A conventional frame buffer storing color and depth (Z) information
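A frame buffer of the kind shown in Figure 6 might be organized roughly as follows (an illustrative sketch of my own in C++, with 24 bits of color plus a depth value per pixel; the names are hypothetical):

    #include <cstdint>
    #include <vector>

    struct Pixel {
        uint8_t r, g, b;                           // eight bits of intensity per color channel (24 bits total)
        float   z;                                 // the depth (Z) value used for hidden-surface elimination
    };

    struct FrameBuffer {
        int width, height;
        std::vector<Pixel> pixels;                 // a discrete grid of picture elements
        FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
        Pixel& at(int x, int y) { return pixels[y * width + x]; }
    };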
`
24. In order for continuous object space primitives to be displayed on a discrete
`
`image space frame buffer, a process of “scan conversion” (also called “rasterization”) is required
`
`which determines, for each scene primitive projected into a continuous screen space, the set of
`
`discrete pixels onto which that primitive projects. The term “scan conversion” is known to those
`
`skilled in the art to mean the process of sampling a continuous object space representation of
`
`some scene primitive to identify those elements of a discrete image-space frame buffer that
`
`should be involved in the visual representation of the scene primitive.
`
`
`
`
`
`
`Figure 7: Scan conversion of a triangle primitive, with interpolation of depth and color
`
`
`
`25.
`
`Once a scene primitive has been transformed into continuous screen space, i.e.,
`
`
`
`
`the coordinate system of the frame buffer, scan conversion can commence. Scan conversion is
`
`the process of determining, for a given scene primitive, which frame buffer pixels are overlapped
`
`by the primitive. Scan conversion is typically performed through a series of arithmetic
`
`interpolation (generalized averaging) operations that combine discrete geometric (x, y, depth)
`
`and illumination (color) values evaluated at the vertices of the primitive to produce intermediate
`
`values at the interior of the primitive. Any operation or computation that is performed on a set of
`
`pixels, e.g. the set of pixels identified by scan conversion of a scene primitive, is known to those
`
`skilled in the art as an “image space” operation or computation. Any operation or computation
`
`that combines elements performed at object precision with elements performed at image
`
`precision is known to those skilled in the art as a “hybrid” operation or computation.
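One common formulation of scan conversion is offered below, only as a simplified sketch of my own in C++ (it uses the well-known edge-function, or barycentric, approach; the names are hypothetical, counter-clockwise screen-space winding is assumed, and a practical implementation would restrict the loop to the primitive's bounding box):

    struct ScreenVertex { double x, y, z, color; };    // one color channel shown for brevity

    // Signed-area "edge function" used for coverage testing and interpolation weights.
    static double edge(const ScreenVertex& a, const ScreenVertex& b, double px, double py) {
        return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
    }

    // Visit every pixel covered by the triangle and emit an interpolated depth and
    // color (the "fragment" that will later be subjected to the depth test).
    template <typename EmitFragment>
    void scanConvert(const ScreenVertex& v0, const ScreenVertex& v1, const ScreenVertex& v2,
                     int width, int height, EmitFragment emit) {
        double area = edge(v0, v1, v2.x, v2.y);
        if (area <= 0) return;                         // degenerate or back-facing triangle
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                double px = x + 0.5, py = y + 0.5;     // sample at the pixel center
                double w0 = edge(v1, v2, px, py) / area;
                double w1 = edge(v2, v0, px, py) / area;
                double w2 = edge(v0, v1, px, py) / area;
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;   // pixel not covered by the primitive
                double z = w0 * v0.z + w1 * v1.z + w2 * v2.z;               // interpolated depth
                double c = w0 * v0.color + w1 * v1.color + w2 * v2.color;   // interpolated color
                emit(x, y, z, c);
            }
        }
    }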
`
`26.
`
`A critical element of any correct computer graphics rendering system is its ability
`
`to perform visible surface identification (or, equivalently, hidden surface elimination) at each
`
pixel. That is, given a specification of a geometric model, a view frustum and a frame buffer, the
`
`rendering system must determine, at each pixel, which scene surface (if any) is visible to the
`
`viewer along the line of sight associated with that pixel. This identification enables the rendering
`
`system to “paint,” or assign, each pixel a color arising from the scene surface visible at that pixel.
`
`When all pixels are assigned correctly, the complete set of pixels represents a correct simulated
`
`image of the scene from the specified viewpoint.
`
`
`
`Figure 8: Visible surface identification at pixel resolution
`
`27. Most modern computer graphics rendering systems, including rendering systems
`
`that existed around the time of the filing of the ‘679 patent, also included specialized “z-buffer”
`
`or “depth buffer” memory (proposed by Catmull 1974). (Exh. 1011). Along with performing
`
other rendering operations, Graphics Processing Units (GPUs) incorporate a z-buffer to store
`
`depth data for each pixel for the purpose of eliminating hidden surfaces at the level of individual
`
`pixels or for hierarchical groups of pixels. See, e.g., Figure 6 supra. The minimum
`
representable depth value generally corresponds to coincidence with the view frustum's near
`
`clipping plane. The maximum representable depth value generally corresponds to coincidence
`
`with the view frustum’s far clipping plane. For a given point on an object to be depicted, the
`
`present depth value for the corresponding pixel can be retrieved from the z-buffer and compared
`
`to the depth value for the point. If the depth value for the point is farther from the viewpoint
`
`than the value already contained in the z-buffer, then the point will be blocked or hidden from
`
`the viewer. For a scene point that is determined to be hidden, the values for the color and other
`
`properties of the point will not be written into the frame buffer, and the data for the pixel that
`
`corresponds to that point will not be overwritten. If, however, the depth value for the point is
`
`closer to the viewpoint than the value in the z-buffer for the corresponding pixel, then the point
`
`will be visible. For scene points that are determined to be visible, the values for color and other
`
`properties will be written into the frame buffer and become the new values for the corresponding
`
`pixel. The new depth value will also be written into the z-buffer and become the new depth
`
`value for the displayed pixel. Thus, the frame buffer and z-buffer are used together to determine
`
`what should be displayed on the screen. The process of using the z-buffer to determine which
`
`scene points are visible is known as “z-buffering” or “z-testing.” Z-buffering is performed
`
`during the rendering process.
`
`28. More specifically, at the start of each rendered frame, the depth buffer is
`
`initialized at each pixel to the largest representable depth value. Thereafter, the depth buffer is
`
`maintained so as to store at each pixel a depth value representing the distance from the viewer, in
`
`a direction perpendicular to the image plane, of the closest scene point rendered at that pixel so
`
`far in the current frame.
`
29. The depth buffer also includes conditional logic that controls the writing of color
`
`values to the frame buffer. At the start of each rendered frame, the frame buffer is initialized or
`
`“cleared” to some background color. Thereafter, color and depth values or “fragments”
`
`subsequently arising from lighting computations at a particular surface point are written into a
`
`particular pixel of the frame buffer only if the fragment passes the “depth test,” i.e., if the depth
`
`of the surface point is less than the depth stored at the corresponding depth buffer location. If the
`
`depth test is passed, the depth buffer is updated to contain the depth of the tested fragment.
`
`Thus, at the end of each rendered frame, each frame buffer pixel will contain a color value
`
`associated with the closest scene point visible to the viewer along the line of sight corresponding
`
`to that pixel, and the corresponding depth buffer pixel will contain the depth of the surface
`
`visible at that pixel. If no scene surface is visible at a pixel, the frame buffer will contain the
`
`background color at that pixel, and the depth buffer will contain its maximum representable
`
`value at that pixel.
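The depth-test logic just described can be sketched as follows (again an illustration of my own in C++ with hypothetical names, following the conventions of paragraphs 28 and 29: the z-buffer is initialized to the maximum depth, the frame buffer is cleared to a background color, and a fragment is written only if it is closer than anything drawn so far):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Color { uint8_t r, g, b; };

    // Initialize the buffers at the start of each rendered frame.
    void clearBuffers(std::vector<float>& zbuffer, std::vector<Color>& framebuffer,
                      Color background, float maxDepth) {
        std::fill(zbuffer.begin(), zbuffer.end(), maxDepth);
        std::fill(framebuffer.begin(), framebuffer.end(), background);
    }

    // Apply the depth test to one fragment and conditionally write it.
    void depthTestAndWrite(std::vector<float>& zbuffer, std::vector<Color>& framebuffer, int width,
                           int x, int y, float fragmentDepth, Color fragmentColor) {
        int i = y * width + x;
        if (fragmentDepth < zbuffer[i]) {          // the fragment is the closest surface seen so far
            zbuffer[i]     = fragmentDepth;        // record the new closest depth
            framebuffer[i] = fragmentColor;        // paint the pixel with the visible surface's color
        }                                          // otherwise the fragment is hidden and is discarded
    }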
`
`30.
`
`Although the depth buffer provides a solution to the hidden surface elimination
`
`problem, it makes an equivalent computational expenditure for fragments (colored pixels)
`
`regardless of whether they arise from visible or invisible surfaces; the only difference between
`
`the visible and invisible case is that the depth test fails for invisible fragments, suppressing the
`
`“framebuffer write” that occurs for visible fragments. For a great many interesting scene models
`
`and interactive rendering sessions occurring in practice, a majority of scene surfaces are invisible
`
`(i.e., occluded) from a majority of viewpoints. Thus, a rendering system relying solely on
`
`conventional depth buffering for hidden surface elimination expends the majority of its
`
`computational resources processing (transforming, lighting, scan-converting, depth testing, etc.)
`
`invisible scene points, i.e., scene points that make no contribution to the final rendered image.
31. It is often of interest to compute a series of simulated images at close intervals in
`
`space and time, so as to produce the illusion of smooth motion through the scene. This sort of
`
`visualization is called “interactive” or “real-time” rendering, in contrast to so-called “batch” or
`
`“off-line” rendering. To maintain the illusion of smooth motion, it is generally accepted by those
`
`skilled in the art that successive images must be generated at a rate of at least 10 Hz, i.e., at least
`
`ten times per second. Even for a geometric scene model of modest complexity and a frame
`
`buffer of ordinary resolution, achieving such a refresh rate requires performing hundreds of
`
`millions of basic operations (i.e. memory loads and stores, comparisons, and arithmetic
`
`operations) per second. Thus a problem of central interest in computer graphics is to render
`
`images not only correctly, but rapidly, i.e. in a computationally efficient manner.
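To make this concrete with a rough, purely illustrative calculation of my own (the figures are representative and are not drawn from any reference): a frame buffer of 640 x 480 pixels contains 307,200 pixels. If, on average, three scene surfaces project onto each pixel, each frame requires on the order of 900,000 scan-converted fragments, each of which must be interpolated, depth-tested and potentially written, and each of which involves multiple memory accesses, comparisons and arithmetic operations. At ten frames per second this amounts to roughly nine million fragments per second and, together with the per-vertex transformation and lighting work, easily hundreds of millions of basic operations per second.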
`
`32.
`
`Any particular hardware subsystem has some fixed, maximum rendering capacity,
`
`typically expressed as a combination of the hardware’s “transform rate” and “fill rate.” The
`
`transform rate is the rate at which the hardware can subject modeling primitives (vertices of
`
`scene objects) to geometric transformations. The fill rate is the rate at which the hardware
`
`performs scan conversion, i.e., produces surface points, with associated depth and color, destined
`
`for the depth buffer and frame buffer respectively.
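As a purely hypothetical example of my own to make these quantities concrete: a subsystem with a transform rate of one million vertices per second and a fill rate of twenty million pixels per second can, at ten frames per second, transform at most about 100,000 vertices and fill at most about two million pixels per frame. A scene model that requires more than either budget cannot be rendered at interactive rates by that subsystem unless its rendering load is somehow reduced.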
`
`33.
`
`In contrast to the fixed capacity of a given rendering hardware subsystem, there
`
`are no fixed bounds on the number of geometric primitives in the scene model that may be
`
`presented to the subsystem for rendering. Thus, regardless of the performance level of the
`
`underlying rendering subsystem, it is a common occurrence for a user to attempt interactive
`
`visualization of a geometric model, only to find that the subsystem renders the model at less than
`
`interactive rates, i.e., that it sustains a refresh rate of less than 10 frames per second.
`
`
`
`34. Many researchers have described methods for lessening the rendering load on an
`
`underlying rendering subsystem by exempting some hidden surfaces from transformation,
`
`illumination, scan conversion and depth buffer computations. A steady stream of academic
`
`literature beginning in the 1960’s, and continuing to the present, has sought to develop efficient
`
`and effective methods for visible surface identification and hidden surface elimination in order to
`
`reduce load on a fixed rendering subsystem.
`
`35.
`
`A load reduction method is said by those skilled in the art to be “efficient” if two
`
`considerations hold for most scenes and most viewpoints exercised within those scenes. First,
`
`the method’s running time must grow tolerably slowly, usually only slightly faster than linearly,
`
`with the size of its input. Second, the method must achieve a net increase in frame rate in an
`
`interactive setting, i.e., during interactive rendering the method must achieve a greater time
`
`savings due to hardware rendering load reduction than the time expenditure due to performing
`
`hidden surface elimination or visible surface identification.
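The second consideration can be stated as a simple inequality (my own shorthand, not drawn from any reference): if T_full is the time the rendering subsystem would need to render the entire model, T_method is the time consumed by the load reduction method itself, and T_reduced is the time needed to render only the surfaces the method passes on to the subsystem, then the method yields a net benefit for a given frame only when T_method + T_reduced < T_full.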
`
`36.
`
`The fundamental problem facing designers past and present is the lack of advance
`
`knowledge of the precise viewpoint or viewpoints to be exercised during interactive rendering.
`
`If a particular viewpoint were to be known precisely ahead of time, any method could simply
`
`“precompute” the set of scene objects or surfaces visible to that viewpoint, and store the visible
`
`set while associating it with the particular viewpoint. If and when that viewpoint were to be later
`
`exercised, the method could simply retrieve the stored visible set and issue it to the rendering
`
`subsystem for display. However, the nature of interactive rendering, in which the simulated
`
`viewpoint is under real-time user control, makes it inherently impossible for any system to have
`
`precise advance knowledge of any particular viewpoint. Moreover, precomputing the visibility
`
from all possible distinct viewpoints is clearly impossible, since there are an infinite number of
`
`distinct continuous viewpoints within any positive volume.
`
`37.
`
`In an interactive setting, only approximate knowledge of the viewpoint to be
`
`exercised is available, with the general rule holding that the precision with which any viewpoint
`
`can be known is inversely proportional to the time remaining until that viewpoint is to be
`
`exercised. Thus, before the start of an interactive visualization session, maximum time remains
`
`until any given viewpoint is to be exercised, and literally no precision is available, since the user
`
`may choose to place the initial viewpoint anywhere at all within or even outside of the scene.
`
`Clearly in this circumstance no conclusions can be made about which surfaces will be hidden or
`
`visible to the initial viewpoint.
`
`38.
`
`However, once interactivity has begun, and assuming, as would one of ordinary
`
`skill in the art, that the user moves smoothly throughout the simulated scene, subsequent
`
`viewpoints can be known with increased precision: they will typically occur near the location of
`
`the current viewpoint. Finally, the only time at which any given viewpoint is known with
`
`complete precision is when it is an actual viewpoint to be exercised by the system, as determined
`
`by the system’s user interface and the user’s input actions. In an interactive setting, the interval
`
`during which this condition holds is typically quite brief, on the order of tens to hundreds of
`
`milliseconds. Thus any rendering load-reduction method must identify visible surfaces, or
`
`eliminate invisible surfaces, very rapidly in order to be suitable for interactive use.
`
`39.
`
`A load reduction method is