`Ashraf A. Fawzy, Reg. No. 67,914
`Akin Gump Strauss Hauer & Feld LLP
`1333 New Hampshire Avenue, NW
`Washington, D.C. 20036
`Tel: (202) 887-4000
`Fax: (202) 887-4288
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`_______________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`_____________
`
`Google Inc.
`Petitioner
`
`v.
`
`Art+Com Innovationpool GmbH
`Patent Owner
`
`_____________
`
`Inter Partes Review No. __________
`U.S. Patent No. RE44,550
`
`_____________
`
`DECLARATION OF DR. ANSELMO LASTRA
`
`
TABLE OF CONTENTS

Page

I.    INTRODUCTION ........................................................................................ 1

II.   QUALIFICATIONS ..................................................................................... 1

III.  COMPENSATION ....................................................................................... 4

IV.   MATERIAL CONSIDERED ........................................................................ 4

V.    LEVEL OF ORDINARY SKILL IN THE ART ........................................... 8

VI.   THE DISCLOSED TECHNOLOGY ............................................................ 9

      A.  The State of the Art in 1995 ................................................................ 9

      B.  The ’550 Specification ...................................................................... 15

VII.  CLAIM CONSTRUCTION ........................................................................ 19

      A.  Legal Standard for Claim Construction ............................................. 19

      B.  Terms That Require Construction ..................................................... 20

VIII. LEGAL AND FACTUAL BASES FOR OPINION ................................... 23

      A.  Priority Date ...................................................................................... 23

      B.  Legal Standard of Anticipation ......................................................... 23

IX.   The Challenged Claims are Anticipated ...................................................... 25

      A.  General Description of Prior Art ....................................................... 25

      B.  Ground 1: Claims 1-2, 4, 7, 17-18, 25, 27, 31, 33, 41-42, 76, 79, and
          84 are anticipated by the T_Vision Publication ................................. 27

          1.  Claim 1 is anticipated by the T_Vision Publication .................... 27

          2.  Claims 2, 4, and 84 are anticipated by the T_Vision
              Publication ................................................................................. 45

          3.  Claim 7 is anticipated by the T_Vision Publication .................... 47

          4.  Claims 17, 33, 76, and 79 are anticipated by the T_Vision
              Publication ................................................................................. 49

          5.  Claims 18, 25, and 27 are anticipated by the T_Vision
              Publication ................................................................................. 50

          6.  Claim 31 is anticipated by the T_Vision Project ......................... 51

          7.  Claim 41 is anticipated by the T_Vision Project ......................... 52

          8.  Claim 42 is anticipated by the T_Vision Publication .................. 53

X.    SPECIFIC GROUNDS OF UNPATENTABILITY .................................... 54

XI.   CONCLUSION ........................................................................................ 101
`
`
`
`I, Dr. Anselmo Lastra, declare as follows:
`
I. INTRODUCTION
`
`1.
`
`Counsel for Google Inc. (“Google” or “Petitioner”) has requested that
`
`I provide declaratory evidence in the above captioned inter partes review
`
`proceeding (my “Engagement”). I understand that this inter partes review
`
`proceeding involves U.S. Patent No. RE44,550 (“the ’550”).
`
`2.
`
`As part of my Engagement, I have been asked to provide my analysis
`
`and expert opinions regarding the state of the relevant technology at the time of the
`
`alleged invention of the ’550 and the validity of claims 1-3, 14, 18, 25, 27-30, 32,
`
`34, 35, 39, 43, 46, 48, 51-53, 58, 61, 63, and 83 (the “Challenged Claims”) of the
`
`’550.
`
`II. QUALIFICATIONS
`
`3.
`
`I have been a professor in the Department of Computer Science at the
`
`University of North Carolina at Chapel Hill since 1991. I am currently a tenured
`
`Professor of the Computer Science Department and was the Chair of the Computer
`
`Science Department from 2009 to 2014. My qualifications for formulating my
`
`analysis on this matter are summarized here and are addressed more fully in my
`
`curriculum vitae (Ex. 1003).
`
`4.
`
`I received a Bachelor of Science degree in Electrical Engineering
`
`from the Georgia Institute of Technology in 1972. In 1981, I received a Master of
`
`Arts degree in Computer Science from Duke University. In 1988, I received a
`
`1
`
`GOOGLE EXHIBIT 1002, Page 4 of 105
`
`
`
`Ph.D. in Computer Science from Duke University.
`
`5.
`
`Since 1972 I have been involved in the research, development, and
`
`design of hardware and software for computer systems, including computer
`
`graphic systems.
`
`6.
`
`From 1972 to 1975, I was an engineer at Scidata Inc. in New York,
`
`New York and Miami, Florida. From 1975 to 1979, I was an engineer and then
`
`project manager at Coulter Electronics in Miami, Florida. While holding these
`
positions, I designed electronic hardware, software, and medical instruments, and
`
`for a time I supervised a team of engineers and technicians.
`
`7.
`
`From 1988 to 1991, I was a Research Assistant Professor at Duke
`
`University in Durham, North Carolina. While holding this position, I taught
`
`computer science classes and performed research in the areas of parallel processing
`
`and performance, and computational science, with a particular interest in graphics
`
`and pattern recognition.
`
`8.
`
`In 1991, I joined the Computer Science Department at the University
`
`of North Carolina at Chapel Hill as a Research Assistant Professor. In 2001, I was
`
`promoted to Associate Professor. In 2006, I was promoted to full Professor, and
`
`from 2009 to 2014 I served as Chairman of the Computer Science Department.
`
`9.
`
`At the University of North Carolina at Chapel Hill, I worked on the
`
`Pixel-Planes and PixelFlow projects. With support from the National Science
`
`2
`
`GOOGLE EXHIBIT 1002, Page 5 of 105
`
`
`
`Foundation and the Defense Advanced Research Projects Agency, we developed a
`
`heterogeneous graphics supercomputer called Pixel-Planes 5 that was, to the best
`
`of my knowledge, the fastest of its time. We later developed another heterogeneous
`
`graphics supercomputer called PixelFlow.
`
`10. At the University of North Carolina at Chapel Hill, I have taught
`
`courses in computer graphics, digital image processing, graphics hardware,
`
computer architecture, 3D computer animation, robotics, digital logic,

programming, and other subjects.
`
`11.
`
`I have authored or co-authored over 80 research papers and articles,
`
`many of which relate to the relevant art and were presented in prestigious venues,
`
`such as the annual conference of the Association for Computing Machinery’s
`
`Special Interest Group on Computer Graphics and Interactive Techniques
`
`(“SIGGRAPH”). A complete list of my publications is contained in my curriculum
`
`vitae (Ex. 1003).
`
`12.
`
`I co-authored perhaps the first publication in the area now known as
`
`General Purpose Computing on Graphics Processors (“GPGPU”): Harris, Coombe,
`
`Scheuermann, and Lastra, “Physically-Based Visual Simulation on Graphics
`
`Hardware,” Graphics Hardware 2002, 109–118. With two colleagues from the
`
`University of North Carolina at Chapel Hill, I organized the 2004 Workshop on
`
`GPGPU held in Los Angeles, which brought this technology to a wider audience.
`
`3
`
`GOOGLE EXHIBIT 1002, Page 6 of 105
`
`
`
`13.
`
`I served a term on the editorial board of the IEEE Transactions on
`
`Visualization and Computer Graphics. I also served a term on the editorial board of
`
`IEEE Computer Graphics and Applications. I was on the SIGGRAPH conference
`
organization committee. I served as papers chair, program chair, treasurer, and
`
`general chair of the Graphics Hardware conference, and I also served on the
`
`advisory board. In 2009, we merged Graphics Hardware with the Symposium on
`
`Interactive Ray Tracing and created the High Performance Graphics Conference. I
`
`currently serve on the steering committee.
`
`III. COMPENSATION
`
`14.
`
`I am not, and never have been, an employee of Google. I have been
`
`engaged in the present matter to provide my independent analysis of the issues
`
`raised in the petition for inter partes review of the ’550. I received no
`
`compensation for this Declaration beyond my normal hourly compensation based
`
`on my time actually spent studying the matter, and I will not receive any added
`
`compensation based on the outcome of this inter partes review of the ’550.
`
`IV. MATERIAL CONSIDERED
`
`15.
`
`In writing this Declaration, I have considered the following: my own
`
`knowledge and experience, including my work experience in the fields of
`
`computer science, computer engineering, and computer graphics and digital image
`
`processing systems; my experience in teaching those subjects; and my experience
`
`4
`
`GOOGLE EXHIBIT 1002, Page 7 of 105
`
`
`
`in working with others involved in those fields.
`
`16.
`
`I have also reviewed and considered the ’550 (Ex. 1001), the
`
`prosecution file history for the ’550 (including Exs. 1004 and 1005),1 and the
`
`materials included within The T_Vision Project, CD-ROM Materials from
`
`Siggraph 95, Los Angeles, CA, USA (Aug. 6-11, 1995) (“T_Vision Publication”)
`
`(Ex. 1006).2
`
`17.
`
`I have also reviewed the following:
`
`• First Amended Complaint, ART+COM Innovationpool GmbH. v. Google
`
`Inc., No. 14-0217, Dkt. No. 9 (D. Del.) (Ex. 1007)
`
`
1 The prosecution history for the ’550 includes: U.S. Pat. No. RE44,550 (U.S. App. No.
`
`13/773,431) (the “’550”); U.S. Pat. No. RE41,428 (U.S. App. No. 12/006,231);
`
`and U.S. Pat. No. 6,100,897 (U.S. App. No. 08/767,829).
`
`2 The T_Vision Publication was published and made available at the SIGGRAPH
`
`95 conference from Aug. 6, 1995. See, e.g., Ex. 1016 (“SIGGRAPH 95 Conference
`
`Proceedings August 6 – 11, 1995”); Ex. 1017 (“This disc represents a compendium
`
`of different material from the ACM SIGGRAPH conference in Los Angeles,
`
`California. While the conference itself occupies a fixed and fleeting moment in
`
`time, it is our hope that publications such as this will serve to preserve and
`
`propagate the wealth of information and experience which was embodied in that
`
`event.”).
`
`5
`
`GOOGLE EXHIBIT 1002, Page 8 of 105
`
`
`
`• Yvan G. Leclerc & Stephen Q. Lau, Jr., SRI Int’l, TerraVision: A Terrain
`
`Visualization System, Tech. Note 540 (Jan. 26, 1995) (“Leclerc”) (Ex.
`
`1008)
`
`• Geographic Information Systems: Principles and Applications, John
`
`Wiley & Sons (1991) (“The GIS Textbook,” Ex. 1009)
`
`• N.L. Faust, The Virtual Reality of GIS, 22 Env’t & Planning B: Planning
`
`and Design 257, 257-268 (May 1995) (“Faust,” Ex. 1010)
`
`• R.A. Finkel & J. L. Bentley, Quad Trees: A Data Structure for Retrieval
`
on Composite Keys, 4 Acta Informatica, 1-9 (1974) (“Finkel and Bentley,

1974,” Ex. 1011)
`
`• James H. Clark, Hierarchical Geometric Models for Visible Surface
`
`Algorithms, 19 Commc’n of the ACM, 547-554 (1976) (“Clark, 1976,”
`
`Ex. 1012)
`
`• William Hibbard & David Stanek, Interactive Atmospheric Data Access
`
`via High-Speed Networks, 22 Computer Networks and ISDN Systems,
`
`103-109 (1991) (“Hibbard,” Ex. 1013)
`
`• Lance Williams, Pyramidal Parametrics, 17 ACM SIGGRAPH
`
Computer Graphics, 1-11 (1983) (“Williams,” Ex. 1014)
`
`• Joint Statement on Claim Construction, ART+COM Innovationpool
`
`GmbH. v. Google Inc., No. 14-0217, Dkt. No. 64 (D. Del.) (Ex. 1015)
`
`6
`
`GOOGLE EXHIBIT 1002, Page 9 of 105
`
`
`
`• The SIGGRAPH 95 Website and pages from the conference publication
`
`(Exs. 1016, 1017)
`
`• The Microsoft Press Computer Dictionary (2nd ed. 1994) (Ex. 1018)
`
`18.
`
`I understand that the ’550 is the subject of another petition for inter
`
`partes review (the “Related ’550 Petition). I have also prepared a declaration
`
`supporting that petition. I hereby incorporate my statements and opinions in that
`
`declaration herein.
`
`19.
`
`I understand that the T_Vision Publication forms the basis for the
`
`grounds set forth in the Petition for inter partes review of the ’550. Additionally, I
`
`am aware of information generally available to, and relied upon by, persons of
`
`ordinary skill in the art at relevant times, including technical dictionaries and
`
`technical reference materials (including textbooks, manuals, and technical papers
`
`and articles); some of my statements below are expressly based on such awareness.
`
`20. Although this Declaration refers to selected portions of the cited
`
`references for the sake of brevity, it should be understood that one of ordinary skill
`
`in the art would view the references cited herein in their entirety, and in
`
`combination with other references cited herein or cited within the references
`
`themselves. The references used in this Declaration, therefore, should be viewed as
`
`being incorporated herein in their entirety.
`
`21. Due to procedural limitations for inter partes reviews, the grounds of
`
`7
`
`GOOGLE EXHIBIT 1002, Page 10 of 105
`
`
`
`invalidity discussed herein are based solely on prior patents and other printed
`
`publications. I understand that Google reserves all rights to assert other grounds for
`
`invalidity, not addressed herein, at a later time, for instance failure of the
`
`application to claim patentable subject matter under 35 U.S.C. § 101, failure to
`
`meet requirements under 35 U.S.C. § 112, and anticipation/obviousness under 35
`
`U.S.C. §§ 102 and 103 not based solely on patents and printed publications. Thus,
`
`absence of discussion of such matters here should not be taken as indicating that
`
`there are no such additional grounds for invalidity of the ’550.
`
`22.
`
`I reserve the right to supplement my opinions to address any
`
`information obtained, or positions taken, based on any new information that comes
`
`to light throughout this proceeding.
`
`V. LEVEL OF ORDINARY SKILL IN THE ART
`
`23.
`
`It is my understanding that the ’550 is to be interpreted based on how
`
`it would be read by a person of ordinary skill in the art (a “POSITA”). It is my
`
`understanding that factors such as the educational level of those working in the
`
`field, the sophistication of the technology, the types of problems encountered in the
`
`art, the prior art solutions to those problems, and the speed at which innovations
`
`are made may help establish the level of skill in the art.
`
`24.
`
`I was familiar with, and a practitioner of, the technology at issue and
`
`the state of the art at the time the application leading to the ’550 was filed. I am
`
`8
`
`GOOGLE EXHIBIT 1002, Page 11 of 105
`
`
`
`assuming that the effective filing date of the ’550 was December 22, 1995 for the
`
`reasons described below.
`
`25. Based upon my experience in this area, a POSITA around December
`
`22, 1995 would have had a Bachelor of Science degree and three years’ experience
`
`in research or development (e.g., engineering, product development, requirements
`
`analysis) in computer graphics and/or digital image processing. With more
`
education, for example post-graduate degrees and/or study, less industry

experience is needed to attain an ordinary level of skill.
`
`26. Based on my experiences, I have a good understanding of the
`
`capabilities of a POSITA. Indeed, I have taught, participated in organizations, and
`
`worked closely with many such persons over the course of my career, including the
`
`relevant time frame around December 22, 1995.
`
`VI. THE DISCLOSED TECHNOLOGY
`
`A. The State of the Art in 1995
`
`27. The technology described in the ’550 combines technology related to
`
`geographical information systems, computer graphics, digital image processing,
`
`and networking.
`
`28.
`
`In general, two of the objectives of the ’550 are to: (1) provide a user
`
`with the ability to visualize a large amount of geographically related data (roads,
`
`buildings, land use, weather, elevation, trees, waterways, etc.), and (2) make the
`
`9
`
`GOOGLE EXHIBIT 1002, Page 12 of 105
`
`
`
`visualization happen at high enough frame rates so that the user experiences a
`
`sense of continuous movement over the Earth.
`
`29. The first objective relates to the use of what the patent refers to as
`
`“spatially distributed data sources.” The amount of information needed was
`
`potentially considerable, so the patent discloses that the user’s computer could
`
`access data stored on multiple data sources connected to the user’s computer by a
`
`high-speed network.
`
`30. The use of high-speed networks to provide an interactive visualization
`
`of large amounts of remotely stored data was well known in 1991. For example, in
`
1991, researchers at the University of Wisconsin-Madison had developed an
`
`interactive weather simulation system that used data stored over a high-speed wide
`
`area network. Ex. 1013. Hibbard recognized that “[a] high-speed wide-area
`
`network would provide a way to extend our visualizations to much larger data
`
`sets…This design exploits the large storage capacity and transfer rate of
`
`supercomputer disks, the arithmetic speed of the super computer, and the rendering
`
`performance of the workstation.” Id. at 3.
`
`31.
`
`It is also worth noting that Patent Owner acknowledged that the prior
`
`art disclosed “distributed memories…connected via a network used to transfer
`
`data.”
`
`10
`
`GOOGLE EXHIBIT 1002, Page 13 of 105
`
`
`
`(Ex. 1005 at 4)
`
`
`
`32.
`
`Interactively visualizing GIS data in two and three dimensions was
`
`well-known by 1995. By that time, the field of GIS was “spreading like wildfire
`
`throughout the world.” Ex. 1010 at 4. Emergent technologies of visualization and
`
`virtual reality were being used to redefine the concepts of how a human interface
`
`for geographic information systems should be defined. Ex. 1010 at 4. Traditional
`
`image processing and geographic information systems had been implemented with
`
`a top-down view of geographical data, but many had recognized that it would be
`
superior and more natural to analyze geographical data in three dimensions. Ex.
`
`1010 at 4.
`
`33. The second objective—efficiently representing a scene in order to
`
`maintain a high enough frame rate to provide real-time interaction (or “continuous
`
`movement”)—was made possible by a number of innovations in the relevant art
`
`since the 1970s. Ex. 1010 at 11-12. On the hardware front, three-dimensional
`
graphics workstations, such as those from early companies like Evans &
`
`Sutherland and Silicon Graphics, rapidly increased in speed so that by the 1980s it
`
`11
`
`GOOGLE EXHIBIT 1002, Page 14 of 105
`
`
`
`was possible to represent terrain in real time with reasonably-priced professional
`
`workstations. These advancements continued into the personal and mobile
`
computer eras, and costs dropped dramatically.
`
`34. Advances in software included spatial data structures, such as the
`
`quadtree (Ex. 1011), polygonal levels of detail (Ex. 1012), and texture “mip
`
`mapping” (Ex. 1014) (all of which reduced the amount of data that needed to be
`
`processed on any given frame).
`
`35. Terrain was typically represented as polygons (often triangles) created
`
`from digital elevation maps (height fields, typically specified on a grid across the
`
ground). The terrain was then textured, often using images taken from satellites

or planes equipped with appropriate cameras.
`
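For illustration only, the following is a minimal sketch, prepared by me for this Declaration, of how a gridded digital elevation map could be converted into triangles; the function name, data layout, and example grid are hypothetical and are not drawn from the ’550 or any cited reference.

```python
# Hypothetical sketch: triangulating a gridded digital elevation map (height
# field). heights[y][x] is the sampled elevation at grid point (x, y); each
# grid cell yields two triangles whose vertices carry (x, y, height).

def triangulate_height_field(heights):
    triangles = []
    rows, cols = len(heights), len(heights[0])
    for y in range(rows - 1):
        for x in range(cols - 1):
            v00 = (x, y, heights[y][x])
            v10 = (x + 1, y, heights[y][x + 1])
            v01 = (x, y + 1, heights[y + 1][x])
            v11 = (x + 1, y + 1, heights[y + 1][x + 1])
            triangles.append((v00, v10, v11))  # split each cell into two
            triangles.append((v00, v11, v01))  # triangles along the v00-v11 diagonal
    return triangles

# Example: a 3x3 height grid produces 2 x 2 cells x 2 triangles = 8 triangles.
grid = [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
assert len(triangulate_height_field(grid)) == 8
```

The resulting triangles would then be textured, for example with satellite imagery mapped over the same grid.
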
`36. There were two major approaches to making real-time terrain
`
`representation feasible: (1) fast graphics hardware and (2) selection (using efficient
`
`software algorithms) of the minimum number of polygons to be represented to get
`
`a good image of sufficient resolution. With respect to the second point, there were
`
`two types of selection algorithms pertinent here: culling algorithms and level of
`
`detail algorithms—often used in combination.
`
`37. Culling algorithms limit the number of polygons processed to those
`
`that are within the user’s view frustum to limit the burden on the graphics
`
`hardware. To accomplish this, one would use spatial data structures such as a
`
`12
`
`GOOGLE EXHIBIT 1002, Page 15 of 105
`
`
`
`quadtree. See Ex. 1011. The figure below illustrates a polygonal mesh in grey, and
`
`three levels of a quadtree with 1, 4, and 16 nodes respectively. Each node
`
`maintains a list of the polygons that overlap it. Imagine that the blue rectangle
`
`determines the view to be represented. The culling algorithm traverses the quadtree
`
`from level 1 on, and selects the level that best bounds the area of interest. In this
`
case, it is level 3. Now, instead of representing all of the triangles, the graphics

hardware need only represent the ones that are potentially visible, those within the

two level-3 quadtree nodes shown in red.
`
`
`
`38.
`
`Imagine that the area to be displayed (blue rectangle) moves a bit, as
`
`shown below. Now the polygons to select are those within the single, level 2 node,
`
`shown below in red. Of course, a quadtree could be built with more levels in order
`
`to get tighter bounds, and potentially select fewer polygons to represent.
`
`13
`
`GOOGLE EXHIBIT 1002, Page 16 of 105
`
`
`
`
`
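The following sketch, which I prepared for this Declaration, illustrates one common form of the quadtree culling described in the preceding two paragraphs. The three-level tree mirrors the 1-, 4-, and 16-node example above; the names, the dictionary-based node layout, and the rectangle-only overlap test are my own hypothetical simplifications, not code from any cited reference.

```python
# Hypothetical sketch of quadtree visibility culling. Each node covers a square
# region and stores the polygons overlapping it; a query descends only into
# children that intersect the view rectangle and collects the leaf contents.
# Rectangles are (xmin, ymin, xmax, ymax); polygons are tuples of (x, y) vertices.

def overlaps(a, b):
    """True if two axis-aligned rectangles intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def bbox(polygon):
    """Axis-aligned bounding rectangle of a polygon."""
    xs = [v[0] for v in polygon]
    ys = [v[1] for v in polygon]
    return (min(xs), min(ys), max(xs), max(ys))

def build_quadtree(bounds, polygons, levels=3):
    """levels=3 yields the 1-, 4-, and 16-node hierarchy described above."""
    node = {"bounds": bounds, "children": [], "polygons": polygons}
    if levels > 1:
        xmin, ymin, xmax, ymax = bounds
        xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
        for quadrant in [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
                         (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]:
            inside = [p for p in polygons if overlaps(quadrant, bbox(p))]
            node["children"].append(build_quadtree(quadrant, inside, levels - 1))
    return node

def cull(node, view_rect):
    """Return the polygons potentially visible within the view rectangle."""
    if not overlaps(node["bounds"], view_rect):
        return set()                    # this subtree is entirely off-screen
    if not node["children"]:
        return set(node["polygons"])    # leaf: its polygons may be visible
    visible = set()
    for child in node["children"]:
        visible |= cull(child, view_rect)
    return visible
```

The traversal touches only the quadrants that the view rectangle intersects, so the graphics hardware is handed the polygons from those nodes rather than the entire mesh.
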
`39. As with culling algorithms, level-of-detail algorithms (Ex. 1012) and
`
`“mip mapping” (Ex. 1014) also aimed to reduce the number of polygons and
`
textures represented. Level-of-detail algorithms often used the distance of the
`
`observer to the terrain as a basis for representing more or less detail. The basic
`
`principle is that terrain that is far away from the observer can be represented in
`
`lower detail than terrain that is closer to the observer without the loss of any
`
realism to the user. Thus, for representing terrain that is far from the viewer,

level-of-detail algorithms created a simpler model, with fewer, larger polygons

and lower-resolution textures. Level-of-detail algorithms automatically created

simplified models and, at run time, chose the proper level of detail to display for
`
`the distance of the viewer from the model.
`
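As a concrete illustration of this principle, the following sketch, again prepared by me for this Declaration, selects one of several precomputed terrain models based on the distance from the viewer; the names and cutoff values are hypothetical.

```python
# Hypothetical sketch of distance-based level-of-detail (LOD) selection.
# models[0] would be the coarsest model (fewest, largest polygons); higher
# indices hold progressively finer models. Cutoff distances are illustrative.

CUTOFFS = [5000.0, 1000.0, 200.0]    # distances (e.g., meters), coarse to fine

def select_lod(distance_to_viewer):
    """Return the index of the level of detail to display for a terrain patch."""
    level = 0
    for cutoff in CUTOFFS:
        if distance_to_viewer < cutoff:
            level += 1               # close enough to justify a finer model
    return level

assert select_lod(10000.0) == 0      # distant terrain: coarsest model
assert select_lod(50.0) == 3         # nearby terrain: finest model
```

A texture mip level can be chosen in an analogous way, substituting texture resolution for polygon count.
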
`40. Visibility culling algorithms may be, and were, combined with those
`
`for levels of detail; spatial data structures, such as the quadtree, are used for both
`
`purposes. When used together, the system will only represent areas of the terrain
`
`(i.e., polygons) that are both within the user’s view and at an appropriate
`
`14
`
`GOOGLE EXHIBIT 1002, Page 17 of 105
`
`
`
`resolution. As a result, the system is able to maintain a constant frame rate that
`
`provides the user with a real-time interactive experience.
`
B. The ’550 Specification
`
`41. The ’550 relates “to a method and a device for pictorial representation
`
`of space-related data, particularly geographical data of flat or physical objects.”
`
`Ex. 1001 at 1:15-17.
`
`42. The ’550 describes alleged deficiencies in the prior art methods and
`
`devices that visualize geographical data. See id. at 1:34-61. Specifically, the ’550
`
`points to the prior art’s use of “fixed data sets in order to generate the desired
`
`images.” Id. at 1:42-44. “The resolution of the representation is therefore limited to
`
`the resolution of the [fixed] data sets stored in a memory unit.” Id. at 1:44-46.
`
`43. According to the ’550, the alleged invention “enable[s] the [space-
`
`related] data to be represented in any pre-selected image resolution in the way in
`
`which the object would have been seen by an observer with a selectable location
`
`and selectable direction of view.” Id. at 2:3-8. It also “keep[s] the effort required
`
`for generating an image so low that . . . upon alteration of the location and/or of the
`
`direction of view of the observer, the impression of continuous movement above
`
`the object arises.” Id. at 2:8-13.
`
`44. The ’550 describes a method for retrieving space-related data, such as
`
`geographical data of the Earth, to generate pictorial representations of the data. See
`
`15
`
`GOOGLE EXHIBIT 1002, Page 18 of 105
`
`
`
`id. at Abstract. The space-related data are stored in “spatially distributed data
`
`sources.” Id. at 2:18-20. Figure 1 of the ’550 (reproduced below) illustrates the
`
`spatially distributed data sources 4 that are coupled to computing devices 1, 2, and
`
`3 via a data transmission network. See id. at 6:18-23. “These data sources include
`
`for example data memories.” Id. at 2:20-22.
`
`
`
`45. An input medium (track ball) coupled to a computing device enables a
`
`user to select “both the location and the direction of view of the observer” that
`
`together establish a field of view. Id. at 7:26-31, 7:59-63. The field of view
`
`identifies “[t]he portion of an object to be observed,” and is illustrated by the lines
`
`17 in Fig. 3, below. Id. at 2:23-24, 8:18-25, Fig. 3. I will note here that the use of
`
`the term “field of view” in the ’550 is not entirely consistent with how the term is
`
`used in the relevant art. Specifically, the term “field of view” is commonly
`
`understood to mean the solid angle through which the camera observes a scene and
`
`16
`
`GOOGLE EXHIBIT 1002, Page 19 of 105
`
`
`
`not the portion of the object being observed. A POSITA would understand that the
`
`field of view is a frustum originating at the location of the observer and expanding
`
`in the direction of view. Often the frustum is described using the horizontal and
`
`vertical angular extents of the field of view, for example 80 degrees horizontally
`
and 60 degrees vertically. Objects within the frustum are potentially visible to the observer.
`
`Nonetheless, because the ’550 describes the “field of view” as the “portion of the
`
`object to be observed,” I will adopt this meaning of the “field of view” for the
`
`purposes of this inter partes review.
`
`
`
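To make this frustum description concrete, the following sketch, prepared by me for this Declaration, tests whether a point lies within a frustum defined by the observer's position, orthonormal camera axes, and horizontal and vertical angular extents; all names and the 80-by-60-degree defaults are illustrative assumptions.

```python
# Hypothetical sketch: point-in-frustum test using angular extents.
# 'forward', 'right', and 'up' are orthonormal camera axes; a point is inside
# the frustum if it lies in front of the observer and its horizontal and
# vertical offsets stay within the half-angle bounds at its depth.

import math

def in_frustum(point, eye, forward, right, up,
               h_fov_deg=80.0, v_fov_deg=60.0):
    d = tuple(p - e for p, e in zip(point, eye))
    z = sum(a * b for a, b in zip(d, forward))   # depth along direction of view
    if z <= 0.0:
        return False                             # behind the observer
    x = sum(a * b for a, b in zip(d, right))     # horizontal offset at depth z
    y = sum(a * b for a, b in zip(d, up))        # vertical offset at depth z
    tan_h = math.tan(math.radians(h_fov_deg) / 2)
    tan_v = math.tan(math.radians(v_fov_deg) / 2)
    return abs(x) <= z * tan_h and abs(y) <= z * tan_v

# Example: observer at the origin looking down +Z with standard axes.
assert in_frustum((0, 0, 10), (0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0))
assert not in_frustum((0, 0, -10), (0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0))
```
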
`46. Once the field of view is established in the ’550, the computing device
`
`retrieves space-related data associated with the field of view from at least one of
`
`the spatially distributed data sources and stores the retrieved data in a central
`
`storage. See id. at 7:59-67; Fig. 1. The computing device represents the space-
`
`17
`
`GOOGLE EXHIBIT 1002, Page 20 of 105
`
`
`
`related data stored in the central storage in a pictorial representation of the field of
`
`view. See id. at 7:67-8:3; Figs. 1, 2, 3. As illustrated in Fig. 3, different sections of
`
`the field of view may be represented according to different spatial resolutions. Id.
`
`at 3:1-5, 8:18-25. The ’550 discloses that the difference in spatial resolutions can
`
`depend on whether a section of the field of view “is in the immediate vicinity of
`
`the observer or at a great distance therefrom.” Id. at 3:3-5.
`
`47. The ’550 discloses a progressive sub-division of the field of view into
`
`smaller and smaller sections for generating a pictorial representation. For example,
`
`the ’550 discloses a quadtree representation of the field of view to support such
`
`progressive sub-division. See id. at 7:55-58. As illustrated in Fig. 4, below, each
`
`level of the quadtree corresponds to a finer spatial resolution level and smaller-
`
`sized sections of the field of view relative to a higher level of the quadtree. Id. at
`
`4:4-13, 8:53-63, Fig. 4.
`
`
`
`18
`
`GOOGLE EXHIBIT 1002, Page 21 of 105
`
`
`
`48. The progressive sub-division technique in the ’550 traverses the
`
quadtree and retrieves space-related data for sections from at least one of the
`
`spatially distributed data sources until the desired spatial resolution for each
`
section is achieved. Id. at 7:55-58, 8:4-9. Specifically, the ’550 states:
`
`(Id. at 2:28-36)
`
`
`
`49. The only independent claim of the ’550 is directed to: (a) providing a
`
`plurality of spatially distributed data sources, (b) determining a field of view, (c)
`
`requesting data for the field of view, (d) centrally storing the data, (e) representing
`
the data for the field of view with one or more sections, (f) recursively

dividing, requesting, and representing new, higher-resolution data, and (g)

continuing until all sections have the desired resolution or no further resolution

data is available.
`
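The following schematic sketch, prepared by me for this Declaration, illustrates the recursive procedure summarized in steps (a) through (g) as I understand it. The data-source interface, the doubling of resolution at each subdivision, and all names are hypothetical illustrations, not the ’550’s actual method or code.

```python
# Hypothetical sketch of recursive field-of-view subdivision. A section is a
# rectangle (xmin, ymin, xmax, ymax); each spatially distributed source is
# modeled as a callable that returns data for a section at a resolution, or
# None if it has nothing at or above that resolution.

def request_from_sources(sources, section, resolution):
    """Query each distributed data source in turn (assumed interface)."""
    for fetch in sources:
        data = fetch(section, resolution)
        if data is not None:
            return data
    return None

def subdivide(section):
    """Split a section into its four quadrants (a quadtree-style division)."""
    xmin, ymin, xmax, ymax = section
    xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
    return [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
            (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]

def represent(section, resolution, desired, sources, central_storage):
    """Request, centrally store, and represent data for one section, recursing
    into quadrants until the desired resolution is reached or no further
    resolution data is available."""
    data = request_from_sources(sources, section, resolution)
    if data is None:
        return                                     # no further data available
    central_storage[(section, resolution)] = data  # central storage of the data
    # ... pictorial representation of 'data' for this section would occur here ...
    if resolution < desired:
        for quadrant in subdivide(section):
            represent(quadrant, resolution * 2, desired, sources, central_storage)
```

On this sketch, each recursion level corresponds to one level of the quadtree of Figure 4, with each quadrant requested at twice the resolution of its parent.
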
`VII. CLAIM CONSTRUCTION
`
`A. Legal Standard for Claim Construction
`
`50. My understanding is that a primary step in determining validity of a
`
`patent’s claims is to properly construe the claims to determine claim scope and
`
`meaning.
`
`19
`
`GOOGLE EXHIBIT 1002, Page 22 of 105
`
`
`
`51.
`
`In an inter partes review proceeding, I understand that claims are to be
`
`given their broadest reasonable interpretation (“BRI”) in light of the patent’s
`
`specification. See 37 C.F.R. § 42.100(b). In that regard, I understand that the best
`
`indicator of claim meaning is its usage in the context of the patent specification as
`
`understood by a POSITA. I further understand that the words of the claims should
`
`be given their plain meaning unless that meaning is inconsistent with the patent
`
`specification or the patent’s history of examination before the Patent Office. I also
`
`understand that the words of the claims should be interpreted as they would have
`
`been interpreted by a POSITA at the time the alleged invention was made.
`
`Accordingly, I have used the effective filing date of the ’550, December 22, 1995,
`
`as that point in time for claim interpretation purposes.
`
`B.
`
`Terms That Require Construction
`
`52. The claims require a “pictorial representation.” The specification of
`
the ’550 uses, but does not specifically define, the term “pictorial representation.”
`
`The BRI of “pictorial representation” in light of the specification is an “image data
`
`representation.”
`
`53.
`
` The ’550 recites “determin[ing] the representation of the data on the
`
`display unit.” Ex. 1001 at 2:50-51. Further, the ’550 recites transmitting “image
`
`data required for representation” to a computer/display device. Id. at 7:16-18. The
`
`’550 also discusses using quadtree or octant tree representations of the image data
`
`20
`
`GOOGLE EXHIBIT 1002, Page 23 of 105
`
`
`
`(id. at 7:55-58) and “determin[ing] the representation of the data” before “sending
`
`this transmission for viewing” to the computer/display device. Id. at 7:67-8:3.
`
`Therefore, the BRI of “pictorial representation” is an “image data representation.”
`
`54. The claims recite a “plurality of spatially distributed data sources.”
`
The specification of the ’550 uses, but does not specifically define, the term
`
`“spatially distributed data sources.” A POSITA would understand “spatially
`
`distributed data sources” to mean “two or more separate data sources.” Such an
`
`interpretation of “spatially distributed data sources” is consistent with the stated
`
`purpose of the alleged invention and the examples discussed in the specification.
`
`55. For example, the specification discloses that the alleged invention is,
`
`at least in part, providing “distributed data sources” so that “[i]n principle, the
`
`amount of available data” is “not limited, and can be extended at will.” Ex. 1001 at
`
`2:57-59.
`
`56.
`
`In addition, the specification of the ’550 describes examples of data
`
`sources as “data memories and/or other data sources which call up and/or generate
`
`space-related data.” Id. at 2:18-22. The specification of the ’550 identifies spatially
`
`distributed databases of research institutes as examples of spatially distributed data
`
`sources. Id. at 9:42-43. The specification also discloses satellites as another
`
`example of a spatially distributed data source.
`
`57. A POSITA would understand that “two or more separate data
`
`21
`
`GOOGLE EXHIBIT 1002, Page 24 of 105
`
`
`
`sources” accomplishes the stated purpose of the alleged invention (i.e., unlimited
`
`amount of data and extendable at will) and is the BRI in line with the examples of
`
`data sources disclosed (e.g., data memories or satellites). Therefore, the BRI of
`
`“plurality of spatially distributed data sources” is “two or more separate data
`
`sources.”
`
`58.
`
` The claims recite a “field of view.” The specification of the ’550
`
`describes “the field of view” as being the “portion of the object to be observed.” Id.
`
`at 2:22-23. Further, Figure 3 of the ’550 (reproduced below) illustrates a “field of
`
view,” showing “the view of an object 18 by an observer whose field of view is
`
`limited by the two lines 17.” Id. at 8:18-19. The two lines 17 form the boundaries
`
`for the field of view, such that only the portion of the object that falls within the
`
`boundary is visible. Therefore, the BRI of “field of view” is “the portion of the
`
object to be observed.”