INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
313/761-4100
800/521-0600
Surface Reconstruction and Display
from Range and Color Data

by

Kari Pulli

A dissertation submitted in partial fulfillment
of the requirements for the degree of

Doctor of Philosophy

University of Washington

1997
Approved by ______________________________
(Chairperson of Supervisory Committee)

Program Authorized
to Offer Degree ______________________________

Date ______________________________
UMI Number: 9819292

UMI Microform 9819292
Copyright 1998, by UMI Company. All rights reserved.

This microform edition is protected against unauthorized
copying under Title 17, United States Code.

UMI
300 North Zeeb Road
Ann Arbor, MI 48103
In presenting this dissertation in partial fulfillment of the requirements for the Doctoral degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this dissertation is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U.S. Copyright Law. Requests for copying or reproduction of this dissertation may be referred to University Microfilms, 1490 Eisenhower Place, P.O. Box 975, Ann Arbor, MI 48106, to whom the author has granted "the right to reproduce and sell (a) copies of the manuscript in microform and/or (b) printed copies of the manuscript made from microform."

Signature ______________________________

Date ______________________________
University of Washington

Abstract

Surface Reconstruction and Display
from Range and Color Data

by Kari Pulli

Chairperson of the Supervisory Committee:
Professor Linda G. Shapiro
Computer Science and Engineering

This dissertation addresses the problem of scanning both the color and geometry of real objects and displaying realistic images of the scanned objects from arbitrary viewpoints. We present a complete system that uses a stereo camera system with active lighting to scan the object surface geometry and color as visible from one point of view. Scans expressed in sensor coordinates are registered into a single object-centered coordinate system by aligning both the color and geometry where the scans overlap. The range data are integrated into a surface model using a robust hierarchical space carving method. The fit of the resulting approximate mesh to the data is improved and the mesh structure is simplified using mesh optimization methods. In addition, two methods are developed for view-dependent display of the reconstructed surfaces. The first method integrates the geometric data into a single model as described above and projects the color data from the input images onto the surface. The second method models the scans separately as textured triangle meshes and integrates the data during display by rendering several partial models from the current viewpoint and combining the images pixel by pixel.
Table of Contents

List of Figures ........ v
List of Tables ........ viii

Chapter 1: Introduction ........ 1
    1.1 Problem statement ........ 1
    1.2 Motivating applications ........ 2
        1.2.1 Surface reconstruction ........ 2
        1.2.2 Textured objects ........ 4
    1.3 Previous work ........ 5
        1.3.1 Surfaces from range ........ 5
        1.3.2 Image-based rendering ........ 6
    1.4 Contributions ........ 7
    1.5 Overview ........ 8

Chapter 2: Data acquisition ........ 9
    2.1 Introduction ........ 9
    2.2 Physical setup ........ 9
        2.2.1 Calibration ........ 11
    2.3 Range from stereo ........ 12
    2.4 Scanning color images ........ 14
    2.5 Spacetime analysis ........ 14
        2.5.1 Problems in locating the stripe ........ 15
        2.5.2 Solution to the problem ........ 17
    2.6 Contributions ........ 17

Chapter 3: Registration ........ 19
    3.1 Introduction ........ 19
    3.2 Traditional approaches to pairwise registration ........ 20
        3.2.1 Initial registration ........ 22
        3.2.2 Iterative solution ........ 23
    3.3 Projective pairwise registration ........ 29
        3.3.1 2D image registration ........ 32
        3.3.2 Minimization ........ 39
        3.3.3 Summary of the algorithm ........ 41
        3.3.4 Experiments ........ 42
    3.4 Multi-view registration ........ 49
        3.4.1 Generalize pairwise registration ........ 49
        3.4.2 New method for multi-view registration ........ 51
    3.5 Discussion ........ 54
        3.5.1 Automatic initial registration ........ 54
        3.5.2 Other approaches to registration ........ 55
        3.5.3 Contributions ........ 57

Chapter 4: Surface reconstruction ........ 59
    4.1 Introduction ........ 59
    4.2 Hierarchical space carving ........ 62
        4.2.1 Space carving ........ 63
        4.2.2 Carve a cube at a time ........ 64
        4.2.3 Hierarchical approach ........ 66
        4.2.4 Mesh extraction ........ 68
        4.2.5 Discussion ........ 69
    4.3 Mesh optimization ........ 74
        4.3.1 Energy function ........ 76
        4.3.2 Iterative minimization method ........ 76
        4.3.3 Discussion ........ 79

Chapter 5: View-dependent texturing ........ 80
    5.1 Introduction ........ 80
    5.2 Image-based rendering ........ 83
        5.2.1 Distance measures for rays ........ 85
    5.3 View-dependent texturing of complete surfaces ........ 89
        5.3.1 Choosing views ........ 90
        5.3.2 Finding compatible rays ........ 91
        5.3.3 Combining rays ........ 94
        5.3.4 Precomputation for run-time efficiency ........ 95
        5.3.5 Results ........ 96
    5.4 View-based rendering ........ 97
        5.4.1 View-based modeling ........ 98
        5.4.2 Integration in image space ........ 99
        5.4.3 Results ........ 102
    5.5 Discussion ........ 103
        5.5.1 Related work ........ 103
        5.5.2 Contributions ........ 106
Chapter 6: Summary and future work ........ 108
    6.1 Data acquisition ........ 108
        6.1.1 Future work ........ 109
    6.2 Registration ........ 110
        6.2.1 Future work ........ 111
    6.3 Surface reconstruction ........ 111
        6.3.1 Initial mesh construction ........ 111
        6.3.2 Mesh optimization ........ 112
    6.4 View-dependent texturing ........ 113
        6.4.1 Future work ........ 114

Bibliography ........ 117
List of Figures

2-1 The scanner hardware ........ 10
2-2 The calibration object ........ 11
2-3 Geometry of triangulation ........ 13
2-4 Sources of errors in range by triangulation ........ 15
2-5 Effects of intensity changes in spacetime analysis ........ 16
3-1 Registration aligns range data ........ 19
3-2 Initial registration ........ 21
3-3 Pair each control point with the closest point in the other view ........ 24
3-4 Point pairing heuristics ........ 24
3-5 Weik's point pairing through projection ........ 26
3-6 Fixed ideal springs ........ 27
3-7 Sliding springs ........ 28
3-8 Heuristics lead to inconsistent pairings ........ 30
3-9 Consistent pairing using 2D registration and projection ........ 32
3-10 Mesh projection directions ........ 36
3-11 Registration algorithm pseudocode ........ 42
3-12 The initial registration configuration ........ 43
3-13 Registration results: new method ........ 45
3-14 Registration results: ICP ........ 47
3-15 Experiments with synthetic data sets ........ 48
3-16 Errors in pairwise registrations accumulate ........ 49
3-17 Multiview registration ........ 52
3-18 A spin image ........ 55
4-1 Eight intensity images corresponding to the views of the miniature chair ........ 60
4-2 An example where the old method fails ........ 61
4-3 The idea of space carving ........ 63
4-4 Labeling of cubes ........ 64
4-5 An octree cube and its truncated cone. The arrow points to the sensor ........ 65
4-6 Multiple cubes and sensors ........ 66
4-7 Hierarchical carving of cubes ........ 67
4-8 Only cubes that share a face should be connected in the mesh ........ 69
4-9 Space carving for thin surfaces ........ 72
4-10 Mesh optimization example ........ 75
4-11 The structure of the linear system Ax = b ........ 77
4-12 Elementary mesh transformations ........ 78
4-13 Mesh optimization for a toy dog ........ 79
5-1 Realism of a toy husky is much enhanced by texture mapping ........ 80
5-2 View-dependent scan data ........ 82
5-3 No single texture looks good from every direction ........ 83
5-4 A pencil of rays and a light field ........ 84
5-5 A spherical lumigraph surface ........ 85
5-6 Ray-surface intersection ........ 87
5-7 Importance of ray direction ........ 88
5-8 Surface sampling quality ........ 89
5-9 Directional blending weight ........ 91
5-10 Project a pixel to model and from there to input images ........ 92
5-11 Self-occlusion ........ 93
5-12 A view of a toy dog, the sampling quality weight, the feathering weight ........ 95
5-13 An interactive viewer ........ 96
5-14 Overview of view-based rendering ........ 97
5-15 View-based modeling ........ 98
5-16 Comparison of view-based rendering with and without blending ........ 100
5-17 Problems with undetected step edges ........ 101
5-18 Pseudocode for the pixel composition algorithm ........ 102
5-19 View-based rendering viewer ........ 103
5-20 The algebraic approach to image-based rendering ........ 104
List of Tables

4-1 Statistics for the chair data set ........ 70
Acknowledgments

There are many people I would like to thank for helping me in the research which led to this dissertation. First, I would like to thank my advisor Linda Shapiro: she is the nicest advisor one could hope to have. But the other members of my advisory committee, Tony DeRose, Tom Duchamp, and Werner Stuetzle, have been equally important for focusing and developing this dissertation. Other people attending our meetings include John McDonald, Hugues Hoppe, Michael Cohen, and Rick Szeliski; working with all these people has been a most pleasurable learning experience. Special thanks to Hugues Hoppe for code, advice on how to use it, and surface models.

I thank the staff in general for taking care of the paperwork and for keeping the computer systems running, and Lorraine Evans in particular. I thank David Salesin for including me in the graphics group as the only student who wasn't supervised by him. I am grateful to all my fellow students in vision (Mauro, Habib, Pam, ...) and in graphics (Eric, Frederic, Daniel, ...) for giving me friendship and support during my studies. I especially want to thank Habib Abi-Rached for his hard work with our stereo scanner. I want to thank all my bughouse, bridge, and hiking partners for the relaxation in the midst of hard work. Finally, I want to thank Omid Madani for being such a good friend.

I want to thank my committee, Hugues Hoppe, Eric Stollnitz, and Brian Curless for proofreading this dissertation and for providing valuable feedback.

Summer internships offered a chance for a much needed break from schoolwork and for learning useful new skills. I enjoyed working with Chas Boyd at Microsoft, Mark Segal at Silicon Graphics, and Michael Lounsbery at Alias|Wavefront.
My research career began at the University of Oulu in Finland. I would like to thank Matti Pietikainen, Olli Silven, Juha Roning, Tapani Westman, and Visa Koivunen, who all in their own way helped me and taught me how to do research.

I am grateful to several foundations and institutions for direct and indirect financial support for my studies. I thank the ASLA/Fulbright Foundation, the Academy of Finland, the Finnish Cultural Foundation, the Emil Aaltonen Foundation, the Seppo Saynajakangas Foundation, the National Science Foundation, and the Human Interface Technology Laboratory.

Finally, I want to thank my parents and my wife Anne and daughter Kristiina for their constant support.
Chapter 1

Introduction

Computer Vision and Computer Graphics are like opposite sides of the same coin: in vision, a description of an object or scene is derived from images, while in graphics images are rendered from geometric descriptions. This dissertation addresses issues from both of these disciplines by developing methods for sampling both the surface geometry and color of individual objects, building surface representations, and finally displaying realistic color images of those objects from arbitrary viewpoints.

We begin this chapter by defining a set of subproblems within the greater framework of scanning and displaying objects. We then discuss some applications that benefit from techniques developed in this dissertation. Some relevant previous work is discussed, and the chapter concludes with an overview of the dissertation.

1.1 Problem statement

This dissertation presents a complete system for scanning and displaying textured objects. The tasks required to fulfill our goal can be organized into a sequence of stages that process the input data. Each of these stages presents us with its own problems. Specifically, the problems addressed in this dissertation are:

• How can we scan the object surface geometry using color cameras and structured light?
• Our scanner only allows us to digitize one view of an object at a time. Since the scan data is expressed in the sensor coordinate system, and the sensor moves with respect to the object between views, the data in different views are expressed in different coordinate systems. How can we accurately express all the data in a single object-centered coordinate system?

• How can we convert separate range views into a single surface description? How can we use our knowledge of the scanning process to make this process more robust and reliable?

• How can we manipulate the surface description so that it better approximates the input data, and how can we control the size and resolution of the surface representation?

• How can we combine the color information from the different views together with the surface geometry to render realistic images of our model from arbitrary viewpoints?
1.2 Motivating applications

Our research is motivated by numerous applications. The following two sections first discuss applications that require scanning surface geometry, followed by applications that additionally require colored and textured surface descriptions.

1.2.1 Surface reconstruction

Surface reconstruction applications take geometric scan data as input and produce surface descriptions that interpolate or approximate the data. The geometric scan data is typically expressed as a set of range maps called views. Each view is like a 2D image, except that at each pixel the coordinates of the closest surface point visible through the pixel are stored instead of a color value.
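To make this data structure concrete, here is a minimal sketch in Python with NumPy (the class name and field layout are illustrative assumptions, not the dissertation's actual data format) of a range view that stores a 3D point per pixel and marks unscanned pixels as invalid:

    import numpy as np

    class RangeView:
        """A range map 'view': a 2D pixel grid storing, per pixel, the 3D
        coordinates of the closest visible surface point (NaN = no data)."""
        def __init__(self, width, height):
            self.points = np.full((height, width, 3), np.nan)  # x, y, z per pixel

        def valid_mask(self):
            # Pixels where the scanner recovered a surface point.
            return ~np.isnan(self.points[..., 2])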
Reverse engineering

Today's CAM (Computer Aided Manufacturing) systems allow one to manufacture objects from their CAD (Computer Aided Design) specifications. However, there may not exist a CAD model for an old or a custom-made part. If a replica is needed, one could scan the object and create a geometric model that is then used to reproduce the object.

Design

Although CAD system user interfaces are becoming more natural, they still provide a poor interface for designing free-form surfaces. Dragging control points on a computer display is a far cry from the direct interaction an artist can have with clay or wood. Surface reconstruction allows designers to create initial models using materials of their choice and then scan the geometry and convert it to CAD models. One could also iterate between manufacturing prototypes, manually reshaping them, and constructing a new model of the modified version.

Inspection

Manufactured objects can be scanned and the data can be compared to the specifications. Detected deviations from the models can be used to calibrate a new manufacturing process, or they can be used in quality control to detect and discard faulty parts.

Planning of medical treatment

Surface reconstruction has many applications in medicine. Scanning the shape of a patient's body can help a doctor decide the direction and magnitude of radiation for removing a tumor. In plastic surgery, scanning the patient can help a doctor to quantify how much fat to remove or how large an implant to insert to obtain the desired outcome.
Custom fitting

Surface reconstruction allows automatic custom fitting of generic products to a wide variety of body sizes and shapes. Good examples of customized products include prosthetics and clothes.

1.2.2 Textured objects

For displaying objects, rather than replicating or measuring them, one needs color information in addition to geometric information. Some scanners, including ours, indeed produce not only the 3D coordinates of visible surface points but also the color of the surface at those points. The requirements for the accuracy of the geometric data are often much more lenient, as the color data can capture the appearance of fine surface detail.

Populating virtual worlds

Populating virtual worlds or games with everyday objects and characters can be a labor-intensive task if the models are to be created by artists using CAD software. Further, such objects tend to look distinctly artificial. This task can be made easier and the results more convincing by scanning both the geometry and appearance of real-world objects, or even people.

Displaying objects on the internet

The influence of the internet is becoming more pervasive in our society. The world wide web used to contain only text and some images, but 3D applications have already begun to appear. Instead of looking at an object from some fixed viewpoint, one can now view objects from an arbitrary viewpoint. Obvious applications include building virtual museums, thus making historical artifacts more accessible both to scholars and to the general public, as well as commerce over the internet, where the potential buyer can visualize the products before purchase.
Special effects for films

Computer graphics is increasingly used in films. Special effects that would be otherwise impossible, infeasible, or just expensive can be digitally combined with video sequences. The extra characters, objects, or backgrounds tend to look more realistic if they are scanned from real counterparts than if they were completely generated by a computer.

1.3 Previous work

In this section I briefly cover some previous work that most influenced the research described in this dissertation. Further descriptions of related work can be found in Chapters 2, 3, 4, and 5.

1.3.1 Surfaces from range

There has been a large number of PhD dissertations and other research in surface reconstruction from range data. The work that first motivated my research in range vision was Paul Besl's dissertation [Besl 86, Besl 88]. Besl did an excellent job of exploring low-level range image processing and segmentation. Also, his later work in registration of range data has been influential [Besl & McKay 92].
The greatest influence on this research was that of Hugues Hoppe at the University of Washington with Professors Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle [Hoppe et al. 92, Hoppe et al. 93, Hoppe et al. 94, Hoppe 94]. Whereas Besl concentrated on low-level processing of range maps, Hoppe et al. developed a three-phase system for reconstructing arbitrary surfaces from unstructured 3D point data. The first phase creates an approximate triangle mesh from the point data by estimating a signed distance function from the data and then extracting the zero set of that function. The second phase takes the initial mesh and the point data and iteratively improves the fit of the mesh to the data and simplifies the mesh while keeping it close to the data. In the third phase the surface representation is changed to Loop's subdivision scheme [Loop 87], which can represent curved surfaces more accurately and more compactly than a triangle mesh.
Hoppe et al. extended Loop's scheme to allow sharp surface features such as creases and corners.
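As a concrete illustration of the first phase, the following sketch (my own minimal rendition under stated assumptions, not Hoppe et al.'s actual code) estimates the signed distance at a query point as its offset from the tangent plane of the nearest oriented sample; a triangle mesh can then be extracted as the zero set of this function sampled over a grid:

    import numpy as np
    from scipy.spatial import cKDTree

    def signed_distance(query, points, normals, tree):
        """Signed distance from `query` to the surface implied by oriented
        points: distance to the nearest sample, signed by its normal.
        Build `tree = cKDTree(points)` once and reuse it for many queries."""
        _, i = tree.query(query)
        return float(np.dot(query - points[i], normals[i]))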
Around the same time, Yang Chen [1994] worked with Professor Gerard Medioni at USC on similar problems. Their research addressed registration of range maps into a single coordinate system and then integrating the data into a surface representation.

The most recent work in surface reconstruction that had a large influence on my research was that of Brian Curless, who worked with Professor Marc Levoy at Stanford [Curless 97]. Curless's dissertation has two parts. In the first part, spacetime analysis, a more robust and accurate scheme for estimating depth using a light stripe scanning system, was developed [Curless & Levoy 95]. In the second part, a volumetric method for building complex models from range images was developed [Curless & Levoy 96].
1.3.2 Image-based rendering

My work on view-dependent texturing was heavily influenced by the image-based rendering papers published in SIGGRAPH 96. The Microsoft Lumigraph [Gortler et al. 96] and Stanford Light Field Rendering [Levoy & Hanrahan 96] address the same problem: given a collection of color images of an object from known viewpoints, how can one create new images of the same object from an arbitrary viewpoint? Their solution was to think of the pixels of the input images as half-rays ending at the camera image plane and encoding the apparent directional radiance of the object surface. From these samples a function that returns a color for a directed ray can be calculated. New images can then be created by evaluating that function over the rays associated with each pixel in the output image and painting the pixels with the returned colors. The approach is quite general and allows realistic rendering of objects that are hard to model and render using traditional graphics methods. However, a very large set of input images is required, and the 4D function mapping rays to colors requires a large amount of storage.
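The following sketch conveys the flavor of such a ray-to-color function under the common two-plane parameterization; it is a simplification with nearest-neighbor lookup (real light field and lumigraph renderers interpolate between the 4D samples), and the array layout is an assumption for illustration:

    import numpy as np

    def ray_color(light_field, u, v, s, t):
        """`light_field` is a 4D table of colors indexed by a ray's
        intersections (u, v) and (s, t) with two parallel planes,
        with each coordinate normalized to [0, 1)."""
        nu, nv, ns, nt = light_field.shape[:4]
        return light_field[int(u * nu), int(v * nv), int(s * ns), int(t * nt)]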
Debevec et al. [1996] described a system that allows a user to interactively reconstruct architectural scenes from color images. After the user specifies the basic structure of the geometric models and marks correspondences between image features and model features, the system calculates the actual dimensions of the model elements. The models are then view-dependently textured using the original camera images.
1.4 Contributions

Here is a summary of the contributions in this dissertation:

• We have constructed a practical range scanner that uses inexpensive color cameras and a controlled light source. The range data is inferred from camera images using optical triangulation. The quality of the range data is increased by adapting spacetime analysis [Curless & Levoy 95] for our scanner.

• We have developed a new method for pairwise registration of range and color scans. Our method directly addresses the most difficult part of 3D registration: establishing reliable point correspondences between the two scans. We implement a robust optimization method for aligning the views using the established point pairs.

• We have developed a method for simultaneously registering multiple range maps into a single coordinate system. Our method can be used in conjunction with any pairwise registration method. The scans are first registered pairwise. The results of pairwise registration create constraints that can then be used to simultaneously find a global registration.

• We have developed a simple and efficient hierarchical space carving method for creating approximate meshes from registered range maps.

• We have developed two methods for view-dependent texturing of geometric models using color images. The first method is used to texture complete surface models, while the second method models each range scan separately and integrates the scans during display time in screen space.
1.5 Overview

The first three chapters in this dissertation discuss methods for reconstructing geometric surface models from range scans. Chapter 2 deals with data acquisition, Chapter 3 addresses registration of the scan data, and Chapter 4 discusses methods to reconstruct surfaces from registered range data. Chapter 5 concerns the use of color data and presents two methods for view-dependent texturing of scanned surfaces. Chapter 6 concludes the thesis. Some material presented in Chapters 4 and 5 has been published previously [Pulli et al. 97a, Pulli et al. 97b, Pulli et al. 97c].
Chapter 2

Data acquisition

2.1 Introduction

It is possible to develop algorithms for surface reconstruction and view-dependent texturing without having an actual range and color scanner. However, with the aid of an accurate scanner we can create new data sets of many different classes of objects and surface materials under various scanning configurations. We can get even more control over the input data by building a scanner instead of buying one, as no data remains inaccessible inside the scanning system.
This chapter describes the system we built for scanning both range and color data. In Section 2.2 we describe the hardware configuration of our scanner as well as the calibration process. In Section 2.3 we discuss our simple but robust method for obtaining dense range data from stereo with active lighting. The chapter concludes with a description of how spacetime analysis, a method that was developed for another type of scanner, was adapted to our system for more reliable and accurate scanning.
2.2 Physical setup

Our scanning system consists of the following main parts (see Fig. 2-1). Four Sony 107-A color video cameras are mounted on an aluminum bar. Each camera is equipped with manually adjustable zoom and aperture. The cameras are connected to a Matrox Meteor digitizing board, which can switch between the four inputs under computer control and produces images at 640 x 480 resolution. The digitizing board is attached to a Pentium PC.
Figure 2-1 The scanner hardware. Four video cameras are attached to an aluminum bar, which rests on a tripod. A slide projector is placed on a turntable under the cameras. Table lamps provide adjustable lighting for capturing color images.
Below the cameras, a slide projector sits on a computer-controlled turntable. The slide projector emits a vertical stripe of white light, which is manually focused to the working volume.

The object to be scanned is placed on a light table that is covered by translucent plastic. When the lamps under the light table are turned on, the background changes; thus we can easily detect background pixels by locating the pixels that change color.¹ Next to the scanner is a set of adjustable lamps that are used to control the object illumination when we capture color images.

¹ In order for this to work, the cameras have to aim down so they don't see past the light table.
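A minimal sketch of the background test described above (the threshold value and frame names are assumptions for illustration, not the system's actual code): compare a frame captured with the light table off against one with it on, and label the pixels whose color changes.

    import numpy as np

    def background_mask(frame_off, frame_on, threshold=30.0):
        """True where the pixel color changes when the light table is
        switched on, i.e., where the camera sees background, not object."""
        diff = np.linalg.norm(frame_on.astype(float) - frame_off.astype(float),
                              axis=2)
        return diff > threshold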
Figure 2-2 The calibration object is a metal plate mounted on a rail. The plate is perpendicular to the rail, and it can be moved in 10 mm steps along the rail. A calibration pattern consisting of a 7 x 7 dot field is pasted on the plate.
2.2.1 Calibration

In order to make any measurements from camera images, we need to know the camera calibration parameters. Some of the internal parameters (size of the CCD array, aspect ratio of the pixels, etc.) are fixed, but others, such as focal length and distortion, vary. The external parameters define the pose of the camera, that is, the 3D rotation and translation of the optical center of the camera with respect to some fixed coordinate system.

We measure the calibration parameters for all four cameras simultaneously using Tsai's algorithm [1987] and the calibration object shown in Fig. 2-2. The calibration object consists of a metal plate that can be translated along a rail that is perpendicular to the plate. A calibration pattern made of a 7 x 7 block of circular black dots on a white background is pasted on the metal plate.
The calibration proceeds as follows. The calibration object is placed in front of the cameras so that all four cameras can see the whole pattern. The origin of the calibration coordinate system is attached to the center of the top left dot in the pattern, with the x-axis aligned with the dot rows, the y-axis with the dot columns, and the z-axis with the rail on which the calibration plate rests. The dots are located within the camera images using the Hough transform [Davies 90], and the 7 x 7 pattern is matched to those dot locations. The image coordinates of each dot are then paired with the corresponding 3D coordinates. The plate is moved by a fixed distance along the z-axis, and the process is repeated several times over the whole working volume. The calibration parameters for each camera are estimated from the entire set of 3D-2D point correspondences using Tsai's algorithm [1987].
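Tsai's algorithm itself also models lens distortion and is considerably more involved; as a rough, hypothetical illustration of how such a set of 3D-2D correspondences determines a camera, the Direct Linear Transform below estimates a 3x4 projection matrix (distortion ignored; this is a stand-in sketch, not the calibration code used in the system):

    import numpy as np

    def dlt_projection_matrix(pts3d, pts2d):
        """Estimate a 3x4 camera projection matrix from n >= 6
        correspondences between 3D points and their 2D images."""
        rows = []
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        # Solution: the right singular vector with the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return Vt[-1].reshape(3, 4)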
2.3 Range from stereo

In traditional stereo vision one tries to match pixels in one camera image to pixels in the image of another camera [Barnard & Fischler 82]. The pixel matching relies on intensity variations due to surface texture. If the cameras have been calibrated and the matched pixels correspond to the same point on some surface, it is trivial to calculate an estimate of the 3D coordinates of that point.
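For calibrated cameras, one common closed form (a generic sketch, not necessarily the exact formulation used in this system) back-projects the two matched pixels into rays and takes the midpoint of the shortest segment between them:

    import numpy as np

    def triangulate_midpoint(c1, d1, c2, d2):
        """c1, c2: camera centers; d1, d2: unit ray directions through the
        matched pixels. Returns the midpoint of the shortest segment
        connecting the two (generally skew) rays."""
        r = c2 - c1
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        t = np.linalg.solve(A, np.array([d1 @ r, d2 @ r]))
        return 0.5 * ((c1 + t[0] * d1) + (c2 + t[1] * d2))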
There is an inherent tradeoff in the