US006750873B1

(12) United States Patent
Bernardini et al.

(10) Patent No.: US 6,750,873 B1
(45) Date of Patent: Jun. 15, 2004
(54) HIGH QUALITY TEXTURE RECONSTRUCTION FROM MULTIPLE SCANS

(75) Inventors: Fausto Bernardini, New York, NY (US); Ioana M. Martin, Mohegan Lake, NY (US); Holly E. Rushmeier, Mount Kisco, NY (US)
(73) Assignee: International Business Machines Corporation, Armonk, NY (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 259 days.
(21) Appl. No.: 09/603,328

(22) Filed: Jun. 27, 2000
(51) Int. Cl.7 .............................................. G06K 9/00
(52) U.S. Cl. .................. 345/582; 345/581; 345/586; 345/606; 345/629; 382/294; 382/295
(58) Field of Search .................. 345/582, 586, 581, 589, 606, 609, 419, 639, 629; 382/154, 108, 294, 295, 296, 298
(56) References Cited

U.S. PATENT DOCUMENTS

5,381,526 A    1/1995   Ellson
5,579,456 A   11/1996   Cosman
5,715,166 A    2/1998   Besl et al.
5,986,668 A   11/1999   Szeliski et al.
5,995,650 A   11/1999   Migdal et al.
6,009,190 A   12/1999   Szeliski et al.
6,049,636 A    4/2000   Yang
6,256,038 B1   7/2001   Krishnamurthy
6,271,847 B1   8/2001   Shum et al.
6,362,821 B1   3/2002   Gibson et al.
6,469,710 B1  10/2002   Shum et al.
6,476,803 B1  11/2002   Zhang et al.
OTHER PUBLICATIONS

"Surface Reconstruction and Display from Range and Color Data" by Pulli, Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington, 1997, pps. 1-117.
"Towards a General Multi-View Registration Technique" by Bergevin et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 5, May 1996, pps. 540-547.
"Object Modeling by Registration of Multiple Range Images" by Chen et al., Institute for Robotics and Intelligent Systems, Apr. 1991, pps. 2724-2729.
"A Method for Registration of 3-D Shapes" by Besl et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, No. 2, Feb. 1992, pps. 239-256.
Primary Examiner—Matthew Luu
Assistant Examiner—Daniel Chung
(74) Attorney, Agent, or Firm—Ohlandt, Greeley, Ruggiero & Perle, L.L.P.; Louis Percello
(57) ABSTRACT

A system and method is disclosed for constructing a digital model of an object. The system includes an imaging system for generating object surface scan data from a plurality of surface scans, the surface scan data having a first resolution and representing the object from a plurality of viewpoints. The imaging system further generates image data having a second, higher resolution than the surface scan data for representing the object from the plurality of viewpoints. The system further includes a data processor for iteratively registering the surface scan data for the plurality of surface scans, using the image data, and for reconstructing substantially seamless surface texture data for the model using weights that reflect a level of confidence in the data at a plurality of surface points.

25 Claims, 19 Drawing Sheets
OTHER PUBLICATIONS
"A Computer-Assisted Range Image Registration System for Nuclear Waste Cleanup" by Gagnon et al., IEEE Transactions on Instrumentation and Measurement, vol. 48, No. 3, Jun. 1999, pps. 758-762.
"The Digital Michelangelo Project: 3D Scanning of Large Statues" by Levoy et al., Proc. Siggraph, 2000, pps. 1-14.
"Multi-Feature Matching Algorithm for Free-Form 3D Surface Registration" by Schutz et al., Institute for Microtechnology, Neuchatel, Switzerland, 1998.
"Building Models From Sensor Data: An Application Shared by the Computer Vision and the Computer Graphics Community" by Gerhard Roth, Visual Information Technology Group, National Research Council of Canada, pps. 1-9, undated.
"Computing Consistent Normals and Colors from Photometric Data" by Rushmeier et al., IBM Thomas J. Watson Research Center, Oct. 1999.
"Acquisition and Visualization of Colored 3D Objects" by Abi-Rached et al., University of Washington, undated.
"Texturing 3D Models of Real World Objects from Multiple Unregistered Photographic Views" by Neugebauer et al., Fraunhofer Institute for Computer Graphics, vol. 18, No. 3, 1999, pps. 245-256.

* cited by examiner
[Sheet 1 of 19 — FIG. 1 (Prior Art): flow diagram of the basic model-creation pipeline. Recoverable box labels: Range images; Registration; Intensity images; Texture-to-geometry registration; Integration of scans into a single mesh; Computation of illumination invariants; Surface parameterization.]
[Sheets 2-4 of 19: pictorial figures; no text is recoverable from the scan.]
[Sheet 5 of 19 — FIG. 4: pseudocode of the image-based registration algorithm.]

Input:
    scans to be registered: S_1, ..., S_N
    initial registration matrices: M_1, ..., M_N
    detail maps: D_1, ..., D_N
    depth maps: Z_1, ..., Z_N
    bounding boxes: B_1, ..., B_N
    lists of sample points: L_1, ..., L_N, where L_m = {(t_m, p_m) : t_m ∈ D_m, p_m ∈ S_m}
    camera parameters: C
Output: accurate registration matrices: M_1, ..., M_N

Algorithm Image-Based_Registration
 1. read B_1, ..., B_N
 2. while (convergence criterion not satisfied)
 3.     (m_2, ..., m_N) = random_permutation(2, ..., N)
 4.     for (m = m_2, ..., m_N)
 5.         C_m = camera(M_m, C)
 6.         register_scan(m, C_m)

Procedure register_scan(m, C_m)
 1. read D_m, Z_m, L_m
 2. for (i = 1, ..., N) and (i ≠ m)
 3.     read S_i, D_i
 4.     if (B_m intersects B_i)
 5.         D'_i = project_scan(S_i, D_i, C_m)
 6.         O_mi = compute_image_overlap(D_m, D'_i)
 7.         if (|O_mi| > min_overlap)
 8.             for ((t_m, p_m) ∈ L_m) and (t_m ∈ O_mi)
 9.                 t_i = t_m
10.                 b_i = find_best_match(t_m, D_m, t_i, D'_i)
11.                 p_i = backproject(b_i, S_i, C_m) ∈ S_i
12.                 insert_pair((p_m, p_i), pairs_list)
13. if (|pairs_list| ≥ 3)
14.     M_m = compute_rigid_transform(pairs_list)

Fig. 4
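For readers who prefer running code, the following Python sketch mirrors the control flow of the outer loop in FIG. 4. It is an illustration only: camera, register_scan, and the convergence test are hypothetical callables standing in for the patent's procedures.

import random

def image_based_registration(n_scans, M, C, camera, register_scan, converged):
    """Outer loop of FIG. 4: the first scan stays fixed; every other scan is
    refined once per sweep, in random order, until convergence."""
    while not converged(M):
        order = list(range(1, n_scans))      # scans 2..N (index 0 stays fixed)
        random.shuffle(order)                # random_permutation(2, ..., N)
        for m in order:
            Cm = camera(M[m], C)             # camera matching scan m's capture view
            M[m] = register_scan(m, Cm)      # procedure register_scan of FIG. 12B
    return M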
[Sheet 6 of 19 — FIGS. 5A and 5B: pictorial figures illustrating sample point selection; captions only.]
[Sheet 7 of 19 — FIGS. 6, 7, and 7A. FIGS. 6 and 7 are pixel-grid illustrations (occlusion situations and the sliding-window cross-correlation search); only the equations of FIG. 7A are recoverable.]

FIG. 7A gives the quantities used in the computation of the correlation coefficient of $n$ data points:

$s_{mm} = \sum_k D_m(t_m^k)^2 - \frac{1}{n}\left(\sum_k D_m(t_m^k)\right)^2$

$s_{ii} = \sum_k D_i(t_i^k)^2 - \frac{1}{n}\left(\sum_k D_i(t_i^k)\right)^2$

$s_{mi} = \sum_k D_m(t_m^k)\,D_i(t_i^k) - \frac{1}{n}\sum_k D_m(t_m^k)\sum_k D_i(t_i^k)$

Fig. 7A
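The quantities in FIG. 7A are those of windowed normalized cross-correlation, combined as $r = s_{mi}/\sqrt{s_{mm}\,s_{ii}}$. The NumPy sketch below implements the sliding-window search that FIG. 7 illustrates; the window half-size and search radius are illustrative assumptions, not values taken from the patent.

import numpy as np

def correlation(a, b):
    """Correlation coefficient of two equal-size windows, built from the
    s_mm, s_ii, s_mi quantities of FIG. 7A."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    n = a.size
    s_mm = (a * a).sum() - a.sum() ** 2 / n
    s_ii = (b * b).sum() - b.sum() ** 2 / n
    s_mi = (a * b).sum() - a.sum() * b.sum() / n
    return s_mi / np.sqrt(s_mm * s_ii) if s_mm > 0 and s_ii > 0 else 0.0

def find_best_match(Dm, Di_proj, tm, half=3, radius=8):
    """Search a (2*radius+1)^2 neighborhood of tm in the projected detail map
    Di_proj for the window best correlated with the window around tm in Dm."""
    r0, c0 = tm
    ref = Dm[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
    best, best_rho = None, -2.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = r0 + dr, c0 + dc
            win = Di_proj[r - half:r + half + 1, c - half:c + half + 1]
            if win.shape != ref.shape:       # skip windows falling off the image
                continue
            rho = correlation(ref, win)
            if rho > best_rho:
                best, best_rho = (r, c), rho
    return best, best_rho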
[Sheets 8-10 of 19: pictorial figures; no text is recoverable from the scan.]
[Sheets 11-12 of 19 — FIGS. 10E and 10F and further data-set images; captions only.]
[Sheet 13 of 19 — FIG. 11A: block diagram of a computer system suitable for practicing the invention. Recoverable labels: MEMORY; GRAPHICS SUBSYSTEM; I/O SUBSYSTEM; ACQUISITION SUBSYSTEM; reference numerals 100, 110, 112, 114, 118, 120.]
[Sheet 14 of 19 — FIG. 11B: the graphics subsystem in greater detail. Recoverable labels: BUS INTERFACE; GRAPHICS CONTROL PROCESSOR; GEOMETRY SUBSYSTEM; RASTERIZER; Z-BUFFER; FRAME BUFFER; reference numerals 106, 1101, 1108, 1109.]
[Sheet 15 of 19: pictorial figure (the 3D data acquisition system of FIG. 11C); no text is recoverable.]
[Sheet 16 of 19 — FIGS. 11D, 11F, and 11G: scanner operation and triangular-mesh figures; captions only.]
[Sheet 17 of 19 — FIG. 12A: logic flow diagram, IMAGE-BASED REGISTRATION.]

Read B1, ..., Bn. Generate (m2, ..., mn) as a random permutation of (2, ..., n). For each mi: Cmi = camera(Mmi, C); Register Scan(mi, Cmi). Repeat until the convergence criterion is satisfied.
[Sheet 18 of 19 — FIG. 12B: logic flow diagram, REGISTER SCAN (m, Cm).]

Read Dm, Zm, Lm. For each remaining scan i: read Si, Di; if Bm intersects Bi, project scan Si, textured with its detail map Di, using camera position Cm onto image D'i. If the overlap of image Dm with image D'i is significant, then for each sample pair (tm, pm) in Lm not yet processed, with tm inside the overlap region: find bi, the position in D'i that constitutes a best match for tm in Dm; back-project bi using camera position Cm onto pi on the surface of scan Si; and insert the pair (pm, pi) into the list of pairs. Finally, if there are at least 3 pairs of points in the list of pairs, compute a transformation matrix Mm using the list of pairs.
[Sheet 19 of 19 — FIG. 13: logic flow diagram, TEXTURE RECONSTRUCTION.]

For each height-field patch Pj: initialize accumulation buffers AcA, AcN and AcW; define an orthogonal camera for patch Pj; compute the dimensions sh, sv of the output maps (in pixels); scan-convert Pj and store xyz per pixel in point map XY. For each scan Si whose view frustum Vi intersects the bounding box of patch Pj: load Ai, Ni, Wi, Zi into memory; for each pixel (h, v), transform the point p = XY(h, v) into q = (qx, qy, qz), a point in the coordinate system of scan Si; if the depth test qz < Zi(h, v) + epsilon passes, accumulate

    AcA(h, v) += Wi(qx, qy) * Ai(qx, qy)
    AcN(h, v) += Wi(qx, qy) * Ni(qx, qy)
    AcW(h, v) += Wi(qx, qy)

When all scans have been processed, normalize AcA /= AcW and AcN /= AcW, then save AcA as albedo map A'j and save AcN as normal map N'j.
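The accumulation loop of FIG. 13 can be expressed compactly in NumPy. In this sketch the point map XY and each scan's coordinate transform are assumed given, and nearest-texel lookup plus a fixed depth tolerance stand in for details the figure leaves open.

import numpy as np

def reconstruct_patch_maps(XY, scans, eps=1e-3):
    """Weighted accumulation of FIG. 13 for one height-field patch.
    XY: (sv, sh, 3) point map from scan-converting the patch.
    scans: iterable of dicts with 'to_scan' (world xyz -> (qx, qy, qz)) and
           albedo 'A', normal 'N', weight 'W', depth 'Z' maps."""
    sv, sh, _ = XY.shape
    AcA = np.zeros((sv, sh, 3))
    AcN = np.zeros((sv, sh, 3))
    AcW = np.zeros((sv, sh))
    for s in scans:
        for h in range(sv):
            for v in range(sh):
                qx, qy, qz = s['to_scan'](XY[h, v])
                x, y = int(round(qx)), int(round(qy))        # nearest texel
                if not (0 <= x < s['Z'].shape[0] and 0 <= y < s['Z'].shape[1]):
                    continue
                if qz < s['Z'][x, y] + eps:                  # visible from this scan
                    w = s['W'][x, y]
                    AcA[h, v] += w * s['A'][x, y]
                    AcN[h, v] += w * s['N'][x, y]
                    AcW[h, v] += w
    nz = AcW > 0
    AcA[nz] /= AcW[nz][:, None]                              # normalize by total weight
    AcN[nz] /= AcW[nz][:, None]
    return AcA, AcN                                          # albedo map A'j, normal map N'j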
intensity images from approximately aligned scans are transformed into a common camera view. Locations of corresponding points on overlapping scans are inferred based on the difference between intensity values at a given pixel and the gradient at that pixel. This method works well only if the spatial variation of the gradient is small relative to errors in the alignment of the scans.

It is also known to present a non-ICP method for using intensity images to refine an initial manual alignment. In this approach pairs of range images are aligned manually by marking three points on overlapping intensity images. The locations of the matching points are refined by searching their immediate neighborhoods with image cross-correlation. A least-squares optimization follows to determine a general 3D transformation that minimizes the distances between the point pairs. Image registration techniques are also used for image mosaics in which only rotations or translations are considered.

After the intensity images are aligned to the geometry, illumination invariant maps are computed to estimate the surface reflectance (step E). The number of scans versus the number of intensity images, as well as the resolution of the scans compared to the resolution of the images, are considered at this stage. For a small number of scans and a large number of intensity images obtained under calibrated lighting conditions, a full Bidirectional Reflectance Distribution Function (BRDF) can be estimated.

If many scans are required to represent an object, and only a few high-resolution intensity images are captured per scan, photometric stereo techniques can be used to estimate Lambertian reflectance. Alternatively, if the range and intensity images have the same resolution, the geometry can be used to compute reflectance from a single image.

In step F the final texture is reconstructed. The illumination invariant maps are projected onto the integrated, parameterized surfaces. The main concerns at this step are that the final texture is as sharp as the best input images, that seams between scans or height-field patches are not visible, and that all available information is fully exploited to maximize the signal-to-noise ratio.

To maintain sharpness, a stitching approach has been proposed that uses a single illumination invariant map at any given surface point. Continuity in sharp features between adjoining maps is maintained by a local texture adjustment at texture boundaries. However, this approach requires high-quality input maps that have no visible noise and no scan-to-scan chromatic differences. Map adjustment techniques such as this, as well as de-ghosting methods for image mosaics, decouple texture from geometric variations. This may cause noticeable artifacts when these variations are correlated (e.g., dents and scratches that reveal underlying material with different reflectance properties).

To avoid jumps in color appearance and to reduce noise, it is known to combine information from multiple overlapping scans. In this case, however, if texture alignment is imperfect then blurring or ghosting artifacts may be generated.

Reference can be had to K. Pulli, "Surface reconstruction and display from range and color data", PhD Thesis, Dept. of Computer Science and Engineering, Univ. of Washington, December 1997. In general, this approach uses intensity images to pair points from overlapping scans. With the assumption that two scans are already roughly aligned, Pulli's method starts by rendering the second scan, textured with its own intensity image, from the viewpoint of the first image. A planar
perspective warping of the first image is then computed to match the rendered image of the second scan. For each corresponding pixel of the two images, under the computed transformation, a pair of points from the two scans is generated. A least-squares optimization is then performed to compute a registration matrix. The process is iterated until a convergence criterion is satisfied. Pulli also discusses an extension for multi-view registration.

Reference may also be made to K. Pulli et al., "Acquisition and Visualization of Colored 3D Objects", ICPR 1998, for a description of a system for scanning geometry and surface color. The data is registered and a surface that approximates the data is constructed. It is said that the surface estimate can be fairly coarse, as the appearance of fine detail is recreated by view-dependent texturing of the surface using color images. This process uses three different weights (directional, sampling quality, and feathering) when averaging together the colors of compatible rays.

Pulli et al. do not explicitly form texture images associated with geometry, but propose a dynamic, view-dependent texturing algorithm which determines a subset of the original images taken from a view direction that is close to the current view, and the synthesis of new color images from the model geometry and input images.

Based on the foregoing, it can be readily appreciated that a need exists for improved methods to construct accurate digital models of multi-scanned objects, in particular digital models that exhibit high-quality texture.
OBJECTS AND ADVANTAGES OF THE INVENTION

It is a first object and advantage of this invention to provide an improved method and apparatus for constructing accurate digital models that exhibit high-quality surface texture.

It is a further object and advantage of this invention to provide an improved method and apparatus for constructing, from object scan data, an accurate digital model of the object that exhibits high-quality surface texture.
SUMMARY OF THE INVENTION

The foregoing and other problems are overcome and the objects of the invention are realized by methods and apparatus in accordance with embodiments of this invention.

Disclosed herein are methods to construct accurate digital models of scanned objects by integrating high-quality texture and normal maps with geometric data. These methods can be used with inexpensive, electronic camera-based systems in which low-resolution range images and high-resolution intensity images are acquired. The resulting models are well-suited for interactive rendering on the latest-generation graphics hardware with support for bump mapping. In general, bump mapping refers to encoding small-scale geometry in an image. The large-scale geometry contains pointers to the bump map image that are used by the computer graphics hardware to display both large- and small-scale geometry.
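As a concrete illustration of this idea (not code from the patent), the sketch below performs Lambertian shading with per-pixel normals fetched from a normal-map image, so that small-scale geometry affects shading without being present in the mesh.

import numpy as np

def shade_with_normal_map(normal_map, light_dir, albedo):
    """Diffuse shading in which small-scale geometry comes from a normal map.
    normal_map: (H, W, 3) unit normals encoding fine detail.
    light_dir: (3,) unit vector toward the light; albedo: (H, W, 3) base color."""
    n_dot_l = np.clip(normal_map @ light_dir, 0.0, None)   # per-pixel diffuse term
    return albedo * n_dot_l[..., None]                     # shaded (H, W, 3) image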
The inventive methods provide techniques for processing range, albedo, and surface normal data, for image-based registration of scans, and for reconstructing high-quality textures for the output digital object.

The scanning system used during the execution of the methods described herein was equipped with a high-resolution digital color camera that acquires intensity images under controlled lighting conditions. Detailed normal and
albedo maps of the surface are computed based on these images. By comparison, geometry is captured at lower resolution, typically at a resolution that is sufficient to resolve only the major shape features.

The benefits of such a system are twofold. First, it allows for the use of relatively inexpensive hardware by eliminating the need for dense geometric sampling, and by taking advantage of digital color cameras that are quickly gaining in resolution while dropping in price. Second, the generated models are more readily usable in a visualization or modeling environment that exploits the hardware-assisted bump mapping feature increasingly available in commercial-grade 3D accelerators.
In general, the issue of acquiring and reconstructing high-quality texture maps has received less attention than the issue of capturing high-quality geometry. The inventors have built upon existing techniques developed for texture acquisition, reconstruction, and image registration to generate maps of high visual quality for the scanned objects. Particularly because the noise and inaccuracies of a lower-cost scanner are greater than those of high-end, more expensive systems, it is desirable to exploit in full all of the geometric and image information acquired to improve the visual quality of the final representation.

A novel texture reconstruction framework is disclosed that uses illumination-invariant albedo and normal maps derived from calibration-registered range and intensity images. The albedo maps are used in a unique way to refine a geometry-only registration of the individual range images. After the range data is integrated into a single mesh, the resulting object is partitioned into a set of height-field patches. New textures are synthesized by projecting the maps onto the patches and combining the best data available at each point using weights that reflect the level of confidence in the data. The weighted averaging lowers noise present in the images, while the fine registration avoids blurring, ghosting, and loss of fine texture details.
BRIEF DESCRIPTION OF THE DRAWINGS

The above set forth and other features of the invention are made more apparent in the ensuing Detailed Description of the Invention when read in conjunction with the attached Drawings, wherein:

FIG. 1 is a flow diagram depicting the basic operations necessary to create a digital model from a series of captured images;
FIG. 2 is an example of the visual quality of textures for a model of a statue, obtained from multiple overlapping scans, that are enhanced with image-based fine-registration and weighted averaging techniques in accordance with the teachings herein;
FIG. 3, shown as FIGS. 3A, 3B and 3C, provides a pictorial representation of the presently preferred image-based registration algorithm;
FIG. 4 depicts a pseudocode representation of the image-based registration algorithm in accordance with the teachings herein;
FIGS. 5A and 5B depict a sample point selection technique based on an edge-detection technique;
FIG. 6 illustrates two contrasting occlusion situations;
FIG. 7 illustrates an example of a presently preferred sliding-window cross-correlation approach for a point that maximizes cross-correlation between image neighborhoods;
FIG. 7A shows equations used in the computation of a correlation coefficient of n data points;
FIG. 8A illustrates a two-dimensional diagram of texture remapping, while FIG. 8B shows an example of an occlusion;
FIGS. 9A-9C are representations of photographs, and are useful in gaining an understanding of the statue model example employed in the description;
FIGS. 10A-10E are examples of a vase data set example, while FIGS. 10F-10I are examples of the statue data set;
FIG. 11A is a block diagram of a computer system with graphics and 3D data acquisition capabilities that is suitable for practicing this invention;
FIG. 11B shows the graphics subsystem in greater detail;
FIG. 11C depicts the 3D data acquisition system in greater detail;
FIG. 11D is an example of the operation of the 3D scanner;
FIG. 11E shows the result of acquiring multiple stripes;
FIG. 11F shows, in the scan integration phase, the computation of a triangular mesh;
FIG. 11G shows a closeup view of the computed triangular mesh;
FIG. 12A is a logic flow diagram of the image-based registration method in accordance with this invention, while FIG. 12B is a logic flow diagram of the register scan(m) procedure of FIG. 12A; and
FIG. 13 is a logic flow diagram of a presently preferred texture reconstruction method.
DETAILED DESCRIPTION OF THE INVENTION

FIG. 2 is an example of the visual quality of textures for a model of a statue, obtained from multiple overlapping scans, that are enhanced with image-based fine-registration and weighted averaging techniques in accordance with the teachings herein. The top image shows the final texture reconstructed from 20 overlapping scans. The two smaller images illustrate a detail from a highly carved area of the hair before (left) and after (right) image-based registration. The fine chisel marks become clearly visible after the registration.

A description is now provided of presently preferred methods to realize the steps shown in FIG. 1. The goal is to produce a model of high visual quality, rather than to acquire a dense geometric representation. In accordance with these teachings novel techniques are disclosed for the registration and reconstruction steps, respectively.

Beginning with scan acquisition and processing, a scanning system is used in which range and intensity images are obtained simultaneously and are registered via a priori calibration. The presently preferred scanner is a hybrid multi-view/photometric system built around the Visual Interface Virtuoso. Range images are captured by a multi-camera stereo system which projects a striped pattern onto the target object. The system produces range images that are converted to individual triangle meshes (scans) with an approximate intersample distance of 2 mm and submillimeter depth accuracy.

Intensity images are recorded under five calibrated lighting conditions (see light sources 112a in FIG. 11C) and have a much higher resolution than the projected-light geometric scans (between about 0.25 mm and 0.5 mm intersample distance, depending on surface orientation).

The intensity images are processed to extract RGB albedo, normal, and weight (confidence) maps for each scan.
This photometric processing involves using the approximate knowledge of the underlying geometry to account for imperfections in the measuring system, as well as global compensation of variations in camera response.

The images that contain the RGB albedo and normals are referred to generically as "texture maps". Holes may occur in these textures where photometric processing fails due to fewer than three light sources being visible at a point. In such regions, the textures are filled using the normals of the underlying mesh and data from just one or two of the associated intensity images.
A "weight map" encodes a degree of confidence in the data at each pixel of a texture map. The weight is proportional to the ratio of the projected surface area to the true area of the surface represented by the pixel. This ratio is computed based on the distance from the corresponding point on the surface to the camera 112b center and the scalar product between the surface normal and the view direction. The weight is higher at pixels computed from the photometric data rather than based on underlying geometry. In addition, the weight of a pixel decreases as the distance to the scan boundary becomes smaller.

The boundaries between photometric values and underlying values are preferably smooth, as are the corresponding weights, so that abrupt changes in the final texture are not visible.
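The patent names the ingredients of the weight (distance to the camera center, the scalar product of normal and view direction, distance to the scan boundary, photometric versus geometric origin) but gives no closed formula. The sketch below is therefore only one plausible combination of those ingredients, with made-up falloff constants.

import numpy as np

def weight_map(points, normals, cam_center, boundary_dist, is_photometric, falloff=20.0):
    """Per-pixel confidence: a projected-to-true-area proxy, attenuated near
    scan boundaries and where values came from geometry rather than photometry.
    points, normals: (H, W, 3); boundary_dist: (H, W) distance in pixels to
    the scan boundary; is_photometric: (H, W) boolean."""
    view = cam_center - points
    dist = np.linalg.norm(view, axis=-1, keepdims=True)
    view = view / np.maximum(dist, 1e-9)
    cos = np.clip(np.sum(normals * view, axis=-1), 0.0, 1.0)   # n . v
    w = cos / np.maximum(dist[..., 0] ** 2, 1e-9)              # area-ratio proxy (assumed form)
    w *= np.clip(boundary_dist / falloff, 0.0, 1.0)            # fade toward the boundary
    w *= np.where(is_photometric, 1.0, 0.5)                    # favor photometric pixels
    return w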
The acquisition of intensity images aligned with the corresponding range images and the photometric processing correspond to steps D and E in FIG. 1.

Next, the scan registration step A is initiated with a pairwise manual alignment to estimate the position of each scan with respect to a global coordinate system. This step is performed at the same time as the removal of outliers in the range data.

Then, the initial manual registration is refined using a variation of the multiview ICP algorithm. The presently preferred variation of the multiview ICP algorithm uses geometric information only.

It is desired to produce high-quality texture maps by combining information from multiple overlapping scans at each pixel. To achieve this, a highly accurate alignment is required to prevent the appearance of ghosting and blurring artifacts. This is accomplished, in accordance with an aspect of this invention, by avoiding the mixing of position and color data by performing an image-based alignment after the geometry-based registration has converged.

In a manner similar to a known approach (E. Gagnon, J.-F. Rivest, M. Greenspan, and N. Burtnyk, "A computer-assisted range image registration system for nuclear waste cleanup", IEEE Transactions on Instrumentation and Measurement, 48(3):758-762, 1999), image matching is used to refine the alignment. However, the presently preferred embodiment differs in a number of critical ways from known approaches. First, since this is a refinement step, the inventive method compares images that are reprojected onto the same camera view to account for geometric distortions. Second, instead of manually selecting three points to be matched, the inventive method implements an automatic selection procedure that identifies samples in areas with significant image structure. Finally, images are employed that are consistent with each other by processing them according to methods described in H. Rushmeier and F. Bernardini (two of the instant inventors), "Computing consistent normals and colors from photometric data", in Proc. of the Second Intl. Conf. on 3-D Digital Imaging and Modeling, pages 99-108, Ottawa, Canada, October 1999.
FIG. 3 provides a pictorial representation of the presently preferred image-based registration algorithm, which takes advantage of the high-resolution information in the images to refine the geometric alignment obtained by ICP.

A basic idea is to use "detail maps" computed from the texture maps to generate highly-accurate pairs of corresponding points on the scans by matching their image neighborhoods. The generated pairs are subsequently used to compute rigid transformations that lead to an improved alignment of the geometry.

The scans are considered for alignment one at a time, in random order, while the others remain fixed. Given a scan to be aligned, sample points are selected automatically on its detail map in regions where interesting features are present. The detail maps corresponding to all overlapping scans are then projected onto the current image plane and a search is performed in the neighborhood of each sample point for a best match in the overlapping areas of the projected maps. The resulting pairs of image point samples are back-projected onto their corresponding geometry and used to compute a rigid transformation that minimizes the sum of squared distances between corresponding points. The transformation thus obtained is applied to the moving scan, and the process is repeated until a convergence criterion is satisfied.
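The patent specifies the objective (a rigid transformation minimizing the sum of squared distances over at least three point pairs) but not the solver. A standard SVD-based least-squares solution, shown below as one possible implementation rather than the inventors' own, satisfies that objective.

import numpy as np

def rigid_transform(pm, pi):
    """Rigid (R, t) minimizing sum_k || R @ pm[k] + t - pi[k] ||^2,
    for two (K, 3) arrays of corresponding points, K >= 3."""
    cm, ci = pm.mean(axis=0), pi.mean(axis=0)
    H = (pm - cm).T @ (pi - ci)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ci - R @ cm
    return R, t

# Usage: R, t = rigid_transform(np.asarray(pm_pairs), np.asarray(pi_pairs))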
A more detailed description of the image-based registration algorithm is given below. The image-based registration completes step A in the image processing pipeline.
Turning now to surface reconstruction and parameterization, having obtained a tightly aligned set of scans, the method proceeds to integrate them into a seamless triangle mesh (step B). A presently preferred technique is known as Ball Pivoting, which efficiently generates a triangle mesh by interpolating an unorganized set of points (see also FIGS. 11E and 11F).

Next, the method determines a suitable surface parameterization (step C) to allow for an efficient computation of texture maps that cover the entire object without overlapping. Since it is desired to use mip-mapping techniques, the surface is partitioned into "height-field patches", rather than parameterizing it one triangle at a time. A mip-map (multum in parvo) is an image which uses three quarters of its pixels to store a texture, and the remaining one quarter to store a filtered and reduced version of the same texture.
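To make the mip-map definition concrete, here is a minimal sketch that builds the pyramid of filtered, reduced levels by repeated 2x2 box filtering; graphics hardware normally generates these levels itself.

import numpy as np

def build_mip_levels(tex):
    """Return [tex, tex/2, tex/4, ...] down to a 1-pixel side, each level a
    2x2 box-filtered reduction of the previous one.
    tex: (H, W, C) array with H and W powers of two."""
    levels = [tex.astype(float)]
    while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
        t = levels[-1]
        t = (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0
        levels.append(t)
    return levels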
The texture-reconstruction technique uses all of the acquired data at each point, as opposed to stitching together pieces of the initial scans. Hence, the initial scans are not used as height-field patches. Instead, a new partition of the geometry is computed by a greedy approach that begins with a seed triangle and grows the surface region around it until a maximum patch size or maximum slope deviation is reached. Each patch is then projected in the direction which maximizes its total projected area, providing a simple local parameterization, as sketched below.
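One simple realization of this projection step, assuming per-triangle normals and areas are available, is to score a fixed candidate set of directions (here the six axis directions, an illustrative choice) by total projected area.

import numpy as np

def best_projection_direction(tri_normals, tri_areas):
    """Choose, from six axis directions, the one maximizing the patch's
    total projected area, sum_i area_i * max(0, n_i . d).
    tri_normals: (T, 3) unit normals; tri_areas: (T,) triangle areas."""
    candidates = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
    scores = [np.sum(tri_areas * np.clip(tri_normals @ d, 0.0, None))
              for d in candidates]
    return candidates[int(np.argmax(scores))]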
Since the reconstruction technique employs projecting textures corresponding to the original camera positions, conventional types of surface parameterizations that do not maintain visibility and metric information are not particularly well suited to this approach.
With regard to texture reconstruction, once the model is partitioned into height-field patches, albedo and normal maps are reconstructed for each patch by combining the information in all overlapping textures (step F). For each pixel of a texture to be computed, all textures that have information about its albedo and normal are identified. Corresponding values are combined using a weighting
scheme that takes into account the confidence in each value, while avoiding the creation of discontinuities at patch-to-patch and scan-to-scan transitions (see also FIG. 8). Occlusions are also handled correctly.

Turning now to image-based registration, a goal of the image-based registration technique is to improve the geometry-based alignment of scans that make up a 3D object. This is accomplished by taking into account additional information contained in the high-resolution detail maps computed for each scan.

Detail maps are generated from albedo and normal maps. Depending on the application, they may be RGB images, grey-scale images, or geometry-invariant representations of the normals.
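As one example of the grey-scale option (an assumed, conventional choice rather than a formula given in the patent), an RGB albedo map can be reduced to a luminance detail map.

import numpy as np

def grey_detail_map(albedo_rgb):
    """Rec. 601 luminance of an (H, W, 3) RGB albedo map,
    used as a grey-scale detail map."""
    return albedo_rgb @ np.array([0.299, 0.587, 0.114])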
... corresponding point ti on D'i for a point tm that maximizes image correlation. The point bi is then back-projected onto the scan Si into pi. The process is repeated to generate a set of pairs of corresponding points (pm^k, pi^k). A rigid transformation is computed that minimizes the sum of squared distances between corresponding points.

The image-based registration algorithm is further described in pseudocode in FIG. 4, and is depicted in even greater detail in the logic flow diagrams of FIGS. 12A and 12B. The input data contains the scans to be registered with their initial registration matrices, as well as their corresponding detail maps, depth maps, bounding boxes, and lists of sample points. In addition, the calibration parameters of the intensity-capture camera are considered to be known. These parameters include the position and orientation of the camera in the local frame of each scan, its field-of-view angle, and the pixel size of the output image. Knowledge of these parameters enables one to define a camera that matches the view of the capture camera. With the exception of the bounding boxes, which are stored in memory for the entire duration of the alignment, all other data is preferably loaded on demand. The output is a set of registration matrices (one per scan) that defines the new, more accurate alignment. The main steps o
