US 20020057438A1

(19) United States
(12) Patent Application Publication          (10) Pub. No.: US 2002/0057438 A1
     Decker                                  (43) Pub. Date: May 16, 2002

(54) METHOD AND APPARATUS FOR CAPTURING 3D SURFACE AND COLOR THEREON IN REAL TIME

(76) Inventor: Derek Edward Decker, Byron, CA (US)

Correspondence Address:
Derek Edward Decker
835 Discovery Bay Blvd.
Byron, CA 94514 (US)

(21) Appl. No.: 10/007,715

(22) Filed: Nov. 13, 2001

Related U.S. Application Data

(63) Non-provisional of provisional application No. 60/247,248, filed on Nov. 13, 2000.

Publication Classification

(51) Int. Cl.7 ..................... G01B 11/24
(52) U.S. Cl. ...................... 356/601

(57) ABSTRACT
`
A method and apparatus for acquiring surface topography. The surface being acquired is illuminated by illumination sources with patterns of light from one optical perspective, and the light reflected off the surface is captured by image sensors from one optical perspective that is different from the perspective of the illumination. The images obtained are of the surface with one or more patterns superimposed upon the surface. The surface topography is computed with a processor based upon the patterned image data, the known separation between the illumination sources and the imaging sensors, and knowledge about how the patterns of light are projected from the illumination sources.
`
[Drawing sheets 1-8 omitted: FIG. 1 (Sheet 1); FIGS. 2a-2c (Sheet 2); FIGS. 3a-3e (Sheet 3); FIG. 4 (Sheet 4); FIG. 5 (Sheet 5); FIG. 6 (Sheet 6); FIGS. 7 and 8 (Sheet 7); FIGS. 9a-9c (Sheet 8).]
`
`
`
`METHOD AND APPARATUS FOR CAPTURING 3D
`SURFACE AND COLOR THEREON IN REAL TIME
`
`BACKGROUND OF THE INVENTION
`
[0001] 1. Field of the Invention

[0002] This invention relates to the real-time acquisition of surface topography using non-contact methods by employing a light source and a detector.

[0003] 2. Related U.S. Application

[0004] The present invention claims priority from U.S. Provisional Application No. 60/247,248, filed Nov. 13, 2000 under the same title.
[0005] 3. Description of the Related Art
[0006] There has been a need to accurately model objects for centuries. From the first days of anatomy and botany, people have tried to convey the shapes of things to other people using words, drawings and two-dimensional photographs. Unfortunately, this is often an incomplete and inaccurate description of an object. Often there is the need to have this information in real time, such as monitoring the shape of a heart while it is beating, perhaps while exposed during surgery. Few technologies exist today that can meet the needs of low cost, simplicity, non-contact, high resolution and real-time performance. For example, there is a technology in which reflecting dots are placed on various surface points on the skin of a person's body to measure movement of body parts, using multiple cameras to track these dots from different perspectives. Such a system could have real-time performance as well as low cost, but it fails by being complex, having very low resolution and requiring contact of reflectors to the surface (not a good technology for the heart application mentioned above).
[0007] By capturing the true surface shape, without making physical contact, and automatically having that topology entered into a computer for easy rendering of the model in real time, one can obtain better knowledge about the objects being investigated. One can also imagine real-time manipulation of the model. For example, when an object moves outside of a predetermined boundary or fails to follow a predetermined motion, the color of the modeled surface is changed and an audible sound is triggered. Going back to the heart example, such real-time modification of the model could help doctors to detect and visualize abnormal heart function.
`[0008]
`In reviewing the prior art, many patents require two
`or more light sources or two or more cameras or detectors in
`order to extract surface information. The additional sources
`and detectors used by others in this type of system are not
`needed for the invention in this application. For example,
`U.S. Pat. No. 5,691,815 by Huber, et al. teaches the need for
`two light sources in perpendicular slit arrangements, each
`illuminating a slice of the surface at different angles with
`respect to each other. Such an additional complication is not
`needed by the invention in this application, which only uses
`one source. Also, Huber's method only determines the
`position of one point rather than for all points simulta(cid:173)
`neously in one image as is done in the present invention.
[0009] U.S. Pat. No. 6,141,105 by Yahashi et al. does function with a single source and imaging detector. However, it requires angle-scanning a slit source over time and synchronously capturing multiple images in order to acquire the surface data. Disadvantages include acquiring incorrect data on moving surfaces that change shape during the time interval required to take multiple images. U.S. Pat. No. 6,233,049 by Kondo et al. and U.S. Pat. No. 6,094,270 by Uomori et al. both suffer from the same problem of scanning a slit illumination. Both also suffer from slow speed because they must capture and process as many images as there are lines of resolution (which could be thousands of lines in a megapixel-resolution system).
`
[0010] U.S. Pat. No. 5,969,820 by Yoshii et al. uses oblique illumination of target patterns on semiconductor wafers to determine the proper height of the flat surface before exposing the photoresist coating on that surface to a two-dimensional optical pattern for circuit fabrication applications. This patent does not intend to determine the surface shape of irregular surfaces and in fact would fail to do so, due to the shadowing that occurs with oblique-angle illumination. It also fails to collect more than one surface height, relying on the knowledge that the surface being worked with is flat.
`
[0011] U.S. Pat. No. 6,128,585 by Greer describes a system that requires a reference target. He goes on to describe this reference target as being in a tetrahedral shape and having LEDs at the vertices that blink at different rates so they can be identified in a computerized vision system. Greer's patent and claims are written with the purpose of positioning a feature sensor, not with determining surface topography. Moreover, the requirement of a reference target with blinking vertices adds complexity and cost and slows down the time it takes the computerized vision system to calibrate and operate.
`
[0012] U.S. Pat. No. 4,541,721 by Robert Dewar mentions using a single line of collimated light incident across a gap between surfaces that one is trying to measure and control for manufacturing purposes. The need for collimated light, rather than a divergent light source, suffers from several limitations, including the greater cost and complexity, as well as the safety concerns, of using a laser source and having to arrange optics which must be at least as large as the collimated light beam. Additionally, gaps imply shadows, which are particularly troublesome for acquiring surface topography due to a loss of reflected light. Furthermore, trying to use Dewar's system for topography across a surface with a thousand lines of resolution would require that a thousand images be captured and processed, while the present invention can do it all in one step.
`
[0013] A method described by inventor Shin Yee Lu (in U.S. Pat. No. 5,852,672) employs two cameras and a light projector, which all must be precisely aligned and calibrated. FIG. 1 is an illustration of the top view of Lu's system 10 (which is roughly equivalent to Lu's FIG. 9 in U.S. Pat. No. 5,852,672). The camera sensors, CCDs 12, can be thought to image through pinholes 14. Regions in object space 16 image through pinhole 14 to image space 18, where light from a particular object point follows a path 20 through the pinhole 14 to a pixel on CCD 12. There exists an overlap region 22 of the two object spaces 16 defined by each camera. By having the object 24 viewed by both cameras in the overlap region 22, it becomes possible for a common point to be found in each CCD 12. Finding a common point in some regions will not be possible when the slope of the surface is sufficiently steep to create shadowing, which
prevents one or both cameras from seeing a particular spot. A projector 26 between the CCDs 12 projects a vertical array of lines 28 onto the object 24 and, through software intelligence in a computer system (not shown), tries to identify common points on the object 24 from images captured by the CCDs 12. If a common point is determined, triangulation can be performed by using the intersection of the two imaging lines 20 emanating from the common point on the object 24.
[0014] Lu makes use of triangulating two intersecting imaging lines (one from each camera system) by guessing at the intersection point on the surface with help from a projection of vertical light and dark lines and intelligent software. The light and dark pattern (such as from a Ronchi ruling) is imaged onto the surface. The shadows obscure information, and that lost information decreases the resolution one can obtain.
[0015] While Lu does explain how one can obtain sub-pixel accuracy, it comes at the cost of reduced resolution. For example, in the case of Lu's projection scheme, assume that there are approximately three pixels in shadow and three pixels in light for each period of light and dark regions imaged onto CCD pixels. See illustration 30 in FIG. 2a, where dark regions 32 fall upon camera picture elements, pixels 34. There are approximately six pixels per period and two edges per period in this example. FIG. 2b shows in illustration 40 how two adjacent pixels will have similar values (such as the dark pixels 2, 3 and 8, 9 and the light pixels 5, 6) but, in general, pixels at an edge (pixels 1, 4, 7 and 10) will have values between light and dark, established by where the edge of light falls within those pixels. Each pixel integrates all of the light incident upon it, resulting in an average intensity value. Dark region 38 and light region 40 illustrate the minimum and maximum values, while gray region 42 and a darker gray region 44 in illustration 40 convey that an edge (of the light and dark pattern) falls within those pixels. Suffice it to say, interpolation and other numerical techniques can be applied to the pixel intensity values (see FIG. 2c) in order to obtain knowledge about the edge location that is more precise than the resolution of the pixel array. In other words, the edge can be located to within a fraction of a pixel. But what does this say about the number of points in a resultant three-dimensional mesh? It says that only one in three pixels is used to define positions. When compared to the present invention, which uses every pixel, the number of points used by Lu is reduced by a factor of three, and a great amount of information is thereby lost. It also turns out that the method for obtaining sub-pixel accuracy can still be applied to this invention, so there is no trade-off in accuracy; there is only a significant 3x gain in resolution. Sub-pixel accuracy is obtained in this invention by applying interpolation and other numerical techniques to the detected colors along any row of pixels being analyzed.
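
By way of illustration only (this sketch is not code from the patent; the function name, the normalization of intensities to the range 0 to 1, and the 0.5 threshold are assumptions), sub-pixel edge location along one row of pixels can be performed with simple linear interpolation:

    import numpy as np

    def subpixel_edges(row, threshold=0.5):
        # Locate light/dark edges along one row of pixel intensities
        # (assumed normalized to [0, 1]) to a fraction of a pixel.
        row = np.asarray(row, dtype=float)
        edges = []
        for i in range(len(row) - 1):
            a, b = row[i], row[i + 1]
            if (a - threshold) * (b - threshold) < 0:  # edge crosses between i and i+1
                edges.append(i + (threshold - a) / (b - a))  # linear interpolation
        return edges

    # A dark-to-light transition whose edge falls 20% of the way into pixel 2:
    print(subpixel_edges([0.0, 0.1, 0.4, 0.9, 1.0]))  # -> [2.2]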
`
`SUMMARY OF THE INVENTION
`
[0016] This invention describes a method and an apparatus for acquiring surface topography, which the dictionary defines as the surface configuration of anything. The surface being acquired is illuminated with patterns of light from one optical perspective, and the light reflected off the surface is captured by image sensors from one optical perspective that is different from the perspective of the illumination. The images obtained are of the surface with one or more patterns superimposed upon the surface. The surface topography is computed based upon the patterned image data, the known separation between the illumination sources and the imaging sensors, and knowledge about how the patterns of light are projected from the illumination sources. This method can be carried out by the following apparatus. Illumination sources emit patterns of light onto the surface through one optical perspective. Image sensors image the surface through one optical perspective which is different from the optical perspective of the illumination sources. A processor is coupled to the illumination sources and the imaging sensors. The processor computes the surface topography.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
[0017] The objects and features of the present invention, which are believed to be novel, are set forth with particularity in the appended claims. The present invention, both as to its organization and manner of operation, together with further objects and advantages, may best be understood by reference to the following description, taken in connection with the accompanying drawings, in which:
[0018] FIG. 1 is prior art and represents a top view of the system by Lu, in which two camera imaging systems and the projection of lines on a surface can be used to try to determine surface topology.

[0019] FIG. 2a, FIG. 2b and FIG. 2c depict how Lu's projection of black and white lines, when imaged onto a row of CCD pixels, results in pixel-integrated gray levels that can be used to obtain sub-pixel accuracy of line edge location.
`[0020] FIG. 3a, FIG. 3b, FIG. 3c, FIG. 3d and FIG. 3e
`show perspective views of the way in which triangulation
`and coordinates can be obtained.
`[0021] FIG. 4 is a top view of the preferred embodiment
`of the invention in which an object is illuminated by light
`(which varies in color), is imaged by a color camera, and
`computer processing of the image results in a 3D surface
`model displayed on the computer screen.
[0022] FIG. 5 is an illustration that shows the three flat cutaway surfaces on the inside of the spherical section in a color space with red, green and blue coordinate axes.

[0023] FIG. 6 is an illustration that shows a path (on the curved surface of the normalized color sphere) which starts at red and spirals inward to end at white.

[0024] FIG. 7 is the image formed by the intersection of a rainbow light projector and a white piece of paper.

[0025] FIG. 8 is the image formed by the intersection of a rainbow light projector and a white ceramic object in the shape of a kitten holding a ball.
[0026] FIG. 9a is a bar chart indicating that, given white light illumination (equal intensities of red, green and blue) on the surface, reflections making it into the camera off a colored surface (such as a green iris) result in half of the red light reaching a particular camera pixel, while all of the green and only a quarter of the blue reach it.

[0027] FIG. 9b is a bar chart indicating that, for an unknown projector color illumination, half of the red made it into the camera pixel, while none of the green and only a quarter of the blue made it in.
`
[0028] FIG. 9c is a bar chart illustrating that we doubled the red light in FIG. 9b (by dividing out the ½ red response of the surface imaged by that pixel) and quadrupled the blue light in FIG. 9b (by dividing out the ¼ blue response of the surface imaged by that pixel), which tells us that the color incident on the surface (before it was changed by the surface) was a light purple with equal amounts of blue and red.
`
DETAILED DESCRIPTION OF THE INVENTION
`
[0029] The following description is provided to enable any person skilled in the art to make and use the invention, and sets forth the best modes presently contemplated by the inventor of carrying out the invention. Various modifications, however, will remain readily apparent to those skilled in the art, since the generic principles of the present invention have been defined herein.
`
[0030] This invention can be used for acquiring surface topography in real time. It makes use of an illumination system to project patterns of light on a surface from one perspective and simultaneously acquires images of that surface from a slightly different perspective. The images obtained of the surface have one or more patterns superimposed upon them. One way to utilize more than one illumination source with the same perspective view is to make use of beam splitters within the illumination subsystem. Likewise, one way to utilize more than one imaging sensor with the same perspective view is to make use of beam splitters within the imaging subsystem. A computing device is used to detect the distortions of these patterns and compute the surface topography.
`
[0031] This invention acquires all the surface data in a single snapshot, eliminating errors associated with scanning systems and objects which change their shape or position with time. The snapshot time interval is limited only by a combination of shutter speed and/or a flash of illumination, so long as a sufficient number of photons interact with the imaging detector to give a good signal. This can presumably be on the order of billionths of a second (or less) if nanosecond (or shorter) laser pulses are employed.
`
`[0032]
`In the preferred embodiment, this invention uses
`only one color light source and one color camera to deter(cid:173)
`mine the topography of surfaces in real time. This makes for
`a simple and inexpensive system. Unlike other systems, this
`invention can determine the x, y and z positions of every
`surface point imaged by a pixel of the camera with only one
`color image. An equivalent position of a point on the surface
`can be represented in spherical coordinates of radius from
`the origin, and angles theta and phi.
`
[0033] Once these positions are known in the computer, they can be displayed as a contour map, shaded surface, wire mesh or interactively moveable object, as might be done in any number of computer-aided design software packages. Real time is meant to imply that surface data can be captured at video rates, or rates at which the human eye does not detect individual frames; in other words, seeing continuous motion. This is very important for applications including video conferencing and virtual presence, as well as surgery accomplished over the Internet using robotics and real-time visual feedback. This invention could actually operate much faster, limited only by the time it takes to capture and process the image. For some experiments, the data could be collected with film, CCDs or other methods, and the processing (often the longer task) could be done afterwards to study such things as cell division and the shape evolution of shocked surfaces.
`
[0034] All triangulation is accomplished through knowledge of the path each ray of light travels from an illumination source point to a point at the camera image. Color coding is used to help make each ray easily identifiable by the detector and the image processing software in the computer. Among the many advantages of this invention are simplicity, reduced cost of hardware and rapid operation.

[0035] The illumination system 48 in FIG. 3a projects a vertical array of planes of light, each plane differing in color and angle. This could be accomplished with a back-illuminated transparency (such as a 35 mm slide in a slide projector) or by generating a rainbow using a prism or grating to separate the colors (in angle) from a white light source. In either case, each system can maintain intensity over the surface being inspected but vary in color along the horizontal direction. The intersection of this projected beam and a flat white surface would simply be imaged by the color camera as an array of columns (each having a unique color), each color identified by its relative ratio of red, green and blue light. FIG. 3a illustrates one color 50 of the spectrum being projected from a transparency 51 on the left through a pinhole 52 along projection ray 53 to a region 54.

[0036] FIG. 3b illustrates an imaging system 56 comprising object space region 58 imaged through pinhole 60 until the light arrives at an imaging detector 62, such as a CCD. FIG. 3c shows the combination of the systems in FIG. 3a and FIG. 3b with overlap of regions 54 and 58 of those figures, respectively. Shown is colored light 50 passing through projector pinhole 52 along projection ray 53 and on to the object surface point 64, which reflects off the surface and follows imaging path 66 through camera pinhole 60 to a pixel 61 of CCD 62.

[0037] FIG. 3d shows a triangle comprised of segment "c" of the imaging path 66, segment "b" of the projection ray 53, and a line 68 connecting the pinholes and labeled "a". Opposite these sides are their respective angles A (at object point 64), B (at camera pinhole 60) and C (at projector pinhole 52). We can let "a" coincide with the y axis 70 in FIG. 3e. Given knowledge about "a" and the geometrical location of the pinholes relative to the CCD 62 and transparency 51, we can determine angle B (from the relationship between the pixel location 61 and camera pinhole 60) and angle C from the color (determined by the camera), the angles of imaging path 66, and prior knowledge of how color is projected as a function of angle through projector pinhole 52. Angle A = 180° - B - C, because the sum of the angles in any triangle is always 180°. The Law of Sines tells us a/sin A = b/sin B = c/sin C. We can now solve for the two unknowns: b = sin B (a/sin A) and c = sin C (a/sin A).
`
[0038] The relationship between the camera pinhole 60 and the pixel location on CCD 62 gives us the angles theta and phi, as shown in FIG. 3e. The radius is simply the length c. We now have all we need to identify the position of object point 64 in three-dimensional space. Theta is defined as the angle in the x and y plane, as measured from the x axis 72 to the projection 74, on the x and y plane, of the imaging line 66 connecting the origin 76 to the object point 64. Phi is the angle measured from the z axis to that line segment "c". Conversion to x, y, z or other coordinates is trivial at this point.
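
As a minimal sketch of the computation just described (the function and variable names are illustrative, not from the patent; angles are assumed to be in radians), the position of object point 64 can be recovered as follows:

    import math

    def locate_point(a, B, C, theta, phi):
        # a: known separation of the camera and projector pinholes (segment "a")
        # B: angle at the camera pinhole, from the pixel location
        # C: angle at the projector pinhole, from the detected color
        # theta, phi: direction of imaging segment "c" in spherical coordinates
        A = math.pi - B - C                    # angles of a triangle sum to 180 degrees
        c = math.sin(C) * (a / math.sin(A))    # Law of Sines: c = sin C * (a / sin A)
        # Convert (radius c, theta from the x axis, phi from the z axis) to Cartesian:
        x = c * math.sin(phi) * math.cos(theta)
        y = c * math.sin(phi) * math.sin(theta)
        z = c * math.cos(phi)
        return x, y, z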
[0039] FIG. 4 is a top view of the preferred embodiment 80. The illumination system 82 projects a vertical (out of the paper) array of planes of light 84, each plane differing in color and angle. One color 86 reflects off surface 88 and is imaged on CCD 90. The camera 92 transfers the color image to a computer 94, where it is processed into three-dimensional data and displayed on monitor 96.
[0040] The color camera 92 identifies these colored planes by their relative ratio of red, green and blue light. An advantage of using color to uniquely identify planes of light is that it is independent of intensity and therefore has no requirement for intensity calibration between the projector and camera. In standard cameras and frame-capture electronics, each pixel is assigned a 24-bit number to determine its color and intensity. These 24 bits represent 8 bits for red, 8 for green and 8 for blue. Eight-bit numbers in integer format range from 0 to 255 (the equivalent of 256 equally spaced values). For most purposes we can define pixel values as a triplet of the form (Red, Green, Blue), where (255, 0, 0) is red, (0, 255, 0) is green and (0, 0, 255) is blue.
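
A short sketch of that representation (assuming, for illustration, that red occupies the high byte of the 24-bit number; the actual byte order depends on the camera and frame-capture electronics):

    def unpack_rgb(pixel24):
        # Split a 24-bit pixel value of the assumed form 0xRRGGBB
        # into its 8-bit (Red, Green, Blue) components, each 0-255.
        return ((pixel24 >> 16) & 0xFF, (pixel24 >> 8) & 0xFF, pixel24 & 0xFF)

    assert unpack_rgb(0xFF0000) == (255, 0, 0)  # red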
[0041] Other pixel representations exist, such as (Cyan, Yellow, Magenta, blacK) and (Intensity, Hue, Saturation), but the (R, G, B) format will suffice for the moment. In particular, it will be important to define color as independent from intensity. For example, since equal amounts of red and blue make a purple color, we define (200, 0, 200) and (5, 0, 5) to be the same color but of different intensity. The higher numbers (or counts, in CCD jargon) indicate that more intense light (a higher number of photons) was incident on the pixels, because more electrons were generated in the CCD pixel wells, which, when shifted out of the image detector, were digitized to yield a bigger number. Thus, one could say (5, 0, 5) is a darker version of the light purple color (200, 0, 200).
`
[0042] Unless we have well-known surfaces under inspection, like parts coming off an assembly line, it is probable that reflectivity will vary across the surface being analyzed. In fact, not only can there be absorption in the surface and along the optical path, but the surface angles and the surface's specular quality will alter how much light gets back into the imaging system (because scattering and vignetting occur). Additionally, reflected colors can be changed by colored surfaces (reflecting more of one color and absorbing more of another) and by secondary reflections (such as red reflections off the side of a nose making a blue-illuminated cheek appear more purple than it would be without the secondary reflection). To improve results, we can take a white light picture and divide the color efficiencies into the image taken with the special color illumination system. This is important for objects with varying reflectivity along the surface (like red lips and green eyes on a face). It has the added advantage that one may map the white-illuminated color image back onto the 3D surface, giving you 4D information (original surface color being the fourth dimension).
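
A minimal sketch of that correction, assuming both images are captured as floating-point arrays of shape height x width x 3 (the names and the epsilon guard are illustrative assumptions):

    import numpy as np

    def divide_out_reflectance(pattern_img, white_img, eps=1e-6):
        # white_img, taken under white light, serves as the per-pixel,
        # per-channel color efficiency of the surface; dividing it into
        # the patterned image estimates the projected color before the
        # surface altered it (compare FIG. 9a through FIG. 9c).
        return pattern_img / np.clip(white_img, eps, None)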
`
[0043] Strictly speaking, color does not qualify as a fourth dimension in the same way space does. Dimensions in space (x, y and z) each extend from minus infinity to plus infinity. Color can be characterized in many ways, but for our purposes we will use either red, green and blue (R, G, B) or cyan, yellow, magenta and black (C, Y, M, K). Each of those subdimensions varies from zero to a maximum which our sensors (camera, film, scanner, densitometer, profilometer, etc.) can detect.
`[0044]
`If the projector is a computer controllable device,
`such as a projection video display or spatial light modulator,
`one can also use the white light image to control the
`projector intensities and color to produce a projected illu(cid:173)
`mination pattern that will optimize signals at the detector.
`This can be accomplished by computer processing the white
`light illuminated image. Using that information one can
`brighten the colored projection image in specific locations
`where it was dark on the white light image due to absorptive
`pigments, scattering surfaces or shiny surfaces that slope
`away from near normal incidence ( and thus little light is
`reflected into the camera optics, a vignetting effect).
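
One hedged sketch of such processing (the target level, floor and gain cap are assumed tuning parameters, not taken from this application):

    import numpy as np

    def projector_gain(white_img, target=0.8, floor=1e-3, cap=8.0):
        # Compute a per-pixel brightness gain for the projector from the
        # white-light image: boost locations that imaged dark (absorptive
        # pigments, scattering, or surfaces sloping away from the camera).
        luminance = np.clip(white_img.mean(axis=2), floor, None)
        return np.clip(target / luminance, 1.0, cap)  # never dim; cap the boost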
[0045] Up to this point in this application, the imaging optics of the projector and camera have been described as simply comprising a pinhole. In reality, lenses are more likely to be used, given that they are able to transmit more light and image more effectively.
[0046] The pattern of projector light need not be the rainbow shown in FIG. 7. And projected lines, in the context of this application, need not be straight lines; they are permitted to be curved lines so long as they do not cross any row more than once. One could also have the planes of light be projected horizontally, or along another direction, instead of vertically. One need only displace the optical axis of the imaging system along a path other than that same direction. Displacement of the camera along the same direction as the angle of the light planes would eliminate the perceived shift in source rays, because these rays would be stretched and compressed along the same direction as rays of the same color, and thus the computer would not be able to tell where the ray movement occurred. For curved planes and lines, the displacement must be along any path other than the direction of any tangent of the curved lines.
`
[0047] To obtain sub-pixel resolution as well as uniquely coded light planes, one should consider using a continuous range of changing color projected onto the surface from left to right. By knowing that the path in color space is smooth and continuous, one can perform interpolation and other numerical methods. To visualize color space, let the three colors (Red, Green and Blue) represent independent axes (which are perpendicular to each other, like the edges of a cube). Let one corner of a cube be the origin of the color space; as you travel along any of the three adjacent edges (the three color axes), the values go from 0 to 255. The farthest corner from the origin has the value (255, 255, 255) and represents white. Traveling back to the origin along the cube diagonal, each component decreases uniformly through shades of gray to, eventually, black at the origin.
`
`[0048]
`In order to compare colors, we do a normalization
`of the color vector in color space by dividing each compo(cid:173)
`nent by the length of the vector. Now imagine a sphere of
`unit radius (radius equal to one). Since the normalized color
`vectors all have a length of one, each vector extends from the
`origin to the surface of the unit sphere. In other words,
`convert from cartesian to polar coordinates and concern
`yourself only with the angles. What was a three element
`
`3SHAPE 1012 3Shape v Align IPR2021-01383
`
`
`
`US 2002/0057438 Al
`
`May 16, 2002
`
`5
`
`color (R, G, B) is now a two angle color (alpha and gamma).
`The color space can now be visualized as an eighth of a
`sphere because the initial components (R, G, B) were always
`positive in value. Just place the sphere origin at the origin
`previously defined for our color space cube and you'll see
`the intersection is a one-eighth section of the sphere. FIG.
`5 shows the three flat surfaces 98, 100 and 102 on the inside
`of the sphere section.
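
A minimal sketch of this normalization (illustrative names; not code from the patent):

    import numpy as np

    def normalize_color(rgb):
        # Project an (R, G, B) triplet onto the unit color sphere so that
        # colors differing only in intensity compare as equal.
        v = np.asarray(rgb, dtype=float)
        n = np.linalg.norm(v)
        return v if n == 0.0 else v / n  # pure black carries no color direction

    # (200, 0, 200) and (5, 0, 5) are the same color at different intensities:
    assert np.allclose(normalize_color((200, 0, 200)), normalize_color((5, 0, 5)))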
[0049] To select an improved rainbow for uniqueness and sub-pixel accuracy, one need merely travel along the color space (on the curved surface of the one-eighth section of the color sphere) in the following way. There should be one beginning, one end, no crossings, and sufficient spa