(12) Patent Application Publication (10) Pub. No.: US 2002/0057438 A1
Decker (43) Pub. Date: May 16, 2002

(54) METHOD AND APPARATUS FOR CAPTURING 3D SURFACE AND COLOR THEREON IN REAL TIME

(76) Inventor: Derek Edward Decker, Byron, CA (US)

Correspondence Address:
Derek Edward Decker
835 Discovery Bay Blvd.
Byron, CA 94514 (US)

(21) Appl. No.: 10/007,715

(22) Filed: Nov. 13, 2001

Related U.S. Application Data

(63) Non-provisional of provisional application No. 60/247,248, filed on Nov. 13, 2000.

Publication Classification

(51) Int. Cl.7 ..................... G01B 11/24
(52) U.S. Cl. ...................... 356/601

(57) ABSTRACT

A method and apparatus for acquiring surface topography. The surface being acquired is illuminated by illumination sources with patterns of light from one optical perspective, and the light reflected off the surface is captured by image sensors from one optical perspective that is different from the perspective of the illumination. The images obtained are of the surface with one or more patterns superimposed upon the surface. The surface topography is computed with a processor based upon the patterned image data, the known separation between the illumination sources and the imaging sensors, and knowledge of how the patterns of light are projected from the illumination sources.
`
`
`
`26
`
`3SHAPE EXHIBIT 1012
`3Shape v. Align
`IPR2019-00154
`
`
`
[Drawing sheets 1-8 of the published application (images not reproducible in text): Sheet 1: FIG. 1 (prior-art system 10, projector 26 and CCDs 12); Sheet 2: FIGS. 2a-2c (light/dark period imaged onto pixels; regions 30, 32); Sheet 3: FIGS. 3a-3e (triangulation geometry); Sheet 4: FIG. 4 (preferred embodiment, top view); Sheet 5: FIG. 5 (color-space sphere section); Sheet 6: FIG. 6 (spiral path on the normalized color sphere); Sheet 7: FIGS. 7-8 (rainbow 122 on paper; rainbow on a kitten figurine, ball 124); Sheet 8: FIGS. 9a-9c (bar charts of per-channel reflectance). See the Brief Description of the Drawings below.]
`
`
`METHOD AND APPARATUS FOR CAPTURING 3D
`SURFACE AND COLOR THEREON IN REAL TIME
`
BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

[0002] This invention relates to the real-time acquisition of surface topography using non-contact methods by employing a light source and a detector.

[0003] 2. Related U.S. Application

[0004] The present invention claims priority from U.S. Provisional Application No. 60/247,248, filed Nov. 13, 2000, under the same title.
`0005) 18. Description of the Related Art
`0006 There has been a need to accurately model objects
`for centuries. From the first days of anatomy and botany,
`people have tried to convey the shapes of things to other
`people using words, drawings and two-dimensional photo
`graphs. Unfortunately, this is often an incomplete and inac
`curate description of an object. Often there is the need to
`have this information in real time, Such as monitoring the
`shape of a heart while it is beating, perhaps while exposed
`during Surgery. Few technologies exist today that can meet
`the needs of low cost, Simplicity, non-contact, high resolu
`tion and real time performance. For example, there is a
`technology where reflecting dots are placed on various
`Surface points on the skin of Someone's body for measuring
`movement of body parts using multiple cameras to track
`these dots from different perspectives. Such a system could
`have real time performance as well as low cost but fails by
`being complex, having very low resolution and requiring
`contact of reflectors to the Surface (not a good technology for
`the heart application mentioned above).
[0007] By capturing the true surface shape without making physical contact, and automatically entering that topology into a computer for easy rendering of the model in real time, one can obtain better knowledge about the objects being investigated. One can also imagine real-time manipulation of the model. For example, when an object moves outside of a predetermined boundary or fails to follow a predetermined motion, the color of the modeled surface is changed and an audible sound is triggered. Returning to the heart example, such real-time modification of the model could help doctors detect and visualize abnormal heart function.
[0008] In reviewing the prior art, many patents require two or more light sources, or two or more cameras or detectors, in order to extract surface information. The additional sources and detectors used by others in this type of system are not needed for the invention in this application. For example, U.S. Pat. No. 5,691,815 by Huber et al. teaches the need for two light sources in perpendicular slit arrangements, each illuminating a slice of the surface at a different angle with respect to the other. Such an additional complication is not needed by the invention in this application, which uses only one source. Also, Huber's method determines the position of only one point, rather than all points simultaneously in one image, as is done in the present invention.
[0009] U.S. Pat. No. 6,141,105 by Yahashi et al. does function with a single source and imaging detector. However, it requires angle-scanning a slit source over time and synchronously capturing multiple images in order to acquire the surface data. Disadvantages include acquiring incorrect data on moving surfaces that change shape during the time interval required to take multiple images. U.S. Pat. No. 6,233,049 by Kondo et al. and U.S. Pat. No. 6,094,270 by Uomori et al. both suffer from the same problem of scanning a slit illumination. Both also suffer from slow speed because they must capture and process as many images as there are lines of resolution (which could be thousands of lines in a megapixel-resolution system).
[0010] U.S. Pat. No. 5,969,820 by Yoshii et al. uses oblique illumination of target patterns on semiconductor wafers to establish the proper height of the flat surface before exposing its photoresist coating to a two-dimensional optical pattern for circuit-fabrication applications. The intent of this patent is not to determine the shape of irregular surfaces, and in fact it would fail to do so due to the shadowing that occurs with oblique-angle illumination. It also fails to collect more than one surface height, relying on the knowledge that it is working with a flat surface.
[0011] U.S. Pat. No. 6,128,585 by Greer describes a system that requires a reference target. He goes on to describe this reference target as having a tetrahedral shape, with LEDs at the vertices that blink at different rates so they can be identified by a computerized vision system. Greer's patent and claims are written for the purpose of positioning a feature sensor, not for determining surface topography. Moreover, the requirement of a reference target with blinking vertices adds complexity and cost, and slows down the time it takes the computerized vision system to calibrate and operate.
[0012] U.S. Pat. No. 4,541,721 by Robert Dewar mentions using a single line of collimated light incident across a gap between surfaces that one is trying to measure and control for manufacturing purposes. The need for collimated light, rather than a divergent light source, suffers from several limitations, including the greater cost, complexity and safety concerns of using a laser source, and the need to arrange optics at least as large as the collimated light beam. Additionally, gaps imply shadows, which are particularly troublesome for acquiring surface topography due to a loss of reflected light. Furthermore, using Dewar's system for topography across a surface with a thousand lines of resolution would require a thousand images to be captured and processed, while the present invention can do it all in one step.
[0013] A method described by inventor Shin Yee Lu (in U.S. Pat. No. 5,852,672) employs two cameras and a light projector, all of which must be precisely aligned and calibrated. FIG. 1 is an illustration of the top view of Lu's system 10 (which is roughly equivalent to Lu's FIG. 9 in U.S. Pat. No. 5,852,672). The camera sensors, CCDs 12, can be thought of as imaging through pinholes 14. Regions in object space 16 image through pinhole 14 to image space 18, where light from a particular object point follows a path 20 through the pinhole 14 to a pixel on CCD 12. There exists an overlap region 22 of the two object spaces 16 defined by each camera. By having the object 24 viewed by both cameras in the overlap region 22, it becomes possible for a common point to be found in each CCD 12. Finding a common point in some regions will not be possible when the slope of the surface is sufficiently steep to create shadowing, which prevents one or both cameras from seeing a particular spot. A projector 26 between the CCDs 12 projects a vertical array of lines 28 onto the object 24 and, through software intelligence in a computer system (not shown), tries to identify common points on the object 24 from images captured by the CCDs 12. If a common point is determined, triangulation can be performed by using the intersection of the two imaging lines 20 emanating from the common point on the object 24.
[0014] Lu makes use of triangulating two intersecting imaging lines (one from each camera system) by guessing at the intersection point on the surface with help from a projection of vertical light and dark lines and intelligent software. The light and dark pattern (such as that produced by a Ronchi ruling) is imaged onto the surface. The shadows obscure information, and this loss decreases the resolution one can obtain.
[0015] While Lu does explain how one can obtain sub-pixel accuracy, it comes at the cost of reduced resolution. For example, in the case of Lu's projection scheme, assume that there are approximately three pixels in shadow and three pixels in light for each period of light and dark regions imaged onto CCD pixels. See illustration 30 in FIG. 2a, where dark regions 32 fall upon camera picture elements, pixels 34. There are approximately six pixels per period and two edges per period in this example. FIG. 2b shows in illustration 40 how two adjacent pixels will have similar values (such as the dark pixels 2, 3 and 8, 9 and the light pixels 5, 6), but, in general, at an edge (pixels 1, 4, 7 and 10) there will be some value between light and dark, established by where the edge of light falls within those pixels. The pixel will integrate all of the light incident upon it, resulting in an average intensity value. Dark region 38 and light region 40 illustrate the minimum and maximum values, while gray region 42 and a darker gray region 44 in illustration 40 convey that an edge (of the light and dark pattern) falls within those pixels. Suffice it to say, interpolation and other numerical techniques can be applied to the pixel intensity values (see FIG. 2c) in order to obtain knowledge about the edge location that is more precise than the resolution of the pixel array. In other words, the edge can be located to within a fraction of a pixel. But what does this say about the number of points in a resultant three-dimensional mesh? It says that only one in three pixels is used to define positions. When compared to the present invention, which uses every pixel, the number of points used by Lu is reduced by a factor of three, and a great amount of information is thereby lost. It also turns out that the method for obtaining sub-pixel accuracy can still be applied to this invention, so there is no trade-off in accuracy; there is only a significant 3x gain in resolution. Sub-pixel accuracy is obtained in this invention by applying interpolation and other numerical techniques to the detected colors along any row of pixels being analyzed.
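For illustration only (this sketch is the editor's, not part of the original disclosure; the function name and threshold value are assumptions), the interpolation step described above can be expressed in a few lines of Python. Given a row of pixel intensities, each edge is located to a fraction of a pixel by interpolating between the two samples that straddle a light/dark threshold:

    def subpixel_edge_locations(row, threshold):
        """Return sub-pixel edge positions along one row of pixel
        intensities by linear interpolation between the two samples
        that straddle the light/dark threshold."""
        edges = []
        for i in range(len(row) - 1):
            lo, hi = row[i], row[i + 1]
            if (lo - threshold) * (hi - threshold) < 0:  # threshold crossed
                edges.append(i + (threshold - lo) / (hi - lo))
        return edges

    # Roughly three dark and three light pixels per period, as in FIG. 2a.
    row = [20, 22, 21, 120, 220, 218, 221, 119, 20, 23]
    print(subpixel_edge_locations(row, threshold=120.5))  # ~[3.005, 6.985]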
`
SUMMARY OF THE INVENTION

[0016] This invention describes a method and an apparatus for acquiring surface topography, which the dictionary defines as the surface configuration of anything. The surface being acquired is illuminated with patterns of light from one optical perspective, and the light reflected off the surface is captured by image sensors from one optical perspective that is different from the perspective of the illumination. The images obtained are of the surface with one or more patterns superimposed upon the surface. The surface topography is computed based upon the patterned image data, the known separation between the illumination sources and the imaging sensors, and knowledge of how the patterns of light are projected from the illumination sources. This method can be carried out by the following apparatus. Illumination sources emit patterns of light onto the surface through one optical perspective. Image sensors image the surface through one optical perspective which is different from the optical perspective of the illumination sources. A processor is coupled to the illumination sources and the imaging sensors. The processor computes the surface topography.
`
BRIEF DESCRIPTION OF THE DRAWINGS

[0017] The objects and features of the present invention, which are believed to be novel, are set forth with particularity in the appended claims. The present invention, both as to its organization and manner of operation, together with further objects and advantages, may best be understood by reference to the following description, taken in connection with the accompanying drawings, in which:

[0018] FIG. 1 is prior art and represents a top view of the system by Lu, in which two camera imaging systems and the projection of lines on a surface can be used to try to determine surface topology.

[0019] FIG. 2a, FIG. 2b and FIG. 2c depict how Lu's projection of black and white lines, when imaged onto a row of CCD pixels, results in pixel-integrated gray levels that can be used to obtain sub-pixel accuracy of line-edge location.

[0020] FIG. 3a, FIG. 3b, FIG. 3c, FIG. 3d and FIG. 3e show perspective views of the way in which triangulation and coordinates can be obtained.

[0021] FIG. 4 is a top view of the preferred embodiment of the invention, in which an object is illuminated by light (which varies in color) and imaged by a color camera, and computer processing of the image results in a 3D surface model displayed on the computer screen.

[0022] FIG. 5 is an illustration that shows the three flat cut-away surfaces on the inside of the spherical section in a color space with red, green and blue coordinate axes.

[0023] FIG. 6 is an illustration that shows a path (on the curved surface of the normalized color sphere) starting at red and spiraling in to end up at white.

[0024] FIG. 7 is the image formed by the intersection of a rainbow light projector and a white piece of paper.

[0025] FIG. 8 is the image formed by the intersection of a rainbow light projector and a white ceramic object in the shape of a kitten holding a ball.

[0026] FIG. 9a is a bar chart indicating that, given white-light illumination (equal intensities of red, green and blue) on the surface, reflections into the camera off a colored surface (such as a green iris) result in half of the red light making it into a particular camera pixel, while all of the green makes it in and only a quarter of the blue makes it in.

[0027] FIG. 9b is a bar chart indicating that, for an unknown projector color illumination, half of the red made it into the camera pixel, while none of the green made it and only a quarter of the blue made it in.
`
`
`
`
[0028] FIG. 9c is a bar chart illustrating that we doubled the red light in FIG. 9b (by dividing out the 1/2 red response of the surface imaged by that pixel) and quadrupled the blue light in FIG. 9b (by dividing out the 1/4 blue response of the surface imaged by that pixel), which tells us that the color incident on the surface (before it was changed by the surface) was a light purple with equal amounts of blue and red.
`
DETAILED DESCRIPTION OF THE INVENTION

[0029] The following description is provided to enable any person skilled in the art to make and use the invention, and sets forth the best modes presently contemplated by the inventor for carrying out the invention. Various modifications, however, will remain readily apparent to those skilled in the art, since the generic principles of the present invention have been defined herein.
[0030] This invention can be used for acquiring surface topography in real time. It makes use of an illumination system to project patterns of light on a surface from one perspective and simultaneously acquires images of that surface from a slightly different perspective. The images obtained of the surface have one or more patterns superimposed upon them. One way to utilize more than one illumination source with the same perspective view is to make use of beam splitters within the illumination subsystem. Likewise, one way to utilize more than one imaging sensor with the same perspective view is to make use of beam splitters within the imaging subsystem. A computing device is used to detect the distortions of these patterns and compute the surface topography.
[0031] This invention acquires all the surface data in a single snapshot, eliminating the errors scanning systems incur on objects that change their shape or position with time. The snapshot time interval is limited only by a combination of shutter speed and/or a flash of illumination, so long as a sufficient number of photons interact with the imaging detector to give a good signal. This can presumably be on the order of billionths of a second (or less) if nanosecond (or shorter) laser pulses are employed.
[0032] In the preferred embodiment, this invention uses only one color light source and one color camera to determine the topography of surfaces in real time. This makes for a simple and inexpensive system. Unlike other systems, this invention can determine the x, y and z positions of every surface point imaged by a pixel of the camera with only one color image. An equivalent position of a point on the surface can be represented in spherical coordinates of radius from the origin and angles theta and phi.
[0033] Once these positions are known in the computer, they can be displayed as a contour map, shaded surface, wire mesh or interactively movable object, as might be done in any number of computer-aided design software packages.
[0034] All triangulation is accomplished through knowledge of the path each ray of light travels from an illumination source point to a point at the camera image. Color coding is used to help make each ray easily identifiable by the detector and the image-processing software in the computer. Among the many advantages of this invention are simplicity, reduced cost of hardware and rapid operation. "Real time" is meant to imply that surface data can be captured at video rates, or rates at which the human eye does not detect individual frames, in other words, seeing continuous motion. This is very important for applications including video conferencing and virtual presence, as well as surgery accomplished over the Internet using robotics and real-time visual feedback. This invention could actually operate much faster, limited only by the time it takes to capture and process the image. For some experiments, the data could be collected with film, CCDs or other methods, and the processing (often the longer task) could be done afterwards to study such things as cell division and the shape evolution of shocked surfaces.
[0035] The illumination system 48 in FIG. 3a projects a vertical array of planes of light, each plane differing in color and angle. This could be accomplished with a back-illuminated transparency (such as a 35 mm slide in a slide projector) or by generating a rainbow using a prism or grating to separate the colors (in angle) from a white light source. In either case, each system can maintain intensity over the surface being inspected but vary in color along the horizontal direction. The intersection of this projected beam and a flat white surface would simply be imaged by the color camera as an array of columns (each having a unique color), each color identified by its relative ratio of red, green and blue light. FIG. 3a illustrates one color 50 of the spectrum being projected from a transparency 51 on the left through a pinhole 52 along projection ray 53 to a region 54.
[0036] FIG. 3b illustrates an imaging system 56 comprising object space region 58 imaged through pinhole 60 until the light arrives on an imaging detector 62, such as a CCD. FIG. 3c shows the combination of the systems in FIG. 3a and FIG. 3b, with overlap of regions 54 and 58 of those figures, respectively. Shown is colored light 50 passing through projector pinhole 52 along projection ray 53 and onto the object surface point 64, where it reflects off the surface and follows imaging path 66 through camera pinhole 60 to a pixel 61 of CCD 62.
[0037] FIG. 3d shows a triangle comprised of segment "c" of the imaging path 66, segment "b" of the projection ray 53 and a line 68 connecting the pinholes, labeled "a". Opposite these sides are their respective angles A (at object point 64), B (at camera pinhole 60) and C (at projector pinhole 52). We can let "a" coincide with the y axis 70 in FIG. 3e. Given knowledge about "a" and the geometrical locations of the pinholes relative to the CCD 62 and transparency 51, we can determine angle B (from the relationship between the pixel location 61 and camera pinhole 60) and angle C from the color (determined by the camera), the angles of imaging path 66 and prior knowledge of how color is projected as a function of angle through projector pinhole 52. Angle A = 180° - B - C, because the sum of the angles in any triangle is always 180°. The Law of Sines tells us a/sin A = b/sin B = c/sin C. We can now solve for the two unknowns: b = sin B (a/sin A) and c = sin C (a/sin A).
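As a worked illustration of this triangle solution (an editor's Python sketch, not from the application; the baseline length and angle values in the example are arbitrary):

    import math

    def solve_triangle(a, B_deg, C_deg):
        """Solve the FIG. 3d triangle: baseline a between the pinholes,
        angle B at the camera pinhole, angle C at the projector pinhole.
        Returns sides b (projection ray) and c (imaging path)."""
        A_deg = 180.0 - B_deg - C_deg      # angles of a triangle sum to 180
        A, B, C = (math.radians(x) for x in (A_deg, B_deg, C_deg))
        k = a / math.sin(A)                # Law of Sines: a/sinA = b/sinB = c/sinC
        return k * math.sin(B), k * math.sin(C)

    # Example: assumed 100 mm baseline, 75 degrees at the camera pinhole,
    # 80 degrees at the projector pinhole.
    b, c = solve_triangle(a=100.0, B_deg=75.0, C_deg=80.0)
    print(round(b, 1), round(c, 1))        # distances along rays 53 and 66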
[0038] The relationship between the camera pinhole 60 and the pixel location on CCD 62 gives us the angles theta and phi, as shown in FIG. 3e. The radius is simply the length c. We now have all we need to identify the position of object point 64 in three-dimensional space. Theta is defined as the angle in the x-y plane, measured from the x axis 72 to the projection 74 onto the x-y plane of the imaging line 66 connecting the origin 76 to the object point 64. Phi is the angle measured from the z axis to that line segment "c". Conversion to x, y, z or other coordinates is trivial at this point.
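That conversion, written out in the same convention (a sketch under the FIG. 3e definitions: theta in the x-y plane from the x axis, phi down from the z axis, radius equal to the length of segment c):

    import math

    def spherical_to_cartesian(c, theta_deg, phi_deg):
        """Convert the (radius, theta, phi) of FIG. 3e to (x, y, z)."""
        theta, phi = math.radians(theta_deg), math.radians(phi_deg)
        return (c * math.sin(phi) * math.cos(theta),
                c * math.sin(phi) * math.sin(theta),
                c * math.cos(phi))

    print(spherical_to_cartesian(233.0, 30.0, 60.0))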
[0039] FIG. 4 is a top view of the preferred embodiment 80. The illumination system 82 projects a vertical (out of the paper) array of planes of light 84, each plane differing in color and angle. One color 86 reflects off surface 88 and is imaged on CCD 90. The camera 92 transfers the color image to a computer 94, where it is processed into three-dimensional data and displayed on monitor 96.
[0040] The color camera 92 identifies these colored planes by their relative ratio of red, green and blue light. An advantage of using color to uniquely identify planes of light is that it is independent of intensity and therefore has no requirement for intensity calibration between the projector and camera. In standard cameras and frame-capture electronics, each pixel is assigned a 24-bit number to determine its color and intensity. These 24 bits represent 8 bits for red, 8 for green and 8 for blue. Eight-bit numbers in integer format range from 0 to 255 (the equivalent of 256 equally spaced values). For most purposes we can define pixel values as a triplet of the form (Red, Green, Blue), where (255, 0, 0) is red, (0, 255, 0) is green and (0, 0, 255) is blue.
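For concreteness, a 24-bit pixel packs into a single integer as follows (an illustrative sketch; the red-in-high-byte ordering is the editor's assumption, since the application does not specify a packing):

    def pack_rgb(r, g, b):
        """Pack 8-bit (R, G, B) values into one 24-bit pixel number."""
        return (r << 16) | (g << 8) | b    # byte order is an assumption

    def unpack_rgb(pixel):
        """Recover the (R, G, B) triplet from a 24-bit pixel value."""
        return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

    red = pack_rgb(255, 0, 0)
    print(hex(red), unpack_rgb(red))       # 0xff0000 (255, 0, 0)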
[0041] Other pixel representations exist, such as (Cyan, Yellow, Magenta, blacK) and (Intensity, Hue, Saturation), but the (R, G, B) format will suffice for the moment. In particular, it will be important to define color as independent from intensity. For example, since equal amounts of red and blue make a purple color, we define (200, 0, 200) and (5, 0, 5) to be the same color but of different intensity. The higher numbers (or counts, in CCD jargon) indicate that more intense light (a higher number of photons) was incident on the pixels, because it generated more electrons in the CCD pixel wells, which, when shifted out of the image detector, were digitized to yield a bigger number. Thus, one could say (5, 0, 5) is a darker version of the light purple color (200, 0, 200).
[0042] Unless we have well-known surfaces under inspection, like parts coming off an assembly line, it is probable that reflectivity will vary across the surface being analyzed. In fact, not only can there be absorption in the surface and along the optical path, but the surface angles and the surface's specular quality will alter how much light gets back into the imaging system (because scattering and vignetting occur). Additionally, reflected colors can be changed by colored surfaces (reflecting more of one color and absorbing more of another) and by secondary reflections (such as red reflections off the side of a nose making a blue-illuminated cheek appear more purple than it would be without the secondary reflection). To improve results, we can take a white-light picture and divide the color efficiencies into the image taken with the special color illumination system. This is important for objects with varying reflectivity along the surface (like red lips and green eyes on a face). It has the added advantage that one may map the white-illuminated color image back onto the 3D surface, giving you 4D information (original surface color being the fourth dimension).
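A hedged sketch of that white-light division (the array names and the epsilon guard are the editor's, not the patent's; it follows the arithmetic of FIGS. 9a-9c):

    import numpy as np

    def divide_out_reflectance(pattern_img, white_img, eps=1e-6):
        """Estimate the projected color before the surface altered it.
        pattern_img: H x W x 3 image under the colored projection.
        white_img:   H x W x 3 image of the same scene under white light,
                     which measures each pixel's R/G/B efficiency."""
        reflectance = np.clip(white_img / 255.0, eps, 1.0)
        return np.clip(pattern_img / reflectance, 0.0, 255.0)

    # One pixel whose surface reflects 1/2 the red, all the green and 1/4
    # the blue: the observed (100, 0, 50) recovers the incident (200, 0, 200).
    white = np.array([[[127.5, 255.0, 63.75]]])
    seen = np.array([[[100.0, 0.0, 50.0]]])
    print(divide_out_reflectance(seen, white))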
`0.043
`Strictly speaking, color does not qualify as a fourth
`dimension in the same way Space does. Dimensions in Space
`(x, y, and Z) each extend from minus infinity to plus infinity.
`Color can be characterised in many ways but for our
`
`purposes we will either use red, green and blue (R,G,B) or
`cyan, yellow, magenta and black (C, Y, M, K). Each of those
`Subdimensions varies from Zero to a maximum, which our
`Sensors (camera, film, Scanner, densitometer, profilometer,
`etc.) can detect.
[0044] If the projector is a computer-controllable device, such as a projection video display or spatial light modulator, one can also use the white-light image to control the projector intensities and color to produce a projected illumination pattern that will optimize signals at the detector. This can be accomplished by computer-processing the white-light-illuminated image. Using that information, one can brighten the colored projection image in specific locations where it was dark on the white-light image due to absorptive pigments, scattering surfaces or shiny surfaces that slope away from near-normal incidence (and thus reflect little light into the camera optics, a vignetting effect).
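One possible way (assumed by the editor; the application does not specify a method) to derive such a compensation pattern from the white-light image:

    import numpy as np

    def projector_gain_map(white_img, floor=16.0):
        """Per-pixel brightening factors for a controllable projector:
        regions that imaged dark under white light (absorptive pigment,
        scattering, or shiny surfaces sloping away from the camera) get
        proportionally more projected light. 'floor' caps the gain so
        nearly black pixels do not demand unbounded brightness."""
        luminance = white_img.mean(axis=2)
        return luminance.max() / np.maximum(luminance, floor)

    white = np.array([[[200.0] * 3, [40.0] * 3]])
    print(projector_gain_map(white))       # the dark pixel gets 5x the light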
[0045] Up to this point in this application, the imaging optics of the projector and camera have been described as simply comprising a pinhole. In reality, lenses are more likely to be used, given that they transmit more light and image more effectively.
[0046] The pattern of projector light need not be the rainbow shown in FIG. 7. And projected lines, in the context of this application, need not be straight lines; they are permitted to be curved lines so long as they do not cross any row more than once. One could also have the planes of light be projected horizontally, or along another direction instead of vertically. One need only displace the optical axis of the imaging system along a path other than that same direction. Displacement of the camera along the same direction as the angle of the light planes would eliminate the perceived shift in source rays, because these rays would be stretched and compressed along the same direction as rays of the same color, and thus the computer would not be able to tell where the ray movement occurred. For curved planes and lines, the displacement must be along any path other than the direction of any tangent of the curved lines.
[0047] To obtain sub-pixel resolution as well as uniquely coded light planes, one should consider using a continuous range of changing color projected onto the surface from left to right. By knowing that the path in color space is smooth and continuous, one can perform interpolation and other numerical methods. To visualize color space, let the three colors (Red, Green and Blue) represent independent axes (which are perpendicular to each other, like the edges of a cube). Let one corner of the cube be the origin of the color space; as you travel along any of the three adjacent edges (the three color axes), the values go from 0 to 255. The farthest corner from the origin has the value (255, 255, 255) and represents white. Traveling back to the origin along the cube diagonal, each component decreases uniformly through shades of gray, eventually reaching black at the origin.
[0048] In order to compare colors, we normalize the color vector in color space by dividing each component by the length of the vector. Now imagine a sphere of unit radius (radius equal to one). Since the normalized color vectors all have a length of one, each vector extends from the origin to the surface of the unit sphere. In other words, convert from cartesian to polar coordinates and concern yourself only with the angles. What was a three-element color (R, G, B) is now a two-angle color (alpha and gamma). The color space can now be visualized as an eighth of a sphere, because the initial components (R, G, B) were always positive in value. Just place the sphere origin at the origin previously defined for our color-space cube and you will see the intersection is a one-eighth section of the sphere. FIG. 5 shows the three flat surfaces 98, 100 and 102 on the inside of the sphere section.
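In code, the reduction from a three-component color to two angles might look like this (a sketch; the particular alpha and gamma conventions are the editor's assumption, since the application does not fix them):

    import math

    def color_angles(r, g, b):
        """Reduce a positive (R, G, B) vector to the two angles that place
        its normalized version on the one-eighth unit sphere: alpha is the
        azimuth in the R-G plane, gamma the angle down from the B axis."""
        length = math.sqrt(r * r + g * g + b * b)
        return (math.degrees(math.atan2(g, r)),       # alpha
                math.degrees(math.acos(b / length)))  # gamma

    print(color_angles(200, 0, 200))   # (0.0, 45.0)
    print(color_angles(5, 0, 5))       # (0.0, 45.0): same color, dimmer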
[0049] To select an improved rainbow for uniqueness and sub-pixel accuracy, one merely need travel along the color space (on the curved surface of the one-eighth section of the color sphere) in the following way. There should be one beginning, one end, no crossings, and sufficient space between paths such that experimental errors do not cause an interpretation problem. FIG. 6 shows such a path 120, starting at red and spiraling in to end up at white. Other improvements might include minimizing curvature in this path, especially where there is a need for fewer errors, perhaps in the middle of your image. One can also improve data analysis by maximizing the rate of color change per unit angle from the projector in places of importance, such as the middle section.
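One way to generate such a path numerically (purely illustrative; the spiral parameterization and the octant clipping are the editor's assumptions, not the patent's construction):

    import numpy as np

    def spiral_color_path(n=256, turns=2.0):
        """Sample a smooth one-beginning, one-end, non-crossing path on
        the unit color sphere that starts at red and spirals in to white,
        per FIG. 6. Points that stray outside the positive octant are
        clipped back and renormalized; a practical path would be tuned to
        stay clear of the octant boundary."""
        w = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)    # white direction
        u = np.array([2.0, -1.0, -1.0]) / np.sqrt(6.0)  # toward red, normal to w
        v = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)   # completes the basis
        t = np.linspace(0.0, 1.0, n)
        rho = np.arccos(1.0 / np.sqrt(3.0)) * (1.0 - t) # angular radius from white
        phi = 2.0 * np.pi * turns * t                   # winding angle about white
        pts = (np.cos(rho)[:, None] * w
               + np.sin(rho)[:, None] * (np.cos(phi)[:, None] * u
                                         + np.sin(phi)[:, None] * v))
        pts = np.maximum(pts, 0.0)
        return pts / np.linalg.norm(pts, axis=1, keepdims=True)

    path = spiral_color_path()
    print(path[0].round(3), path[-1].round(3))          # ~red ... ~white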
[0050] A prototype system has been built and tested. Positive results exist confirming that the technique works as predicted. A rainbow projector was built using a slide projector and a prism. FIG. 7 shows the rainbow 122 incident on a flat piece of paper; you see columns of common color ranging from blue on the left to red on the right. The light comes in from the left, which means the camera is horizontally displaced to the right of the projector. In FIG. 8, notice the ball 124 held by the kitten. The yellow column down the middle is bowed to the left (concave right) to make a "(" shape of yellow on the ball. The horizontal row of pixels (imaged in the middle of the ball) sees that the colors red through yellow are stretched out while the colors yellow through blue are compressed. Software that processes these images will yield surface-height information. The amount of horizontal movement of a color is indicative of the surface height. In this configuration a greate