period is denoted n. The registration of the pattern position for each individual image, combined with the independently known pattern variation over all sensing elements (i.e. obtaining knowledge of the pattern configuration) and the recorded images, allows for an efficient extraction of the correlation measure in each individual sensing element in the camera. For a light sensing element with label j, the n recorded light signals of that element are denoted $I_{1,j}, \ldots, I_{n,j}$. The correlation measure of that element, $A_j$, may be expressed as

$A_j = \sum_{i=1}^{n} f_{i,j} I_{i,j}$

Here the reference signal or weight function f is obtained from the knowledge of the pattern configuration. f has two indices i,j. The variation of f with the first index is derived from the knowledge of the pattern position during each image recording. The variation of f with the second index is derived from the knowledge of the pattern geometry, which may be determined prior to the 3D scanning.
Preferably, but not necessarily, the reference signal f averages to zero over time, i.e. for all j we have

$\sum_{i=1}^{n} f_{i,j} = 0$
to suppress the DC part of the light variation or correlation measure. The focus position corresponding to the pattern being in focus on the object for a single sensor element in the camera will be given by an extremum value of the correlation measure of that sensor element when the focus position is varied over a range of values. The focus position may be varied in equal steps from one end of the scanning region to the other.
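As an illustration (a minimal sketch, not part of the original disclosure), the per-element correlation measure and the focus-by-extremum search can be written as follows, assuming the n recorded frames and the reference signal f are available as arrays; all names are illustrative.

    import numpy as np

    def correlation_measure(frames, f):
        """Temporal correlation measure A_j = sum_i f_{i,j} * I_{i,j}.

        frames: (n, H, W) array, one recorded image per pattern position.
        f:      (n, H, W) reference signal, ideally zero-mean along axis 0.
        Returns an (H, W) array holding A_j for every sensing element."""
        return np.sum(f * frames, axis=0)

    def focus_by_extremum(frames_per_focus, f, focus_positions):
        """Pick, per sensing element, the focus position where |A_j| is extremal.

        frames_per_focus: (F, n, H, W) array, n frames at each of F focus steps.
        focus_positions:  (F,) array of the corresponding focus plane positions."""
        A = np.stack([correlation_measure(s, f) for s in frames_per_focus])
        return focus_positions[np.argmax(np.abs(A), axis=0)]

With a zero-mean reference the measure is signed, so the extremum is located on its magnitude.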
To obtain a sharp image of an object by means of a camera, the object must be in focus, and the optics of the camera and the object must be in a fixed spatial relationship during the exposure time of the image sensor of the camera. Applied to the present invention this should imply that the pattern and the focus should be varied in discrete steps, to be able to fix the pattern and the focus for each image sampled in the camera, i.e. fixed during the exposure time of the sensor array. However, to increase the sensitivity of the image data the exposure time of the sensor array should be as high as the sensor frame rate permits. Thus, in the preferred embodiment of the invention images are recorded (sampled) in the camera while the pattern is continuously varying (e.g. by continuously rotating a pattern wheel) and the focus plane is continuously moved. This implies that the individual images will be slightly blurred, since they are the result of a time-integration of the image while the pattern is varying and the focus plane is moved. One could expect this to lead to deterioration of the data quality, but in practice the advantage of concurrent variation of the pattern and the focus plane outweighs the drawback.
In another embodiment of the invention images are recorded (sampled) in the camera while the pattern is fixed and the focus plane is continuously moved, i.e. there is no movement of the pattern. This could be the case when the light source is a segmented light source, such as a segmented LED that flashes in an appropriate fashion. In this embodiment the knowledge of the pattern is obtained by combining prior knowledge of the geometry of the individual segments on the segmented LED, which gives rise to a variation across light sensing elements, with the current applied to different segments of the LED at each recording.
In yet another embodiment of the invention images are recorded (sampled) in the camera while the pattern is continuously varying and the focus plane is fixed.

In yet another embodiment of the invention images are recorded (sampled) in the camera while the pattern and the focus plane are fixed.
The temporal correlation principle may be applied in general within image analysis. Thus, a further embodiment of the invention relates to a method for calculating the amplitude of a light intensity oscillation in at least one (photoelectric) light sensitive element, said light intensity oscillation generated by a periodically varying illumination pattern and said amplitude calculated in at least one pattern oscillation period, said method comprising the steps of:

- providing the following a predetermined number of sampling times during a pattern oscillation period:
  o sampling the light sensitive element thereby providing the signal of said light sensitive element, and
  o providing an angular position and/or a phase of the periodically varying illumination pattern for said sampling, and
- calculating said amplitude(s) by integrating the products of a predetermined periodic function and the signal of the corresponding light sensitive element over said predetermined number of sampling times, wherein said periodic function is a function of the angular position and/or the phase of the periodically varying illumination pattern.
This may also be expressed as

$A = \sum_i f(p_i) I_i$

where A is the calculated amplitude or correlation measure, i is the index for each sampling, f is the periodic function, $p_i$ is the angular position / phase of the illumination pattern for sampling i, and $I_i$ is the signal of the light sensitive element for sampling i.

Preferably the periodic function averages to zero over a pattern oscillation period, i.e.

$\sum_i f(p_i) = 0$

To generalize the principle to a plurality of light sensitive elements, for example in a sensor array, the angular position / phase of the illumination pattern for a specific light sensitive element may consist of an angular position / phase associated with the illumination pattern plus a constant offset associated with the specific light sensitive element. Thereby the correlation measure or amplitude of the light oscillation in light sensitive element j may be expressed as

$A_j = \sum_i f(\theta_j + p_i) I_{i,j}$

where $\theta_j$ is the constant offset for light sensitive element j.
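A hedged sketch of this generalized computation (illustrative names; the choice f = cos is just one example of a periodic function that averages to zero over a full period when the samples are uniformly spaced, as preferred above):

    import numpy as np

    def amplitude(signals, pattern_phases, theta, f=np.cos):
        """A_j = sum_i f(theta_j + p_i) * I_{i,j} for every element j.

        signals:        (n, J) array, sample i of light sensitive element j.
        pattern_phases: (n,) array of recorded pattern phases p_i.
        theta:          (J,) array of constant per-element offsets theta_j.
        f:              periodic reference function."""
        ref = f(pattern_phases[:, None] + theta[None, :])  # (n, J) table of f(theta_j + p_i)
        return np.sum(ref * signals, axis=0)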
A periodically varying illumination pattern may be generated by a rotating wheel with an opaque mask comprising a plurality of radial spokes arranged in a symmetrical order. The angular position of the wheel will thereby correspond to the angular position of the pattern, and this angular position may be obtained by an encoder mounted on the rim of the wheel. The pattern variation across different sensor elements for different positions of the pattern may be determined prior to the 3D scanning in a calibration routine. A combination of knowledge of this pattern variation and the pattern position constitutes knowledge of the pattern configuration. A period of this pattern may for example be the time between two spokes, and the amplitude of a single or a plurality of light sensitive elements over this period may be calculated by sampling e.g. four times in this period.
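For the four-samples-per-period case just mentioned, one common way to combine the samples (a sketch; the text does not prescribe a particular combination) is the quadrature formula, sampling at pattern phases 0, π/2, π and 3π/2:

    import numpy as np

    def amplitude_four_samples(I1, I2, I3, I4):
        # Correlation with cosine and sine references (both average to zero
        # over the period), combined into a phase-independent amplitude.
        c = I1 - I3   # sum_i cos(p_i) * I_i  for p_i = 0, pi/2, pi, 3*pi/2
        s = I2 - I4   # sum_i sin(p_i) * I_i
        return np.hypot(c, s)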
A periodically varying illumination pattern may be generated by a Ronchi ruling moving orthogonally to its lines, with the position measured by an encoder. This position corresponds to the angular position of the generated pattern. Alternatively, a checkerboard pattern could be used.

A periodically varying illumination pattern may be generated by a one-dimensional array of LEDs that can be controlled line-wise.

A varying illumination pattern may be generated by an LCD or DLP based projector.

Optical correlation
The abovementioned correlation principle (temporal correlation) requires some sort of registering of the time varying pattern, e.g. knowledge of the pattern configuration at each light level recording in the camera. However, a correlation principle without this registering may be provided in another embodiment of the invention. This principle is termed "optical correlation".
In this embodiment of the invention an image of the pattern itself and an image of at least a part of the object being scanned, with the pattern projected onto it, are combined on the camera. I.e. the image on the camera is a superposition of the pattern itself and the object being probed with the pattern projected onto it. A different way of expressing this is that the image on the camera is substantially a multiplication of an image of the pattern projected onto the object with the pattern itself.
This may be provided in the following way. In a further embodiment of the invention the pattern generation means comprises a transparent pattern element with an opaque mask. The probe light is transmitted through the pattern element, preferably transmitted transversely through the pattern element. The light returned from the object being scanned is retransmitted the opposite way through said pattern element and imaged onto the camera. This is preferably done in such a way that the image of the pattern illuminating the object and the image of the pattern itself coincide when both are imaged onto the camera. One particular example of a pattern is a rotating wheel with an opaque mask comprising a plurality of radial spokes arranged in a symmetrical order such that the pattern possesses rotational periodicity. In this embodiment there is a well-defined pattern oscillation period if the pattern is substantially rotated at a constant speed. We define the oscillation period as $2\pi/\omega$.
We note that in the described embodiment of the invention the illumination pattern is a pattern of light and darkness. A light sensing element in the camera with label j and a signal proportional to the integrated light intensity during the camera integration time $\delta t$, $I_j$, is given by

$I_j = K \int_t^{t+\delta t} T_j(t') S_j(t')\, dt'$

Here K is the proportionality constant of the sensor signal, t is the start of the camera integration time, $T_j$ is the time-varying transmission of the part of the rotating pattern imaged onto the jth light sensing element, and $S_j$ is the time-varying intensity of light returned from the scanned object and imaged onto the jth light sensing element. In the described embodiment $T_j$ is substantially a step function defined by $T_j(t) = 0$ for $\sin(\omega t + \phi_j) > 0$ and $T_j(t) = 1$ elsewhere, where $\phi_j$ is a phase dependent on the position of the jth imaging sensor.
The signal on the light sensing element is a correlation measure of the pattern and the light returned from the object being scanned. The time-varying transmission takes the role of the reference signal and the time-varying light intensity of light returned from the scanned object takes the role of the input signal. The advantage of this embodiment of the invention is that a normal CCD or CMOS camera with intensity sensing elements may be used to record the correlation measure directly, since this appears as an intensity on the sensing elements. Another way of expressing this is that the computation of the correlation measure takes place in the analog, optical domain instead of in an electronic domain such as an FPGA or a PC.
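The optical correlation can be mimicked numerically. The sketch below is purely illustrative: it assumes the step-function transmission $T_j$ defined above and a modeled returned intensity $S_j$ (sinusoidal when the pattern is in focus, constant when defocused), and integrates their product over M full periods.

    import numpy as np

    omega, phi_j = 2 * np.pi * 100.0, 0.3      # example pattern frequency and pixel phase
    M, K = 10, 1.0                              # periods integrated; sensor constant
    t = np.linspace(0.0, 2 * np.pi * M / omega, 20000)
    dt = t[1] - t[0]

    T = (np.sin(omega * t + phi_j) <= 0).astype(float)   # T_j(t): 0 where sin > 0, else 1
    S_focus = 1.0 + 0.8 * np.sin(omega * t + phi_j)      # in focus: modulated intensity
    S_blur = np.ones_like(t)                             # defocused: pattern washed out

    I_focus = K * np.sum(T * S_focus) * dt   # I_j = K * integral of T_j * S_j dt
    I_blur = K * np.sum(T * S_blur) * dt     # pure DC level

The in-focus value deviates from the DC level recorded for a defocused object; the size of that deviation is what peaks as the focus plane sweeps through the object surface.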
The focus position corresponding to the pattern being in focus on the object being scanned for a single sensor element in the camera will then be given by the maximum value of the correlation measure recorded with that sensor element when the focus position is varied over a range of values. The focus position may be varied in equal steps from one end of the scanning region to the other. One embodiment of the invention comprises means for recording and/or integrating and/or monitoring and/or storing each of a plurality of the sensor elements over a range of focus plane positions.
Preferably, the global maximum should be found. However, artifacts such as dirt on the optical system can result in false global maxima. Therefore, it can be advisable to look for local maxima in some cases.

Since the reference signal does not average to zero, the correlation measure has a DC component. Since the DC part is not removed, there may exist a trend in the DC signal over all focus element positions, and this trend can be dominating numerically. In this situation, the focus position may still be found by analysis of the correlation measure and/or one or more of its derivatives.
In a further embodiment of the invention the camera integration time is an integer number M of pattern oscillation periods, i.e. $\delta t = 2\pi M/\omega$. One advantage of this embodiment is that the magnitude of the correlation measure can be measured with a better signal-to-noise ratio in the presence of noise than if the camera integration time is not an integer number of pattern oscillation periods.
In another further embodiment of the invention the camera integration time is much longer than the pattern oscillation period, i.e. $\delta t \gg 2\pi/\omega$. "Many times the pattern oscillation time" would here mean e.g. a camera integration time at least 10 times the oscillation time, or more preferably such as at least 100 or 1000 times the oscillation time. One advantage of this embodiment is that there is no need for synchronization of camera integration time and pattern oscillation time, since for very long camera integration times compared to the pattern oscillation time the recorded correlation measure is substantially independent of accurate synchronization.
Equivalent to the temporal correlation principle, the optical correlation principle may be applied in general within image analysis. Thus, a further embodiment of the invention relates to a method for calculating the amplitude of a light intensity oscillation in at least one (photoelectric) light sensitive element, said light intensity oscillation generated by a superposition of a varying illumination pattern with itself, and said amplitude calculated by time integrating the signal from said at least one light sensitive element over a plurality of pattern oscillation periods.
Spatial correlation
The above mentioned correlation principles (temporal correlation and optical correlation) require the pattern to be varying in time. If the optical system and camera provide a lateral resolution which is at least two times what is needed for the scan of the object, then it is possible to scan with a static pattern, i.e. a pattern which is not changing in time. This principle is termed "spatial correlation". The correlation measure is thus at least computed with sensor signals recorded at different sensor sites.
The lateral resolution of an optical system is to be understood as the ability of optical elements in the optical system, e.g. a lens system, to image spatial frequencies on the object being scanned up to a certain point. Modulation transfer curves of the optical system are typically used to describe imaging of spatial frequencies in an optical system. One could e.g. define the resolution of the optical system as the spatial frequency on the object being scanned where the modulation transfer curve has decreased to e.g. 50%. The resolution of the camera is a combined effect of the spacing of the individual camera sensor elements and the resolution of the optical system.
In the spatial correlation the correlation measure refers to a correlation between input signal and reference signal occurring in space rather than in time. Thus, in one embodiment of the invention the resolution of the measured 3D geometry is equal to the resolution of the camera. However, for the spatial correlation the resolution of the measured 3D geometry is lower than the resolution of the camera, such as at least 2 times lower, such as at least 3 times lower, such as at least 4 times lower, such as at least 5 times lower, such as at least 10 times lower. The sensor element array is preferably divided into groups of sensor elements, preferably rectangular groups, such as square groups of sensor elements, preferably adjacent sensor elements. The resolution of the scan, i.e. of the measured 3D geometry, will then be determined by the size of these groups of sensor elements. The oscillation in the light signal is provided within these groups of sensor elements, and the amplitude of the light oscillation may then be obtained by analyzing the groups of sensor elements. The division of the sensor element array into groups is preferably provided in the data processing stage, i.e. the division is not a physical division that would possibly require a specially adapted sensor array. Thus, the division into groups is "virtual" even though the single pixel in a group is an actual physical pixel.
In one embodiment of the invention the pattern possesses translational periodicity along at least one spatial coordinate. In a further embodiment of the invention the spatially periodic pattern is aligned with the rows and/or the columns of the array of sensor elements. For example, in the case of a static line pattern the rows or columns of the pixels in the camera may be parallel with the lines of the pattern. Or, in the case of a static checkerboard pattern, the rows and columns of the checkerboard may be aligned with the rows and columns, respectively, of the pixels in the camera. By aligning is meant that the image of the pattern onto the camera is aligned with the "pattern" of the sensor elements in the sensor array of the camera. Thus, a certain physical location and orientation of the pattern generation means and the camera requires a certain configuration of the optical components of the scanner for the pattern to be aligned with the sensor array of the camera.
In a further embodiment of the invention at least one spatial period of the pattern corresponds to a group of sensor elements. In a further embodiment of the invention all groups of sensor elements contain the same number of elements and have the same shape, e.g. when the period of a checkerboard pattern corresponds to a square group of e.g. 2x2, 3x3, 4x4, 5x5, 6x6, 7x7, 8x8, 9x9, 10x10 or more pixels on the camera.

In yet another embodiment one or more edges of the pattern are aligned with and/or coincide with one or more edges of the array of sensor elements. For example, a checkerboard pattern may be aligned with the camera pixels in such a way that the edges of the image of the checkerboard pattern onto the camera coincide with the edges of the pixels.
In spatial correlation, independent knowledge of the pattern configuration allows for calculating the correlation measure at each group of light sensing elements. For a spatially periodic illumination this correlation measure can be computed without having to estimate the cosine and sinusoidal parts of the light intensity oscillation. The knowledge of the pattern configuration may be obtained prior to the 3D scanning.
In a further embodiment of the invention the correlation measure $A_j$ within a group of sensor elements with label j is determined by means of the following formula:

$A_j = \sum_{i=1}^{n} f_{i,j} I_{i,j}$

where n is the number of sensor elements in a group of sensors, $f_j = (f_{1,j}, \ldots, f_{n,j})$ is the reference signal vector obtained from knowledge of the pattern configuration, and $I_j = (I_{1,j}, \ldots, I_{n,j})$ is the input signal vector. For the case of sensors grouped in square regions with N sensors as square length, $n = N^2$.
Preferably, but not necessarily, the elements of the reference signal vector average to zero over space, i.e. for all j we have

$\sum_{i=1}^{n} f_{i,j} = 0$

to suppress the DC part of the correlation measure. The focus position corresponding to the pattern being in focus on the object for a single group of sensor elements in the camera will be given by an extremum value of the correlation measure of that sensor element group when the focus position is varied over a range of values. The focus position may be varied in equal steps from one end of the scanning region to the other.
In the case of a static checkerboard pattern with edges aligned with the camera pixels and with the pixel groups having an even number of pixels such as 2x2, 4x4, 6x6, 8x8, 10x10, a natural choice of the reference vector f would be for its elements to assume the value 1 for the pixels that image a bright square of the checkerboard and -1 for the pixels that image a dark square of the checkerboard.
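A sketch of this computation (illustrative names; assumes the checkerboard period equals an N x N pixel group, N even, so each bright or dark square covers N/2 pixels):

    import numpy as np

    def checkerboard_reference(N):
        """+/-1 reference for an N x N pixel group: +1 on pixels imaging a bright
        square, -1 on pixels imaging a dark square. The elements sum to zero,
        which suppresses the DC part of the correlation measure."""
        half = N // 2
        f = np.ones((N, N))
        f[:half, half:] = -1
        f[half:, :half] = -1
        return f

    def group_correlation(image, N):
        """A_j = sum_i f_{i,j} I_{i,j} for every N x N group of pixels.

        image: (H, W) array with H and W divisible by N.
        Returns an (H/N, W/N) array of correlation measures."""
        f = checkerboard_reference(N)
        H, W = image.shape
        blocks = image.reshape(H // N, N, W // N, N)   # (group row, r, group col, c)
        return np.einsum('rc,arbc->ab', f, blocks)

An in-focus, aligned checkerboard image gives a large |A_j|; a defocused group, where the pattern is washed out, gives A_j near zero.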
Equivalent to the other correlation principles, the spatial correlation principle may be applied in general within image analysis, in particular in a situation where the resolution of the camera is higher than what is necessary in the final image. Thus, a further embodiment of the invention relates to a method for calculating the amplitude(s) of a light intensity oscillation in at least one group of light sensitive elements, said light intensity oscillation generated by a spatially varying static illumination pattern, said method comprising the steps of:

- providing the signal from each light sensitive element in said group of light sensitive elements, and
- calculating said amplitude(s) by integrating the products of a predetermined function and the signal from the corresponding light sensitive element over said group of light sensitive elements, wherein said predetermined function is a function reflecting the illumination pattern.
To generalize the principle to a plurality of light sensitive elements, for example in a sensor array, the correlation measure or amplitude of the light oscillation in group j may be expressed as

$A_j = \sum_{i=1}^{n} f(i,j) I_{i,j}$

where n is the number of sensor elements in group j, $I_{i,j}$ is the signal from the ith sensor element in group j, and f(i,j) is a predetermined function reflecting the pattern.
Compared to temporal correlation, spatial correlation has the advantage that no moving pattern is required. This implies that knowledge of the pattern configuration may be obtained prior to the 3D scanning. Conversely, the advantage of temporal correlation is its higher resolution, as no pixel grouping is required.

All correlation principles, when embodied with an image sensor that allows very high frame rates, enable 3D scanning of objects in motion with little motion blur. It also becomes possible to trace moving objects over time ("4D scanning"), with useful applications for example in machine vision and dynamic deformation measurement. Very high frame rates in this context are at least 500, but preferably at least 2000 frames per second.
Transforming correlation measure extrema to 3D world coordinates

Relating identified focus position(s) for camera sensors or camera sensor groups to 3D world coordinates may be done by ray tracing through the optical system. Before such ray tracing can be performed, the parameters of the optical system need to be known. One embodiment of the invention comprises a calibration step to obtain such knowledge. A further embodiment of the invention comprises a calibration step in which images of an object of known geometry are recorded for a plurality of focus positions. Such an object may be a planar checkerboard pattern. Then, the scanner can be calibrated by generating simulated ray traced images of the calibration object and then adjusting optical system parameters so as to minimize the difference between the simulated and recorded images.
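The patent does not prescribe a particular optimization; the following least-squares sketch merely illustrates the idea, assuming a hypothetical user-supplied ray-traced image simulator render(params, focus_position) and a set of recorded calibration images:

    import numpy as np
    from scipy.optimize import least_squares

    def calibration_residuals(params, focus_positions, recorded, render):
        """Pixel differences between simulated and recorded calibration images,
        stacked over all focus positions. `render` is a placeholder for the
        ray-traced image simulator."""
        res = [render(params, fp) - img for fp, img in zip(focus_positions, recorded)]
        return np.concatenate([r.ravel() for r in res])

    # fit = least_squares(calibration_residuals, x0=initial_params,
    #                     args=(focus_positions, recorded_images, render))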
In a further embodiment of the invention the calibration step requires recording of images for a plurality of focus positions for several different calibration objects and/or several different orientations and/or positions of one calibration object.

With knowledge of the parameters of the optical system, one can employ a backward ray tracing technique to estimate the 2D -> 3D mapping. This requires that the scanner's optical system be known, preferably through calibration. The following steps can be performed:
1. From each pixel of the image (at the image sensor), trace a certain number of rays, starting from the image sensor and through the optical system (backward ray tracing).
2. From the rays that exit the optical system, calculate the focus point, the point where all these rays substantially intersect. This point represents the 3D coordinate of where a 2D pixel will be in focus, i.e., yield the global maximum of light oscillation amplitude.
3. Generate a look-up table for all the pixels with their corresponding 3D coordinates.

The above steps are repeated for a number of different focus lens positions covering the scanner's operation range.
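Step 2 amounts to finding the point with minimum summed squared distance to the traced rays. A self-contained sketch (illustrative; trace_rays in the comment is a hypothetical placeholder for the calibrated optics model):

    import numpy as np

    def nearest_point_to_rays(origins, directions):
        """Least-squares point closest to a bundle of rays, i.e. the point where
        the backward-traced rays 'substantially intersect'.

        origins: (R, 3) ray starting points; directions: (R, 3) unit vectors."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal space
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Hypothetical workflow for step 3: for each pixel and focus lens position,
    # trace rays through the calibrated optics and tabulate the convergence point:
    # lut[focus_index, py, px] = nearest_point_to_rays(*trace_rays(px, py, focus))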
Specular reflections

High spatial contrast of the in-focus pattern image on the object is often necessary to obtain a good signal to noise ratio of the correlation measure on the camera. This in turn may be necessary to obtain a good estimation of the focus position corresponding to an extremum in the correlation measure. A signal to noise ratio sufficient for successful scanning is often easily achieved for objects with a diffuse surface and negligible light penetration. For some objects, however, it is difficult to achieve high spatial contrast.
A difficult kind of object, for instance, is an object displaying multiple scattering of the incident light, with a light diffusion length large compared to the smallest feature size of the spatial pattern imaged onto the object. A human tooth is an example of such an object. The human ear and ear canal are other examples. In case of intraoral scanning, the scanning should preferably be provided without spraying and/or drying the teeth to reduce the specular reflections and light penetration. Improved spatial contrast can be achieved by preferential imaging of the specular surface reflection from the object on the camera. Thus, one embodiment of the invention comprises means for preferential / selective imaging of specularly reflected light and/or diffusely reflected light. This may be provided if the scanner further comprises means for polarizing the probe light, for example by means of at least one polarizing beam splitter. A polarizing beam splitter may for instance be provided for forming an image of the object in the camera. This may be utilized to extinguish specular reflections, because if the incident light is linearly polarized, a specular reflection from the object has the property that it preserves its polarization state.
The scanner according to the invention may further comprise means for changing the polarization state of the probe light and/or the light reflected from the object. This can be provided by means of a retardation plate, preferably located in the optical path. In one embodiment of the invention the retardation plate is a quarter wave retardation plate. A linearly polarized light wave is transformed into a circularly polarized light wave upon passage through a quarter wave plate whose fast axis is oriented at 45 degrees to the linear polarization direction. This may be utilized to enhance specular reflections, because a specular reflection from the object has the property that it flips the helicity of a circularly polarized light wave, whereas light that is reflected by one or more scattering events becomes depolarized.
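A short Jones-calculus check of the quarter-wave statement (illustrative only; standard optics, not taken from the patent text):

    import numpy as np

    # Quarter-wave plate with its fast axis at 45 degrees, in the Jones formalism.
    QWP_45 = 0.5 * np.array([[1 + 1j, 1 - 1j],
                             [1 - 1j, 1 + 1j]])

    E_in = np.array([1.0, 0.0])   # linearly polarized input
    E_out = QWP_45 @ E_in         # proportional to [1, -1j]: circular polarization

    # Circular check: equal field magnitudes, 90 degree relative phase.
    print(np.abs(E_out), np.angle(E_out[1] / E_out[0]))   # [0.707 0.707] -pi/2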
The field of view (scanning length)

In one embodiment of the invention the probe light is transmitted towards the object in a direction substantially parallel with the optical axis. However, for the scan head to be entered into a small space such as the oral cavity of a patient, it is necessary that the tip of the scan head is sufficiently small. At the same time the light out of the scan head needs to leave the scan head in a direction different from the optical axis. Thus, a further embodiment of the invention comprises means for directing the probe light and/or imaging an object in a direction different from the optical axis. This may be provided by means of at least one folding element, preferably located along the optical axis, for directing the probe light and/or imaging an object in a direction different from the optical axis. The folding element could be a light reflecting element such as a mirror or a prism. In one embodiment of the invention a 45 degree mirror is used as folding optics to direct the light path onto the object. Thereby the probe light is guided in a direction perpendicular to the optical axis. In this embodiment the height of the scan tip is at least as large as the scan length, and preferably of approximately equal size.
One embodiment of the invention comprises at least two light sources, such as light sources with different wavelengths and/or different polarizations, preferably together with control means for controlling said at least two light sources. Preferably this embodiment comprises means for combining and/or merging light from said at least two light sources, and preferably also means for separating light from said at least two light sources. If waveguide light sources are used they may be merged by waveguides. However, one or more diffusers may also be provided to merge light sources.
Separation and/or merging may be provided by at least one optical device which is partially light transmitting and partially light reflecting, said optical device preferably located along the optical axis, such as a coated mirror or coated plate. One embodiment comprises at least two of said optical devices, said optical devices preferably displaced along the optical axis. Preferably at least one of said optical devices transmits light at certain wavelengths and/or polarizations and reflects light at other wavelengths and/or polarizations.
One exemplary embodiment of the invention comprises at least a first and a second light source, said light sources having different wavelengths and/or polarizations, and wherein

a first optical device reflects light from said first light source in a direction different from the optical axis and transmits light from said second light source, and

a second optical device reflects light from said second light source in a direction different from the optical axis.

Preferably said first and second optical devices reflect the probe light in parallel directions, preferably in a direction perpendicular to the optical axis, thereby imaging different parts of the object surface. Said different parts of the object surface may be at least partially overlapping.
Thus, for example, light from a first and a second light source emitting light of different wavelengths (and/or polarizations) is merged together using a suitably coated plate that transmits the light from the first light source and reflects the light from the second light source. At the scan tip along the optical axis a first optical device (e.g. a suitably coated plate, dichroic filter) reflects the light from the first light source onto the object and transmits the light from the second light source to a second optical device (e.g. a mirror) at the end of the scan tip, i.e. further down the optical axis. During scanning the focus position is moved such that the light from the first light source is used to project an image of the pattern to a position below the first optical device while the second light source is switched off. The 3D surface of the object in the region below the first optical device is recorded. Then the first light source is switched off and the second light source is switched on, and the focus position is moved such that the light from the second light source is used to project an image of the pattern to a position below the second optical device. The 3D surface of the object in the region below the second optical device is then recorded.