Morgan

[54] OPTICAL SENSOR FOR IMAGING AN OBJECT

[75] Inventor: Colin G. Morgan, Horspath, United Kingdom

[73] Assignee: Oxford Sensor Technology Limited, Summertown, United Kingdom

[21] Appl. No.: 104,084

[22] PCT Filed: Feb. 6, 1992

[86] PCT No.: PCT/GB92/00221
     § 371 Date: Aug. 10, 1993
     § 102(e) Date: Aug. 10, 1993

[30] Foreign Application Priority Data
     Feb. 12, 1991 [GB] United Kingdom ................. 9102903

[51] Int. Cl. .............................................. G01B 11/24
[52] U.S. Cl. .......................... 356/376; 250/201.7; 250/561
[58] Field of Search ............. 356/376, 375, 4; 250/561, 201.7
`
`
US005381236A

[11] Patent Number: 5,381,236
[45] Date of Patent: Jan. 10, 1995

[56] References Cited

U.S. PATENT DOCUMENTS

4,629,324 12/1986 Stern
4,640,620  2/1987 Schmidt
5,151,609  9/1992 Nakagawa et al. ................. 356/376

OTHER PUBLICATIONS

Applied Optics, vol. 26, No. 12 (1987), pp. 2416-2420; T. R. Corle et al.: "Distance Measurements by Differential Confocal Optical Ranging".
Optical Engineering, vol. 29, No. 12 (1990), pp. 1439-1444; Jian Li et al.: "Improved Fourier Transform Profilometry for the Automatic Measurement of Three-Dimensional Object Shapes".
Technische Rundschau, vol. 79, No. 41 (1987), pp. 94-98; E. Senn: "Dreidimensionale Multipunktmessung mit strukturiertem Licht" [Three-dimensional multipoint measurement with structured light].
Applied Optics, vol. 29, No. 10 (1990), pp. 1474-1476; J. Dirickx et al.: "Automatic Calibration Method for Phase Shift Shadow Moiré Interferometry".
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, No. 11 (1989), pp. 1225-1228; Makoto Matsuki et al.: "A Real-Time Sectional Image Measuring System Using Time Sequentially Coded Grating Method".
IBM Technical Disclosure Bulletin, vol. 16, No. 2 (1973), pp. 433-444; J. R. Malin: "Optical Micrometer".
Primary Examiner: F. L. Evans
Attorney, Agent, or Firm: Webb Ziesenheim Bruening Logsdon Orkin & Hanson

[57] ABSTRACT

The sensor comprises a structured light source (5, 6, 7) which is adjustable so as to interchange the positions of contrasting areas of the pattern it provides, a detector (1) which comprises an array of detector elements having dimensions matched to the pattern produced by the light source, an optical system (2, 8) for projecting a primary image of the light source pattern onto an object (3) that is to be sensed and for forming a secondary image on the detector (1) of the primary image thus formed on the object (3), positioning means (4) for moving at least part (2) of the optical system so as to vary the focussing of the primary image on the object (3), and processing means (12) for analyzing signals produced by the detector (1) in conjunction with information on the adjustment of the optical system (2, 8). The optical arrangement is 'confocal' so that, when the primary image is in focus on the object (3), the secondary image on the detector (1) is also in focus. The processing means (12) is arranged to analyse the images received by the detector (1) with the contrasting areas thereof in the interchanged positions to determine which parts of the images are in focus and hence determine the range of the corresponding parts of the object being viewed.
`16 Claims, 5 Drawing Sheets
`
[Drawing sheets 1-5 of U.S. Patent 5,381,236 (Jan. 10, 1995). Labels recoverable from the drawings:
Sheet 1 of 5: FIG. 1 (Prior Art); FIG. 2 (CCD DETECTOR 1; PROJECTOR LAMP CONTROL).
Sheet 2 of 5: FIG. 3 (POSITION 'N'; 3-D MODEL).
Sheet 3 of 5: FIG. 5; FIG. 7.
Sheet 4 of 5: (no legible labels).
Sheet 5 of 5: FIG. 10 (SENSOR HEAD; COMPUTER 12; DISPLAY 14).]
`
`
`
OPTICAL SENSOR FOR IMAGING AN OBJECT

TECHNICAL FIELD

This invention relates to an optical sensor and, more particularly, to a sensor used to sense the range of one or more parts of an object so as to form an image thereof.

BACKGROUND ART

A variety of different optical sensors are available for providing an image of objects within the field of view of the sensor. One such system, known as a 'sweep focus ranger', uses a video camera with a single lens of very short depth of field to produce an image in which only a narrow interval of range in object space is in focus at any given time. By using a computer-controlled servo drive, the lens is positioned (or 'swept') with great accuracy over a series of positions so as to view different range 'slices' of an object. A three-dimensional image of the object is then built up from these 'slices'. The system detects which parts of the object are in focus by analysing the detected signal for high frequency components which are caused by features, such as edges or textured parts, which change rapidly across the scene. Because of this, the system is not suitable for imaging plain or smooth surfaces, such as a flat painted wall, which have no such features.
This limitation is common to all passive rangefinding techniques. One way to overcome the problem is to actively project a pattern of light onto the target objects which can then be observed by the sensor. If this pattern contains high spatial frequencies, then these features can be used by the sensor to estimate the range of otherwise plain surfaces. A particularly elegant way of projecting such a pattern is described in U.S. Pat. No. 4,629,324 by Robotic Vision Systems Inc.
In this prior art, the sensor detects those parts of the target object that are in focus by analysing the image for features that match the spatial frequencies present in the projected pattern. Various analysis techniques such as convolution and synchronous detection are described. However, such methods are potentially time consuming. In an extension of these ideas, a further patent by the same company, U.S. Pat. No. 4,640,620, describes a method which aims to overcome this problem by the use of a liquid crystal light valve device to convert the required high spatial frequency components present in the image into an amplitude variation that can be detected directly.
The present invention aims to provide a simpler solution to this problem.

DISCLOSURE OF INVENTION

According to one aspect of the present invention, there is provided an optical sensor comprising: a structured light source for producing a pattern of contrasting areas; a detector which comprises an array of detector elements having dimensions matched to the pattern produced by the light source; an optical system for projecting a primary image of the light source onto an object that is to be sensed and for forming a secondary image on the detector of the primary image thus formed on the object; adjustment means for adjusting at least part of the optical system so as to vary the focussing of the primary image on the object, the arrangement being such that when the primary image is in focus on the object, the secondary image on the detector is also in focus; and processing means for analysing signals produced by the detector in conjunction with information on the adjustment of the optical system, wherein the structured light source is adjustable so as to interchange the positions of contrasting areas of the pattern produced by the light source and in that the processing means is arranged to analyse the secondary images received by the detector elements with the contrasting areas in the interchanged positions to determine those parts of the secondary images which are in focus on the detector and thereby determine the range of corresponding parts of the object which are thus in focus.
According to another aspect of the invention, there is provided a method of determining the range of at least part of an object being viewed using an optical sensor comprising: a structured light source which produces a pattern of contrasting areas; a detector which comprises an array of detector elements having dimensions matched to the pattern produced by the light source; an optical system which projects a primary image of the light source onto an object that is to be sensed and forms a secondary image on the detector of the primary image thus formed on the object; adjustment means which adjusts at least part of the optical system so as to vary the focussing of the primary image on the object, the arrangement being such that when the primary image is in focus on the object, the secondary image on the detector is also in focus; and processing means which analyses signals produced by the detector in conjunction with information on the adjustment of the optical system, the method involving adjustment of the structured light source so as to interchange the positions of contrasting areas of the pattern produced by the light source and the processing means being arranged to analyse the secondary images received by the detector elements with the contrasting areas in the interchanged positions to determine those parts of the secondary images which are in focus on the detector and thereby determine the range of corresponding parts of the object which are thus in focus.
The invention thus transforms analysis of the image sensed by the detector into the temporal domain.
Preferred and optional features of the invention will be apparent from the following description and from the subsidiary claims of the specification.

BRIEF DESCRIPTION OF DRAWINGS

The invention will now be further described, merely by way of example, with reference to the accompanying drawings, in which:
FIG. 1 illustrates the basic concept of an optical sensor such as that described in U.S. Pat. No. 4,629,324 and shows the main components and the optical pathways between them;
FIG. 2 shows a box diagram of an optical sensor of the type shown in FIG. 1 with a particular control unit and signal processing arrangement according to one embodiment of the present invention;
FIG. 3 illustrates how an image of an object being viewed by the type of system shown in FIGS. 1 and 2 can be built up and displayed;
FIGS. 4, 5 and 6 show alternative optical arrangements which may be used in the sensor;
FIG. 7 shows an alternative form of beam splitter which may be used in the sensor;
FIGS. 8(A) and 8(B) show a further embodiment of a sensor according to the invention;
`
`
`
`
FIGS. 9(A) and 9(B) show another embodiment of a sensor according to the invention; and
FIG. 10 shows a box diagram of a control unit and a signal processor which may be used with the embodiment shown in FIG. 9.

BEST MODE OF CARRYING OUT THE INVENTION

The optical sensor described herein combines the concepts of a 'sweep focus ranger' of the type described above with the concept of an active confocal light source as described in U.S. Pat. No. 4,629,324, together with means to transform the depth information analysis into the temporal domain.
The term 'confocal' is used in this specification to describe an optical system arranged so that two images formed by the system are in focus at the same time. In most cases, this means that if the relevant optical paths are 'unfolded', the respective images or objects coincide with each other.
The basic concept of a sensor such as that described in U.S. Pat. No. 4,629,324 is illustrated in FIG. 1. The sensor comprises a detector 1 and a lens system 2 for focussing an image of an object 3 which is to be viewed onto the detector 1. Positioning means 4 are provided to adjust the position of the lens system 2 to focus different 'slices' of the object 3 on the detector 1. Thus far, the sensor corresponds to a conventional 'sweep focus' sensor. As described in U.S. Pat. No. 4,629,324, the sensor is also provided with a structured light source comprising a lamp 5, a projector lens 6 and a grid or spatial filter 7, together with a beam-splitter 8 which enables an image of the filter 7 to be projected through the lens system 2 onto the object 3. The filter 7 and detector 1 are accurately positioned so that when a primary image of the filter 7 is focussed by the lens system 2 onto the object 3, a secondary image of the primary image formed on the object 3 is also focussed by the lens system 2 onto the detector 1. This is achieved by positioning the detector 1 and filter 7 equidistant from the beam-splitter 8 so the system is 'confocal'. An absorber 9 is also provided to help ensure that the portion of the beam from the filter 7 which passes through the beam-splitter 8 is absorbed and is not reflected onto the detector 1.
The lens system 2 has a wide aperture with a very small depth of focus and, in order to form an image of the object 3, the lens system 2 is successively positioned at hundreds of discrete, precalculated positions and the images received by the detector 1 for each position of the lens system 2 are analysed to detect those parts of the image which are in focus. When the images of the grid 7 on the object 3 and on the detector 1 are in focus, the distance between the lens system 2 and the parts of the object 3 which are in focus at that time can be calculated using the standard lens equation, which for a simple lens is:

1/U + 1/V = 1/f

where f is the focal length of the lens system 2, U is the object distance (i.e. the distance between the lens system 2 and the in-focus parts of the object 3) and V is the image distance (i.e. the distance between the lens system 2 and the detector 1).
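As a worked illustration of this relation (a sketch with illustrative values, not taken from the patent), the equation can be solved for the object distance U at each precalculated lens position:

    # Thin-lens equation 1/U + 1/V = 1/f, solved for the object distance U.
    def object_distance(f_mm, v_mm):
        if v_mm <= f_mm:
            raise ValueError("image distance must exceed focal length")
        return f_mm * v_mm / (v_mm - f_mm)

    # e.g. a 50 mm lens with the detector 55 mm behind it images the
    # plane 550 mm in front of the lens:
    print(object_distance(50.0, 55.0))   # 550.0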
To enable areas of the object which are in focus to be detected, the images formed on the object 3 and detector 1 are preferably in the form of a uniform structured pattern with a high spatial frequency, i.e. having a small repeat distance, comprising contrasting areas, such as a series of light and dark bands. Such patterns can, for example, be provided by slots or square cut-outs in the filter 7. For such patterns, those parts of the image which are in focus on the object 3 being viewed produce a corresponding image on the detector 1 of light and dark bands, whereas the out of focus parts of the image rapidly 'break-up' and so produce a more uniform illumination of the detector 1 which is also significantly less bright than the light areas of the parts of the image which are in focus.
The structured pattern should preferably have as high a spatial frequency as possible, although this is limited by the resolution of the lens system 2 and the size of the detector array, as the depth resolution is proportional to this spatial frequency.
FIG. 2 is a box diagram of a sensor of the type shown in FIG. 1 together with control and processing units as used in an embodiment of the invention to be described. The light source 5 of the sensor is controlled by a projector lamp control unit 9, a piezo-electric grid control unit 10 is provided for moving the grid or filter 7 (for reasons to be discussed later) and adjustment of the lens system 2 is controlled by a sweep lens control unit 11. The control units 9, 10 and 11, together with the output of the CCD detector 1, are connected to a computer 12 provided with an image processor, frame grabber, frame stores (computer memory corresponding to the pixel array of the detector 1) and a digital signal processor (DSP) 13 for processing the signals received and providing appropriate instructions to the control units. The output of the signal processor may then be displayed on a monitor 14.
In the sensor described herein, the detector 1 comprises an array of detector elements or pixels such as charged coupled devices (CCD's) or charged injection devices (CID's). Such detectors are preferred over other TV type sensors because the precise nature of the detector geometry makes it possible to align the unfolded optical path of the structured image with the individual pixels of the detector array.
The grid or spatial filter 7 consists of a uniform structured pattern with a high spatial frequency, i.e. having a small repeat distance, comprising contrasting areas, such as a series of light and dark bands or a chequer-board pattern. The spatial frequency of the grid pattern 7 should be matched to the pixel dimensions of the detector 1, i.e. the repeat distance of the pattern should be n pixels wide, where n is a small even integer, e.g. two (which is the preferred repeat distance). The value of n will be limited by the resolution of the lens system 2.
If a chequer-board type pattern is used, the detector should be matched with the pattern in both dimensions, so for optimum resolution it should comprise an array of square rather than rectangular pixels as the lens system 2 will have the same resolving power in both the x and y dimensions. Also, the use of square pixels simplifies the analysis of data received by the detector.
The light and dark portions of the pattern should also be complementary so that added together they give a uniform intensity. The choice of pattern will depend on the resolution characteristics of the optical system used but should be chosen such that the pattern 'breaks up' as quickly and cleanly as possible as it goes out of focus. The pattern need not have a simple light/dark or clear/opaque structure. It could also comprise a pattern having smoothly changing (e.g. sinusoidal) opacity.
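The following sketch (an illustration, not part of the patent) generates the two interchanged band patterns for the preferred repeat distance of n = 2 pixels and checks that they are complementary, i.e. that they sum to a uniform intensity:

    import numpy as np

    def grid_pattern(shape, repeat=2, phase=0):
        # Vertical light/dark bands; phase=0 and phase=repeat//2 give the
        # two interchanged grid positions described in the text.
        cols = np.arange(shape[1])
        bands = ((cols + phase) % repeat) < (repeat // 2)
        return np.broadcast_to(bands, shape).astype(float)

    pos1 = grid_pattern((8, 8), repeat=2, phase=0)
    pos2 = grid_pattern((8, 8), repeat=2, phase=1)   # shifted half a repeat

    # Complementary patterns add to a uniform intensity, as required above.
    assert np.all(pos1 + pos2 == 1.0)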
`
`
`
`
This would tend to reduce the higher frequency components of the pattern so that it 'breaks up' more smoothly as it moves out of focus.
In order to avoid edge effects in the image formed on the object 3 or detector 1, it is preferable for the grid pattern to comprise more repeat patterns than the CCD detector, i.e. for the image formed to be larger than the detector array 1.
One possible form of grid pattern 7 is that known as a 'Ronchi ruling' resolution target, which comprises a pattern of parallel stripes which are alternately clear and opaque. A good quality lens system 2 (such as that used in a camera) will typically be able to resolve a pattern having between 50 and 125 lines/mm in the image plane depending upon its aperture, and special lenses (such as those used in aerial photography) would be even better. The resolution can also be further improved by limiting the wavelength band used, e.g. by using a laser or sodium lamp to illuminate the grid pattern 7. A typical CCD detector has pixels spaced at about 20 microns square so, with two pixel widths, i.e. 40 microns, corresponding to the pattern repeat distance, this sets a resolution limit of 25 lines/mm. Finer devices are likely to become available in the future.
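The resolution figures above can be checked with a line of arithmetic (illustrative only):

    pixel_pitch_um = 20.0                 # typical CCD pixel spacing
    repeat_um = 2 * pixel_pitch_um        # pattern repeat of n = 2 pixels
    print(1000.0 / repeat_um)             # 25.0 lines/mm resolution limit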
The number of repeats of the structured pattern across the area of the object being viewed should preferably be as high as possible for the lens system and detector array used. With a typical detector array of the type described above comprising a matrix of 512×512 pixels, the maximum number of repeats would be 256 across the image. Considerably higher repeat numbers can be achieved using a linear detector array (described further below).
The lower limit on the spatial frequency of the grid pattern is set by the minimum depth resolution required. As the grid pattern becomes coarser, the number of repeats of the pattern across the image is reduced and the quality of the image produced by the sensor is degraded. An image with less than, say, 40 pattern repeats across it is clearly going to provide only very crude information about the object being sensed.
In the sensor described herein, the high spatial frequency component of the light source is converted into the temporal domain. As indicated in FIG. 2, positioning means may be provided for moving the grid pattern 7. This is used to move the grid 7 in its own plane by one half of the pattern repeat distance between each frame grab. This movement is typically very small, e.g. around 20 microns (where the pattern repeats every 2 pixels), and is preferably carried out using piezo-electric positioning means. The effect of this movement is that in one position of the grid the unfolded optical path of the grid pattern overlaps the detector pixels in such a way that half the pixels correspond to light areas and half to dark, and when the grid is moved by one half repeat distance then the opposite situation exists for each pixel (the pattern of light and dark areas on the array of pixels corresponding to the pattern of the structured light source). Each pixel of the detector 1 is thus mapped to an area of the pattern produced by the light source which alternates between light and dark as the structured light source is moved. Pairs of images may then be captured in the first and second frame stores with the grid 7 in the two positions and the intensities (i1 and i2) of corresponding pixels in the two frame stores determined, so the following functions can be produced for each pixel:

Brightness: I = i1 + i2
High Pass Component: M = |i1 - i2| / I
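A minimal per-pixel sketch of these two functions (assuming numpy arrays for the frame stores; the background frame i3 and the minimum threshold are introduced in the passage that follows):

    import numpy as np

    def focus_measures(i1, i2, i3=None, i_min=1e-6):
        # Brightness I = i1 + i2 and normalized modulation depth
        # M = |i1 - i2| / I for frames grabbed with the grid in its
        # two positions; i3 is an optional background (lamp off) frame.
        i1 = np.asarray(i1, dtype=float)
        i2 = np.asarray(i2, dtype=float)
        if i3 is not None:
            i1 = i1 - i3
            i2 = i2 - i3
        I = i1 + i2
        M = np.where(I > i_min, np.abs(i1 - i2) / np.maximum(I, i_min), 0.0)
        return I, M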
The sum of the intensities i1 and i2 is a measure of the brightness of that pixel and the difference a measure of the depth of modulation. Dividing the difference signal by the brightness gives a normalized measure of the high pass component, which is a measure of how 'in focus' that pixel is. The sign of the term 'i1 - i2' will, of course, alternate from pixel to pixel depending on whether i1 or i2 corresponds to a light area of the grid pattern.
Those parts of the image which are in focus on the object 3 and on the detector 1 produce a pattern of light and dark areas which alternate as the grid 7 is moved. One of the signals i1 and i2 will therefore be high (for a bright area) and one will be low (for a dark area). In contrast, for parts of the image which are not in focus on the detector 1, the intensities i1 and i2 will be similar to each other and lower than the intensity of an in-focus bright area.
If the background illumination is significant, a correction can be applied by capturing an image with the light source switched off in a third frame store. The background intensity 'i3' for each pixel can then be subtracted from the values i1 and i2 used in the above equations.
In order to avoid spurious, noisy data, it is desirable to impose a minimum threshold value on the signal 'I'. Where I is very low, for example when looking at a black object or when the object is very distant, then that sample should be ignored (e.g. by setting M to zero or some such value).
The process of constructing a complete 3D surface map of an object being viewed may thus proceed as follows:
1) Clear the 3D surface model map file.
2) Set the lens sweep to the starting position.
3) For each lens sweep position three frames are captured into three frame stores as follows:
1 = Illumination on, grid pattern in position 1
2 = Illumination on, grid pattern in position 2
3 = No illumination
4) The DSP is now used to perform the following functions on the raw image data: for each pixel (or group of pixels) the functions I and M described above are constructed.
5) The background signal is then subtracted:
i1 := i1 - i3
i2 := i2 - i3
6) Construct mean intensity I into frame store 3:
I = i3 := i1 + i2
7) Construct modulation depth M, if I exceeds the threshold value, into frame store 1:
IF i3 ≥ min THEN
M = i1 := |i1 - i2| / i3
ELSE
M = i1 := 0
(or could use I for i3)
8) The function M can be displayed from frame store 1 on the monitor (this corresponds to the in-focus contours, which are shown bright against a dark background).
9) Clean up this 'M' data:
a) Check for pixel to pixel continuity.
`
`
`
`
b) Look for the local maxima positions, interpolating to sub-pixel positions, then convert to corresponding object coordinates.
c) Construct data chains describing the in-focus contour positions.
d) Compare these contours with the adjacent sweep position contours.
e) Add this sweep position's contours to the 3D surface model map.
10) Proceed to the next lens sweep position and repeat the above stages until a complete surface model map is constructed.
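Taken together, steps 1) to 10) amount to the following loop. This is a sketch only: grab_frame, set_lens, set_grid and set_lamp are hypothetical stand-ins for the frame grabber and the control units 9, 10 and 11 of FIG. 2, and the contour clean-up of step 9 is elided:

    import numpy as np

    def build_surface_map(sweep_positions, grab_frame,
                          set_lens, set_grid, set_lamp, i_min=16.0):
        model = {}                                 # 1) clear the surface map
        for pos in sweep_positions:                # 2) and 10) the lens sweep
            set_lens(pos)
            set_lamp(True); set_grid(1)
            f1 = np.asarray(grab_frame(), dtype=float)   # 3) grid position 1
            set_grid(2)
            f2 = np.asarray(grab_frame(), dtype=float)   # 3) grid position 2
            set_lamp(False)
            f3 = np.asarray(grab_frame(), dtype=float)   # 3) no illumination
            i1, i2 = f1 - f3, f2 - f3              # 5) subtract background
            I = i1 + i2                            # 6) mean intensity
            M = np.where(I >= i_min,               # 7) thresholded modulation
                         np.abs(i1 - i2) / np.maximum(I, 1e-9), 0.0)
            model[pos] = M                         # 8)-9) in-focus contours
        return model                               # the 3D surface model map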
FIG. 3 illustrates how the lens system 2 is adjusted to focus on successive planes of the object 3, and a 3-dimensional electronic image, or range map, of the object is built up from these 'slices' in the computer memory. The contours of the object 3 which are in focus for any particular position of the lens system 2 can be displayed on the monitor 14. Other images of the 3-dimensional model built up in this way can also be displayed using well known image display and processing techniques.
The optical sensor described above is an 'active' system (i.e. uses a light source to illuminate the object being viewed) rather than passive (i.e. relying on ambient light to illuminate the object). As it projects a pattern onto the object being viewed, it is able to sense plain, un-textured surfaces, such as painted walls, floors, skin, etc., and is not restricted to edge or textured features like a conventional sweep focus ranger. The use of a pattern which is projected onto the object to be viewed, and which is of a form which can be easily analysed to determine those parts which are in focus, thus provides significant advantages over a conventional 'sweep focus' ranger. In addition, since the outgoing projected beam is subject to the same focussing sweep as the incoming beam sensed by the detector 1, the projected pattern is only focussed on those parts of the object which are themselves in focus. This improves on the resolving capability of the conventional sweep focus ranger by effectively 'doubling up' the focussing action using both the detector 1 and the light source. The symmetry of the optical system means that when the object is in focus, the spatial frequency of the signal formed at the detector 1 will exactly equal that of the grid pattern 7.
A display of the function M on a CRT would have the appearance of a dark screen with bright contours corresponding to the 'in-focus' components of the object being viewed. As the focus of the lens system is swept, so these contours would move to show the in-focus parts of the object. From an analysis of these contours a 3-dimensional map of the object can be constructed.
A piezo-electric positioner can also be used on the detector 1 to increase the effective resolution in both the spatial and depth dimensions. To do this, the detector array is moved so as to effectively provide a smaller pixel dimension (a technique used in some high definition CCD cameras) and thus allow a finer grid pattern to be used.
The use of piezo-electric devices for the fine positioning of an article is well known, e.g. in the high definition CCD cameras mentioned above and in scanning tunnelling microscopes, where they are capable of positioning an article very accurately, even down to atomic dimensions. As an alternative to using a piezo-electric positioner, other devices could be used, such as a loudspeaker voice coil positioning mechanism.
Instead of moving part of the light source, it is also possible to modulate the light source directly, e.g. by using:
1) a liquid crystal display (LCD) (e.g. of the type which can be used with some personal computers to project images onto an overhead projector);
2) a small cathode ray tube (CRT) to write the pattern directly;
3) magneto-optical modulation (i.e. the Faraday effect) together with polarized light;
4) a purpose made, interlaced, fibre optic light guide bundle with two alternating light sources.
The intensity information signal 'I', which corresponds to the information obtained by a conventional sweep focus ranger (as the average of i1 and i2 is effectively a uniform illumination signal, i.e. without a structured light source), is also helpful in interpreting the data. Displaying the signal 'I' on a monitor gives a normal TV style image of the scene except for the very small depth of focus inherent in the system. If desired, changing patterns of intensity found using standard edge detector methods can therefore be used to provide information on in-focus edges and textured areas of the object, and this information can be added to that obtained from the 'M' signal (which is provided by the use of a structured light source).
The technique of temporal modulation has the advantage that, as each pixel in the image is analysed independently, edges or textures present on the object do not interfere with the depth measurement (this may not be the case for techniques based upon spatial modulation). The technique can also be used with a chequer-board grid pattern instead of vertical or horizontal bands. Such a pattern would probably 'break-up' more effectively than the bands as the lens system 2 moves out of focus and would therefore be the preferred choice. It should be noted that the sensors described above use a structured light source of relatively high spatial frequency to enable the detector to detect accurately when the image is in focus and when it 'breaks-up' as it goes out of focus. This is in contrast with some known range finding systems which rely on detecting a shift between two halves of the image as it goes out of focus.
The calculations described above may be performed pixel by pixel in software or by using specialist digital signal processor (DSP) hardware at video frame rates.
The scale of measurements over which the type of sensor described above can be used ranges from the large, e.g. a few meters, down to the small, e.g. a few millimeters. It should also be possible to apply the same method to the very small, so effectively giving a 3D microscope, using relatively simple and inexpensive equipment.
In general, the technology is easier to apply at the smaller rather than the larger scale. This is because for sensors working on the small scale, the depth resolution is comparable with the spatial resolution, whereas on the larger scale the depth resolution falls off (which is to be expected as the effective triangulation angle of the lens system is reduced). Also, being an active system, the illumination required will increase as the square of the distance between the sensor and the object being viewed. Nevertheless, the system is well suited to use as a robot sensor covering a range up to several meters.
One of the problems with conventional microscopes is the 'fuzzyness' that results from the very small depth of focus. To try to overcome this, CCD cameras have been attached to microscopes to capture images directly,
`
`
`
`
and software packages are now available which can process a series of captured images and perform a deconvolution operation to remove the fuzzyness. In this way, images of a clarity comparable with those obtainable with a very much more expensive confocal scanning microscope are possible. The method described above of capturing and differencing pairs of frames also enables clear cross-sectional images to be formed. This is in many ways easier than the software approach as the data contains height information in a much more accessible form. By scanning over height as described (by moving the lens system 2) a 3D image of the sample can be built up.
If a structured light source having a pattern with a repeat distance greater than two pixels (i.e. n>2) is used, the preferred solution is to use an additional imaging stage as in FIG. 6 to effectively reduce n to two. However, larger CCD arrays are likely to become more readily available in the near future.
When the sensor is designed to work on the very small scale, it is easier to sweep not just the lens system 2 but the entire sensor relative to the object being viewed (as in a conventional microscope). As the lens to detector distance remains constant, the field of view angle and the magnification remain constant