`
`PCT/EP2012/063687
`
`registration/alignment of subscans, two subscans of the same surface may
`
`incorrectly look like two different surfaces.
`
`The near threshold distance may be such as 0.01 mm, 0.05 mm, 0.09 mm,
`
`0.10 mm, 0.15 mm, 0.20 mm etc.
`
`In some embodiments a far threshold distance is defined, which determines a
`
`distance from the captured surface, where the volume outside the far
`
`threshold distance is not included in the excluded volume of a representation.
`
`Thus the volume outside the far threshold distance is not included in the first
`
`excluded volume of the first 3D representation, and the volume outside the
`
`far threshold distance is not included in the second excluded volume of the
`
`second 3D representation.
`
`According to this embodiment any acquired data or surface or surface points
`
`of the first or second representation, which is/are present or located outside
`
`the far threshold distance, is not used to determine or define the first or
`
`second excluded volume, respectively.
`
`It is an advantage because a surface or surface points from a movable object
`
`or from another part of the tooth surface can actually be present outside the
`
`far threshold distance without being detected by the scanner, due to the
`
`geometry and optical properties of the scanner. The light rays from the
`
scanner head may be transmitted in any direction and with any angle or
`inclination from a normal plane of the scanner head, and therefore a light ray
`
`can be transmitted from the scanner head to a point which is placed behind
`
`the movable object or the other part of the tooth surface, when the movable
`
`object or the other part of the tooth surface is present partly in front of the
`
`scanner head.
`
`Align EX1002 (Part 3 of 3)
`Align v. 3Shape
`IPR2022-00144
`
`
`
`WO 2013/010910
`
`Thus the volume outside the far threshold distance is not included in the
`
`excluded volume, because in the volume outside the far threshold distance a
`
`surface can be present even though no surface is detected by the scanner.
`
`The far threshold distance defines or determines a distance from the
`
`captured surface, where the volume or region within the far threshold
`
`distance is included in the excluded volume.
`
`Thus if utilizing or applying the far threshold distance, the excluded volume
`
`for a representation will be smaller than if not applying the far threshold
`
`distance, and therefore less volume can be excluded.
`
`However, the advantage of applying a far threshold distance is that only
`
`volumes which can truly be excluded, will be excluded, meaning that the
`
`general scan data will have a higher quality.
`
`Thus even though no surface or surface points has/have been detected in a
`
`volume or region between the scanner and the tooth surface, the whole
`
`region cannot be defined as excluded volume, because the light rays from
`
`and to the scanner may travel with inclined angles relative to a normal of the
`
`scan head, which means that the scanner can detect a point on the tooth
`
`surface even though another part of the tooth is actually placed, at least
`
`partly, between the detected tooth surface and the scanner. Therefore a far
`
`threshold distance is defined, and no data detected outside this far threshold
`
`distance from the tooth surface is used to define the excluded volume of a
`
`representation. Only data detected inside the far threshold distance is used
`
`to define the excluded volume, because only within this distance can one be
`
`certain that the data detected actually corresponds to the real physical
`
`situation.
`
`The scanner may detect that no surface is present in the volume or region
`
`outside the far threshold distance between the tooth surface and the scanner,
`
`but this data or information cannot be used to define the excluded volume of
`
`the representation, because there may actually be a movable object or
`
`another part of the tooth surface in this region or volume which the scanner
`
`overlooks because of its inclined light rays.
`
`Furthermore, the scanner may overlook a surface part even though the
`
`surface part is in the scan volume. This can be caused by that the surface
`
`part is outside the focus region of the scanner, for example if the surface part
`
`is too close to the opening of the scanner head and/or scanner body, as the
`
`focus region may begin some distance from the scanner head and/or
`
`scanner body. Alternatively and/or additionally this can be caused by the
`
lighting conditions, which may not be optimal for the given material of the
`surface, whereby the surface is not properly illuminated and thus can
`
`become invisible for the scanner. Thus in any case the scanner may overlook
`
`or look through the surface part. Hereby a volume in space may erroneously
`
`be excluded, since the scanner detects that no surface is present, and
`
`therefore a surface portion captured in this excluded volume in another 3D
`
representation or scan would be disregarded. To avoid this happening, which would be unfavorable if the surface part was a true tooth surface, the
`
`far threshold distance can be defined, such that the excluded volume
`
`becomes smaller, such that only volume which really can be excluded is
`
`excluded.
`
`It is an advantage that real surface points of a tooth are not erroneously
`
`disregarded, whereby fewer holes, i.e. regions with no scan data, are created
`
`in the scans. Thus the excluded volume is reduced by means of the far
`
threshold distance for avoiding that too much surface information is incorrectly disregarded.
`
`The light rays from the scan head of the scanner may spread or scatter or
`
disperse in any direction.
`
`Even if an object, such as a movable object, is arranged between the scan
`
`head and the surface of a rigid object, e.g. a tooth, the scanner may still
`
`capture a surface point on the tooth surface which is present or hidden
`
`"under" the object, because of the angled or inclined light rays. A surface
`
`point or area may just have to be visible for one or a small number of light
`
`rays from and/or to the scanner in order for that surface point or area to be
`
`detected.
`
`Since the far threshold distance determines a distance from the captured
`
`surface in a representation, where any acquired data or surface or surface
`
`points, which is/are present or located outside the far threshold distance, is
`
`not used to define the excluded volume of the representation, any acquired
`
`data or surface or surface points in the volume between the far threshold
`
`distance and the scan head is not included in the definition of the excluded
`
`volume.
`
The actual distance of the far threshold may depend on or be calculated based
`
`on the optics of the scanner. The far threshold distance may be a fixed
`
`number, such as about 0.5 mm, 1 mm, 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7
`
`mm, 8 mm, 9 mm, 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm,
`
`80 mm, 90 mm, or 100 mm. Alternatively, the far threshold distance may be a
`
`percentage or a fraction of the length of the scan volume, such as about
`
`20%, 25%, 30%, 35%, 40%, 45%, or 50% of the length of the scan volume,
`
or such as 1/2, 1/3, 1/4, or 1/5 of the length of the scan volume.
`
The far threshold distance may be based on a determination of how far from a detected point of the surface it is possible to scan, i.e. how much of the surface around a detected point is visible to the scanner. If the visible distance in one direction from a surface point is short, then the far threshold distance will be smaller than if the distance in all directions from a surface point is long.
`
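As a purely illustrative sketch of how the near and far thresholds described above could bound the excluded volume along a single camera ray (the function, its names, and its default value are assumptions, not taken from the text): only the empty space between the near threshold and the far threshold from the captured surface is marked as excluded.

```python
def excluded_interval(surface_depth, far_threshold, near_threshold=0.1):
    """For one camera ray that hit a surface at depth `surface_depth` (mm),
    return the (start, end) depth interval that may be marked as excluded
    volume.  Space nearer the scanner than (surface_depth - far_threshold)
    is left unclassified, because a movable object there could have been
    overlooked by inclined rays; a near margin is kept around the surface."""
    start = max(0.0, surface_depth - far_threshold)
    end = surface_depth - near_threshold
    return (start, end) if end > start else None
```

For example, with a surface detected 10 mm from the scanner and a 4 mm far threshold, only the 6 mm to 9.5 mm depth range would be excluded.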
`In some embodiments the first representation of at least part of a surface is a
`
`first subscan of at least part of the location, and the second representation of
`
`at least part of the surface is a second subscan of at least part of the
`
`location.
`
`In some embodiments the first representation of at least part of a surface is a
`
`provisional virtual 3D model comprising the subscans of the location acquired
`
`already, and the second representation of at least part of the surface is a
`
`second subscan of at least part of the location.
`
`In some embodiments acquired subscans of the location are adapted to be
`
`added to the provisional virtual 3D model concurrently with the acquisition of
`
`the subscans.
`
In some embodiments the provisional virtual 3D model is termed the
`
`virtual 3D model, when the scanning of the rigid object is finished.
`
`In some embodiments the method comprises:
`
`- providing a third 3D representation of at least part of a surface by scanning
`
`at least part of the location;
`
- determining for the third 3D representation a third excluded volume in space
`
`where no surface can be present;
`
`- if a portion of the surface in the first 3D representation is located in space in
`
the third excluded volume, the portion of the surface in the first 3D representation is disregarded in the generation of the virtual 3D model,
`
`and/or
`
`- if a portion of the surface in the second 3D representation is located in
`
`space in the third excluded volume, the portion of the surface in the second
`
`3D representation is disregarded in the generation of the virtual 3D model,
`
`and/or
`
`- if a portion of the surface in the third 3D representation is located in space
`
`in the first excluded volume and/or in the second excluded volume, the
`
`portion of the surface in the third 3D representation is disregarded in the
`
`generation of the virtual 3D model.
`
`In some embodiments the provisional virtual 3D model comprises the first
`
`representation of at least part of the surface and the second representation of
`
`at least part of the surface, and where the third representation of at least part
`
`of the surface is added to the provisional virtual 3D model.
`
`Thus the timewise first acquired representation, which is not necessarily the
`
`first representation, and the timewise second acquired representation, which
`
`is not necessarily the second representation, may be combined to create the
`
`provisional virtual 3D model, and each time a new representation is acquired
`
`or provided, the new representation may be added to the provisional virtual
`
`3D model, whereby the provisional virtual 3D model grows for each added
`
`representation.
`
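The incremental growth described above can be pictured with a minimal container type; this is an illustrative sketch, not the patented data structure, and the alignment step that a real scanner would perform on each new representation is only noted in a comment.

```python
class ProvisionalModel:
    """Minimal sketch of a provisional virtual 3D model that grows as each
    newly acquired representation is added (illustrative names)."""

    def __init__(self):
        self.points = []  # merged surface points accumulated so far

    def add_representation(self, points):
        # A real scanner would first align/register the new representation
        # to the model; here we simply accumulate its points.
        self.points.extend(points)
        return len(self.points)
```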
`In some embodiments the virtual 3D model is used for virtually designing a
`
`restoration for one or more of the patient's teeth.
`
`Thus the purpose of scanning is to obtain a virtual 3D model of the patient's
`
`teeth. If the patient should have a restoration, e.g. a crown, a bridge, a
`
denture, a removable partial denture etc., the restoration can be digitally or virtually
`
`designed on or relative to the 3D virtual model.
`
`In some embodiments the virtual 3D model is used for virtually planning and
`
`designing an orthodontic treatment for the patient.
`
`In some embodiments the relative motion of the scanner and the rigid object
`
`is determined.
`
`In some embodiments the relative motion of the scanner and the rigid object
`
`is determined by means of motion sensors.
`
`If the scanner used for acquiring the sub-scans is a handheld scanner, then
`
`the relative position, orientation or motion of scanner and the object which is
`
`scanned must be known. The relative position, orientation and motion of the
`
`scanner can be determined by means of position, orientation and/or motion
`
`sensors. However, if these sensors are not accurate enough for the purpose,
`
`the precise relative position of scanner and object can be determined by
`
`comparing the obtained 3D surfaces in the sub-scans, such as by means of
`
alignment/registration.
`
`A motion sensor is a device that can perform motion measurement, such as
`
`an accelerometer. Furthermore the motion sensor may be defined as a
`
`device which works as a position and orientation sensor as well.
`
`A position sensor is a device that permits position measurement. It can be an
`
absolute position sensor or a relative position sensor, also denoted a displacement sensor. Position sensors can be linear or angular.
`
`An orientation sensor is a device that can perform orientation measurement,
`
such as a gyroscope.
`
`In some embodiments the relative motion of the scanner and the rigid object
`
`is determined by registering/aligning the first representation and the second
`
`representation.
`
`In some embodiments the first representation and the second representation
`
`are aligned/registered before the first excluded volume and the second
`
`excluded volume are determined.
`
`Thus after the first and the second representation are provided, they may be
`
`aligned/registered, and after this, the first and second excluded volume may
`
`be determined, and then it is detected whether a portion of the surface in the
`
`first 3D representation or in the second 3D representation is located in space
`
`in the second excluded volume or in the first excluded volume, respectively,
`
`such that such portion of the surface in the representation is disregarded in
`
`the generation of the virtual 3D model.
`
`Alignment or registration may comprise bringing the 3D representations or
`
`subscans together in a common reference system, and then merging them to
`
`create the virtual 3D model or a provisional virtual 3D model. For each
`
`representation or subscan which is aligned/registered to the provisional
`
`virtual 3D model, the model grows and finally it becomes the virtual 3D model
`
`of the object.
`
`In some embodiments the relative motion of the scanner and the rigid object
`
determined by means of the motion sensors is verified and potentially
`
`adjusted by registering/aligning the first representation and the second
`
`representation.
`
`In some embodiments motion sensors are used for an initial determination of
`
`the relative motion of the scanner and
`
`the rigid object, and where
`
`registering/aligning is used for the final determination of the relative motion of
`
`the scanner and the rigid object.
`
`Thus in practice the motion sensors may be used as a first guess for the
`
`motion, and based on this the alignment/registration may be used for testing
`
`the determined motion and/or determining the precise motion or adjusting the
`
`determined motion.
`
`In some embodiments the optical system of the scanner is telecentric.
`
`A telecentric system is an optical system that provides imaging in such a way
`
`that the chief rays are parallel to the optical axis of said optical system. In a
`
telecentric system out-of-focus points have substantially the same magnification
`
`as in-focus points. This may provide an advantage in the data processing. A
`
`perfectly telecentric optical system may be difficult to achieve, however an
`
`optical system which is substantially telecentric or near telecentric may be
`
`provided by careful optical design. Thus, when referring to a telecentric
`
`optical system it is to be understood that it may be only near telecentric.
`
`As the chief rays in a telecentric optical system are parallel to the optical axis,
`
`the scan volume becomes rectangular or cylindrical.
`
`In some embodiments the optical system of the scanner is perspective.
`
`If the optical system is a perspective system, the chief rays are angled
`
`relative to the optical axis, and the scan volume thus becomes cone shaped.
`
`Note that the scan volume is typically a 3D shape.
`
`In some embodiments a mirror in a scan head of the scanner provides that
`
`the light rays from the light source in the scanner are transmitted with an
`
`angle relative to the opening of the scan head.
`
`The scan volume may be defined not as rectangular but rather as resembling
`
`a parallelogram.
`
`The light reflected back from a point on the surface may be projected as rays
`
`forming a cone or as parallel rays.
`
`In some embodiments the 3D scanner is a hand-held scanner.
`
`The 3D scanner may for example be a hand-held intraoral scanner.
`
`In some embodiments the scanner is a pinhole scanner.
`
`A pinhole scanner comprises a pinhole camera having a single small
`
`aperture. The size of the aperture may be such as 1/100 or less of the
`
`distance between it and the projected image. Furthermore, the pinhole size
`
may be determined by the formula d = 2·√(2·f·λ), where d is the pinhole diameter, f is the focal length, i.e. the distance from pinhole to image plane, and λ is the wavelength of light.
`
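Plugging representative numbers into the pinhole-diameter formula d = 2·√(2·f·λ) gives a feel for the scale; the focal length and wavelength below are illustrative choices, not values from the text.

```python
import math

def pinhole_diameter(focal_length_mm, wavelength_mm):
    """Pinhole diameter d = 2 * sqrt(2 * f * lambda), with f and lambda
    both expressed in millimetres, as given in the text's formula."""
    return 2.0 * math.sqrt(2.0 * focal_length_mm * wavelength_mm)

# Example: f = 50 mm, green light at 550 nm = 5.5e-4 mm
d = pinhole_diameter(50.0, 5.5e-4)  # roughly 0.47 mm
```

Note that this example diameter is also below 1/100 of the 50 mm pinhole-to-image distance, consistent with the aperture criterion mentioned for pinhole scanners.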
`It is an advantage to use the present method for detecting a movable object
`
`in a location in a pinhole scanner, since determining the first excluded
`
`volume and the second excluded volume is very fast, easy and accurate due
`
`to the pinhole setup, where the camera and the light source/projected
`
`pattern, respectively, of the scanner are well-defined points in space relative
`
`to the captured surface.
`
`Furthermore, if the scanner is a pinhole scanner, the excluded volume may
`
`be bigger, compared to if the scanner is not a pinhole scanner. The reason
`
for this is that no far threshold distance needs to be defined when using a pinhole scanner, since all of the volume between the scanner and the captured tooth surface may be included in the excluded volume due to
`
`the geometry and optical properties of the scanner. The pinhole scanner
`
`cannot overlook a surface or surface points from e.g. a movable object due to
`
`its geometry and optical properties.
`
`In some embodiments the scanner comprises an aperture, and the size of
`
`the aperture is less than 1/100 of the distance between it and the projected
`
`image.
`
`This size of aperture corresponds to a pinhole scanner.
`
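The 1/100 aperture criterion distinguishing the two embodiments can be written as a one-line predicate (an illustrative helper, not part of the disclosure):

```python
def is_pinhole_scanner(aperture_size, distance_to_image):
    """True when the aperture is less than 1/100 of the distance between
    the aperture and the projected image, matching the text's criterion
    for a pinhole scanner (both arguments in the same unit)."""
    return aperture_size < distance_to_image / 100.0
```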
`In some embodiments the scanner comprises an aperture, and the size of
`
`the aperture is more than 1/100 of the distance between it and the projected
`
`image.
`
`This size of aperture corresponds to a scanner which is not a pinhole
`
`scanner.
`
`Further aspects
`
`According to another aspect of the invention, disclosed is a method for
`
`detecting movable objects in the mouth of a patient, when scanning the
`
`patient's set of teeth in the mouth by means of a 3D scanner for generating a
`
`virtual 3D model of the set of teeth, wherein the method comprises:
`
`- providing a first 3D representation of at least part of a surface by scanning
`
`at least part of the teeth;
`
`- providing a second 3D representation of at least part of the surface by
`
`scanning at least part of the teeth;
`
`- determining for the first 3D representation a first excluded volume in space
`
`where no surface can be present;
`
`- determining for the second 3D representation a second excluded volume in
`
`space where no surface can be present;
`
`- if a portion of the surface in the first 3D representation is located in space in
`
`the second excluded volume, the portion of the surface in the first 3D
`
`representation is disregarded in the generation of the virtual 3D model,
`
`and/or
`
`- if a portion of the surface in the second 3D representation is located in
`
`space in the first excluded volume, the portion of the surface in the second
`
`3D representation is disregarded in the generation of the virtual 3D model.
`
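The two disregard-steps of this aspect amount to a filter: any surface point of one representation that falls inside the other representation's excluded volume is dropped. The sketch below assumes, for illustration only, that an excluded volume can be queried as a point-membership predicate.

```python
def filter_movable_points(points, other_excluded_volume):
    """Keep only the surface points that are NOT inside the other
    representation's excluded volume; points inside it are disregarded
    as belonging to a movable object (illustrative sketch)."""
    return [p for p in points if not other_excluded_volume(p)]
```

For example, if the second representation established that all space nearer than 5 mm along the optical axis is empty, a first-representation point at depth 2 mm would be disregarded while a point at 9 mm would be kept.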
`According to another aspect of the invention, disclosed is a method for
`
`detecting a movable object in a location, when scanning a rigid object in the
`
`location by means of a 3D scanner for generating a virtual 3D model of the
`
`rigid object, wherein the method comprises:
`
`- providing a first representation of at least part of a surface by scanning the
`
rigid object;
`
`- determining a first scan volume in space related to the first representation of
`
`at least part of the surface;
`
`- providing a second representation of at least part of the surface by scanning
`
`the rigid object;
`
`- determining a second scan volume in space related to the second
`
`representation of at least part of the surface;
`
`- if there is a common scan volume, where the first scan volume and the
`
`second scan volume are overlapping, then:
`
`- determine whether there is a volume region in the common
`
`scan volume which in at least one of the first representation or
`
`the second representation is empty and comprises no surface;
`
`and
`
- if there is a volume region in the common scan volume which in at least one of the first representation or the second representation is empty and comprises no surface, then exclude
`
`the volume region by disregarding in the generation of the virtual
`
`3D model any surface portion in the second representation or in
`
`the first representation, respectively, which is detected in the
`
`excluded volume region, since a surface portion detected in the
`
`excluded volume region represents a movable object which is
`
`not part of the rigid object.
`
`According to another aspect of the invention, disclosed is a method for
`
`detecting a movable object in a location, when scanning a rigid object in the
`
`location by means of a 3D scanner for generating a virtual 3D model of the
`
`rigid object, wherein the method comprises:
`
`- providing a first surface by scanning the rigid object;
`
`- determining a first scan volume related to the first surface;
`
`- providing a second surface by scanning the rigid object;
`
`- determining a second scan volume related to the second surface;
`
`where the first scan volume and the second scan volume are overlapping in
`
`an overlapping/common scan volume;
`
`- if at least a portion of the first surface and a portion of the second surface
`
`are not coincident in the overlapping/common scan volume, then disregard
`
the portion of either the first surface or the second surface in the
`
`overlapping/common scan volume which is closest to the focusing optics of
`
`the 3D scanner, as this portion of the first surface or second surface
`
`represents a movable object which is not part of the rigid object.
`
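For one ray through the overlapping/common scan volume, this "closest to the focusing optics" rule could be sketched as follows; the depth representation and the coincidence tolerance are assumptions made for illustration.

```python
def movable_surface(depth_first, depth_second, tol=0.1):
    """Along one ray in the common scan volume: if the two captured
    surfaces do not coincide (within `tol` mm), the nearer one (closest
    to the scanner's focusing optics) is flagged as the movable object.
    Returns 'first', 'second', or None when the surfaces coincide."""
    if abs(depth_first - depth_second) <= tol:
        return None  # coincident: a real surface of the rigid object
    return 'first' if depth_first < depth_second else 'second'
```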
`According to another aspect of the invention, disclosed is a method for
`
`detecting a movable object in the mouth of the patient, when scanning the
`
`patient's set of teeth by means of a 3D scanner for generating a virtual 3D
`
model of the set of teeth, wherein the method comprises:
`
`- providing a first surface by scanning the set of teeth;
`
`- determining a first scan volume related to the first surface;
`
`- providing a second surface by scanning the set of teeth;
`
`- determining a second scan volume related to the second surface;
`
`where the first scan volume and the second scan volume are overlapping in
`
`an overlapping/common scan volume;
`
`- if at least a portion of the first surface and a portion of the second surface
`
`are not coincident in the overlapping/common scan volume, then disregard
`
the portion of either the first surface or the second surface in the
`
`overlapping/common scan volume which is closest to the focusing optics of
`
`the 3D scanner, as this portion of the first surface or second surface
`
`represents a movable object which is not part of the set of teeth.
`
`According to another aspect of the invention, disclosed is a method for
`
`detecting movable objects recorded in subscans, when scanning a set of
`
`teeth by means of a scanner for generating a virtual 3D model of the set of
`
`teeth, where the virtual 3D model is made up of the already acquired
`
`subscans of the surface of the set of teeth, and where new subscans are
`
`adapted to be added to the 3D virtual model, when they are acquired,
`
`wherein the method comprises:
`
`- acquiring at least a first subscan of at least a first surface of part of the set
`
`of teeth, where the at least first subscan is defined as the 3D virtual model;
`
`- acquiring a first subscan of a first surface of part of the set of teeth;
`
`- determining a first scan volume of the first subscan;
`
`- determining a scan volume of the virtual 3D model;
`
`- if the first scan volume of the first subscan and the scan volume of the
`
`virtual 3D model are at least partly overlapping in a common scan volume;
`
`then:
`
`- calculate whether at least a portion of the first surface lies
`
`within the common scan volume;
`
`- calculate whether at least a portion of the surface of the virtual
`
`3D model lies within the common scan volume, and
`
`- determine whether at least a portion of a surface is present in
`
`the overlapping volume only in one subscan and not the other
`
`subscan/3D virtual model;
`
`- if at least a portion of a surface is present in only one subscan,
`
`then disregard the portion of the surface in the overlapping
`
`volume which is closest to the focusing optics of the scanner,
`
`since the portion of the surface represents a movable object
`
`which is not part of the set of teeth, and the portion of the surface
`
`is disregarded in the creation of the virtual 3D model of the set of
`
`teeth.
`
According to another aspect of the invention, disclosed is a method for
`
`detecting movable objects recorded in subscans, when scanning a set of
`
`teeth by means of a scanner for generating a virtual 3D model of the set of
`
`teeth, wherein the method comprises:
`
`a) providing a first subscan of a first surface of part of the set of teeth;
`
`b) calculating a first scan volume of the first subscan;
`
`c) providing a second subscan of a second surface of part of the set of teeth;
`
`d) calculating a second scan volume of the second subscan; and
`
`e) if the first scan volume and the second scan volume are at least partly
`
`overlapping in a common scan volume; then:
`
`f) calculate whether at least a portion of the first surface lies
`
`within the common scan volume;
`
`g) calculate whether at least a portion of the second surface lies
`
`within the common scan volume, and
`
`h) if at least a portion of the first surface or at least a portion of
`
the second surface lies within the common scan volume, and the
`
`portion of the first surface or the portion of the second surface is
`
`located in space between the scanner and at least a portion of
`
`the second surface or at least a portion of the first surface,
`
`respectively;
`
`then the portion of the surface represents a movable object
`
`which is not part of the set of teeth, and the portion of the surface
`
`is disregarded in the creation of the virtual 3D model of the set of
`
`teeth.
`
`In some embodiments the method above further comprises:
`
`- providing a third subscan of a third surface of part of the set of teeth;
`
`- calculating a third scan volume of the third subscan;
`
`- if the third scan volume is at least partly overlapping with the first scan
`
`volume and/or with the second scan volume in a common scan volume; then
`
`repeat steps f) - h) for the third subscan with respect to the first subscan
`
`and/or the second subscan.
`
`Further embodiments are disclosed in the following sections:
`
`Focus scanning and motion determination
`
`In some embodiments the 3D scanning comprises the steps of:
`
`generating a probe light,
`
transmitting the probe light towards the object thereby illuminating at least a part of the object,
`
transmitting light returned from the object to a camera comprising an array of sensor elements,
`
`imaging on the camera at least part of the transmitted light
`
`returned from the object to the camera by means of an optical system,
`
`varying the position of the focus plane on the object by means of
`
`focusing optics,
`
`obtaining at least one image from said array of sensor elements,
`
`determining the in-focus position(s) of:
`
`- each of a plurality of the sensor elements for a
`
`sequence of focus plane positions, or
`
`- each of a plurality of groups of the sensor elements
`
`for a sequence of focus plane positions.
`
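The per-element in-focus determination listed above can be pictured as a minimal depth-from-focus sketch; representing each focus plane as a list of per-element focus measures is an assumption made for illustration, not the scanner's actual processing.

```python
def in_focus_positions(focus_stack):
    """focus_stack[k][i] is a focus measure for sensor element i at focus
    plane position k.  For each element, return the plane index where the
    measure peaks, i.e. its in-focus position (illustrative sketch)."""
    n_elements = len(focus_stack[0])
    return [max(range(len(focus_stack)), key=lambda k: focus_stack[k][i])
            for i in range(n_elements)]
```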
`There may be for example more than 200 focus plane images, such as 225
`
`focus plane images, in a sequence of focus plane images used in generating
`
`a 3D surface. The focus plane images are 2D images.
`
`Image sensor(s), photo sensor and the like can be used for acquiring images
`
`in the scanner. By scanning is generally meant optical scanning or imaging
`
`using laser light, white light etc.
`
`In some embodiments a sequence of focus plane images are depth images
`
`captured along the direction of the optical axis.
`
`In some embodiments at least a part of the object is in focus in at least one of
`
`the focus plane images in a sequence of focus plane images.
`
`In some embodiments the time period between acquisition of each focus
`
`plane image is fixed/predetermined/known.
`
`
`Each focus plane image may be acquired a certain time period after the
`
`previous focus plane image was acquired. The focus optics may move
`
`between the acquisition of each image, and thus each focus plane image
`
`may be acquired in a different distance from the object than the previous
`
`focus plane images.
`
`One cycle of focus plane image capture may be from when the focus optics
`
`is in position P until the focus optics is again in position P. This cycle may be
`
denoted a sweep. There may be such as 15 sweeps per second.
`
`A number of 3D surfaces or sub-scans may then be combined to create a full
`
`scan of the object for generating a 3D model of the object.
`
`In some embodiments determining the relative motion of the scanner during
`
`the acquisition of the sequence of focus plane images is performed by
`
`analysis of the sequence in itself.
`
`Motion detection by means of hardware
`
`In some embodiments determining the relative motion of the scanner during
`
`the acquisition of the sequence of focus plane images is performed by
`
`sensors in and/or on the scanner and/or by sensors on the object and/or by
`
`sensors in the room where the scanner and the object are located.
`
The motion sensors may be small sensors such as microelectromechanical
`
`systems (MEMS) motion sensors. The motion sensors may measure all
`
`motion in 3D, i.e., both translations and rotations for the three principal
`
`coordinate axes. The benefits are:
`
- Motion sensors can detect motion, including vibrations and/or shaking. Scans so affected can, e.g., be corrected by use of the compensation techniques described.
`
`- Motion sensors can help with stitching and/or registering partial scans to
`
`each other. This advantage is relevant when the field of view of the scanner
`
`is smaller than the object to be scanned. In this situation, the scanner is
`
`applied for small regions of the object (one at a time) that then are combined
`
`to obtain the full scan. In the ideal case, motion sensors can provide the
`
`required relative rigid-motion transformation between partial scans' local
`
`coordinates, because they measure the relative position of the scanning
`
`device in each partial scan. Motion sensors with limited accuracy can still
`
provide a first guess for a software-based stitching/registration of partial
`
`scans based on, e.g., the Iterative Closest Point class of algorithms, resulting
`
`in reduced computation time.
`
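The "first guess" role of motion sensors described above can be sketched as pre-applying the sensor-reported rigid transform to a partial scan before a software registration (ICP-style) refinement; the matrix convention and function names below are assumptions for illustration.

```python
import numpy as np

def prealign_with_sensor_pose(points, rotation, translation):
    """Apply the motion-sensor rigid transform (3x3 rotation matrix R and
    translation vector t) to one partial scan's points, giving a software
    registration step such as Iterative Closest Point a good starting
    position (illustrative sketch)."""
    pts = np.asarray(points, dtype=float)
    R = np.asarray(rotation, dtype=float)
    t = np.asarray(translation, dtype=float)
    return pts @ R.T + t
```

An ICP routine would then only need to refine the small residual error of the sensor pose, reducing computation time as noted above.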
Even if it is too inaccurate to sense translational motion, a 3-axis
`
`accelerometer can provide the direction of gravity relative to the scanning
`
`device. Also a magnetometer can provide directional information relative to
`
`the scanning device, in this case from the earth's magnetic field. Therefore,
`
`such devices can help with stitching/registration.
`
`In some embodiments the motion is determined by means of a texture image
`
`sensor having a depth of focus which is larger than the depth of focus of the
`
`focusing optics.
`
`In some embodiments the motion i