Tele-3D — Developing a Handheld Scanner Using Structured Light Projection

Jorge Dias
Jorge Lobo
João Filipe Ferreira
Institute of Systems and Robotics, Coimbra Pole
Departamento de Engenharia Electrotécnica, Universidade de Coimbra, Polo II

Figure 1 - Generic structured light 3D scanner [1].
It becomes obvious that the information extracted from the two-dimensional image-plane of a projective camera is not sufficient to completely reconstruct a three-dimensional scene. Thus, at least one additional restriction is needed to establish a univocal correspondence between a 3D point and its projection on the image-plane. There are several ways of achieving this goal, one of which is the use of projected structured light.
It is clear that the light source is in fact projecting a plane of light, which can be mathematically represented by equation 1:
$$\pi_3^T\,\tilde{\mathbf{x}} = \begin{bmatrix} a & b & c & d \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = 0 \qquad (1)$$
Using the geometry involved in the perspective projection of the world onto the camera's image-plane and the light-plane's restriction of equation 1, the 3D point in the scene can be uniquely related to its two-dimensional projection point in the image-plane of the camera by triangulation using:
$$\begin{cases} \pi_1^T\,\tilde{\mathbf{x}} = 0 \\ \pi_2^T\,\tilde{\mathbf{x}} = 0 \\ \pi_3^T\,\tilde{\mathbf{x}} = 0 \end{cases} \qquad (2)$$

where $\pi_1$ and $\pi_2$ are the planes defined by the image point's two coordinates through the camera's centre of projection, and $\pi_3$ is the light-plane of equation 1.
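To make the triangulation concrete, the sketch below (Python/NumPy, not part of the original system; the projection matrix and plane coefficients are made-up illustrative values) intersects the two planes defined by an image point with the light-plane of equation 1 by solving the homogeneous system of equation 2.

```python
import numpy as np

def triangulate_structured_light(P, uv, light_plane):
    """Recover a 3D point from one image point and the projected light-plane.

    P           : 3x4 camera projection matrix (assumed calibrated).
    uv          : (u, v) pixel coordinates of an illuminated point.
    light_plane : (a, b, c, d) coefficients of the light-plane (equation 1).
    """
    u, v = uv
    pi1 = P[0] - u * P[2]          # plane from the horizontal image coordinate
    pi2 = P[1] - v * P[2]          # plane from the vertical image coordinate
    pi3 = np.asarray(light_plane, dtype=float)
    A = np.vstack([pi1, pi2, pi3])           # the 3x4 homogeneous system of equation 2
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                               # null-space vector: the homogeneous 3D point
    return X[:3] / X[3]

# Made-up example values, for illustration only.
P = np.hstack([np.eye(3), np.zeros((3, 1))])     # ideal pinhole camera at the origin
light_plane = (1.0, 0.0, -0.5, 0.2)              # aX + bY + cZ + d = 0
print(triangulate_structured_light(P, (0.3, -0.1), light_plane))
```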
For a more complete explanation of these principles, please refer to [1].
In figure 2, some examples from our preceding research project with a non-portable structured light 3D scanner are presented. It shows clearly that, to achieve complete 3D scene reconstructions, it will be necessary to integrate more data taken from other points of view.
Abstract – Three-dimensional surface reconstruction using a handheld scanner is a process with great potential for use in different fields of research, commerce and industrial production. In this article we will describe the evolution of a project comprising the study and development of a system that implements the aforementioned process based on two-dimensional images. We will present our current work on the development of a fully portable, handheld system using cameras, projected structured light and attitude and positioning measuring sensors — the Tele-3D scanner.

I. INTRODUCTION

The problem of 3D spatial structure recovery and reconstruction has been the object of discussion in the past two decades for people involved in areas as diverse as the obvious computer, physical, or medical sciences, and the perhaps less obvious, but nevertheless important, subjects of documentary and film production, down to all industries in general [1].
This article presents the study and development of a 3D information recovery and reconstruction system (commonly known as a 3D scanner or 3D digitiser – although the latter is more encompassing and thus more correct, the former is more common and, for that reason, will be the designation used from now on), the Tele-3D handheld 3D scanning system. This project is an extension of our previous work (fully described in [1]), which concerned the development of a traditional, fixed laser 3D scanning system.
We will present, consecutively, the theoretical background for our project, the system's architecture and the implementation issues we are currently addressing.

II. THEORETICAL BACKGROUND

In the following subsections we will introduce the system's mathematical, geometrical and physical background.

A. The Projected Structured Light 3D Scanner

The first step for the recovery and reconstruction of three-dimensional scenes from two-dimensional images is to determine how to extract the required three-dimensional data from the two-dimensional information available in those images.
The geometry involved in a generic structured light 3D scanning system is presented in figure 1 [1].
This work is supported by the Tele-3D Project, CERN/FCT. Corresponding authors: João Filipe Ferreira, Jorge Dias. E-mail: {jfilipe,jorge}@isr.uc.pt, URL: www.isr.uc.pt/~jorge.
`
Figure 3 - Handheld architecture schematics.
`
Gyroscopes and accelerometers are known as inertial sensors since they exploit the property of inertia, i.e. resistance to a change in momentum, to sense angular motion in the case of the gyroscope, and changes in linear motion in the case of the accelerometer. Linear velocity and position, and angular position, can be obtained by integration [3].
At the most basic level, the inertial system simply performs a double integration of sensed acceleration to estimate position. Clearly, time integration of measured accelerations corrupted by measurement errors will undoubtedly result in a drift error in the estimated position that accumulates over time. This means that either one must somehow compensate for that drift (compensation that has proved in practice to be virtually impossible to sustain), or one must rely exclusively on 3D registration to deal with translations; drift effects for rotations, on the other hand, can be effectively compensated [3].
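A minimal numerical sketch of this effect follows (Python/NumPy; the sampling period and the constant accelerometer bias are assumed values, not measured figures): double integration of a biased acceleration produces a position error that grows roughly quadratically with time.

```python
import numpy as np

dt = 0.01                        # sampling period [s] (assumed)
t = np.arange(0.0, 10.0, dt)     # 10 s of standstill
true_accel = np.zeros_like(t)    # the sensor is actually not moving
bias = 0.02                      # constant accelerometer bias [m/s^2] (assumed)

measured = true_accel + bias
velocity = np.cumsum(measured) * dt      # first integration
position = np.cumsum(velocity) * dt      # second integration

# The drift grows roughly as 0.5 * bias * t^2; after 10 s it is already about one metre,
# even though the sensor never moved.
print(f"position error after {t[-1] + dt:.0f} s: {position[-1]:.2f} m")
```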
Another sensor which can be included to reset the accumulated drift in the attitude computation is the 3D magnetic compass, since a good source of absolute orientation for a mobile system is the earth's magnetic field. However, the sensor's accuracy is limited in situations where the earth's magnetic field is distorted [3].
`
B.2 3D registration

Consider an overview of our handheld system — a camera and a laser projector mounted in fixed positions on a rigid structure that will be moved by hand so as to completely scan the 3D scene to be recovered. At any time, as can be inferred from figure 3, the camera's and the laser projector's referentials are fixed relative to each other and can thus be locally calibrated beforehand.
Image frames of the scene will be grabbed in sequence as the scan is performed and a set of 3D profiles will be determined from each intersection of the projected light-plane with the 3D surfaces in the scene. These profiles, one for each sampling time $t = t_k$, represent, in fact, a set of ordered points $\mathbf{x}_k$ sampled from those surfaces. Contrary to what happened with the fixed-position laser scanner of our previous work, the handheld scanner and its local referentials are moved and rotated during the scanning process.
`
Figure 2 - Example of a 3D reconstruction using a laser rangefinder from one point of view [1] – the necessity for more information to complete it is clear.
`
B. 3D Information Integration

In our present task of building a system with the largest possible degree of autonomy, the greatest challenge was to address data integration from several acquired sets of 3D measurements. Due to several performance issues [2], we have decided to implement a system using a hybrid attitude and position estimation procedure, where both information intrinsic to the acquired data and extrinsic information measured from sensors are used to achieve data integration. We will describe the principles behind this implementation in the following subsections.
`
B.1 Attitude and positioning

There are basically three ways of measuring the system's position and attitude:

• the use of contrasting landmarks, strategically placed so as to execute triangulations that allow determination of position and attitude changes;
• the transmitter/receiver approach, where a transmitter is placed somewhere on the scene to be recovered while a receiver, placed on the system, registers relative position and orientation changes using its readings from the transmission;
• the self-contained approach, where the system obtains its position and attitude measurements through a configuration that is independent from the scene to recover.
`
Any transmitter/receiver system using the time-of-flight or phase-based properties of any physical reality, such as electromagnetic or sound waves, can be used to implement the second approach, as can magnetic systems using physical properties such as the Hall effect.
However, to implement self-contained position and attitude measuring systems, it is necessary to resort to more specific sensors.
`
Since no scaling or reflections are involved, each position and attitude change of the system between consecutive sampling instants can be modelled as a rigid-body transformation [4]-[6], a combination of a rotation with a translation expressed by

$$\mathbf{x}_k = R_{k-1,k}\,\mathbf{x}_{k-1} + \tilde{\mathbf{t}}_{k-1,k} \qquad (3)$$

where $\mathbf{x}_k$ corresponds to any sampled point at sampling time $t = t_k$, and $R_{k-1,k}$ and $\tilde{\mathbf{t}}_{k-1,k}$ are, respectively, the rotation matrix and the translation vector that compose the transformation, with coordinates referred to any of the system's local referentials (for example, figure 3's {L} or {C}).
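For illustration, the following sketch (Python/NumPy, with made-up rotation and translation values) applies the rigid-body transformation of equation 3 to a small profile of sampled points and builds the equivalent 4x4 homogeneous matrix used later in the calibration equations.

```python
import numpy as np

def rigid_transform(R, t, points):
    """Apply x_k = R x_{k-1} + t (equation 3) to an (N, 3) array of points."""
    return points @ R.T + t

# Made-up motion between two consecutive sampling instants, for illustration only.
ang = np.deg2rad(5.0)
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0,          0.0,         1.0]])   # small rotation about Z
t = np.array([0.01, 0.0, 0.002])                   # small translation [m]

profile = np.array([[0.10, 0.00, 0.50],
                    [0.10, 0.05, 0.50]])           # two sampled points of a profile
print(rigid_transform(R, t, profile))

# Equivalent homogeneous form, T = [[R, t], [0, 1]], convenient for composing transforms.
T = np.eye(4)
T[:3, :3], T[:3, 3] = R, t
```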
Evidently, for complete integration of the acquired data into a coherent reconstruction of the scene, computation of all such rigid transformations of any chosen local referential of the system is needed so as to perform 3D registration. The three-dimensional registration process has been defined in several ways depending on the implementation (check [7], [4], [8] for examples), but can be formalised in a more generic form as the transformation of sets of three-dimensional measurements into a common coordinate system.
A 3D registration technique can be described mainly by how it addresses 3 crucial problems [4], [8], [6]:

1. Choice of feature space (i.e., type of 3D measurements used: points, curves, planes, voxel images, etc.);
2. Choice of transformation (including rotation representation — by an orthonormal matrix, by quaternions, etc.) and, consequently, of the objective-function to be optimised and its parameter vector;
3. Feature matching (determining correspondences between features is mandatory, since registration will be impossible without them) and global optimisation.
`
A preferred registration method seems to be the Iterative Closest Point algorithm (ICP) and its variations, a technique devised by Besl and McKay and discussed in [8]. It involves a series of line searches in a 7-parameter space spanned by a rotation quaternion and a translation vector that model the corresponding rigid-body transformation, guaranteeing local convergence to a good solution if given an appropriate initial state for global matching [4], [8]. If the feature set is chosen to be a set of 3D points, for example, the original ICP algorithm operates as follows, given two point sets, one of which is contained in the other [7], [8] (a minimal sketch follows the list):
`
Nearest point search: For each point in the first set, determine the closest point in the second set (supposedly a match).
Compute registration: Evaluate the rigid-body transformation that solves the least-squares distance minimisation problem stated in equation 4 [6]:

$$\sum_{i=1}^{N}\left\| R_{i-1,i}\,\mathbf{x}_{i-1} + \tilde{\mathbf{t}}_{i-1,i} - \mathbf{x}_{i} \right\|^{2} \qquad (4)$$

Transform: Apply the transformation to all points in the first set.
Iterate: Repeat the previous steps until convergence.
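The sketch below (Python/NumPy) is a minimal point-to-point ICP loop in the spirit of the steps above; it uses the SVD-based closed-form rigid-body fit instead of the quaternion line searches of the original algorithm [8], and the point sets, motion and iteration count are illustrative assumptions.

```python
import numpy as np

def best_rigid_fit(A, B):
    """Closed-form R, t minimising sum ||R a_i + t - b_i||^2 (SVD / Kabsch method)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iterations=20):
    """Register `source` onto `target` (both (N, 3) arrays of 3D points)."""
    moved = source.copy()
    for _ in range(iterations):
        # Nearest point search (brute force): closest target point for each source point.
        d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
        matches = target[d.argmin(axis=1)]
        # Compute registration and transform.
        R, t = best_rigid_fit(moved, matches)
        moved = moved @ R.T + t
    return moved

# Illustrative use: recover a small, made-up rigid motion between two "profiles".
rng = np.random.default_rng(0)
target = rng.random((60, 3))
ang = np.deg2rad(2.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
source = target @ R_true.T + np.array([0.01, -0.005, 0.002])
print(np.abs(icp(source, target) - target).max())   # residual should be close to zero
```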
`
III. SYSTEM ARCHITECTURE

In figure 3, we present the handheld scanner's architecture. A rigid structure with a right-angle boomerang shape has the camera mounted on one side and the laser projector on the other; a handling grip is placed at the centre of mass of the structure. Coupled with the laser projector is the attitude and positioning device, using inertial and magnetic sensors. Also in figure 3, all referentials directly or indirectly involved in the scanning process are represented: the camera's referential, {C}, the laser projector's referential, {L}, the position and orientation measuring sensor's referential, {R}, and the world's absolute referential, {W}.
`
IV. PRESENT STATE OF IMPLEMENTATION

We will now describe several implementation issues we are currently addressing, namely system calibration and 3D information integration issues.
`
A. Calibration issues

For our handheld 3D scanner project, the first issue we addressed was the optimisation of our calibration procedures. Complete system calibration involves determining the position and orientation of the camera, the laser projector and the position and orientation measuring sensor relative to the chosen world coordinate system [9]. In other words, the calibration procedure encompasses the determination of all coordinate systems that are directly involved in the handheld 3D scanner's operation and, consequently, of all rigid transformations between them (please refer to figure 3).
With this purpose in mind, a robust camera calibration software package was developed using Intel's Open Source Computer Vision Library [10]. The OpenCV calibration method, mainly based on [11], is an iterative algorithm applied to the pinhole model with radial and tangential distortion coefficients. It uses a chessboard pattern to supply 3D points with well-known coordinates and operates as follows [10] (a usage sketch is given after the list):

1. Find the homography for all points on a series of images.
2. Initialise the intrinsic parameters; set the distortion parameters to 0.
3. Find the extrinsic parameters for each image of the calibration pattern.
4. Perform the main optimisation by minimising the reprojection error of the points over all parameters.
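As a usage sketch only, the fragment below runs the same chessboard calibration with the present-day Python bindings of OpenCV (cv2.calibrateCamera), not the C library of [10] that was actually employed in the project; the board dimensions, square size and image directory are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard with 25 mm squares; adjust to the real pattern.
pattern = (9, 6)
square = 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
image_size = None
for name in glob.glob("calib_images/*.png"):        # hypothetical image directory
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Intrinsics, distortion and one extrinsic pose per view, as in steps 1-4 above.
# (In practice, check that enough views were found before calibrating.)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS error:", rms)
```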
`
As this procedure allows determining the rigid transformation between the world's referential and the camera's referential, ${}^{W}T_{C}$, for each point of view involved in camera calibration, it is possible to use that information to determine the sensor's position and orientation through the rigid transformation ${}^{R}T_{C}$, which is in turn determined using

$${}^{W}T_{C} = {}^{W}T_{T} \cdot {}^{T}T_{R} \cdot {}^{R}T_{C} \qquad (5)$$

where $\{T\}$ is the transmitter's referential, if using a transmitter-receiver approach, or the startup referential, if using a self-contained approach. ${}^{T}T_{R}$ is measured directly by the sensor, thus making ${}^{W}T_{T}$ and ${}^{R}T_{C}$ the unknown matrices to determine (totalling 24 unknowns, 12 of which are linearly independent).
`
`
V. DISCUSSION AND CONCLUSIONS

Practical areas of use for 3D scanning systems, as described in this article's introductory section, will push towards a revolution in this field of research, and small, portable, fast, reliable and cost-effective systems will become available as technologies advance. The current challenge is, thus, to optimise solutions to bring out the best of each implementation. This task, however, is anything but trivial. Perhaps each implementation will only work as intended for particular instances of the 3D reconstruction problem, therefore becoming a tailor-made solution for particular types of scanning.
It is thus the authors' conviction that a lot of experimental work in this field is still to come.
`
REFERENCES

[1] João Ferreira and Jorge Dias, "A 3D Scanner - Three-Dimensional Reconstruction From Multiple Images", in Proc. Controlo 2000 Conf. on Automatic Control, University of Minho, Portugal, 2000, pp. 690-695, Student Forum.
[2] P. Hébert and M. Rioux, "Toward a hand-held laser range scanner: Integrating observation-based motion compensation", in Proceedings of IS&T/SPIE's 10th Annual Symposium on Electronic Imaging (Photonics West); Conference 3313: Three-Dimensional Image Capture and Applications, San Jose, CA, USA, January 1998.
[3] Jorge Lobo, Lino Marques, Jorge Dias, Urbano Dias, and Anibal T. de Almeida, "Sensors for Mobile Robot Navigation", pp. 50-81, Number 236 in Autonomous Robotic Systems (Lecture Notes in Control and Information Sciences), Springer-Verlag, 1998, ISBN 1-85233-036-8.
[4] Michel A. Audette, Frank P. Ferrie, and Terry M. Peters, "An Algorithmic Overview of Surface Registration Techniques for Medical Imaging", Medical Image Analysis, vol. 4, no. 3, pp. 201-217, September 2000.
[5] Jacques Feldmar and Nicholas Ayache, "Rigid, Affine and Locally Affine Registration of Free-Form Surfaces", Research Report 2220, Institut National de Recherche en Informatique et en Automatique (INRIA), March 1994, Project Epidaure.
[6] O. D. Faugeras and M. Hebert, "The Representation, Recognition and Locating of 3-D Objects", The International Journal of Robotics Research, vol. 5, no. 3, pp. 27-52, Fall 1986, Massachusetts Institute of Technology.
[7] A. Hilton, "Model Building from 3D Surface Measurements", web page - http://www.ee.surrey.ac.uk/Research/VSSP/3DVision/model building/model.html, December 1997, Tutorial.
[8] Paul J. Besl and Neil D. McKay, "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, February 1992.
[9] R. W. Prager, R. N. Rohling, A. H. Gee, and L. Berman, "Automatic calibration for 3-D free-hand ultrasound", Tech. Rep. CUED/F-INFENG/TR 303, Cambridge University Department of Engineering, September 1997.
[10] Intel Corporation, USA, Open Source Computer Vision Library, 1999-2001, Reference Manual.
[11] Zhengyou Zhang, "A Flexible New Technique for Camera Calibration", Tech. Rep., Microsoft Research, 2000.
`
It is therefore necessary for calibration to obtain at least 2 different equations as stated in equation 5 to determine these unknowns, which is possible by measuring and computing the remaining variables through different positionings of the system relative to a calibration plane containing the aforementioned chessboard pattern.
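The fragment below (Python/NumPy, with made-up poses) merely illustrates the chain of equation 5 using 4x4 homogeneous matrices; it does not implement the actual solution for the calibration unknowns.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous rigid transformation from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Made-up poses, for illustration only.
W_T_T = make_T(rot_z(10.0), [0.50, 0.00, 0.00])   # world <- transmitter/startup frame
T_T_R = make_T(rot_z(-4.0), [0.00, 0.10, 0.00])   # measured directly by the sensor
R_T_C = make_T(rot_z(2.0),  [0.05, 0.00, 0.10])   # sensor <- camera (unknown in practice)

# Equation 5: the camera pose obtained from the chessboard calibration equals the chain.
W_T_C = W_T_T @ T_T_R @ R_T_C
print(W_T_C)
# With W_T_C and T_T_R known for two or more different poses of the system, the
# unknown matrices W_T_T and R_T_C can then be solved for.
```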
Through the knowledge of ${}^{W}T_{C}$ it is also possible to obtain the rigid transformation between the laser's referential and the camera's referential, ${}^{C}T_{L}$, using a procedure which is very similar to the one stated in [9] for the single-wall phantom, with

$$\begin{bmatrix} X_{L} \\ 0 \\ Z_{L} \\ 1 \end{bmatrix} = {}^{C}T_{L}^{-1} \cdot {}^{W}T_{C}^{-1} \cdot \begin{bmatrix} X_{C} \\ Y_{C} \\ 0 \\ 1 \end{bmatrix} \qquad (6)$$

where ${}^{C}T_{L}$ is the unknown matrix, and $X_{C}$, $Y_{C}$ and ${}^{W}T_{C}$ are the known variables/matrix. Bear in mind that these matrices are known to be invertible, since they represent rigid transformations: these are always invertible in their homogeneous, linearised form.
The laser plane representation as stated in equation 1 is determined easily, since the laser's referential has been chosen so that the laser plane coincides with its XZ plane; that being so, the plane's coefficients referred to the camera's referential are readily computed through their rigid transformation with ${}^{C}T_{L}$, knowing that its coefficients referred to the laser's referential are, in fact, $a = c = d = 0$ and $b = 1$.
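A small sketch of this last step follows (Python/NumPy; the camera-laser transformation is a made-up value standing in for the result of equation 6). Plane coefficients transform with the inverse transpose of the point transformation, so the light-plane (0, 1, 0, 0) of the laser referential maps to the camera referential as shown.

```python
import numpy as np

# Made-up camera <- laser rigid transformation (in practice obtained via equation 6).
ang = np.deg2rad(30.0)
C_T_L = np.eye(4)
C_T_L[:3, :3] = np.array([[1.0, 0.0, 0.0],
                          [0.0, np.cos(ang), -np.sin(ang)],
                          [0.0, np.sin(ang),  np.cos(ang)]])
C_T_L[:3, 3] = [0.20, 0.00, 0.00]     # baseline between laser and camera [m] (assumed)

# In the laser referential the light-plane is Y_L = 0, i.e. (a, b, c, d) = (0, 1, 0, 0).
plane_L = np.array([0.0, 1.0, 0.0, 0.0])

# Planes transform with the inverse transpose of the point transformation.
plane_C = np.linalg.inv(C_T_L).T @ plane_L
print(plane_C)    # coefficients of equation 1 expressed in the camera referential
```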
`
B. Other issues

Several studies concerning the system's set-up, the registration methods and the positioning devices are being made. Some important issues have already become clear from these studies:

1. For the registration methods to work, profiles must overlap, i.e., the same region of the 3D scene must be scanned at least twice; since profiles are finite, they must also cross inside the scene's boundaries (see [2]) — this is due to the matching-features constraint, which implies that, given two feature sets, one must be a subset of the other.
2. In order to simplify the model for registration and to adapt it for use with the positioning and attitude measurement devices, the rigid-body transformation expression can be reformulated in a manner which decouples the computation of the translation from that of the rotation, by referring the coordinates to the respective centroids of each point set (see [4]). The translation from set A to set B of matching points would then be given by (a numerical sketch follows the list):

$${}^{B}\tilde{\mathbf{t}}_{A} = \frac{1}{N}\sum_{j=1}^{N} {}^{B}\mathbf{x}_{j} \;-\; {}^{B}R_{A}\,\frac{1}{N}\sum_{j=1}^{N} {}^{A}\mathbf{x}_{j} \qquad (7)$$
`
3. The 3D scenes must be quasi-static for the system to work properly, making the scanning of organic subjects, such as human bodies, difficult to achieve.
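The sketch below (Python/NumPy, with made-up matched point sets and rotation) evaluates the centroid-decoupled translation of equation 7 and recovers the translation used to generate the data.

```python
import numpy as np

def translation_from_centroids(B_R_A, pts_A, pts_B):
    """Equation 7: translation between matched point sets once the rotation is known."""
    return pts_B.mean(axis=0) - B_R_A @ pts_A.mean(axis=0)

# Made-up matched point sets: B is a rotated and shifted copy of A.
rng = np.random.default_rng(1)
pts_A = rng.random((30, 3))
ang = np.deg2rad(7.0)
B_R_A = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                  [np.sin(ang),  np.cos(ang), 0.0],
                  [0.0,          0.0,         1.0]])
t_true = np.array([0.10, -0.05, 0.02])
pts_B = pts_A @ B_R_A.T + t_true

print(translation_from_centroids(B_R_A, pts_A, pts_B))   # recovers t_true
```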
`