`A Rapid Coordinate Transformation Method Applied
`in Industrial Robot Calibration Based on
`Characteristic Line Coincidence
`
`Bailing Liu, Fumin Zhang *, Xinghua Qu and Xiaojia Shi
`
`State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072,
`China; liubailing@tju.edu.cn (B.L.); quxinghua@tju.edu.cn (X.Q.); shixiaojia@tju.edu.cn (X.S.)
`* Correspondence: zhangfumin@tju.edu.cn; Tel.: +86-22-2740-8299; Fax: +86-22-2740-4778
`
`Academic Editors: Xin Zhao and Yajing Shen
`Received: 22 December 2015; Accepted: 14 February 2016; Published: 18 February 2016
`
`Abstract: Coordinate transformation plays an indispensable role in industrial measurements,
`including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied
`methods of coordinate transformation are generally based on solving the equations of point clouds.
Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices.
`In this paper, a novel coordinate transformation method is proposed, not based on the equation
`solution but based on the geometric transformation. We construct characteristic lines to represent the
coordinate systems. According to the space geometry relation, the characteristic lines are made to
`coincide by a series of rotations and translations. The transformation matrix can be obtained using
`matrix transformation theory. Experiments are designed to compare the proposed method with other
`methods. The results show that the proposed method has the same high accuracy, but the operation
`is more convenient and flexible. A multi-sensor combined measurement system is also presented
`to improve the position accuracy of a robot with the calibration of the robot kinematic parameters.
Experimental verification shows that the position accuracy of the robot manipulator is improved by
`45.8% with the proposed method and robot calibration.
`
`Keywords: coordinate transformation; robot calibration; photogrammetric system; multi-sensor
`measurement system
`
`1. Introduction
`
`Multi-sensor measurement systems usually have different coordinate systems. The original data
`must be transformed to a common coordinate system for the convenience of the subsequent data
`acquisition, comparison and fusion [1,2]. The transformation of coordinate systems is applied in many
`fields, especially vision measurement and robotics. For example, two images need to have a unified
`coordinate system for image matching [3]. In camera calibration, the coordinate systems of the image
`plane and the object plane need to be unified for the inner parameter calculation [4]. In robot systems,
`the coordinate system of the robot twist needs to be transformed to the tool center position (TCP)
`to obtain the correct pose of robot manipulators [5,6]. A minor error introduced by an imprecise
`coordinate transformation could cause problems such as the failure of image matching and track
breaking [1]. Especially in an error-accumulating system such as a serial industrial robot, the coordinate
`transformation error would accumulate in each step and thereby decrease the position accuracy of the
`robot manipulator. Therefore, research on coordinate transformation has been of interest to researchers
`in recent years.
`
`Sensors 2016, 16, 239; doi:10.3390/s16020239
`
`
`
`
`
`Industrial robots are well-known to have weak position accuracy compared with their repeatability
`accuracy. The positioning accuracy degrades with the number of axes of the robotic arm due to error
`accumulation. Various methods have been presented to improve the position accuracy of robots,
`such as establishing a kinematic model of the robot and calibrating the kinematic parameters [7].
`Denavit and Hartenberg [8] first proposed the D-H model, which was revised to a linear model by
Hayati [9]. It provides the basis for kinematic calibration. Due to the geometric and non-geometric
`errors of the robot, the traditional robot self-calibration method based on the D-H model cannot
`accurately describe the robot pose. To avoid the influence of the robot body, many researchers have
`utilized external measuring instruments to calibrate the robot online [10,11]. To achieve the aim of
calibration, the primary step is to unify the coordinate systems of the calibration instrument and
`the robot. Only in this way is it possible to use the measurement results to correct the kinematic
`parameters of the robot. With an inaccurate coordinate transformation method, the transformation
`error might merge into the revised kinematics parameters, thereby failing to improve the positioning
`accuracy of the robot through the calibration of kinematic parameters. Therefore, an accurate method
`of coordinate transformation is indispensable in the field of robot calibration. The well-developed
`and widely-used methods of coordinate transformation at present might be classified into several
categories: the Three-Point method, Small-Angle Approximation method, Rodrigues Matrix method,
`Singular Value Decomposition (SVD) method, Quaternion method and Least Squares method [2].
`The Three-Point method uses three non-collinear points in space to construct an intermediate reference
`coordinate system [12,13]. The transformation relationship between the initial coordinate system and
`the target coordinate system is obtained by their relationship relative to the intermediate reference
`coordinate system. Depending on the choice of the public points, the accuracy of the Three-Point
`method might be unstable. The Small-Angle Approximation method means that the rotation matrix
can be simplified by using the approximate relationship of the trigonometric functions (sin θ ≈ θ, cos θ ≈ 1)
when the angle between the two coordinate systems is small (less than 5°). It is more suitable for the
coordinate transformation of small angles. The Rodrigues matrix is a method of constructing a rotation
`matrix by using the anti-symmetric matrix [14,15]. Despite its high accuracy and good stability, the
`algorithm might be complex and difficult. The Singular Value Decomposition method (SVD) is a
`matrix decomposition method that can solve the minimization of the objective function based on
`the minimum square error sum [16]. The method is accurate and easy to implement, but it might be
`difficult to work out the rotation matrix under a dense public point cloud. The Quaternion method
`uses four element vectors (q0, q1, q2, q3) to describe the coordinate rotation matrix [17,18]. The aim of
`the algorithm is to solve for the maximum eigenvalue and the corresponding feature vector when
`the quadratic is minimized. It is a simple and precise method, but there might be no solution due
to the use of ill-conditioned matrices. In practice, complex calculations and unstable results would
`make the application more difficult and complicated. Therefore, researchers are searching for a simpler
`and more stable method of coordinate transformation. For example, Zhang et al. proposed a practical
`method of coordinate transformation in robot calibration [19]. This method rotates three single axes of
`the robot to calculate the normal vectors in three directions, combined with the data of the calibration
sensor. Then, combined with the robot's own readings, the rotation matrix and translation matrix
`are obtained. The method avoids the need to solve an equation and complex calculations, but it might
`be affected by any manufacturing errors of the robot and requires a calibration sensor with a large
`measuring range that can cover the full working range of the robot.
`
`2. Online Calibration System of Robot
`
`Industrial robots have the characteristics of high repeatability positioning accuracy and low
`absolute positioning accuracy. This is due to the structure of the robot, manufacturing errors, kinematic
`parameter error and environmental influence [10]. To improve the absolute positioning accuracy of the
robot, the use of an external sensor to measure the position of the robot manipulator is an effective
`approach. This paper proposes an on-line calibration system for the kinematic parameters of the robot
`
`
`
`
`using a laser tracker and a close-range photogrammetric system, as Figure 1 shows. According to the
`differential equations constructed by the kinematic parameters of each robot axis, the final mathematic
`model of kinematic parameters of the robot is established. The position errors of the robot manipulator
`are obtained by comparing the coordinates in the robot base coordinate system and the measurement
`sensor system. Then, the errors, including the coordinate transformation error, target installation
`error and position and angle errors of the robot kinematic parameters, are separately corrected. In the
`robot calibration, on the one hand, the coordinate transformation error directly affects the final error
`correction of the kinematic parameters. On the other hand, the coordinate systems of sensors are
`often required to transform in the on-line combined measurement system. Therefore, the premise of
`obtaining the position errors of a robot manipulator is to unify the coordinate systems of the various
`measurement sensors by an accurate, fast and stable coordinate transformation algorithm.
`
`Figure 1. Online calibration system of robot kinematic parameters.
`
`In combination with the characteristics of the robot, we propose a practical coordinate
`transformation method. It extracts the characteristic lines from the point clouds in different coordinate
`systems. According to the theory of space analytic geometry, the rotation and translation parameters
needed for the coincidence of the characteristic lines can be calculated. Then, the coordinate
`transformation matrix is calculated. The coincidence of the characteristic lines represents the
`coincidence of the point clouds as well as the coincidence of the two coordinate systems.
`This method has some advantages. First, it does not require the solution of equations and
`complex calculations. Second, because the transformation matrix is obtained from the space geometry
`relationships, it would not be affected by robot errors or other environmental factors. The result is
`accurate and stable. Third, it does not require a sensor with a large field of view. Fourth, the algorithm
`is small and fast without occupying processor time and resources, and can be integrated into the host
`computer program. It could be applied easily in measurement coordinate systems that often need
`to change.
`
`3. Methods of Online Calibration System
`
`3.1. Method of Coordinate Transformation
`
Suppose that S is a cubic point cloud in space. Point cloud M is the form of S located in the
coordinate system of the sensor O_S X_S Y_S Z_S. N is the form of S located in the robot base coordinate
system O_r X_r Y_r Z_r. M' represents the point cloud M transformed from the coordinate system of the
sensor O_S X_S Y_S Z_S to the robot base coordinate system O_r X_r Y_r Z_r with the transformation matrix T_S^r.
The difference between N and M' is the transformation error caused by the transfer matrix T_S^r. Then,
the coincidence of the coordinate systems O_S X_S Y_S Z_S and O_r X_r Y_r Z_r can be expressed as the coincidence
of the two point clouds N and M'. To simplify this mathematical model of the transformation
process, we establish several characteristic lines instead of each point cloud. As verified by experiment,
at least two characteristic lines are required to ensure the transformation accuracy.
`In Figure 2, two points A1 and A2 are chosen to be linked to the characteristic line A. Points B1
`and B2 form characteristic line B. Similarly, in point cloud N, the corresponding points A1'and A2'
`form line A', and points B1' and B2' form line B'. To achieve the coincidence of lines A and A', line A
must be rotated around an axis in space. The rotation axis is the vector C, which is perpendicular to the
`plane constructed by lines A and A'. As Figure 3 shows, the process of a vector rotating around an
`arbitrary axis can be divided into a series of rotations around the axis X, Y, Z. The following are the
`decomposition steps.
`
`Figure 2. The schematic diagram of coordinate transformation method.
`
`Figure 3. Schematic diagram of a vector rotated around an arbitrary axis.
`
Take the first coincidence of Lines A and A' as an example:

(a) Translate the rotation axis to the coordinate origin. The corresponding transformation matrix can be calculated as:

T(x_1, y_1, z_1) = \begin{bmatrix} 1 & 0 & 0 & -a_0 \\ 0 & 1 & 0 & -b_0 \\ 0 & 0 & 1 & -c_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (1)

where (a_0, b_0, c_0) are the coordinates of the center point of line A.

(b) Rotate the axis α_1 degrees to Plane XOZ:

R_x(\alpha_1) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha_1 & -\sin\alpha_1 & 0 \\ 0 & \sin\alpha_1 & \cos\alpha_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (2)

where α_1 is the angle between the axis and plane XOZ. It can be obtained by \cos\alpha_1 = \frac{c_1}{\sqrt{b_1^2 + c_1^2}}, \sin\alpha_1 = \frac{b_1}{\sqrt{b_1^2 + c_1^2}}, where (a_1, b_1, c_1) are the coordinates of vector C, as Figure 3b shows.

(c) Rotate the axis β_1 degrees to coincide with Axis Z:

R_y(-\beta_1) = \begin{bmatrix} \cos\beta_1 & 0 & \sin(-\beta_1) & 0 \\ 0 & 1 & 0 & 0 \\ -\sin(-\beta_1) & 0 & \cos\beta_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (3)

where β_1 is the angle between the rotation axis and Axis Z. It can be obtained by

\cos(-\beta_1) = \cos\beta_1 = \frac{\sqrt{b_1^2 + c_1^2}}{\sqrt{a_1^2 + b_1^2 + c_1^2}}, \quad \sin(-\beta_1) = -\sin\beta_1 = -\frac{a_1}{\sqrt{a_1^2 + b_1^2 + c_1^2}}

(d) Rotate the axis θ_1 degrees around Axis Z, as shown in Figure 3d:

R_z(\theta_1) = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (4)

where θ_1 is the angle between lines A and A', which can be obtained by \theta_1 = \langle \vec{A}, \vec{A'} \rangle = \arccos\left(\frac{\vec{A} \cdot \vec{A'}}{|\vec{A}||\vec{A'}|}\right).

(e) Rotate the axis back by reversing the process of Step (c):

R_y(\beta_1) = \begin{bmatrix} \cos\beta_1 & 0 & \sin\beta_1 & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta_1 & 0 & \cos\beta_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (5)

where β_1 is the same as in step (c).

(f) Rotate the axis back by reversing the process of Step (b):

R_x(-\alpha_1) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha_1 & \sin\alpha_1 & 0 \\ 0 & -\sin\alpha_1 & \cos\alpha_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (6)

where α_1 is the same as in step (b).

(g) Translate back by reversing the process of Step (a):

T(-x_1, -y_1, -z_1) = \begin{bmatrix} 1 & 0 & 0 & a_0 \\ 0 & 1 & 0 & b_0 \\ 0 & 0 & 1 & c_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (7)

where (a_0, b_0, c_0) are the same as in step (a).

Combining all of the previous steps, the final transformation matrix T_rt1 of the first parallel alignment (lines A and A') is expressed as:

T_{rt1} = T(-x_1, -y_1, -z_1) \cdot R_x(-\alpha_1) \cdot R_y(\beta_1) \cdot R_z(\theta_1) \cdot R_y(-\beta_1) \cdot R_x(\alpha_1) \cdot T(x_1, y_1, z_1)    (8)

Through the rotation matrix T_rt1 calculated by Equation (8), the points P_i(x, y, z) in point cloud M can generate a new point cloud M_1 by Equation (9):

P_i'(x, y, z) = T_{rt} \cdot P_i(x, y, z)    (9)

Then, the characteristic line A of the new point cloud M_1 is parallel with the characteristic line A' of point cloud N, as Figure 4a shows.
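The seven-step composition above lends itself directly to code. The following sketch is purely illustrative (it is not the authors' implementation; it assumes NumPy and 4×4 homogeneous matrices) and builds the elementary matrices of steps (a)–(g), composing them into T_rt1 for a rotation axis, a pivot point (the center of line A) and an angle θ_1.

```python
import numpy as np

def translate(p):
    """Homogeneous translation by vector p (steps (a) and (g))."""
    T = np.eye(4)
    T[:3, 3] = p
    return T

def rot_x(c, s):
    """Rotation about X from a given cosine and sine (Equations (2) and (6))."""
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]], dtype=float)

def rot_y(c, s):
    """Rotation about Y from a given cosine and sine (Equations (3) and (5))."""
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]], dtype=float)

def rot_z(theta):
    """Rotation about Z by theta (Equation (4))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]], dtype=float)

def rotation_about_axis(axis, pivot, theta):
    """Equation (8): rotate by theta about 'axis' passing through 'pivot'."""
    a1, b1, c1 = axis
    pivot = np.asarray(pivot, dtype=float)
    d = np.hypot(b1, c1)                      # sqrt(b1^2 + c1^2)
    n = np.linalg.norm(axis)
    # Step (b): cos(alpha1) = c1/d, sin(alpha1) = b1/d (skip if the axis already lies on X).
    ca, sa = (c1 / d, b1 / d) if d > 1e-12 else (1.0, 0.0)
    # Step (c): cos(beta1) = d/n, sin(beta1) = a1/n.
    cb, sb = d / n, a1 / n
    return (translate(pivot) @ rot_x(ca, -sa) @ rot_y(cb, sb) @ rot_z(theta)
            @ rot_y(cb, -sb) @ rot_x(ca, sa) @ translate(-pivot))

def align_direction(u, v, pivot):
    """Rotation that makes direction u parallel with direction v, rotating about
    C = u x v through 'pivot' (the center point of characteristic line A)."""
    C = np.cross(u, v)
    if np.linalg.norm(C) < 1e-12:             # already parallel
        return np.eye(4)
    theta = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return rotation_about_axis(C, pivot, theta)
```

Here `rotation_about_axis` plays the role of T_rt1 in Equation (8) when called with C = A × A', the center point of line A and the angle θ_1.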
`
`Figure 4. The processes of the coordinate transformation method.
`
Based on the new point cloud M_1 and point cloud N, the rotation matrix T_rt2, which makes Line
B of point cloud M_1 parallel with Line B' of point cloud N, can be calculated through Equations (1)–(8):

T_{rt2} = T(-x_2, -y_2, -z_2) \cdot R_x(-\alpha_2) \cdot R_y(\beta_2) \cdot R_z(\theta_2) \cdot R_y(-\beta_2) \cdot R_x(\alpha_2) \cdot T(x_2, y_2, z_2)    (10)

Through the rotation matrix T_rt2, the points P_i(x, y, z) in point cloud M_1 can generate a new point
cloud M_2, again by Equation (9). Then, the characteristic line B of the new point cloud M_2 is parallel
with the characteristic line B' of point cloud N, as Figure 4b shows.
Since the point clouds are cubic, the characteristic lines are the diagonal lines, so B ⊥ A and B' ⊥ A'.
Since B // B', it follows that B ⊥ A' and B' ⊥ A. Therefore, the parallel lines B and B' are perpendicular to lines A and
A'. There is an angle θ between Line A of point cloud M_2 and Line A' of point cloud N, so Line
B of point cloud M_2 is chosen as the rotation axis. The angle between Line A of point cloud M_2 and
Line A' of point cloud N is chosen as the rotation angle. The point cloud M_2 is rotated by the above
parameters. Then, Line A of point cloud M_2 and Line A' of point cloud N are parallel, like Line B
of point cloud M_2 and Line B' of point cloud N. Similarly, the rotation matrix T_rt3 can be calculated by
Equations (1)–(8):

T_{rt3} = T(-x_3, -y_3, -z_3) \cdot R_x(-\alpha_3) \cdot R_y(\beta_3) \cdot R_z(\theta_3) \cdot R_y(-\beta_3) \cdot R_x(\alpha_3) \cdot T(x_3, y_3, z_3)    (11)

The points P_i(x, y, z) in point cloud M_2 can generate a new point cloud M_3 by Equation (9), which
is parallel with the point cloud N, as Figure 4c shows. In order to make the point cloud
M_3 and point cloud N coincide, the translation matrix T_r needs to be calculated from the two center points of Lines
A and A'. The new point cloud M' can be generated after translation by T_r. Therefore, through a series
of simple rotations and translations, the two point clouds N and M' are coincident, as Figure 4d shows.
The final transformation matrix is shown as Equation (12). The result, as a necessary preparation step,
can then be used in robot calibration:

T_{rt} = T_{rt3} \cdot T_{rt2} \cdot T_{rt1} + T_r    (12)
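Chaining the three rotations and the final translation of Equations (8)–(12) then looks roughly as follows. This is a sketch under the same assumptions as the previous block (it reuses `translate`, `rotation_about_axis` and `align_direction` from there); the index pairs `iA` and `iB` that select the diagonal endpoints are hypothetical and depend on how the cubic point cloud is stored.

```python
import numpy as np

def apply_h(T, P):
    """Apply a 4x4 homogeneous transform to an (n, 3) array of points."""
    Ph = np.hstack([P, np.ones((len(P), 1))])
    return (T @ Ph.T).T[:, :3]

def characteristic_line_transform(M, N, iA, iB):
    """Sketch of Section 3.1: align point cloud M (sensor frame) with the corresponding
    point cloud N (robot base frame) using two diagonal characteristic lines."""
    def line(P, idx):
        p1, p2 = P[idx[0]], P[idx[1]]
        return p2 - p1, 0.5 * (p1 + p2)          # direction and center point

    # (1) Make A parallel with A'  (T_rt1, Equation (8)).
    a, ca = line(M, iA)
    a2, _ = line(N, iA)
    T1 = align_direction(a, a2, ca)
    M1 = apply_h(T1, M)

    # (2) Make B parallel with B'  (T_rt2, Equation (10)).
    b, cb = line(M1, iB)
    b2, _ = line(N, iB)
    T2 = align_direction(b, b2, cb)
    M2 = apply_h(T2, M1)

    # (3) Rotate about line B (now parallel with B') by the residual signed angle
    #     between A and A'  (T_rt3, Equation (11)).
    a, _ = line(M2, iA)
    b, cb = line(M2, iB)
    a2, _ = line(N, iA)
    bhat = b / np.linalg.norm(b)
    theta = np.arctan2(np.dot(bhat, np.cross(a, a2)), np.dot(a, a2))
    T3 = rotation_about_axis(b, cb, theta)
    M3 = apply_h(T3, M2)

    # (4) Translate the center of line A onto the center of line A'  (T_r).
    _, ca = line(M3, iA)
    _, ca2 = line(N, iA)
    Tr = translate(ca2 - ca)

    # Composing the pure translation on the left only adds to the last column,
    # which is what Equation (12) expresses.
    return Tr @ T3 @ T2 @ T1
```

A call such as `characteristic_line_transform(M_sensor, N_robot, (0, 7), (1, 6))` (indices purely illustrative for a corner-ordered cube) would return a 4×4 matrix playing the role of T_rt in Equation (12).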
`
`3.2. Method of Robot Calibration
`
The actual kinematic parameters of the robot deviate from their nominal values, which is referred
to as kinematic errors [10]. The kinematic parameter calibration of a robot is an effective way to improve
the absolute position accuracy of the robot manipulator. A simple robot self-calibration method based
on the D-H model is described as follows. Reference [20] gives a more detailed description.

Assume that

B_p = \begin{bmatrix} r_{1p} & r_{2p} & r_{3p} & p_{xp} \\ r_{4p} & r_{5p} & r_{6p} & p_{yp} \\ r_{7p} & r_{8p} & r_{9p} & p_{zp} \\ 0 & 0 & 0 & 1 \end{bmatrix}

is the pose of a certain point in the coordinate system of the photogrammetric system, where r_{1p} ∼ r_{9p} are the
attitude parameters and p_{xp} ∼ p_{zp} are the position parameters. Through transformation from the coordinate
system of the measurement sensor O_p X_p Y_p Z_p to the robot base coordinate system O_o X_o Y_o Z_o, the point pose

B_o = \begin{bmatrix} r_{1o} & r_{2o} & r_{3o} & p_{xo} \\ r_{4o} & r_{5o} & r_{6o} & p_{yo} \\ r_{7o} & r_{8o} & r_{9o} & p_{zo} \\ 0 & 0 & 0 & 1 \end{bmatrix}

can be obtained with the transformation matrix T_rt from Equation (12):

B_o = T_{rt} \times B_p    (13)

where T_rt is the transformation matrix, which can be obtained by the method described in Section 3.1.

Given the six-DOF robot in the lab, the transformation matrix from the robot tool coordinate
system to the robot base coordinate system is expressed as:

T_0^N = T_0^1 \, T_1^2 \cdots T_{n-1}^{n} \cdots T_{N-1}^{N} \quad (N = 6)    (14)

In this system, the cooperation target of the measurement sensor, which is set up at the end axis
of the robot, should be considered as an additional axis, Axis 7. Then, the transformation matrix from
Axis 6 to Axis 7 is:

T_6^7 = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}    (15)
`
`
`
`
where t_x, t_y, t_z are the translation vectors, which can be measured previously. Therefore, according to
the kinematic model of the robot, the typical coordinates of the robot manipulator in the robot base
coordinate system O_O X_O Y_O Z_O are expressed as:
`
B_o = \left( \prod_{i=1}^{7} T_{i-1}^{i} \right) \cdot B_t    (16)
`where Bt is the point pose in the robot tool coordinate system, and Bo is the point pose from the robot
`tool coordinate system to the robot base coordinate system.
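Read as matrix products, Equations (14)–(16) can be sketched as below. This is only an illustration: the D-H convention shown is the standard one, `dh_params` would hold the robot's six parameter sets (not given here), and the function names are made up for this example.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Link transform T_{i-1}^{i} from the D-H parameters (theta_i, d_i, a_i, alpha_i)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def tool_offset(tx, ty, tz):
    """Equation (15): fixed translation from Axis 6 to the target, treated as Axis 7."""
    T = np.eye(4)
    T[:3, 3] = (tx, ty, tz)
    return T

def target_pose_in_base(dh_params, tool_t, B_t=np.eye(4)):
    """Equation (16): pose of the target in the robot base frame.
    dh_params: six tuples (theta_i, d_i, a_i, alpha_i); tool_t: (tx, ty, tz)."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:   # Equation (14): T_0^6 = T_0^1 T_1^2 ... T_5^6
        T = T @ dh_transform(theta, d, a, alpha)
    return T @ tool_offset(*tool_t) @ B_t
```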
In the robot calibration, the kinematic parameters, which usually means the link parameters of the robot,
are the most significant impact factors. In the D-H model, the link parameters include the
length of the link a, the link angle α, the joint displacement d and the rotation angle of the joint θ. With
the disturbances of the four link parameters, the position error matrix for adjacent robot axes dT_{i-1}^{i}
can be expressed as:

dT_{i-1}^{i} = \frac{\partial T_{i-1}^{i}}{\partial \theta_i}\Delta\theta_i + \frac{\partial T_{i-1}^{i}}{\partial \alpha_i}\Delta\alpha_i + \frac{\partial T_{i-1}^{i}}{\partial a_i}\Delta a_i + \frac{\partial T_{i-1}^{i}}{\partial d_i}\Delta d_i    (17)

where Δθ_i, Δα_i, Δa_i and Δd_i are the small errors of the link parameters. Suppose that A_{qi} = (T_{i-1}^{i})^{-1} \cdot \frac{\partial T_{i-1}^{i}}{\partial q_i},
where q represents the link parameters (a, d, α, θ).

If every two adjacent axes are influenced by the link parameters, the transformation matrix from
the robot base coordinate system to the coordinate system of the robot manipulator can be expressed as:

T_0^N + dT_0^N = \prod_{i=1}^{N} \left( T_{i-1}^{i} + dT_{i-1}^{i} \right) = \prod_{i=1}^{N} \left( T_{i-1}^{i} + T_{i-1}^{i}\Delta_i \right) \quad (N = 6)    (18)

where T_0^N is the typical transformation matrix from the robot base coordinate system to the coordinate
system of the robot manipulator and dT_0^N is the error matrix caused by the link parameters. Through
expanding dT_0^N and performing a large number of simplifications and combinations, Equation (18)
can be simplified as:

dT_0^N = T_0^1 A_{\theta 1} T_1^N \Delta\theta_1 + T_0^1 A_{\alpha 1} T_1^N \Delta\alpha_1 + T_0^1 A_{a1} T_1^N \Delta a_1 + T_0^1 A_{d1} T_1^N \Delta d_1
       + T_0^2 A_{\theta 2} T_2^N \Delta\theta_2 + T_0^2 A_{\alpha 2} T_2^N \Delta\alpha_2 + T_0^2 A_{a2} T_2^N \Delta a_2 + T_0^2 A_{d2} T_2^N \Delta d_2
       + \cdots + T_0^N A_{\theta N} \Delta\theta_N + T_0^N A_{\alpha N} \Delta\alpha_N + T_0^N A_{aN} \Delta a_N + T_0^N A_{dN} \Delta d_N    (19)

Suppose that k_{iq} = T_0^i A_{qi} T_i^N, where q represents the four link parameters. The position error of
the robot manipulator can be simplified as given in Equation (20):

\Delta p = \begin{bmatrix} dt_x & dt_y & dt_z \end{bmatrix}^T
= \begin{bmatrix} k^x_{1\theta} & k^x_{1\alpha} & k^x_{1a} & k^x_{1d} & k^x_{2\theta} & \cdots & k^x_{6\theta} & \cdots & k^x_{t_x} & k^x_{t_y} & k^x_{t_z} \\
                  k^y_{1\theta} & k^y_{1\alpha} & k^y_{1a} & k^y_{1d} & k^y_{2\theta} & \cdots & k^y_{6\theta} & \cdots & k^y_{t_x} & k^y_{t_y} & k^y_{t_z} \\
                  k^z_{1\theta} & k^z_{1\alpha} & k^z_{1a} & k^z_{1d} & k^z_{2\theta} & \cdots & k^z_{6\theta} & \cdots & k^z_{t_x} & k^z_{t_y} & k^z_{t_z} \end{bmatrix}
\cdot \begin{bmatrix} \Delta\theta_1 & \Delta\alpha_1 & \Delta a_1 & \Delta d_1 & \Delta\theta_2 & \cdots & \Delta d_6 & \Delta t_x & \Delta t_y & \Delta t_z \end{bmatrix}^T
= B_i \Delta q_i    (20)

where Δp is the position error of the robot manipulator, dt_x, dt_y, dt_z are the Cartesian coordinate
components of the position error, and B_i is the coefficient matrix in Equation (20), i.e., the parameter matrix related to the
typical position value of the robot manipulator. In this paper, because the DOF of the series robot
is 6, Δq_i = [Δθ_1 ∼ Δt_z] includes 24 kinematic parameters of the robot a_1 ∼ a_6, d_1 ∼ d_6, α_1 ∼ α_6,
θ_1 ∼ θ_6 and three translation error variables of T_6^7. Therefore, there are 27 parameters of the robot
that need to be calibrated. In Equation (20), the left side of the equation is the position error at each
point, as measured by the measurement sensor, and the right side contains the kinematic errors that need
to be corrected. These errors can be revised by the least squares method in the generalized inverse
matrix sense.
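As a sketch of that identification step (not the authors' code): instead of the symbolic expansion of Equation (19), the coefficient matrix B can be approximated by finite differences of the forward kinematics, and the 27 corrections Δq solved in the generalized-inverse (least squares) sense. `predict_positions` below is a hypothetical function that stacks the predicted manipulator positions for all measured poses given a candidate parameter vector q.

```python
import numpy as np

def error_jacobian(q, predict_positions, eps=1e-6):
    """Finite-difference stand-in for B_i in Equation (20).
    q: 27-vector of kinematic parameters; predict_positions(q): stacked (3m,) positions."""
    p0 = predict_positions(q)
    B = np.zeros((p0.size, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = eps
        B[:, j] = (predict_positions(q + dq) - p0) / eps
    return B

def calibrate(q0, predict_positions, measured, iterations=5):
    """Iteratively solve delta_p = B * delta_q by least squares and update q."""
    q = np.array(q0, dtype=float)
    for _ in range(iterations):
        delta_p = measured - predict_positions(q)          # position errors from the sensors
        B = error_jacobian(q, predict_positions)
        delta_q, *_ = np.linalg.lstsq(B, delta_p, rcond=None)
        q = q + delta_q                                    # corrected kinematic parameters
    return q
```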
`
`4. Experiments and Analysis
`
`Through the designed experiments, we show how to use the proposed coordinate transformation
`method to achieve the coordinate transformation of the on-line robot calibration system. Using
`verification experiments, we determine the result of the robot calibration using the proposed method.
`For evaluating the performance of the proposed method, it is compared with four other common
`methods of coordinate transformation under the same experimental conditions.
`
4.1. Coordinate Transformation in an On-line Robot Calibration System
`
`The on-line robot calibration system we constructed includes an industrial robot, a photographic
`system and a laser tracker as shown in Figure 4. The model of the robot in lab is the KR 5 arc from
`KUKA Co. Ltd. (Augsburg, Germany), one of the world's top robotic companies. Its arm length is
1.4 m and the working envelope is 8.4 m³. For covering most of the robot working range, the close
range photogrammetric system in the lab, TENYOUN 3DMoCap-GC130 (Beijing, China), requires a
field of view of more than 1 m × 1 m × 1 m without any dead angle. To achieve the goal of on-line
`measurement, a multi-camera system is needed. We used a multi-camera system symmetrically formed
`by four CMOS cameras with fixed focal lengths of 6 mm. The laser tracker in the lab, FARO Xi from
FARO Co., Ltd. (Lake Mary, FL, USA), is a well-known high accuracy instrument whose absolute
distance measurement (ADM) accuracy is 10 µm ± 1.1 µm/m. The laser beam can easily be lost in tracking
because of barriers or the acceleration of the target, which would cause measurement errors. Therefore, we
combine the laser tracker with the photographic system to improve the measurement accuracy and
stability and thereby make full use of the advantages of the high accuracy of the laser tracker and the
flexible line-of-sight of the photographic system. After proper data fusion, the two types of data from
`the photographic system and the laser tracker can be gathered together. The method of data fusion
`and the verified experimental result are detailed in reference [21]. In the experiment, 80 points in the
public field of the robot and the photogrammetric system are picked to build a cube of 200 mm ×
200 mm × 200 mm. The reason for building a cube is to facilitate the selection of characteristic lines
`and the calculation of coincidence parameters. The two targets of the photogrammetric system and
`laser tracker are installed together with the end axis of the robot by a multi-faced fixture. To obtain
`accurate and stable data, the robot stops for 7 s at each location, and the sensors measure each point
`20 times, providing an adequate measurement time for the photographic system and laser tracker.
The experimental parameters of the photogrammetric system are an exposure time of 15 µs, a frequency
`of 10 fps and a gain of 40, based on experience.
`According to Equations (1)–(7) and the experimental data, we can obtain the parameters of
the transformation matrix shown in Table 1, where a_i–θ_i are the parameters for the coincidence of
`characteristic lines in Equations (1)–(7).
`
`
`
`
Table 1. Calculated Results of Coincidence Parameters (Units: mm, °).

                     Robot to Laser Tracker                                    Robot to Photogrammetric System
         a_i        b_i        c_i       α_i        β_i       θ_i            a_i       b_i       c_i        α_i       β_i       θ_i
i = 1    -2588.9    81326      -62472    127.53°    1.4461°   256.282°       -3.0135   11.132    -5.8921    117.89°   13.456°   0.007°
i = 2    -30491     -39586     1397.2    87.979°    37.588°   208.161°       -6.143    0.26899   -5.9267    177.4°    45.997°   0.004°
i = 3    210.37     -155.16    194.69    38.555°    40.198°   176.953°       200.03    159.99    -200.07    141.35°   37.984°   0.005°
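As a plausibility check on the reconstruction of Table 1 (illustrative only), the angles α_i and β_i follow from the listed axis components (a_i, b_i, c_i) through the relations given under Equations (2) and (3); arccos is used here, so the signs of the original convention are discarded.

```python
import numpy as np

def axis_angles_deg(a, b, c):
    """Angles alpha and beta of Equations (2)-(3) from the rotation-axis components."""
    d = np.hypot(b, c)                        # sqrt(b^2 + c^2)
    n = np.sqrt(a * a + b * b + c * c)
    alpha = np.degrees(np.arccos(c / d))      # cos(alpha) = c / sqrt(b^2 + c^2)
    beta = np.degrees(np.arccos(d / n))       # cos(beta)  = sqrt(b^2 + c^2) / |C|
    return alpha, beta

# First row of Table 1 (robot to laser tracker): expected about (127.5, 1.45).
print(axis_angles_deg(-2588.9, 81326.0, -62472.0))
```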
`
According to Equations (8)–(10), the transformation matrices from the robot base coordinate
system to the coordinate systems of the sensors are calculated as:

T_{rtrp} = \begin{bmatrix} 0.99917951 & 0.03352107 & -0.02272970 & 1003.54380 \\ 0.03370790 & -0.99940061 & 0.00788692 & 167.88234 \\ -0.02245170 & -0.00864662 & -0.99971054 & 984.54935 \\ 0 & 0 & 0 & 1 \end{bmatrix}

T_{rtrl} = \begin{bmatrix} 0.999178 & 0.033635 & -0.02262 & -832.501 \\ 0.033819 & -0.9994 & 0.007803 & 131.4773 \\ -0.02235 & -0.00856 & -0.99971 & 1004.76 \\ 0 & 0 & 0 & 1 \end{bmatrix}

where T_rtrp is the transformation matrix from the robot base coordinate system to the coordinate
system of the photogrammetric system, and T_rtrl is the transformation matrix from the robot base
coordinate system to the coordinate system of the laser tracker.
By means of the above transformation matrices, we can obtain the point cloud coordinates
transformed from the coordinate system of the robot to that of the sensors by Equation (12). Both
the original coordinates before and after transformation as well as the transformation error are shown
in Table 2, where P_x, P_y, P_z and R_x, R_y, R_z are the three components of the original coordinates in the two
different coordinate systems, T_x, T_y, T_z are the coordinates of points transformed from the robot base
coordinate system to the sensor coordinate system, and Δx, Δy, Δz are the three components of the
transformation error.
`
Table 2. Coordinates of Point Clouds after Transformation (Units: mm).

Robot to Photogrammetric System

  Robot (Rx, Ry, Rz)     Photogrammetric system (Px, Py, Pz)   Transformation result (Tx, Ty, Tz)   Error (Δx, Δy, Δz)
  895    30    875       42.728     138.567    109.566         42.975     138.590    109.751        -0.247   -0.023   -0.185
  961    30    875       108.751    140.724    108.418         108.921    140.822    108.277        -0.170   -0.098    0.141
  1028   30    875       175.846    143.007    107.153         175.866    143.088    106.780        -0.020   -0.081    0.373
  1095   30    875       242.882    145.388    105.845         242.812    145.355    105.282         0.070    0.033    0.563
  (the remaining 76 points are ignored)

Robot to Laser Tracker

  Robot (Rx, Ry, Rz)     Laser tracker (Lx, Ly, Lz)            Transformation result (Tx, Ty, Tz)   Error (Δx, Δy, Δz)
  895    30    875       1048.620   29.944     875.077         1048.624   29.967     875.024        -0.004   -0.023    0.053
  961    30    875       1114.679   29.985     874.995         1114.625   29.956     875.012         0.054    0.029   -0.017
  1028   30    875       1181.646   29.955     874.989         1181.624   29.944     875.001         0.022    0.011   -0.012
  1095   30    875       1248.689   29.935     874.791         1248.624   29.932     874.890         0.065    0.003   -0.099
  (the remaining 76 points are ignored)
`
It can be calculated from Table 2 that the average values of the transformation error between
the coordinate systems of the robot and photogrammetric system are Δx = 0.106 mm, Δy = -0.062 mm
and Δz = 0.013 mm. The average values of the transformation error between the coordinate systems of
the robot and laser tracker are Δx = -0.015 mm, Δy = 0.041 mm and Δz = 0.023 mm. Figures 5 and 6
show that the transformation error of the photogrammetric system is approximately 10 times greater
than that of the laser tracker. As presented earlier, the nominal measurement accuracy of the
photogrammetric system is 10^-2 mm and that of the laser tracker is 10^-3 mm. The results illustrate
`
`
`
`
`that the transformation accuracy has the same order of magnitude as that of the measurement sensor.
`This indicates that the transformation error is so small that it would not influence the accuracy of the
`sensors. The transformation method can also make the error distribution of the low precision sensor
`more uniform to improve the transformation accuracy and the accuracy of the robot calibration.
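For completeness, the transformed coordinates and per-point errors in Table 2, and the averages quoted above, amount to the following small computation (array names are placeholders, not the authors' code):

```python
import numpy as np

def transform_points(T, robot_points):
    """Map (n, 3) robot-base coordinates into a sensor frame with a 4x4 matrix T."""
    P = np.hstack([robot_points, np.ones((len(robot_points), 1))])
    return (T @ P.T).T[:, :3]                      # the (Tx, Ty, Tz) columns of Table 2

def transformation_errors(T, robot_points, sensor_points):
    """Per-point errors (dx, dy, dz) and their mean values, as reported under Table 2."""
    errors = sensor_points - transform_points(T, robot_points)
    return errors, errors.mean(axis=0)
```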
`
`Figure 5. Transformation error from robot to photogrammetric system.
`
`Figure 6. Transformation error from robot to laser tracker.
`
`4.2. Position Error of Robot after Coordinate Transformation and Calibration
`
`Experiments are designed to calibrate the kinematic parameters of the robot. The measurement
`system is shown in Figure 7. Sixty points in space are used for the calibration. After the coordinate
`transformation, the position errors between the sensor and the robot manipulator are obtained.
`A constraint method based on the minimum distance error is adopted to calibrate the robot kinematic
parameters [20,21]. Twenty-seven kinematic parameters, including 24 link parameters and three
parameters of the fixture, are corrected. Then, 60 corrected positions are calculated using the calibrated
`robot kinematic parameters. To evaluate the performance of the proposed method, we use a
`group of calibration results, which adopts a different coordinate transformation method and the
`same robot calibration algorithm, as a comparison. The position errors of the robot after the
`coordinate transformation and calibration are shown in Figure 8. δ1 is the position error after the
`coordinate transformation with the other method [19], and δ2 is the position error after the coordinate
`transformation with the proposed method (Characteristic Line method).
`
`
`
`
`
`
`
`
`Figure 7. Measurement system.
`
`Figure 8. Position errors after robot calibration.
`
In position measurement, the distance between two points is often used to evaluate the position
accuracy, which is called the root mean square (RMS) error, expressed by:

RMS_{di} = \sqrt{(x_i' - x_i)^2 + (y_i' - y_i)^2 + (z_i' - z_i)^2}    (21)

It can be seen from Figure 8 that the average RMS using the other coordinate transformation
method is δ1 = 0.436 mm, while the average RMS using the proposed method is δ2 = 0.200 mm.
The position accuracy is improved by 45.8% using the Characteristic Line method.
To evaluate the accuracy distribution of the robot for different areas of the working range, a new
set of testing data is utilized in a demonstration experiment. The coordinates of the center of the robot
calibration region O are (750, 0, 1000) in the robot base coordinate system. Taking O as the center of the
circle and 200 mm as the radius, the positioning accuracy of the robot would be the highest in this region.
To verify the distribution of the robot accuracy in the non-calibration regions, five positions O1 (1000,
150, 550), O2 (780, 710, 900), O3 (780, 870, 410), O4 (600, -800, 860), O5 (940, -960, 450) are chosen.
`
`
`
`
`
`
`
`
`Taking the five positions as the centers of the circle, 200 mm as the radius, in each region 60 points
are chosen to calibrate the robot in the same way as in the previous calibration experiment. The position errors in
`different regions are shown in Table 3.
`
Table 3. The RMS of position error calibrated in the different regions.

Region               O       O1      O2      O3      O4      O5
Position error/mm    0.200   0.330   0.360   0.271   0.335   0.319
`
It is indicated from Table 3 that the average RMS of the robot position error within the calibration
region is 0.200 mm, while the position error outside the calibration region is about 0.323 mm. This proves
that the calibration accuracy is not consistent over the whole working range of the robot. Therefore, this
calibration method is more applicable to a smaller working range of the robot.
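The RMS figures in Figure 8 and Table 3 are Equation (21) evaluated per point and averaged over the points of each region; a minimal sketch (names illustrative):

```python
import numpy as np

def rms_position_errors(predicted, measured):
    """Equation (21): Euclidean distance between predicted and measured positions, per point."""
    return np.linalg.norm(predicted - measured, axis=1)

# Average RMS over a region's test points, e.g. the 60 calibration poses:
# delta = rms_position_errors(calibrated_positions, sensor_positions).mean()
```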
`
`4.3. Accuracy of Coordinate Transformation Method
`
`To obtain the accuracy of the proposed coordinate transformation method, an experiment is
`designed using the laser tracker. The laser tracker is placed at two different stations to m
