International Conference on Robotics and Biomimetics
December 19-23, 2009, Guilin, China
`
`Construction of Vision-based Manipulation System for 3D
`Industrial Objects
`
`Guangtao Zhao, Ying Jia and Zhicai Ou
`
Abstract— Many applications call for robots that can accurately grasp and assemble many kinds of industrial parts. Using robots to perform these tasks improves human safety as well as productivity. In this paper, an industrial robot system aimed at assembling industrial parts with high precision is presented. The paper focuses on the design of the robot control system and on the precise extraction of the position and orientation of the industrial parts. Finally, experiments are presented to show the efficiency of the system, in which sets of automotive parts, such as axletrees, bearings, pistons and pins, are grasped from the conveyor and assembled together precisely.
`
I. INTRODUCTION
Vision-based industrial robotic assembly systems are becoming increasingly widespread, especially for object grasping and assembly. On an industrial assembly line, stability, effective manipulation and operation speed are very important for completing the whole task, which demands high performance from the industrial assembly robot system. A typical industrial robot system should include the following functions: object identification, grasping, path planning and assembly. With these aims, an industrial robot system with high precision is presented; this paper focuses on the design of the control system and on the extraction of 3D position information for the target object.
In present industrial applications of robotic assembly systems, work on visual control can be classified into three main categories based on the control structure and the definition of the error signal. The first is the 3D visual control structure [1], in which the error signal is computed in three-dimensional Cartesian coordinates; the features extracted from the image are used to estimate the pose of the target with respect to the camera. The second category is the 2D visual control structure [2]-[4]. In this structure, the control error function is defined in the image space. The control input is defined neither in task-space coordinates nor in joint coordinates, since a closed-loop scheme operates directly in the image plane. For this method, it is necessary to compute the image Jacobian matrix, which establishes the relationship between the motion of the object and the changes of the corresponding image features. Different from the above two techniques, the third approach, called the 2-1/2-D
`
Guangtao Zhao is with the Institute of Automation, Chinese Academy of Sciences.
Ying Jia is with the College of Science, Minzu University of China.
Zhicai Ou is with the Institute of Automation, Chinese Academy of Sciences.
`
visual control structure [5], defines the control error function partly in the image space and partly in the Cartesian space.
Compared with the control structures mentioned above, a simplified image-based visual control system is presented in this paper. Under the assumption of a fixed height between the camera and the object, this visual control system provides fast computation that meets the real-time operation requirement. Moreover, the shift of the object's position in the image frame is proportional to its shift in the world frame.
On the other hand, the grasping manipulation is one of the crucial steps for the assembly system. Considering the subsequent high-precision assembly operation, the industrial parts need to be grasped with a proper position and orientation, which requires accurate localization information for the target object. Therefore, it is necessary to determine the position and orientation of the parts to be grasped precisely. Regarding this, much work has been presented. Kefalea [6] presented an approach to localize objects by combining the evidence provided by different visual cues, edges and colors. A method was presented by Speth et al. [7], who used images of the target from different points of view to recover critical 3D information such as size, location and pose. Besides using visual information, Haidacher and Hirzinger [8] proposed an approach for object localization utilizing contact measurements. Despite these efforts, there are still problems and challenges in the reliability and accuracy of part localization.
In this paper, a robotic system for assembling different kinds of industrial parts is proposed. The target of this system is to grasp the industrial parts and then put them together with high precision (0.1 mm). Considering the constraints and requirements of the actual assembly line, a simplified and efficient visual control system and a 3D grasping model are adopted to achieve fast, stable and efficient grasping and assembly.
This paper is organized as follows. Section II gives a brief introduction to the robotic assembly system. In Section III, a simplified and efficient visual control system is analyzed, and in Section IV the efficient vision-based grasping model is presented. Section V describes the other important components of the system. Experimental results are shown in Section VI, and conclusions are drawn in the last section.
`
`II. SYSTEM FUNCTIONAL DESCRIPTION
`The practical target of the system is to make the system
`
`978-1-4244-4775-6/09/$25.00 © 2009 IEEE.
`1051
`Authorized licensed use limited to: Sterne Kessler Goldstein Fox. Downloaded on December 28,2023 at 00:59:35 UTC from IEEE Xplore. Restrictions apply.
`
`
`
`
able to search for, identify, grasp and assemble the different industrial parts by itself. The robotic system consists of a six-d.o.f. FANUC robot arm endowed with a camera-in-hand, a two-fingered hand, a conveyor and some purpose-designed fixtures. The hardware framework of the system is depicted in Fig. 1.
`
Fig. 1. The hardware framework of the assembly system. (a) The robot, the workpieces placed on the conveyor, and the camera. (b) The fixtures that assist the assembly operation.

In Fig. 2, the whole software framework of the robotic assembly system is described. The manipulation process includes the following steps (a skeleton of the loop is sketched after Fig. 2): (1) search above the conveyor for the industrial parts and classify them; (2) obtain the position and orientation of the target parts; (3) choose the related grasping strategy according to the pose information of the object; (4) control the robot to grasp the target parts with the proper strategy; (5) perform the corresponding assembly task for the different parts. In this paper, we focus on the construction of the system and the localization of the 3D target parts.

Fig. 2. The framework of the software in the system.
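To make the five manipulation steps concrete, the following Python skeleton shows one way the loop could be organized. The helper callables are hypothetical placeholders for the modules described in Sections III-V, not the authors' actual interfaces.

def manipulation_cycle(capture, classify, locate, choose_strategy, grasp, assemble):
    # One pass of the five-step manipulation process of Fig. 2.
    image = capture()                       # acquire an image above the conveyor
    part = classify(image)                  # step (1): search and classify
    if part is None:
        return False                        # nothing found; scan again
    pose = locate(image, part)              # step (2): position and orientation
    strategy = choose_strategy(part, pose)  # step (3): pick a grasping strategy
    grasp(pose, strategy)                   # step (4): grasp the target part
    assemble(part)                          # step (5): perform the assembly task
    return True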
`
III. IMAGE MOMENTS AS VISUAL FEATURES FOR THE VISION CONTROL STRUCTURE
In industrial assembly, the key performance requirements are high speed and robustness, which tend to make the robot system more complicated. In an industrial setting, however, there are several properties that help keep the construction of the robot system simple and efficient: (1) there are often only a few kinds of industrial parts to be identified; (2) the accurate geometric parameters of the industrial parts are usually known; (3) the industrial parts often have distinctive features (holes and corners).
Image moments have been widely used for pattern recognition in the past. In our assembly system, image moments are used because they give a generic, geometric representation of the target part whether its shape is simple or complex. The key point in using image moments is to avoid the image singularities that occur when redundant image features are utilized.
The analytical form of the image Jacobian matrix for image moments was determined in [9]. Here, visual features based on image moments are selected to manipulate the piston and peg, and the bearing and axletree.
A. Image Moments and Their Interaction Matrix

For a binary or segmented image, let R(t) be the observed part image at time t. Then the image moments of the target part are defined by

m_{ij}(t) = \iint_{R(t)} x^i y^j I(x, y) \, dx \, dy. \qquad (5)

If the centroid of the object is (x_g, y_g), with x_g = m_{10}/m_{00} and y_g = m_{01}/m_{00}, the centered moments with respect to the centroid of the target object are

\mu_{ij}(t) = \iint_{R(t)} (x - x_g)^i (y - y_g)^j \, dx \, dy. \qquad (6)

Here we take into account the relationship between the time variation of the moments and the relative kinematic screw V = (v, \omega), where v = [v_x, v_y, v_z] and \omega = [\omega_x, \omega_y, \omega_z] represent the translational and the rotational velocity, respectively [10]:

\dot{m}_{ij} = L_{m_{ij}} V, \qquad \dot{\mu}_{ij} = L_{\mu_{ij}} V, \qquad (7)

where L_{m_{ij}} is the interaction matrix related to the original moments and L_{\mu_{ij}} the interaction matrix with respect to the centered moments. A small numerical sketch of Eqs. (5) and (6) follows below.
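The raw and centered moments of Eqs. (5) and (6) can be computed directly on a binary image. The following Python sketch (our illustration using NumPy, not the authors' code) makes the discrete form of the definitions concrete.

import numpy as np

def raw_moment(img, i, j):
    # m_ij(t) = sum over R(t) of x^i y^j I(x, y), the discrete form of Eq. (5);
    # img is a binary (0/1) segmented image, x indexes columns, y indexes rows.
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum((x ** i) * (y ** j) * img))

def centered_moment(img, i, j):
    # mu_ij(t) of Eq. (6): the same sum taken about the centroid (x_g, y_g).
    m00 = raw_moment(img, 0, 0)
    xg = raw_moment(img, 1, 0) / m00
    yg = raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum(((x - xg) ** i) * ((y - yg) ** j) * img))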
`
`1052
`Authorized licensed use limited to: Sterne Kessler Goldstein Fox. Downloaded on December 28,2023 at 00:59:35 UTC from IEEE Xplore. Restrictions apply.
`
`
`
IV. VISION-BASED OBJECT MODELING METHOD
Position and orientation information for the industrial parts is often needed to decide how to grasp and operate them. In addition, the precision of the obtained information has a great impact on the grasping speed and accuracy of the robot system. Therefore, obtaining this information efficiently and quickly is one of the key issues in the system.
Because of the central perspective projection, it is not accurate to compute the orientation of the target parts directly from the image points. In our system, the camera lens is always parallel to the conveyor, so points in the conveyor plane and their corresponding image points satisfy a simple-ratio invariance; consequently, an arbitrary angle formed by points in the conveyor plane equals the corresponding angle formed by their image points. In addition, the accurate geometric parameters of the industrial parts are known and the camera lens is calibrated ahead of time, so the distance between the image plane and the conveyor is also known. Let \Omega denote the set of all 3-D vertices of the industrial parts, S_p the image plane containing the corresponding image points, and S_c the plane of the conveyor. The following procedure is adopted (a small sketch of step 1 follows the list):

1. Classify the point set \Omega into {x_i} and {y_j} such that

\bigcup_i \{x_i\} \subset S_c \quad \text{and} \quad \bigcup_j \{y_j\} \not\subset S_c. \qquad (1)

Here, the positions of the image points of {x_i} do not need to be changed.

2. For every point in {y_j}, apply the following 3D grasping model to rectify its position.
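Step 1 separates the vertices that lie in the conveyor plane from those above it. A minimal sketch, assuming the world frame is chosen so that the conveyor plane S_c is z = 0 (an illustrative convention, not stated in the paper):

import numpy as np

def split_vertices(omega, tol=1e-3):
    # Partition the vertex set Omega into {x_i} (in S_c, z ~ 0) and {y_j}
    # (off the plane), per Eq. (1); omega is an (N, 3) array of 3-D vertices.
    omega = np.asarray(omega, dtype=float)
    in_plane = np.abs(omega[:, 2]) < tol
    return omega[in_plane], omega[~in_plane]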
Based on the above procedure, an example is analyzed. Two vertices A_w and B_w of a part are placed as in Fig. 3, and the robot's task is to grasp the part at the point A_w in the vertical direction. As depicted in Fig. 5, the points A_c, T_c, B_c and O_c lie in the plane S_c, and the points A_p, T_p, B_p and O_p lie in the plane S_p. Additionally, O_p is the principal point of the image, O_L is the focus, and O_c is the intersection of the optical axis with the plane S_c.
`
Fig. 3. The camera is not directly above the workpiece but should obtain the position and orientation of the workpiece at a glance. (Labels: image plane S_P with points A_P, B_P, T_P, principal point O_P, error angle \beta, lens centre O_L, axes x_P, y_P; world frame W_X, W_Y, W_Z with the target object, the vertices A_W, B_W, the point W_T and the grasping direction; the assembly robot.)

B. Controller Design
`
The manipulation of the industrial parts is achieved by the use of a three-degree-of-freedom stage with two translational motions and one rotational motion. To provide visual information, three moments-based visual features are selected: x_g and y_g, the centre of gravity of the industrial part in the image, and the object orientation \alpha. These three visual features are given as follows:

x_g = m_{10}/m_{00}, \quad y_g = m_{01}/m_{00}, \quad \alpha = \frac{1}{2} \arctan\!\left( \frac{2\mu_{11}}{\mu_{20} - \mu_{02}} \right). \qquad (8)
Through the development of the image moments, we obtain the following relationship between the velocity of the stage motion and the feature changes observed in image coordinates:
`
\begin{bmatrix} \dot{x}_g \\ \dot{y}_g \\ \dot{\alpha} \end{bmatrix} =
\begin{bmatrix} f/z_g & 0 & -y_g \\ 0 & f/z_g & x_g \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}, \qquad (9)
where z_g is the depth of the gravity centre. Since the interaction matrix is upper triangular, it exhibits partial decoupling, and the image-singularity problem is avoided because the Jacobian matrix is full rank at all times.
In addition, owing to image-processing and model-computation errors, open-loop visual control is not accurate and robust enough to meet the industrial requirement. Thus visual servoing, a closed-loop control scheme, is adopted in our system (shown in Fig. 4); a sketch of the feature extraction and control law follows below.
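As an illustration of Eqs. (8) and (9), the sketch below extracts the three features with OpenCV's moment routine and forms a proportional closed-loop command by inverting the upper-triangular interaction matrix. It is our sketch, not the authors' controller; the signs follow the reconstruction of Eq. (9) and may need adjusting to a particular camera convention.

import numpy as np
import cv2

def visual_features(binary_img):
    # Eq. (8): centre of gravity (x_g, y_g) and orientation alpha from moments.
    m = cv2.moments(binary_img, binaryImage=True)
    xg = m["m10"] / m["m00"]
    yg = m["m01"] / m["m00"]
    alpha = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return np.array([xg, yg, alpha])

def velocity_command(s, s_des, f, z_g, gain=0.5):
    # Proportional visual-servo law: solve L v = gain * (s_des - s), with L
    # the upper-triangular interaction matrix of Eq. (9) (full rank, so no
    # image singularity); v = [v_x, v_y, omega_z] drives the 3-d.o.f. stage.
    xg, yg, _ = s
    L = np.array([[f / z_g, 0.0,     -yg],
                  [0.0,     f / z_g,  xg],
                  [0.0,     0.0,      1.0]])
    return np.linalg.solve(L, gain * (s_des - s))

Because L is triangular and always invertible, the command is well defined for every feature value, which is exactly the decoupling property noted above.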
Fig. 4. Visual control configuration. The desired feature is compared with the actual feature extracted from the captured image; the resulting translational error (\Delta x, \Delta y) and rotational error \Delta\theta drive the visual controller, which adjusts the work-piece pose.
`
Fig. 5. Illustration of obtaining the orientation of the industrial parts.
`
`1053
`Authorized licensed use limited to: Sterne Kessler Goldstein Fox. Downloaded on December 28,2023 at 00:59:35 UTC from IEEE Xplore. Restrictions apply.
`
`
`
`
`
Based on Fig. 5, it is obvious that the image of the line segment A_wB_w is A_pB_p. In general, we would take the orientation of A_pB_p as the orientation of A_wB_w. In the assembly system, however, the correct grasping orientation coincides with the orientation of T_pB_p, and the error between the two is the angle \beta. The point T_p is not known, so the key to the problem is to accurately find the point T_p in the image.

From the geometry we find that

\frac{A_pO_p}{A_cO_c} = \frac{O_pT_p}{O_cT_c} = r_i, \qquad (2)

where r_i is related to the distance from the camera lens to the conveyor and can be computed by camera calibration. Thus, the grasping direction can be computed as

\alpha = \arctan\!\left( \frac{y_{T_p} - y_{B_p}}{x_{T_p} - x_{B_p}} \right). \qquad (3)

A small sketch of this computation follows at the end of this subsection.

Why does the robot system produce the error angle \beta? The analysis reveals the following reason: because of the central perspective projection, the image of A_wB_w is not T_pB_p but A_pB_p. If the camera focal length is f and the distance between the image plane and the conveyor plane is d, the error angle is

\beta = \arccos(\,\cdot\,), \qquad (4)

whose argument is determined by the distances A_wT_c, B_wT_c and O_cT_c together with d.

Based on equation (4), the influencing factors are:
- the vertical distance A_wT_c between the point A_w and the plane S_c;
- the distance between the points T_c and O_c;
- the distance between the lens and the conveyor.

The analysis shows that the distances A_wT_c and O_cT_c are proportional to the error angle, while the distance between the lens and the conveyor acts in the opposite direction. However, the image of the part to be grasped becomes smaller as this distance grows longer, so a proper distance is chosen to meet our requirement.
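For step 2 of the grasping model, Eq. (2) implies that the foot point T_p can be obtained by shrinking the vector O_pA_p toward the principal point. The sketch below encodes this reading, with h the part height A_wT_c and d the lens-conveyor distance; the scale factor (d - h)/d is our derivation under the paper's parallel-plane assumption, not a formula quoted from the text.

import numpy as np

def rectify_point(a_p, o_p, h, d):
    # Image of the vertical projection T_c of the elevated vertex A_w:
    # T_p = O_p + (A_p - O_p) * (d - h) / d, since the apparent ground point
    # of A_w is pushed outward along O_cT_c by the parallax factor d / (d - h).
    a_p = np.asarray(a_p, dtype=float)
    o_p = np.asarray(o_p, dtype=float)
    return o_p + (a_p - o_p) * (d - h) / d

def grasp_angle(t_p, b_p):
    # Eq. (3): alpha = arctan((y_Tp - y_Bp) / (x_Tp - x_Bp)).
    return np.arctan2(t_p[1] - b_p[1], t_p[0] - b_p[0])

Consistent with the analysis above, the correction (and hence the error angle \beta) grows with the part height and with the offset of the part from the optical axis, and shrinks as d increases.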
`
V. THE OTHER IMPORTANT COMPONENTS OF THE SYSTEM

Beyond the overall system configuration, the assembly system contains some further important components. They are the following three.

A. Hand-Eye Calibration
To accurately control the displacements of the robot, the relative position and orientation (the transformation matrix) between the camera and the robot hand must be computed before the assembly system is applied. This problem is known as the hand-eye calibration. Based on [11], the hand-eye calibration can be resolved by moving the robot several times and observing the corresponding motions of the camera and the robot at the same time; a generic sketch follows below.
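A generic two-step solution of AX = XB in the spirit of [11] and [14] first aligns the rotation axes of the paired motions and then recovers the translation by linear least squares. The sketch below (NumPy/SciPy; our illustration, not the authors' implementation) needs at least two motion pairs with distinct rotation axes, exactly the condition stated in Section VI.

import numpy as np
from scipy.spatial.transform import Rotation

def solve_ax_xb(motions):
    # motions: list of (A, B) pairs of 4x4 homogeneous transforms with
    # A_i X = X B_i. Since R_A = R_X R_B R_X^T, the rotation axis of each
    # A_i is the axis of B_i rotated by R_X; align_vectors recovers R_X.
    a_axes = [Rotation.from_matrix(A[:3, :3]).as_rotvec() for A, _ in motions]
    b_axes = [Rotation.from_matrix(B[:3, :3]).as_rotvec() for _, B in motions]
    R_x, _ = Rotation.align_vectors(np.array(a_axes), np.array(b_axes))
    R_x = R_x.as_matrix()
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all pairs.
    M = np.vstack([A[:3, :3] - np.eye(3) for A, _ in motions])
    rhs = np.concatenate([R_x @ B[:3, 3] - A[:3, 3] for A, B in motions])
    t_x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_x, t_x
    return X

For production use, OpenCV's cv2.calibrateHandEye offers ready-made solvers for the same AX = XB problem.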
B. Searching for and Matching Objects with the Hausdorff Distance
Before the robot grasps the target parts, the important task is to recognize them quickly and reliably. In our robotic assembly system, a matching method based on the Hausdorff distance is applied; this method has been widely used in recent years and is quite tolerant of the small position errors that occur with edge detectors and other feature-extraction methods. Given two finite point sets A = {a_1, ..., a_p} and B = {b_1, ..., b_q}, the Hausdorff distance is defined as

H(A, B) = \max\{ h(A, B), h(B, A) \}, \qquad (10)

where h(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|.

In our assembly robotic system, the modified Hausdorff distance [12] is used as the matching model to recognize the different industrial parts. It is defined as

h(A, B) = \frac{1}{N_A} \sum_{a \in A} \min_{b \in B} \|a - b\|, \qquad (11)

where N_A is the number of points in the set A. Both distances are sketched in code below.
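Both Eq. (10) and the averaged variant of Eq. (11) reduce to nearest-neighbour queries between the two point sets. A compact sketch using a k-d tree (our illustration, assuming Euclidean point sets such as extracted edge pixels):

import numpy as np
from scipy.spatial import cKDTree

def directed_hausdorff(A, B):
    # h(A, B) = max_{a in A} min_{b in B} ||a - b||   (Eq. 10)
    return cKDTree(B).query(A)[0].max()

def hausdorff(A, B):
    # H(A, B) = max(h(A, B), h(B, A))                 (Eq. 10)
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def modified_directed_hausdorff(A, B):
    # (1/N_A) * sum_{a in A} min_{b in B} ||a - b||   (Eq. 11), as in [12]
    return cKDTree(B).query(A)[0].mean()

During the conveyor scan, the template whose modified distance to the extracted edge points is smallest identifies the part class.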
`
C. The Assembly of the Industrial Parts
In the system, two kinds of devices are designed to accomplish the parts-assembly task. For the assembly of the axletree and the bearing, it is enough to apply only the mechanical fixtures. However, it is difficult to assemble the piston and the peg because of the interference fit. In this case, the principle of the attractive region is used to meet the high-precision requirement of 0.1 mm.
`
VI. EXPERIMENTAL RESULTS
Many assembly experiments on automotive parts were carried out with our robotic system. The sets of automotive parts include the bearing-axletree set and the piston-peg set. In our system, a Nikon color CCD camera is mounted on the robot end-effector. The acquired image is 704 x 576 pixels and contains considerable distortion and blur. According to the specific task, the whole manipulation is divided into two stages: the off-line stage and the on-line stage.
- At the off-line stage, the following important steps should be completed: (1) the hand-eye calibration, which finds the transformation matrix between the robot gripper and the camera in order to accurately control the hand; (2) specifying the base positions of the automotive parts to be grasped and establishing the templates of the target parts.
- At the on-line stage, the motion of the robot is
`)
`(4)
`Based on the equation (5), the influencing factors are:
`w cA T between the point
`z The vertical distance
`CS .
`and the plane
`cT
`cO
`.
` and
`z The distance of the point
`z The distance between the lens and the conveyor.
`w cA T and
`the
`Through
`the analysis,
`the distance
`cO Tc
`distance
` are proportional to the error angel. However,
`
`the distance between the lens and the conveyor is the
`contrary to the above two points. But, the image of the part
`to be grasped becomes small when the distance become
`longer. Therefore, a proper distance is chosen to meet our
`requirement.
`
`wA
`
`V. THE OTHER IMPORTANT COMPOSITIONS OF THE SYSTEM
`In the assembly system, there are still some important
`compositions except for the whole system configuration.
`They are the following three sections.
`A. Hand-Eye Calibration
`To accurately control the displacements of the robot, the
`relative position and orientation (transformation matrix)
`between the camera and the robot hand must be computed
`before the application of the assembly system. This problem
`
`1054
`Authorized licensed use limited to: Sterne Kessler Goldstein Fox. Downloaded on December 28,2023 at 00:59:35 UTC from IEEE Xplore. Restrictions apply.
`
`
`
controlled by the closed-loop visual system. The manipulation by which the robotic system grasps the target parts includes the following four steps: (1) search for the parts to be operated; (2) recognize the different kinds of parts; (3) obtain the position and orientation information of the parts; (4) operate the parts based on the vision feedback and pre-defined strategies.
In this section, the bearing-axletree and piston-peg assembly experiments are taken as examples to illustrate the whole assembly system and to validate the presented vision-based modeling method.
A. Off-line Preparations before the Experiment
Considering the actual requirement, the CCD camera is mounted about 300 mm above the conveyor. We take base images of each of the automotive parts: bearings, axletrees, pistons and pegs.
Using the method proposed in [13], we obtain the camera distortion parameters k_1 = 1.280739 x 10^{-7} and k_2 = 2.366564 x 10^{-12}; a small sketch of the corresponding two-term radial correction follows below.
To finish the hand-eye calibration, we move the robot twice and observe the corresponding changes in the positions of the camera and of the robot end-effector. To obtain a unique solution, the two robot end-effector movements must have different axes of rotation, and their angles of rotation must not be 0 or \pi. Let A_1 and B_1 be the first motions of the camera and the robot end-effector, and let A_2 and B_2 be the second motions. The numerical values of A_1, B_1, A_2, B_2 are:
`
A_1 = \begin{bmatrix} 0.0271 & 0.9653 & 0.2598 & 48.4978 \\ 0.0341 & 0.2606 & 0.9648 & 1.1852 \\ 0.9991 & 0.0172 & 0.0400 & 7.8113 \\ 0 & 0 & 0 & 1 \end{bmatrix}

B_1 = \begin{bmatrix} 0.9659 & 0.2588 & 0.0000 & 0 \\ 0.2588 & 0.9659 & 0.0000 & 0 \\ 0.0000 & 0.0000 & 1.0000 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

A_2 = \begin{bmatrix} 0.9993 & 0.0139 & 0.0333 & 20.8673 \\ 0.0187 & 0.9889 & 0.1477 & 13.2461 \\ 0.0308 & 0.1482 & 0.9885 & 23.4041 \\ 0 & 0 & 0 & 1 \end{bmatrix}

B_2 = \begin{bmatrix} 0.9848 & 0.0000 & 0.1736 & 20 \\ 0.0000 & 1.0000 & 0.0000 & 20 \\ 0.1736 & 0.0000 & 0.9848 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Thus, with the two equations A_1X = XB_1 and A_2X = XB_2 [14], the hand-eye calibration result is:

X = \begin{bmatrix} 0.2167 & 0.9717 & 0.1405 & 19.3037 \\ 0.9736 & 0.2105 & 0.0840 & 179.3834 \\ 0.0520 & 0.1071 & 0.9865 & 11.6820 \\ 0 & 0 & 0 & 1 \end{bmatrix}

B. On-line Operation of the Robot System
Based on the assembly procedure shown in Fig. 2, two kinds of assembly experiments are taken as examples to validate the efficiency of the whole assembly system. The details follow.
1) Searching for and Classifying Industrial Parts: As mentioned in Section V, the parts-recognition method based on the modified Hausdorff distance is applied in the searching process. While the robot moves the camera to scan the conveyor for target parts, the system performs template matching to classify the parts.
2) Obtaining the Orientation by the Vision-based Model: Based on the new 3D grasping model proposed in this paper, the orientation of the axletree is obtained. In Fig. 6, two axletrees are placed randomly on the conveyor, and the white lines accurately indicate their orientations as obtained through the model. Through many experiments, the relation between the error angle \beta and its impact factors is confirmed.

Fig. 6. Orientation of the axletree obtained by applying the vision-based model. The light lines in the two figures, (a) and (b), indicate the orientations of the axletrees.

3) Obtaining the Position with Image Moments: In this robotic system, it is important to obtain the positional relationship between the present position and the base position in the images. Fig. 7 shows the centers of the axletree, bearing and piston in the images. In the experiments, the error is about 5 mm, but it does not influence the grasping and assembly of the parts.

Fig. 7. The centers of the automotive parts obtained by applying image moments. (a) The axletree. (b) The bearing. (c) The piston.
`
`
`
4) Grasping and Assembling the Industrial Parts: The whole assembly process is illustrated in Fig. 8. The bearing-axletree and piston-peg sets are assembled using all the techniques mentioned in this paper: searching for the industrial parts, classifying the parts, localizing the axletree and the bearing, grasping the parts and assembling them. Fig. 8 shows the process of grasping the parts, and Fig. 9 shows the assembly of the different parts.

Fig. 8. The axletree-bearing grasping experiments (panels: piston and peg; crank).

Fig. 9. The axletree-bearing assembly experiments (panels: place the bearing; insert the peg; place the axle).

C. Brief Summary of the Assembly System
In this high-precision assembly system, the assembly experiments on the axletree-bearing and piston-peg sets are performed successfully with the presented methods and procedure. Further experiments show that (1) the success rate is 75 percent without visual feedback information, and (2) the success rate reaches 90 percent when the feedback information is used once.

VII. CONCLUSION
In this paper, an industrial robotic assembly system is presented that completes grasping and assembly operations with a precision as high as 0.1 mm. To achieve this task, a simplified and efficient visual control structure is proposed for fast and robust control. In addition, a novel 3D grasping model is designed to accurately obtain the position and orientation of the parts. Moreover, the presented method is general for different kinds of 3D industrial parts. Finally, experiments on grasping and assembling the bearing and axletree are presented to prove the efficiency of the designed robotic assembly system.

REFERENCES
[1] W.J. Wilson, C.C. Williams Hulls and G.S. Bell, "Relative End-Effector Control Using Cartesian Position Based Visual Servoing", IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 684-696, 1996.
[2] N.J. Cowan and D.E. Chang, "Geometric Visual Servoing", IEEE Transactions on Robotics, vol. 21, no. 6, pp. 1128-1138, 2005.
[3] P.I. Corke and S.A. Hutchinson, "A New Partitioned Approach to Image-Based Visual Servo Control", IEEE Transactions on Robotics and Automation, vol. 17, no. 4, pp. 507-515, 2001.
[4] R. Kelly, R. Carelli, O. Nasisi, B. Kuchen and F. Reyes, "Stable Visual Servoing of Camera-in-Hand Robotic Systems", IEEE/ASME Transactions on Mechatronics, vol. 5, no. 1, pp. 39-48, 2000.
[5] E. Malis, F. Chaumette and S. Boudet, "2½D Visual Servoing", IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238-250, 1999.
[6] E. Kefalea, "Object Localization and Recognition for a Grasping Robot", IECON '98, vol. 4, pp. 2057-2062, 1998.
[7] J. Speth, A. Morales and P.J. Sanz, "Vision-Based Grasp Planning of 3D Objects by Extending 2D Contour Based Algorithms", IROS, pp. 2240-2245, 2008.
[8] S. Haidacher and G. Hirzinger, "Estimating Finger Contact Location and Object Pose from Contact Measurements in 3D Grasping", ICRA, vol. 2, pp. 1805-1810, 2003.
[9] F. Chaumette, "Image Moments: A General and Useful Set of Features for Visual Servoing", IEEE Transactions on Robotics, vol. 20, no. 4, pp. 713-723, Aug. 2004.
[10] B. Espiau, F. Chaumette and P. Rives, "A New Approach to Visual Servoing in Robotics", IEEE Transactions on Robotics and Automation, vol. 8, no. 3, pp. 313-326, 1992.
[11] Y.C. Shiu and S. Ahmad, "Calibration of Wrist-Mounted Robotic Sensors by Solving Homogeneous Transform Equations of the Form AX=XB", IEEE Transactions on Robotics and Automation, vol. 5, no. 1, pp. 16-29, 1989.
[12] Oh-Kyu Kwon, Dong-Gyu Sim and Rae-Hong Park, "Robust Hausdorff Distance Matching Algorithms Using Pyramidal Structures", Pattern Recognition, vol. 34, no. 7, pp. 2005-2013, 2001.
[13] Guangtao Zhao, Hong Qiao and Zhicai Ou, "A Method for Calibrating Camera Lens Distortion with Cross-Ratio Invariability in Welding Seam System", IEEE Conference on Intelligent Robotics and Applications.
[14] Y.C. Shiu and S. Ahmad, "Calibration of Wrist-Mounted Robotic Sensors by Solving Homogeneous Transform Equations of the Form AX=XB", IEEE Transactions on Robotics and Automation, vol. 5, no. 1, pp. 16-29, 1989.
`1056
`Authorized licensed use limited to: Sterne Kessler Goldstein Fox. Downloaded on December 28,2023 at 00:59:35 UTC from IEEE Xplore. Restrictions apply.
`
`
