Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164)
DOI: 10.1109/robot.2001.932945
An iterative approach to the hand-eye and base-world calibration problem

Abstract: In visual servoing applications using a position-based approach and an end-effector-mounted camera, the position and orientation of the camera with respect to the end-effector must be known. This information is frequently represented in the form of a Homogeneous Transformation Matrix (HTM). For special "noise-free" cases, a closed-form solution for this calibration problem can be determined. However, in the real world, such a solution is not adequate and a least-squares approach or an adaptive algorithm must be …
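The abstract frames the camera-to-end-effector pose as an HTM and argues that noisy measurements call for a least-squares treatment. Purely as an illustration (this is the standard linear AX = XB formulation, not the authors' iterative algorithm), the sketch below estimates the hand-eye HTM from pairs of relative end-effector motions A_i and relative camera motions B_i: the rotation is fitted from corresponding rotation axes, the translation from a stacked linear system.

```python
import numpy as np

def rot_log(R):
    """Axis-angle (so(3)) vector of a 3x3 rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

def hand_eye_least_squares(As, Bs):
    """Least-squares estimate of the hand-eye HTM X from A_i X = X B_i.

    As: relative end-effector motions, Bs: relative camera motions (4x4 HTMs).
    """
    # Rotation: the rotation axes satisfy alpha_i = R_X beta_i (Kabsch fit).
    alphas = np.stack([rot_log(A[:3, :3]) for A in As])   # N x 3
    betas = np.stack([rot_log(B[:3, :3]) for B in Bs])    # N x 3
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_X = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked and solved by lstsq.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    v = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(M, v, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for the rotation fit to be well posed; with more, noisy pairs, the stacked systems are solved in the least-squares sense the abstract refers to.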

Cited by 50 publications (40 citation statements). References 10 publications.
“…This means that the robot controller is receiving commands directly in the Cartesian space, which makes the control plant much simpler compared to the image-based servoing category [3]. However, the position-based servoing requires camera calibration, robot calibration, and hand-eye calibration [6].…”
Section: Purdue Line Tracking System (mentioning; confidence: 99%)
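The snippet above notes that position-based servoing sends Cartesian commands and therefore depends on the hand-eye calibration. A minimal sketch (hypothetical function and frame names) of where the hand-eye HTM enters: a desired camera pose expressed in the robot base frame is converted into the end-effector pose command.

```python
import numpy as np

def end_effector_command(T_base_cam_desired, X_hand_cam):
    """Turn a desired camera pose (robot base frame) into an end-effector command.

    X_hand_cam is the hand-eye HTM (camera pose in the end-effector frame), so
    T_base_hand = T_base_cam_desired @ inv(X_hand_cam).
    """
    return T_base_cam_desired @ np.linalg.inv(X_hand_cam)
```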
“…The real-time visual servoing implementation used in this study was developed at the Purdue Robot Vision Lab using a subsumptive, hierarchical, and distributed vision-based architecture for smart robotics [3,6,16,17]. This is a robust, advanced dynamic visual servoing implementation with a high level of fault tolerance to non-cooperative conditions such as severe occlusions and sudden illumination changes.…”
Section: Introduction (mentioning; confidence: 99%)
“…In all real testing scenarios performed, the system needed to extract reliable image coordinates of the feature points. To achieve that, the system relied on a very accurate calibration procedure [3]. Given that, the system could then return the image coordinates of the feature points at each time instant t. In the case of the simulated scenarios, a camera simulator was also implemented using C++ classes.…”
Section: A. Homography (mentioning; confidence: 99%)
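The camera simulator cited above is described as C++ classes; keeping to one language for the examples here, the following Python sketch shows the underlying idea under an assumed pinhole model: 3-D feature points are projected through intrinsics K and a camera pose to obtain their image coordinates at each time instant.

```python
import numpy as np

def project_points(K, T_cam_world, points_world):
    """Project 3-D feature points (Nx3, world frame) to pixel coordinates.

    K: 3x3 intrinsic matrix; T_cam_world: 4x4 HTM mapping world -> camera.
    Points are assumed to lie in front of the camera (positive depth).
    Returns an Nx2 array of image coordinates.
    """
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T)[:3]    # 3xN, camera frame
    pix = K @ pts_cam                        # perspective projection
    return (pix[:2] / pix[2]).T              # divide by depth
```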
“…The relationship between these two sets of vectors can be expressed as m_i = t + ᵒR* m_i (3), where t(t) is the translation between the two frames, and ᵒR*(t) is the rotation matrix which brings […] derivation of the control inputs using a quaternion formulation. The conversion from rotation and translation into a quaternion form as well as other derivations omitted here can be found in [9].…”
Section: Introduction (mentioning; confidence: 99%)
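The snippet refers the reader to [9] for the conversion of a rotation into quaternion form. As a standard conversion (not taken from [9]), the usual branch-on-largest-element method looks like this:

```python
import numpy as np

def rotation_to_quaternion(R):
    """Convert a 3x3 rotation matrix into a unit quaternion (w, x, y, z)."""
    q = np.empty(4)
    tr = np.trace(R)
    if tr > 0.0:
        s = 2.0 * np.sqrt(tr + 1.0)
        q[0] = 0.25 * s                      # w
        q[1] = (R[2, 1] - R[1, 2]) / s       # x
        q[2] = (R[0, 2] - R[2, 0]) / s       # y
        q[3] = (R[1, 0] - R[0, 1]) / s       # z
    else:
        # Use the largest diagonal element for numerical stability.
        i = int(np.argmax(np.diag(R)))
        j, k = (i + 1) % 3, (i + 2) % 3
        s = 2.0 * np.sqrt(1.0 + R[i, i] - R[j, j] - R[k, k])
        q[0] = (R[k, j] - R[j, k]) / s       # w
        q[1 + i] = 0.25 * s
        q[1 + j] = (R[j, i] + R[i, j]) / s
        q[1 + k] = (R[k, i] + R[i, k]) / s
    return q / np.linalg.norm(q)             # guard against round-off
```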
“…In this formulation, A and B are the homogeneous transformations measured directly from sensors. Quite a few AX = Y B solvers have been proposed in the literature [9,11,12,14,15,17,25]. Most of the existing AX = XB and AX = Y B solvers deal with the case where there is an exact correspondence between the data pairs A i and B i .…”
Section: A. Related Work (mentioning; confidence: 99%)
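The snippet distinguishes AX = XB from AX = YB solvers and stresses the case of exact correspondence between the data pairs A_i and B_i. As a small illustration under assumed frame conventions (A_i the end-effector pose in the robot base frame, B_i the camera pose in the world frame, X the hand-eye HTM and Y the base-world HTM), the sketch below synthesizes consistent pairs from ground-truth X and Y and checks the residual A_i X - Y B_i that any solver would drive to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_htm():
    """Random 4x4 homogeneous transform (rotation via QR, arbitrary translation)."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.linalg.det(Q))            # ensure a proper rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Q, rng.normal(size=3)
    return T

# Ground-truth hand-eye (X) and base-world (Y) transforms.
X_true, Y_true = random_htm(), random_htm()

# Synthesize exactly corresponding pairs (A_i, B_i) satisfying A_i X = Y B_i.
pairs = []
for _ in range(10):
    A = random_htm()                          # "measured" robot pose
    B = np.linalg.inv(Y_true) @ A @ X_true    # implied camera pose
    pairs.append((A, B))

# Any candidate solver can be checked against the residual of A_i X - Y B_i.
residual = max(np.linalg.norm(A @ X_true - Y_true @ B, ord='fro') for A, B in pairs)
print(f"max residual with ground truth: {residual:.2e}")   # ~0 for exact correspondence
```

With noisy or unmatched measurements, the same residual becomes the objective of the least-squares or iterative schemes discussed above.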