2006
DOI: 10.3233/his-2005-2404
Robotic eye-to-hand coordination: Implementing visual perception to object manipulation

Cited by 8 publications (4 citation statements)
References 51 publications
“…The rotation matrix R is written as R = R(θ) (Eq. 8). However, the relationship between the robot planar translation t and the 2D translation (Δx, Δy) is not straightforward as in the rotation case. The robot planar translation is in essence the transformation of the 2D translation in the image coordinate frame into the world coordinate frame.…”
Section: The Camera Ego-motion From Image Registration
confidence: 99%
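The mapping the excerpt describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is hypothetical, and it assumes the camera looks straight down at the table plane and that the image translation has already been converted to metric units.

```python
import numpy as np

def planar_ego_motion(theta, dx, dy):
    """Map a 2D image-frame translation (dx, dy) and an in-plane
    rotation angle theta into a robot (world-frame) planar motion.

    Hypothetical sketch: assumes a downward-looking camera and
    metric image translation (scale already applied).
    """
    # Rotation matrix R = R(theta), as in Eq. (8) of the excerpt.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # The robot planar translation t is the image-frame translation
    # rotated into the world coordinate frame.
    t = R @ np.array([dx, dy])
    return R, t
```

For example, a pure image translation of (1, 0) under a 90° in-plane rotation becomes a world-frame translation of (0, 1), which is why the translation case is not as direct as the rotation case.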
“…As a result, the robot camera position and orientation (pose) with respect to the table plane are known. This can be accomplished in many ways, such as touching the table plane with a tactile sensor [8], whereby the table-plane distance from the camera can be obtained. Moreover, many plane detection methods can be used, for example [11].…”
Section: Problem Description
confidence: 99%
“…In ordinary applications, cameras are fixed and the object moves [25]. Consequently, 3D tracking of an object is done under the fixed-cameras assumption. Here, however, the situation is reversed.…”
Section: Stereovision
confidence: 99%
“…[16] NNs are also widely utilized in visual servoing applications for scene analysis, object classification, pattern recognition, etc., as a front-end controller. [17,18] In some visual servoing tasks, NNs are applied in robot control to deal with uncertainties in kinematics, dynamics, Jacobian matrices, and the model during object manipulation. [19-23] Kelly et al. [24] discussed visual servoing of a robot manipulator in a fixed-camera structure in which NNs estimate the inverse perceptual kinematic mapping.…”
Section: Introduction
confidence: 99%