2008 Canadian Conference on Computer and Robot Vision
DOI: 10.1109/crv.2008.30
Eye-In-Hand Visual Servoing for Accurate Shooting in Pool Robotics

Cited by 3 publications (5 citation statements); references 13 publications.
“…The wH_g is the current pose of the end effector in the world frame O_w−X_wY_wZ_w. The grasping model equation (25) of the 4-R(2-SS) parallel robot is also improved by the motion error obtained from hand-eye calibration based on differential motion as follows:…”
Section: Improved Eye-to-Hand Model and Model Solving Methods
Confidence: 99%
“…To achieve accurate and stable grasping of fruit, the end effector of the parallel robot needs to move to the position of the fruit and grasp it with an optimal grasping pose in the fruit sorting system based on the 4-R(2-SS) parallel robot with 4 DOF. Assuming that the optimal grasping pose is H_p, to make the end effector change from the current pose H_g to the optimal grasping pose H_p accurately, H_p needs to be transformed and represented as wH_p in the world frame O_w−X_wY_wZ_w, as shown in equation (25).…”
Section: Hand-Eye Calibration and Grasping Pose Calculation Based on Improved Eye-to-Hand Model and Model Solving Methods for 4-R(2-SS) Parallel Robot
Confidence: 99%
“…When the robot is servoed to its shot position, as determined by the GVS, it accumulates error. By analyzing the LVS image, and comparing the line connecting the current cue and object ball centers with the ideal line, it is possible to calculate transformations which can correct for the robot positioning error [14].…”
Section: A. LVS Correction
Confidence: 99%
“…We have developed two different methods to align the robot position with the LVS ideal line [14]. The simpler of the two, called the image-based method, is an iterative method based purely on 2D LVS image data.…”
Section: A. LVS Correction
Confidence: 99%
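The image-based method quoted above iterates purely on 2D image data: compare the line through the current cue and object ball centers with the ideal shot line, and nudge the robot to shrink the difference. A minimal sketch of one such iteration, assuming a proportional yaw correction; the function names and the gain are illustrative assumptions, not details from [14]:

```python
import math

def line_angle(p, q):
    """Angle of the line through two 2D image points (e.g. cue and object ball centers)."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def correction_step(cue, obj, ideal_angle, gain=0.5):
    """One iteration of a 2D image-based correction (hypothetical proportional gain):
    return the yaw adjustment that rotates the observed cue-to-object line
    toward the ideal shot line."""
    error = ideal_angle - line_angle(cue, obj)
    # Wrap the error to [-pi, pi] so the correction takes the short way around.
    error = math.atan2(math.sin(error), math.cos(error))
    return gain * error
```

In use, the robot would apply `correction_step` repeatedly, re-imaging the table each time, until the angular error falls below a tolerance.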