2016 International Conference on Instrumentation, Control and Automation (ICA)
DOI: 10.1109/ica.2016.7811489
An image-based visual servo control system based on an eye-in-hand monocular camera for autonomous robotic grasping

Cited by 8 publications (6 citation statements)
References 9 publications
“…In [6], a robotic system endowed with only a single camera mounted in eye-in-hand configuration was used for the ball catching task. In [7], a robot with an eye-in-hand monocular camera was used for autonomous object grasping. The methods of [5,6,7] can resolve the stated problem.…”
Section: Introduction
“…In [7], a robot with an eye-in-hand monocular camera was used for autonomous object grasping. The methods of [5,6,7] can resolve the stated problem. The eye-in-hand concept enables the camera to maneuver along with the robotic arm, which improves the trajectory prediction precision in an open space, especially when the object is near the robot.…”
Section: Introduction
“…[8] At present, most research on visual servoing based on feature points uses objects with simple geometric features and backgrounds as the research objects [9–14]. For instance, four black dots on a flat square part were used as image feature points, and the visual servoing of a 10-DOF articulated maintenance arm robot was simulated [9]. Four corners of a green rectangle were used as image feature points to verify the effectiveness of the IBVS control method based on the eye-in-hand monocular camera.…”
Section: Introduction
“…[9] Four corners of a green rectangle were used as image feature points to verify the effectiveness of the IBVS control method based on the eye-in-hand monocular camera [10]. In order to improve the accuracy of image processing, four low-noise, identical feature shapes were carefully designed on A4 paper and used as image features for incremental visual servoing research [11]. Other studies have selected the four corners of a black rectangular block [12], four dots on a square object [13], and four dots on A4 paper [14] as image features to carry out related IBVS research.…”
Section: Introduction
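These excerpts describe the classical point-feature IBVS scheme: the error between the current and desired image-point coordinates (e.g. the four corners mentioned above) is mapped to a camera velocity through the stacked interaction matrix. The following is a minimal illustrative sketch of that textbook control law in Python/NumPy, not code from the cited papers; the feature coordinates, depth estimates, and gain are made-up example values.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized point feature
    at (x, y) with estimated depth Z, in the classical IBVS formulation."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,        -(1.0 + x**2), y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y**2,   -x * y,        -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity twist (vx, vy, vz, wx, wy, wz) computed from the
    stacked feature error via the pseudo-inverse of the interaction matrix."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Example: four coplanar points (e.g. corners of a rectangle) as features,
# with made-up normalized coordinates and rough depth estimates in metres.
current = [(-0.10, -0.08), (0.12, -0.08), (0.12, 0.09), (-0.10, 0.09)]
desired = [(-0.11, -0.10), (0.11, -0.10), (0.11, 0.10), (-0.11, 0.10)]
depths = [0.5] * 4
print(ibvs_velocity(current, desired, depths))
```

The resulting twist would be commanded to the camera (end-effector) frame; in practice the depths Z are only rough estimates, often taken as the desired depth, which this classical formulation tolerates.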
“…Additionally, some related works make use of stereo cameras to track a target, in which both cameras are placed at a predetermined distance and rotation, computing a full 3D reconstruction of the scene [18,19,20] using epipolar geometry [21]. A single-camera system can also be deployed [22] to simulate a stereo system, either by using markers [23] or by first establishing the separation parameters between two images [24], since the relationship between the key-points of both images is needed to build the epipolar geometry. Nobakht and Liu [25] proposed a method to estimate the position of the camera with respect to the world, using a known object to compute the position by epipolar geometry.…”
Section: Introduction
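For the two-view setups mentioned in this excerpt (a fixed stereo pair, or a single moving camera simulating one), the epipolar geometry estimated from matched key-points can be used for a sparse 3D reconstruction. The sketch below illustrates the generic OpenCV pipeline (essential matrix, relative pose recovery, triangulation); it is not taken from the cited works, and the inputs pts1, pts2 (matched Nx2 float point arrays) and the intrinsic matrix K are assumed to be given.

```python
import numpy as np
import cv2

def reconstruct_points(pts1, pts2, K):
    """Sparse 3D reconstruction from two views of the same scene.
    pts1, pts2: Nx2 float arrays of matched key-points; K: 3x3 intrinsics."""
    # Essential matrix from the matched key-points (epipolar constraint).
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # Relative rotation and (unit-norm) translation between the two views.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Projection matrices for both views, with the first camera at the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    # Triangulate and convert from homogeneous coordinates.
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X_h[:3] / X_h[3]).T  # Nx3 points

```

For a single moving camera, the translation recovered from the essential matrix is only known up to scale, so the reconstruction is metric only when the baseline is known, as it is for a calibrated stereo pair.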