2017
DOI: 10.3390/robotics6030014

Automated Assembly Using 3D and 2D Cameras

Abstract: 2D and 3D computer vision systems are frequently being used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting and viewpoint angles can reduce the accuracy of a method, maybe even to a degree that it will be erroneous, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Comme…

Cited by 5 publications (4 citation statements). References 19 publications.
“…The model predictions reached parity with human accuracy levels. This indicated that the quality and consistency of demonstrations became a limiting factor and was a likely reason that our reported accuracy numbers were in line with some results reported for some similar systems [12,51,52] and significantly worse than some others [50]. The differences in applications and the accuracy of the ground truths made a comparison across applications difficult.…”
Section: Discussion (supporting)
confidence: 85%
“…Finally, translation and rotation error are widely used. For an industrial assembly task, accuracies of <1 mm and <1° were reported [50]; the prediction error in the pose of automotive objects was 1 cm and <5° [12]; pose estimation for industrial work-pieces achieved an accuracy <10 mm and <2° [51]; and for a bin-picking task, translation error was <23 mm and rotation <2.26° [52].…”
mentioning
confidence: 99%
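The translation and rotation errors quoted above are the standard pose-accuracy metrics. A minimal sketch of how they are typically computed, assuming poses are given as 4×4 homogeneous transforms (the function and variable names are illustrative, not taken from the cited papers):

```python
import numpy as np

def pose_errors(T_pred, T_gt):
    """Translation error (in the units of t) and rotation error (degrees)
    between a predicted and a ground-truth 4x4 homogeneous transform."""
    # Euclidean distance between the translation components
    t_err = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
    # Angle of the relative rotation, recovered from its trace
    R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err
```

With errors expressed in millimetres and degrees, the "<1 mm and <1°" figure reported for the industrial assembly task [50] corresponds to t_err < 1.0 and r_err < 1.0.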
“…The system first found a rough estimate of the target object's pose with the 3D camera, and then the two-dimensional camera on the robotic arm was utilized to obtain a fine estimate of the target. This approach avoids the viewing range limitation of robotic cameras by estimating the rough location first to guide the robotic arm to the working area and demonstrates a cooperative structure of fixed cameras and robotic cameras to position the target accurately [14]. In addition, with the popularization of deep learning approaches based on computer vision, a learning-based approach to hand-eye coordination for robotic grasping has been developed to enhance the awareness of collaborative pick and place robotic operations.…”
Section: Vision-guided Robots In Manufacturing (mentioning)
confidence: 99%
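A minimal sketch of the coarse-to-fine structure described in that citation, assuming a fixed 3D camera supplies a rough pose and a wrist-mounted 2D camera refines it. Both estimator bodies below are placeholders of my own, not the cited system's detection or refinement steps:

```python
import numpy as np

def rough_pose_from_3d(point_cloud):
    # Placeholder stage 1: the fixed 3D camera yields only a coarse position,
    # here the centroid of the segmented target with identity orientation.
    pose = np.eye(4)
    pose[:3, 3] = point_cloud.mean(axis=0)
    return pose

def refine_pose_from_2d(wrist_image, initial_guess):
    # Placeholder stage 2: a real system would match 2D features against an
    # object model and solve for the pose, seeded with the rough estimate.
    return initial_guess

def coarse_to_fine_pose(point_cloud, wrist_image):
    rough = rough_pose_from_3d(point_cloud)         # guides the arm to the working area
    return refine_pose_from_2d(wrist_image, rough)  # refined with the eye-in-hand camera
```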
“…Feature matching algorithms can be divided into point-based matching algorithms (Kleppe et al., 2017; Yu et al., 2019), line-based matching algorithms (Miraldo et al., 2015; Lopez et al., 2015) and edge-based matching algorithms (Cai et al., 2016). Feature points are easy to extract using a point feature operator.…”
Section: Literature Review (mentioning)
confidence: 99%
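As an illustration of the point-based branch mentioned in that citation, a short OpenCV example matching ORB keypoints between a template and a scene image; the file names are placeholders and this is not the pipeline used in any of the cited works:

```python
import cv2

img1 = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # model/template view
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # scene containing the object

# Detect keypoints and compute binary descriptors
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutually consistent matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} point matches found")
```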