2020
DOI: 10.3390/s20030706

Depth Image–Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking

Abstract: Bin-picking of small parcels and other textureless planar-faced objects is a common task at warehouses. A general color image–based vision-guided robot picking system requires feature extraction and goal image preparation of various objects. However, feature extraction for goal image matching is difficult for textureless objects. Further, prior preparation of huge numbers of goal images is impractical at a warehouse. In this paper, we propose a novel depth image–based vision-guided robot bin-picking system for…
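To make the depth-only premise concrete, the sketch below estimates per-pixel surface normals directly from a depth image: a geometric cue that stays available for textureless planar faces where color-based feature matching fails. This is only an illustrative preprocessing step, not the paper's learned grasp planner; the function name and the intrinsics fx, fy are assumptions.

```python
import numpy as np

def depth_to_normals(depth, fx, fy, eps=1e-6):
    """Estimate per-pixel surface normals from a depth image (meters).

    The metric width of one pixel at depth z is roughly z / fx, so the
    image-space depth gradient is rescaled into a metric slope before
    forming the normal (-dz/dx, -dz/dy, 1).
    """
    dzdx = np.gradient(depth, axis=1) / (depth / fx + eps)  # slope along x
    dzdy = np.gradient(depth, axis=0) / (depth / fy + eps)  # slope along y
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Toy usage: pixels whose normal points back at the camera form flat,
# graspable planar patches -- no texture or goal image required.
depth = np.full((480, 640), 0.55, dtype=np.float32)   # synthetic flat scene
normals = depth_to_normals(depth, fx=600.0, fy=600.0)
flat_mask = normals[..., 2] > 0.95                    # near-planar pixels
```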

Citations: cited by 36 publications (20 citation statements)
References: 47 publications (80 reference statements)
“…With the development of deep learning, some researchers have pursued learning-based object-agnostic sampling methods using neural networks. Jiang et al (2020) propose a deep convolutional neural network (DCNN) to predict a set of grasp points from an input depth image. Inspired by Varley et al (2015), which obtains grasps at the pixel level, Morrison et al (2018a) present a Generative Grasping CNN (GG-CNN) that generates grasp candidates directly per pixel.…”
Section: Object-Agnostic Sampling (mentioning)
confidence: 99%
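The excerpt above describes fully convolutional, pixelwise grasp prediction from a depth image. The sketch below is a minimal network in that GG-CNN spirit: per-pixel grasp quality, orientation (encoded as cos/sin of twice the angle, since a parallel-jaw grasp is π-periodic), and gripper width. The layer sizes, class name, and input resolution are illustrative assumptions, not the published architecture of either cited paper.

```python
import torch
import torch.nn as nn

class PixelwiseGraspNet(nn.Module):
    """GG-CNN-style fully convolutional net (illustrative sizes only).

    Maps a 1-channel depth image to four per-pixel maps: grasp quality,
    cos(2a), sin(2a), and gripper opening width.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.quality = nn.Conv2d(16, 1, 1)  # grasp success score per pixel
        self.cos2a = nn.Conv2d(16, 1, 1)    # orientation as cos(2a) ...
        self.sin2a = nn.Conv2d(16, 1, 1)    # ... and sin(2a)
        self.width = nn.Conv2d(16, 1, 1)    # gripper opening width

    def forward(self, depth):
        f = self.decoder(self.encoder(depth))
        return self.quality(f), self.cos2a(f), self.sin2a(f), self.width(f)

# Grasp selection: take the pixel with maximal predicted quality and
# decode its orientation from the two angle channels.
net = PixelwiseGraspNet()
q, c, s, w = net(torch.randn(1, 1, 96, 96))  # batch of one depth image
idx = torch.argmax(q.flatten())
v, u = divmod(idx.item(), q.shape[-1])       # grasp pixel (row, col)
angle = 0.5 * torch.atan2(s.flatten()[idx], c.flatten()[idx])
```

Because every pixel is a candidate, this design needs no object segmentation or goal-image matching, which is exactly why it suits the object-agnostic setting the excerpt describes.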
“…The final goal is always the determination of robot gripper coordinates corresponding to appropriate grasp points. There are even examples of research aimed at grasp point determination without prior object segmentation [44]. Therefore, the object identification rules in the extended VCD format contain not only conditions that allow the robot to identify the objects to be manipulated, but also rules for calculating “object parameters” (in the above example, those rules are contained in lines 8 through 10).…”
Section: Integration of Voice Command Description Format (VCD) and … (mentioning)
confidence: 99%
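The actual extended-VCD syntax lives in the citing paper and is not reproduced in this excerpt. Purely as a hypothetical Python analogue, the sketch below shows the two-part rule structure the excerpt describes: an identification condition paired with parameter-calculation rules that derive gripper coordinates. Every name here (identify_box, box_parameters, the field names) is invented for illustration.

```python
# Hypothetical analogue of an extended-VCD identification rule:
# a matching condition plus rules deriving "object parameters".
def identify_box(detection):
    """Condition: does this detection match the 'box' object class?"""
    return detection["label"] == "box" and detection["confidence"] > 0.8

def box_parameters(detection):
    """Parameter rules: derive a gripper target from the bounding box."""
    x0, y0, x1, y1 = detection["bbox"]
    return {
        "grasp_x": (x0 + x1) / 2,  # grasp at the bounding-box center
        "grasp_y": (y0 + y1) / 2,
        "grasp_width": x1 - x0,    # opening set to the object extent
    }

RULES = [{"condition": identify_box, "parameters": box_parameters}]

detection = {"label": "box", "confidence": 0.93, "bbox": (120, 80, 200, 160)}
for rule in RULES:
    if rule["condition"](detection):
        print(rule["parameters"](detection))  # {'grasp_x': 160.0, ...}
```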
“…One-stage methods are usually faster and suitable for real-time applications, but their overall recognition accuracy may be lower than that of two-stage methods. Object pose estimation is commonly based on RGB-D images [11] or 3D models [12,13] to calculate the 3D pose of an object. Manuelli et al [14] adopted the keypoint method [15], originally used to detect human skeletons and locate the relevant keypoints, and combined it with local dense geometric information from a point cloud.…”
Section: Introduction (mentioning)
confidence: 99%
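A common way to turn RGB-D keypoint detections into a 3D pose, as the last excerpt outlines, is to back-project each detected pixel through the pinhole model and then rigidly align the observed points to model-frame keypoints. The sketch below shows that generic pipeline (pinhole deprojection plus a Kabsch least-squares fit); it is not Manuelli et al.'s specific method, and the function names and intrinsics are assumptions.

```python
import numpy as np

def deproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z (m) to camera coordinates
    using the pinhole model; fx, fy, cx, cy are camera intrinsics."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def rigid_align(src, dst):
    """Kabsch: least-squares R, t with dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D keypoints."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

# Toy usage: model-frame keypoints vs. the same points observed after a
# 30-degree yaw and a 0.5 m translation (what deproject would produce
# from real pixel detections plus depth-image lookups).
model_kp = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
obs_kp = model_kp @ Rz.T + np.array([0.0, 0.0, 0.5])
R, t = rigid_align(model_kp, obs_kp)  # recovers Rz and (0, 0, 0.5)
```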