2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2016
DOI: 10.1109/iros.2016.7759114
High precision grasp pose detection in dense clutter

Abstract: This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms…
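The two-step pipeline the abstract describes — propose many 6-DOF grasp candidates from a point cloud, then score each one with a learned classifier — can be sketched as follows. This is an illustrative outline only: the function names are hypothetical, and a trivial height heuristic stands in for the paper's trained convolutional network.

```python
# Hypothetical sketch of the two-step grasp pose detection pipeline:
# step 1 samples 6-DOF candidates near cloud points, step 2 scores each.
# All names and the stand-in classifier are illustrative, not the paper's code.
from dataclasses import dataclass
import random

@dataclass
class GraspCandidate:
    position: tuple      # (x, y, z) in the point cloud's frame
    orientation: tuple   # e.g. a quaternion (qx, qy, qz, qw)

def generate_candidates(point_cloud, n=100):
    """Step 1: propose many 6-DOF grasp poses anchored at surface points."""
    return [GraspCandidate(position=random.choice(point_cloud),
                           orientation=(0.0, 0.0, 0.0, 1.0))
            for _ in range(n)]

def classify(candidate):
    """Step 2: stand-in for the CNN that labels a candidate good or bad.
    Here a trivial heuristic: accept grasps below a height threshold."""
    return candidate.position[2] < 0.5

def detect_grasps(point_cloud, n=100):
    """Generate candidates, keep only those the classifier accepts."""
    return [c for c in generate_candidates(point_cloud, n) if classify(c)]

cloud = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6), (0.2, 0.1, 0.8)]
grasps = detect_grasps(cloud, n=10)
```

The key design point is the separation of concerns: candidate generation is cheap and geometric, while the learned classifier carries the precision burden, so improving the classifier (the paper's focus) lifts the whole pipeline.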

Cited by 275 publications (258 citation statements)
References 17 publications
“…Rather than directly operating on randomised images, RCAN [8] is a recent approach that instead translates randomised rendered images into their equivalent non-randomised, canonical versions, producing superior results on a complex sim-to-real grasping task. Rather than operating on RGB images, other works have instead used depth images to cross the domain gap [37], [38]; however, in our tasks, the colour of an object is an important feature when inferring which object the robot needs to interact with, particularly when the geometry of the objects is very similar. In our work, we show that domain randomisation can be leveraged to transfer the ability to infer actions from human demonstrations.…”

Section: Related Work
Mentioning confidence: 99%
“…To exemplify the ability of our approach to improve grasping performance, we use a PR2 robot to perform grasps planned using Grasp Pose Detection (GPD) [32], which predicts a series of 6-DOF candidate grasp poses for a 2-finger gripper given a 3D point cloud. The reachability of the proposed candidate grasps is checked using MoveIt!…”

Section: E. Grasping
Mentioning confidence: 99%
“…Since we have three types of features and three axes to project along, we have nine channels in total. For the classifier, we use the LeNet [11] structure, which is a common choice for grasp pose classification and ranking [7], [10]. The output of the classifier is the binary label {graspable, not graspable} together with a confidence score.…”

Section: Grasp Representation and Classification
Mentioning confidence: 99%
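The nine-channel construction in the excerpt above — three feature types, each projected along the three axes of a volumetric grid — can be sketched with NumPy. The specific features below (mean occupancy, max occupancy, occupied-cell count) are placeholders, not the cited paper's feature definitions; only the three-features-times-three-axes layout is taken from the text.

```python
# Illustrative sketch of a nine-channel grasp representation: three
# stand-in feature maps, each obtained by projecting a voxel grid along
# one of its three axes. The feature choices here are assumptions.
import numpy as np

def nine_channel_representation(voxels):
    """voxels: (D, H, W) occupancy grid. Returns a (9, H, W)-shaped stack
    for a cubic grid: for each projection axis, three simple feature maps
    (mean occupancy, max occupancy, occupied-cell count)."""
    channels = []
    for axis in range(3):
        channels.append(voxels.mean(axis=axis))
        channels.append(voxels.max(axis=axis))
        channels.append((voxels > 0).sum(axis=axis).astype(float))
    return np.stack(channels)  # 3 axes x 3 features = 9 channels

grid = np.zeros((16, 16, 16))
grid[4:8, 4:8, 4:8] = 1.0          # a small occupied cube
rep = nine_channel_representation(grid)
# rep.shape == (9, 16, 16): one 2-D map per feature per projection axis
```

A stack of 2-D projections like this is what makes a standard image classifier such as LeNet applicable: the network's first convolution simply takes nine input channels instead of one or three.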