2022
DOI: 10.1109/lra.2022.3142401
REGRAD: A Large-Scale Relational Grasp Dataset for Safe and Object-Specific Robotic Grasping in Clutter

Cited by 29 publications (19 citation statements)
References 44 publications
“…To address the data problem, self-supervised and unsupervised learning have been extensively explored in recent years [15,111], including some excellent works in robotic grasping [17,18,154,188,265]. It is also possible to train grasping policies in physical simulators and then transfer them to the real world [103,105,257,258,273,278]. With enough data, the performance of data-driven approaches substantially outperforms the classical methods.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
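The sim-to-real route mentioned in this statement typically relies on domain randomization so that policies trained in simulation survive the reality gap. The following is a minimal Python sketch of that idea; the simulator handle `sim` and its attribute names are hypothetical assumptions, not the API of any simulator used in the cited works.

```python
import random
from types import SimpleNamespace

def randomize_sim(sim, rng=random):
    """Domain randomization between training episodes: vary physics and
    appearance parameters so a grasping policy trained in simulation
    transfers better to the real world. The attribute names below are
    illustrative assumptions, not a real simulator API."""
    sim.friction = rng.uniform(0.4, 1.2)          # contact friction coefficient
    sim.object_mass_scale = rng.uniform(0.7, 1.3) # per-episode mass scaling
    sim.light_intensity = rng.uniform(0.5, 1.5)   # rendering appearance
    sim.camera_jitter_m = rng.uniform(0.0, 0.02)  # extrinsics noise in metres

# Toy usage with a stand-in simulator object:
sim = SimpleNamespace()
randomize_sim(sim)
print(sim.friction, sim.object_mass_scale)
```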
“…Finding reliable grasps is therefore not limited to reasoning about the physical interaction between gripper and object; it also challenges the system to understand the arrangement of objects in the scene and to identify an appropriate picking sequence. While many research efforts have provided excellent datasets for individual parts of the problem, such as SynPick [1] for pose estimation and gripper-object interaction, REGRAD [2] for relationship reasoning, SuctionNet-1Billion [3] for vacuum grasping, and Dex-Net 4.0 [4] for ambidextrous grasping, these address the problem from one side only or do not target automation. Because robotic picking is a multi-stage, complex problem, solving it from one aspect alone will miss the details of the other aspects.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
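To make the "appropriate picking sequence" in this statement concrete: given support relations of the kind REGRAD annotates, a safe order can be obtained by topologically sorting the support graph so that no object is removed while something still rests on it. This is a minimal sketch assuming a hypothetical `(below, above)` edge format, not REGRAD's actual label schema.

```python
from collections import defaultdict, deque

def safe_pick_order(objects, supports):
    """Topologically sort objects so nothing is picked while it still
    supports another object. `supports` is a list of (below, above)
    pairs, a hypothetical encoding of the scene's layout graph."""
    objects = list(objects)
    deps = defaultdict(set)   # obj -> objects that must be picked first
    users = defaultdict(set)  # obj -> objects waiting on its removal
    for below, above in supports:
        deps[below].add(above)
        users[above].add(below)

    ready = deque(o for o in objects if not deps[o])
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)
        for waiting in users[obj]:
            deps[waiting].discard(obj)
            if not deps[waiting]:
                ready.append(waiting)
    if len(order) != len(objects):
        raise ValueError("cyclic support relations in the layout labels")
    return order

# A cup on a plate on a box must be cleared top-down:
print(safe_pick_order(["cup", "plate", "box"],
                      [("box", "plate"), ("plate", "cup")]))
# -> ['cup', 'plate', 'box']
```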
“…Collecting such comprehensive data from real robot experiments [5], [6] or manually [7] would be too expensive and time-prohibitive, and as such intractable from a scalability perspective. Motivated by earlier work in the field of synthetic data generation [1], [2] and inspired by the recent rise of the metaverse concept, we introduce MetaGraspNet: a large-scale, photo-realistic, physics-based bin-picking dataset for ambidextrous grasping and bin-scene reasoning. By providing rich semantic scene labels, such as amodal segmentation masks and object layout graphs, together with heterogeneous grasp labels, object poses, and keypoint labels, MetaGraspNet challenges picking systems to take the next step towards multi-gripper usage and cognitive understanding of bin scenes.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
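The mix of labels listed for MetaGraspNet (amodal masks, layout graphs, grasp labels, poses, keypoints) can be pictured as one record per scene. The sketch below is a hypothetical schema for illustration only; field names and array shapes are assumptions, not the dataset's actual file format or API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class ObjectLabel:
    """Per-object annotations of the kind described above (illustrative)."""
    object_id: int
    pose: np.ndarray          # 4x4 object-to-camera transform
    amodal_mask: np.ndarray   # HxW bool mask including occluded extent
    visible_mask: np.ndarray  # HxW bool mask of visible pixels only
    keypoints: np.ndarray     # Nx3 keypoints in the object frame
    grasps: List[np.ndarray] = field(default_factory=list)  # 6-DoF gripper poses

@dataclass
class SceneLabel:
    """One bin scene: its objects plus the layout graph over them."""
    objects: List[ObjectLabel] = field(default_factory=list)
    # Directed (below_id, above_id) support edges forming the layout graph.
    layout_edges: List[Tuple[int, int]] = field(default_factory=list)
```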
“…With environment layout labels, pick planning can be trained more intelligently to avoid object damage [14,16,8]. However, only a few datasets [14,8,13] provide layout labels, and most of them [14,8] have limited training value because they lack sufficient data size, depth information, and segmentation labels.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
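As a small illustration of how such layout labels could gate a pick planner, the sketch below (reusing the hypothetical `(below, above)` edge format from the earlier example) restricts grasp candidates to objects that currently support nothing, so removing them cannot topple the pile.

```python
def currently_pickable(objects, layout_edges):
    """Return the objects that support nothing else right now, under the
    hypothetical (below, above) layout-edge format used above."""
    supporting = {below for below, _above in layout_edges}
    return [obj for obj in objects if obj not in supporting]

# Cup on plate on box: only the cup is immediately safe to pick.
print(currently_pickable(["cup", "plate", "box"],
                         [("box", "plate"), ("plate", "cup")]))
# -> ['cup']
```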