2020
DOI: 10.1109/lra.2020.2965865

GRASPA 1.0: GRASPA is a Robot Arm graSping Performance BenchmArk

Abstract: The use of benchmarks is a widespread and scientifically meaningful practice to validate the performance of different approaches to the same task. In the context of robot grasping, the use of common object sets has emerged in recent years; however, no dominant protocols and metrics to test grasping pipelines have taken root yet. In this paper, we present version 1.0 of GRASPA, a benchmark to test the effectiveness of grasping pipelines on physical robot setups. This approach tackles the complexity of such pipelines by p…

Cited by 29 publications (22 citation statements: 0 supporting, 22 mentioning, 0 contrasting) · References 27 publications

“…With the improvement of point cloud processing methods (Fischler and Bolles, 1981; Rusu et al., 2010; Rusu and Cousins, 2011; Aldoma et al., 2012; Chen et al., 2016) and the introduction of CNNs that take point clouds as input (Wu et al., 2015; Qi et al., 2017a,b), point clouds have become increasingly common in tasks based on visual perception. Meanwhile, as more and more point-cloud grasping datasets have been contributed (Goldfeder et al., 2009; Calli et al., 2015a,b, 2017; Kappler et al., 2015; Mahler et al., 2016, 2017; Depierre et al., 2018; Bauza et al., 2019; Bottarel et al., 2020; Fang H.-S. et al., 2020), robotic dexterous grasping based on point clouds and deep learning has set off a tremendous wave of research in robotics.…”
Section: Introduction · Type: mentioning · Confidence: 99%
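
As a concrete illustration of the classical point-cloud processing these works build on, the following minimal sketch performs RANSAC plane segmentation (Fischler and Bolles, 1981) with the Open3D library to separate graspable objects from the support surface. The file name and parameter values are illustrative assumptions, not taken from the cited papers.

# Minimal sketch: strip the dominant plane (e.g., the tabletop) from a
# scene point cloud so that grasp candidates can be computed on the
# remaining object points. Requires Open3D (pip install open3d).
import open3d as o3d

# Load a scene point cloud (hypothetical file name).
scene = o3d.io.read_point_cloud("scene.pcd")

# Fit the dominant plane with RANSAC (illustrative parameter values).
plane_model, inlier_idx = scene.segment_plane(
    distance_threshold=0.005,  # 5 mm inlier tolerance
    ransac_n=3,                # minimal point sample for a plane
    num_iterations=1000,
)

# Keep everything that is NOT on the plane: the candidate objects.
objects = scene.select_by_index(inlier_idx, invert=True)
print(f"Plane model: {plane_model}, object points: {len(objects.points)}")
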
“…Researchers could be tempted to solve specific challenges by deploying more sensors (e.g., using multiple cameras to cope with occlusion), but this increases the setup cost and complexity, affecting reproducibility. In related research fields (e.g., hand and object pose estimation, grasping, and reinforcement learning), the proposal of datasets and benchmarks has favored reproducibility (Hodaň et al., 2018; Armagan et al., 2020; Bottarel et al., 2020; James et al., 2020). However, we observe a lack of standard datasets and benchmarks for complex and dexterous hand-object interaction.…”
Section: Discussion · Type: mentioning · Confidence: 92%
“…Two robotic grasping benchmarks are provided in this special issue. In [3], the benchmark identifies the limits and capabilities of the robotic system (e.g., workspace limits, payload limits) and allows the results to be normalized accordingly. The benchmark in [4] provides a rigorous procedure to assess the performance of grasp planning algorithms while minimizing the effects of other elements in the grasping pipeline. While [3] and [4] focus on grasping from a tabletop, [5] provides a benchmark for a more industrial application, i.e., bin picking, considering pick-and-place of fruit and vegetables.…”
Section: B. Benchmarks on Robotic Grasping · Type: mentioning · Confidence: 99%
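
To make the normalization idea in [3] concrete, the sketch below computes a grasp success rate restricted to the targets a given robot can actually reach, so platforms with different workspaces can be compared on an equal footing. The data structure and function are hypothetical illustrations, not part of the published benchmark protocol.

# Hypothetical illustration of capability-aware score normalization:
# the raw success rate counts every trial, while the normalized rate
# counts only trials whose target pose lies inside the robot's
# reachable workspace, as determined during benchmark setup.
from dataclasses import dataclass

@dataclass
class Trial:
    reachable: bool  # was the target pose inside the robot's workspace?
    success: bool    # did the grasp succeed?

def normalized_success_rate(trials: list[Trial]) -> float:
    """Success rate over reachable targets only (0.0 if none are reachable)."""
    reachable = [t for t in trials if t.reachable]
    if not reachable:
        return 0.0
    return sum(t.success for t in reachable) / len(reachable)

# Example: the unreachable trial is excluded, so the score is 1/2, not 1/3.
trials = [Trial(True, True), Trial(True, False), Trial(False, False)]
print(normalized_success_rate(trials))  # 0.5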