2021
DOI: 10.48550/arxiv.2104.11446
Preprint

OCRTOC: A Cloud-Based Competition and Benchmark for Robotic Grasping and Manipulation

Abstract: In this paper, we propose a cloud-based benchmark for robotic grasping and manipulation, called the OCRTOC benchmark. The benchmark focuses on the object rearrangement problem, specifically table organization tasks. We provide a set of identical real robot setups and facilitate remote experiments on standardized table organization scenarios of varying difficulty. In this workflow, users upload their solutions to our remote server, where their code is executed on the real robot setups and scored automatically. …
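To make the "scored automatically" step concrete, the sketch below shows one way a table-organization attempt could be scored once goal and achieved object positions are known. The function name score_scene, the 3 cm success threshold, and the position-only error are illustrative assumptions, not the benchmark's official metric (which the full paper defines).

import numpy as np

def pose_error(goal_xyz, actual_xyz):
    # Euclidean distance between goal and achieved object positions (metres).
    return float(np.linalg.norm(np.asarray(goal_xyz) - np.asarray(actual_xyz)))

def score_scene(goal_poses, actual_poses, threshold=0.03):
    # Fraction of objects placed within `threshold` metres of their goal position.
    # goal_poses / actual_poses: dict mapping object name -> (x, y, z) position.
    # Objects missing from actual_poses (e.g. knocked off the table) count as failures.
    # NOTE: illustrative only; the real OCRTOC scoring protocol is defined in the paper.
    if not goal_poses:
        return 0.0
    placed = sum(
        1
        for name, goal in goal_poses.items()
        if actual_poses.get(name) is not None
        and pose_error(goal, actual_poses[name]) <= threshold
    )
    return placed / len(goal_poses)

# Example: two of the three objects end up within 3 cm of their goals -> score 2/3.
goal = {"mug": (0.10, 0.20, 0.02), "plate": (0.30, 0.20, 0.01), "fork": (0.30, 0.35, 0.01)}
actual = {"mug": (0.11, 0.21, 0.02), "plate": (0.30, 0.19, 0.01), "fork": (0.50, 0.10, 0.01)}
print(score_scene(goal, actual))  # 0.666...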

Cited by 5 publications (7 citation statements)
References: 32 publications
“…Our work is concerned with rearranging objects, an area that has a long history in robotics [31,32,34,51,54] but has recently gained traction in the vision and learning communities [2,19,39,67] thanks to advances in simulation platforms. The works most relevant to ours are those of Labbé et al. [34] and NeRP [51], which also address the rearrangement task with the goal state specified by an image.…”
Section: Related Work (mentioning)
confidence: 99%
“…To illustrate, we provide matching evaluations on objects with simulated images outside the training set in Table 7. Twenty unseen objects are selected from the OCRTOC dataset (Liu et al. 2021): half of them have the same class as YCB-Video objects but different shapes or textures (seen class), while the others belong to novel classes not seen during training (unseen class). The evaluation protocol is similar to the real-real matching in 4.1.…”
Section: Model Efficiency and Generalization (mentioning)
confidence: 99%
“…Twenty unseen objects are selected from the OCRTOC dataset (Liu et al. 2021); half of them have the same class as YCB-Video objects but different shapes or textures (seen class), as shown in Figure 14. The rest belong to novel classes not seen during training (unseen class) and are visualized in Figure 15.…”
Section: A5 Model Generalization (mentioning)
confidence: 99%
“…Excerpt of a benchmark comparison table (feature-mark columns not preserved in this excerpt). Vehicle Navigation: CommonRoad [15] (2017), Robot@Home [16] (2017), Multi-Agent Path-Finding Benchmark [17] (2019), MAVBench [18] (2020), BARN [19] (2020), Bench-MR [20] (2021), PathBench [21] (2021). General Robotics: OMPL Benchmarks [22] (2015), Robobench [23] (2016), RoboTurk (teleoperation database) [24] (2019), RLBench [25] (2020), OCRTOC [26] (2021). Robot Manipulation: ACRV Picking Benchmark [2] (2017), RoboNet [27] (2019), GraspNet [28] (2020), Brown Planning Benchmarks [29] (2020), Aerial Manipulation [30] (2020), Bimanual Manipulation Benchmark [31] (2020), In-Hand Manipulation Benchmark [32] (2020), ProbRobScene [33] (2021).…”
Section: Sensed Representation Articulated Robots (mentioning)
confidence: 99%
“…The second category of datasets focuses on general robotics. These works aim to cover broad robotics categories, for example by providing datasets and tools for remote teleoperation [24] or object rearrangement [26]. While many papers concentrate on learning-based approaches [25], there is also a trend towards greater reproducibility, for example through containerization [21] to ease comparison across different operating systems and configurations.…”
Section: Sensed Representation Articulated Robots (mentioning)
confidence: 99%