2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793917
EasyLabel: A Semi-Automatic Pixel-wise Object Annotation Tool for Creating Robotic RGB-D Datasets

Abstract: Developing robot perception systems for recognizing objects in the real world requires computer vision algorithms to be carefully scrutinized with respect to the expected operating domain. This demands large quantities of ground truth data to rigorously evaluate the performance of algorithms. This paper presents the EasyLabel tool for easily acquiring high-quality ground truth annotation of objects at the pixel level in densely cluttered scenes. In a semi-automatic process, complex scenes are incrementally buil…
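The incremental scene-building idea from the abstract can be sketched as follows. Assuming (this detail is cut off in the excerpt above) that a depth image is captured before and after each object is placed, the new object's pixel mask falls out of the depth difference. The function name, parameters, and threshold below are illustrative assumptions, not EasyLabel's actual API:

```python
import numpy as np

def label_new_object(depth_before, depth_after, min_diff=0.01):
    """Return a boolean mask of pixels whose depth decreased by more than
    min_diff metres, i.e. pixels now covered by the newly placed object.

    Hypothetical sketch of incremental depth-difference labelling.
    """
    diff = depth_before - depth_after               # new object is closer, so diff > 0
    valid = (depth_before > 0) & (depth_after > 0)  # ignore pixels with missing depth
    return valid & (diff > min_diff)

# Toy example: a flat table at 1.0 m, then a box covering the centre at 0.8 m.
before = np.full((4, 4), 1.0)
after = before.copy()
after[1:3, 1:3] = 0.8
mask = label_new_object(before, after)
print(mask.sum())  # → 4 pixels labelled as the new object
```

Each incremental capture labels exactly the pixels the new object occludes, which is what makes the pixel-wise annotation semi-automatic: the human only places objects, not polygons.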

Cited by 86 publications (80 citation statements). References 23 publications.
“…This is the common approach at large companies working on the challenge of autonomous driving, for example, Google¹ or Uber². A similar approach has been taken for the robotics environment to build up a dataset of indoor scenes in [9]. However, this approach is limited, since it will be difficult to create examples of all possible scenarios that one might ever encounter.…”
Section: Situated Robot Perception: Embodied AI From the View Of…
confidence: 99%
“…With the detection of larger structures such as cupboards, desks and tables, and the detection of surfaces, the task of detecting objects is enhanced with the dimension to find clusters of data points that stick out of the plane and possibly present one or more objects [9]. This simplifies object detection or can be viewed as presenting a second step of verification to the detection step.…”
Section: Situated Robot Perception: Embodied AI From the View Of…
confidence: 99%
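The “clusters of data points that stick out of the plane” idea in the quotation above can be sketched as a support-plane fit followed by a height threshold. A least-squares fit stands in for the plane detection here, and the helper names, synthetic data, and thresholds are illustrative assumptions, not the cited implementation:

```python
import numpy as np

def fit_plane_lstsq(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def points_above_plane(points, coeffs, min_height=0.05):
    """Keep points whose residual height above the fitted plane exceeds min_height."""
    a, b, c = coeffs
    residual = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    return points[residual > min_height]

rng = np.random.default_rng(0)
# Synthetic scene: a table surface near z = 0 with sensor noise,
# plus a small object whose top sits at z = 0.1 m.
table = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.001, 200)]
obj = np.c_[rng.uniform(0.4, 0.5, (20, 2)), np.full(20, 0.1)]
cloud = np.vstack([table, obj])

plane = fit_plane_lstsq(cloud)
candidates = points_above_plane(cloud, plane)
print(len(candidates))  # → 20: only the object points stick out of the table plane
```

In a real pipeline the surviving points would then be grouped (e.g. by Euclidean clustering) into one candidate cluster per object, which is the verification step the quotation describes.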
“…This method includes an interactive tool to correct prediction errors. EasyLabel [20] is a semi-automatic method for annotating objects in the RGB-D table-top setting. LabelFusion [21] is another semi-automatic method for generating large quantities of semantic labels from RGB-D videos.…”
Section: A Semantic Segmentation
confidence: 99%
“…The experimental results show that our method outperforms the existing approaches and establishes new state-of-the-art results for both datasets. In order to further consolidate the effectiveness of our method, we adapt an object segmentation dataset, called Object Clutter Indoor Dataset (OCID) [13], to the instance recognition task to further evaluate RCFusion. OCID has been recently released to provide object scenes with a high level of clutter and occlusion, arguably two of the biggest challenges faced by robotic visual perception systems [14].…”
Section: Introduction
confidence: 99%
“…Accuracy (%) of DECO [7] and variations of RCFusion on Object Clutter Indoor Dataset [13]. "RCFusion - res5" refers to the variation of RCFusion without extracting features from multiple layers, i.e.…”
confidence: 99%