2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9196543
Robot-Supervised Learning for Object Segmentation

Cited by 8 publications (10 citation statements)
References 25 publications
“…3), and we outperform recent self-supervised approaches with similar weak constraints on the setup by a large margin. Despite not using camera calibration, manipulator key point registration and additional depth data like [9] we still reach similar performance on a joint subset of objects (82.72% vs. 84.61% mIoU).…”
Section: A. Evaluation of Self-Supervised Object Segmentation (mentioning)
confidence: 78%
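The statement above compares segmentation quality via mIoU (82.72% vs. 84.61%). As a point of reference, mean intersection-over-union across a set of binary object masks can be sketched as follows; this is a generic illustration, not code from either cited paper:

```python
import numpy as np

def mean_iou(pred_masks, gt_masks):
    """Mean intersection-over-union over paired binary masks.

    pred_masks, gt_masks: lists of boolean numpy arrays of equal shape.
    An empty union (both masks all-false) counts as a perfect match.
    """
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / union if union > 0 else 1.0)
    return float(np.mean(ious))

# toy example: prediction covers two columns, ground truth only one
pred = [np.array([[1, 1, 0, 0]] * 4, dtype=bool)]
gt = [np.array([[1, 0, 0, 0]] * 4, dtype=bool)]
print(mean_iou(pred, gt))  # intersection 4 / union 8 = 0.5
```

Per-object mIoU of this kind is what makes the "joint subset of objects" comparison in the quote meaningful: each object contributes one IoU score before averaging.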
“…We hypothesize that due to the thin shape of the object the network finds it difficult to establish correspondences between two consecutive frames and identify it as moving object. This would also explain the superior performance of [9] on this item, as their approach does not rely on identification by motion.…”
Section: A. Evaluation of Self-Supervised Object Segmentation (mentioning)
confidence: 96%
“…where W are the trainable network parameters, d_1 is the ground truth object depth at z_1 (6), and f_d ∈ ℝ is the predicted depth. To use the normalized distance input z (10), we modify (11) and define a normalized depth loss as…”
Section: Normalized Relative Depth Loss (mentioning)
confidence: 99%
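The excerpt truncates before the citing paper's actual loss equation, so that equation is not reconstructed here. To make the idea concrete, here is a minimal sketch of what "normalizing" a depth regression loss by the input distance could look like; the function name, the division by z_1, and the epsilon term are all assumptions for illustration, not the cited paper's formulation:

```python
import numpy as np

def normalized_depth_loss(f_d, d1, z1, eps=1e-6):
    """Illustrative normalized depth loss (assumption, not the cited Eq.).

    f_d: predicted depths, d1: ground-truth depths, z1: normalizing
    distances. Dividing the absolute error by z1 makes the penalty
    relative to object distance rather than absolute in meters; eps
    guards against division by zero.
    """
    f_d, d1, z1 = map(np.asarray, (f_d, d1, z1))
    return float(np.mean(np.abs(f_d - d1) / (z1 + eps)))
```

The design intuition for such a normalization is that a fixed absolute error (say 0.5 m) matters far more for a nearby object than for a distant one, so scaling by distance equalizes the gradient signal across depth ranges.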
“…Alternatively, RGB cameras are less expensive and more ubiquitous than 3D sensors, and there are many more datasets and methods based on RGB images [8,17,27]. Thus, even when 3D sensors are available, RGB images remain a critical modality for understanding data and identifying objects [11,52].…”
Section: Introduction (mentioning)
confidence: 99%