2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197518
Clear Grasp: 3D Shape Estimation of Transparent Objects for Manipulation

Cited by 183 publications (236 citation statements) | References 39 publications
“…[Figure 7 caption: Examples of object (scene) segmentation: (left) a 3D input point cloud; (right) a network prediction; the colors of the points represent the object labels (adapted from [26])] …commonplace in every household, this is an issue that surely needs to be addressed. S. Sajjan et al. [171] took a step towards addressing such limitations for object grasping and manipulation. In particular, they proposed a deep learning approach, named ClearGrasp [171], for estimating accurate 3D geometry of transparent objects from a single RGB-D image (a combination of an RGB image and a corresponding depth image) for robotic manipulation.…”
Section: Object Detection (mentioning, confidence: 99%)
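The ClearGrasp paper itself describes a pipeline that predicts surface normals, occlusion boundaries, and transparent-object masks from the RGB image, then reconstructs the corrupted depth via global optimization. A minimal sketch of that flow is below; `normal_net`, `boundary_net`, `mask_net`, and `solve_depth` are hypothetical stand-ins for the three networks and the optimizer, not a published API.

```python
import numpy as np

def clear_grasp_depth_completion(rgb, raw_depth,
                                 normal_net, boundary_net, mask_net,
                                 solve_depth):
    """Sketch of the ClearGrasp-style pipeline (hypothetical callables)."""
    # 1. Predict per-pixel surface normals from the RGB image.
    normals = normal_net(rgb)                 # (H, W, 3)

    # 2. Predict occlusion/contact boundaries, which tell the optimizer
    #    where the completed depth may be discontinuous.
    boundaries = boundary_net(rgb)            # (H, W)

    # 3. Segment transparent surfaces, whose raw depth is unreliable.
    transparent_mask = mask_net(rgb) > 0.5    # (H, W) bool

    # 4. Discard the corrupted depth readings on transparent pixels.
    trusted_depth = np.where(transparent_mask, 0.0, raw_depth)

    # 5. Globally optimize a completed depth map that agrees with the
    #    trusted depth and the predicted normals, breaking smoothness
    #    only across the predicted boundaries.
    return solve_depth(trusted_depth, normals, boundaries)
```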
“…Visual Perception: While the RGB-D-based 6D pose tracker is robust to a variety of objects with challenging properties, such as textureless, reflective, geometrically featureless, or thin/small shapes, it struggles to track severely shiny, glossy, or transparent objects due to the degenerate depth sensing of the camera. In future work, we hope to extend this framework to these other types of scenarios with techniques for depth enhancement and completion [64]. In addition, the current framework requires an object CAD model beforehand to perform 6D pose tracking and to reason about the task of peg-in-hole insertion.…”
Section: System Observations and Limitations (mentioning, confidence: 99%)
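One simple way to mitigate degenerate depth readings before they reach a pose tracker is to mask out the invalid pixels and fill the resulting holes. The sketch below uses OpenCV inpainting purely as an illustrative stand-in for the learned depth-completion methods the excerpt cites; the function name and parameters are assumptions, not the cited system's code.

```python
import cv2
import numpy as np

def enhance_depth_for_tracking(depth, max_range=2.0):
    """Repair degenerate depth before 6D pose tracking (illustrative).

    Assumes `depth` is a float32 metric depth map where the sensor
    reports 0 (or out-of-range values) on shiny/transparent surfaces.
    """
    # Mark pixels where depth sensing degenerated (missing or invalid).
    invalid = ((depth <= 0) | (depth > max_range)).astype(np.uint8)

    # Normalize to 8-bit for OpenCV inpainting, fill holes, rescale.
    depth_u8 = cv2.convertScaleAbs(depth, alpha=255.0 / max_range)
    filled_u8 = cv2.inpaint(depth_u8, invalid, 5, cv2.INPAINT_TELEA)
    return filled_u8.astype(np.float32) * (max_range / 255.0)
```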
“…We transfer pre-trained weights from these tasks to learn suction and grasping affordance models, then report picking success rates while fine-tuning with trial and error on the training objects in Figure 7. We also execute test runs with novel unseen objects (including transparent objects [37]) using the affordance models and report their performance in Figure 7. These results highlight three key observations: 1) we validate that our findings in simulation apply to our real-world setting; 2) off-the-shelf vision models generalize better than vision models trained in the target environment (COCO-fix vs. Random on unseen objects, Figure 7b,d); and 3) models initialized with COCO features perform better than models initialized with ImageNet features.…”
Section: B. Real-World Experiments (mentioning, confidence: 99%)
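The transfer-learning recipe this excerpt describes, initializing a vision backbone with off-the-shelf pre-trained weights and fine-tuning a small affordance head from trial-and-error picking outcomes, can be sketched as below. The architecture is an illustrative assumption, not the cited paper's exact model; torchvision's classification backbones ship ImageNet weights, and a COCO-pretrained detection backbone could be swapped in per the excerpt's finding.

```python
import torch
import torch.nn as nn
import torchvision

class AffordanceModel(nn.Module):
    """Minimal sketch: pre-trained backbone + dense affordance head."""

    def __init__(self):
        super().__init__()
        # ImageNet-pretrained backbone as a stand-in; the excerpt finds
        # COCO detection features transfer even better.
        backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        # Drop the average-pool and classifier to keep a feature map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Per-pixel head: probability that grasping/suction at each
        # location succeeds, supervised by trial-and-error picking labels.
        self.head = nn.Conv2d(512, 1, kernel_size=1)

    def forward(self, rgb):
        fmap = self.features(rgb)      # (N, 512, H/32, W/32)
        logits = self.head(fmap)
        # Upsample back to image resolution for dense affordances.
        logits = nn.functional.interpolate(
            logits, size=rgb.shape[-2:], mode="bilinear",
            align_corners=False)
        return torch.sigmoid(logits)
```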