2022
DOI: 10.1109/lra.2022.3191231

A4T: Hierarchical Affordance Detection for Transparent Objects Depth Reconstruction and Manipulation

Abstract: Transparent objects are widely used in our daily lives and therefore robots need to be able to handle them. However, transparent objects suffer from light reflection and refraction, which makes it challenging to obtain the accurate depth maps required to perform handling tasks. In this paper, we propose a novel affordance-based framework for depth reconstruction and manipulation of transparent objects, named A4T. A hierarchical AffordanceNet is first used to detect the transparent objects and their associated …

Cited by 22 publications (15 citation statements)
References 26 publications
“…As pointed out before, transparent objects face some further difficulties. There are some approaches using depth data, with some work on depth completion, as in [10], which uses affordance maps to reconstruct the depth data. Another approach is proposed by Xu et al. [11].…”
Section: Related Work
confidence: 99%
“…This can significantly distort the depth data. There are several common approaches to tackling this problem: reconstructing the depth data, as performed for example in [10, 11, 12]; solving the 6D pose estimation with stereo RGB images [13]; or using only single RGB images [14, 15]. The latter is what our work focuses on.…”
Section: Introduction
confidence: 99%
“…However, those methods require multiple views of an object, which is not suitable when the camera is fixed. To address this challenge, some other studies [3], [13], [14] focus on reconstructing the missing or noisy depth regions of transparent objects from a single RGB-D image. In [3], a global optimisation algorithm was adopted to reconstruct the depth values that are removed based on predicted object masks.…”
Section: A. Transparent Object Grasping
confidence: 99%
“…In [13], a local implicit neural representation built on ray-voxel pairs was proposed to reconstruct depth information, combined with an iterative self-correcting refinement model. In [14], an affordance-based depth reconstruction framework was proposed to facilitate the robotic manipulation of transparent objects.…”
Section: A. Transparent Object Grasping
confidence: 99%
“…Table 1 summarizes recent datasets for transparent object recognition. Most mainstream research has focused on RGB-based datasets (Bashkirova et al., 2022; Chen et al., 2018, 2022; Dai et al., 2022; Fang et al., 2022; Jiang et al., 2022; Jiang and Shan Ll, 2022b; Lin et al., 2021; Liu et al., 2020, 2021; Mei et al., 2020; Proença and Simoes, 2020; Sajjan et al., 2020; Xie et al., 2020; Xu et al., 2022). These datasets introduce background clutter using additional artifacts, since the appearance of transparent objects depends heavily on the background.…”
Section: Related Work
confidence: 99%