2021
DOI: 10.48550/arxiv.2103.16397
Preprint

3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding

Abstract: The ability to understand the ways to interact with objects from visual cues, a.k.a. visual affordance, is essential to vision-guided robotic research. This involves categorizing, segmenting, and reasoning about visual affordance. Relevant studies in the 2D and 2.5D image domains have been made previously; however, a truly functional understanding of object affordance requires learning and prediction in the 3D physical domain, which is still absent in the community. In this work, we present a 3D AffordanceNet dataset,…

Cited by 2 publications (3 citation statements); References 32 publications
“…Object affordances: Several methods have been proposed for detecting functional object parts and their corresponding affordance labels. These works detect object affordance parts by considering their visual and geometric features (Deng et al, 2021; Wang & Tarr, 2020). One of the early works in this direction focused on detecting graspable object areas by creating local visual descriptors of grasping points and estimating the probability that a graspable object is present based on Bernoulli trials (Montesano & Lopes, 2009).…”
Section: Literature Review
confidence: 99%
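The early approach quoted above frames graspability as a probability estimated from repeated grasp outcomes. As a rough, hedged sketch only (this is not the implementation of Montesano & Lopes, 2009, and every name in it, such as GraspabilityEstimator and descriptor_key, is hypothetical), the Python snippet below keeps per-descriptor success/failure counts and returns a Beta-Bernoulli posterior mean for "this local patch is graspable":

# Illustrative sketch only: treats each grasp attempt at a local image
# descriptor as a Bernoulli trial and estimates the success probability.
# Names and structure are assumptions, not the cited method's code.
from collections import defaultdict

class GraspabilityEstimator:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(alpha, beta) prior over each descriptor's Bernoulli parameter.
        self.prior = (alpha, beta)
        self.counts = defaultdict(lambda: [0, 0])  # key -> [successes, failures]

    def update(self, descriptor_key: str, grasp_succeeded: bool) -> None:
        """Record the outcome of one grasp attempt (one Bernoulli trial)."""
        self.counts[descriptor_key][0 if grasp_succeeded else 1] += 1

    def prob_graspable(self, descriptor_key: str) -> float:
        """Posterior mean P(graspable | observed trials) for this descriptor."""
        s, f = self.counts[descriptor_key]
        a, b = self.prior
        return (a + s) / (a + b + s + f)

if __name__ == "__main__":
    est = GraspabilityEstimator()
    for outcome in (True, True, False, True):   # four simulated grasp trials
        est.update("edge-like-patch", outcome)
    print(est.prob_graspable("edge-like-patch"))

With a uniform Beta(1, 1) prior and the four simulated trials above, the estimate is (1+3)/(2+4) ≈ 0.67.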
“…One of the early works in this direction focused on detecting graspable object areas by creating local visual descriptors of grasping points and estimating the probability that a graspable object is present based on Bernoulli trials (Montesano & Lopes, 2009). Newer approaches employ Convolutional Neural Network (CNN) models to produce classes of functional object parts from RGB (Nguyen et al, 2017; Do et al, 2018; Sawatzky et al, 2017) and synthetic data (Kokic et al, 2017). However, depth cues along with the RGB information have demonstrated greater detection accuracy in this task (Nguyen et al, 2016;…”
[Table from the citing work, interleaved in the extracted text, listing each cited approach and its input modality: (Montesano & Lopes, 2009) 2D & Keypoints; (Zhao & Zhu, 2013) 2D & 3D; (Myers et al, 2015) 2.5D; (Nguyen et al, 2016) 2.5D; (Nguyen et al, 2017) 2D; (Kokic et al, 2017) Synthetic; (Sawatzky et al, 2017) 2D & Keypoints; (Do et al, 2018) 2D; (Wang & Tarr, 2020) 2D; (Deng et al, 2021) 3D; (Xu et al, 2021) 2.5D & Keypoints; (Turek et al, 2010) 2D; (Qi et al, 2018) 3D; (Kjellström et al, 2011) 2D; (Yao et al, 2013) 3D; (Qi et al, 2017) 2.5D; (Gkioxari et al, 2018) 2D; (Fang et al, 2018) 2D; (Chuang et al, 2018) 2D; (Tan et al, 2019) 2D; (Wu et al, 2020) Synthetic; (Hou et al, 2021) 2D; (Sridhar et al, 2008) 2D; (Aksoy et al, 2010) 2D; (Aksoy et al, 2011) 2D; (Pieropan et al, 2013) 2.5D; (Pieropan et al, 2014) 2D; (Moldovan & De Raedt, 2014) Synthetic; (Liang et al, 2016) 2.5D; (Liang et al, 2018) 2.5D]
Section: Literature Review
confidence: 99%
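The statement above describes CNNs that label functional object parts and notes that adding depth to RGB improves detection. Purely as a hedged sketch under assumptions (it is not any of the cited architectures; the layer widths and the affordance-class count are made up for illustration), the PyTorch snippet below shows the general pattern: a small encoder-decoder that takes a 4-channel RGB-D image and outputs a per-pixel affordance logit map:

# Illustrative sketch only: a minimal encoder-decoder CNN over RGB-D input.
# Layer sizes and num_affordances are assumptions, not any cited model.
import torch
import torch.nn as nn

class RGBDAffordanceNet(nn.Module):
    def __init__(self, num_affordances: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Upsample back to input resolution, one logit map per affordance class.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_affordances, kernel_size=2, stride=2),
        )

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # rgbd: (batch, 4, H, W) -- RGB concatenated with a depth channel.
        return self.decoder(self.encoder(rgbd))

if __name__ == "__main__":
    net = RGBDAffordanceNet()
    logits = net(torch.randn(1, 4, 64, 64))
    print(logits.shape)  # (1, 7, 64, 64)

Concatenating depth as a fourth input channel is only the simplest fusion choice; the cited works may combine the modalities differently.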
“…A benchmark for object affordances has recently been introduced [14] to evaluate point cloud deep learning networks. Our framework complements the aforementioned approaches by introducing manifolds that contain possible grasp points from which an algorithm can sample and execute a grasp.…”
Section: Related Work
confidence: 99%