2021
DOI: 10.1145/3446370

Visual Affordance and Function Understanding

Abstract: Nowadays, robots are dominating the manufacturing, entertainment, and healthcare industries. Robot vision aims to equip robots with the capabilities to discover information, understand it, and interact with the environment, which require an agent to effectively understand object affordances and functions in complex visual domains. In this literature survey, first, “visual affordances” are focused on and current state-of-the-art approaches for solving relevant problems as well as open problems and research gaps…

Cited by 64 publications (41 citation statements)
References 186 publications (245 reference statements)
“…Part affordances in our dataset are defined in terms of adjectives (e.g., sittable) or nouns (e.g., armrest), which describe how object parts could afford human daily actions and activities. We adopt certain common affordance categories from a comprehensive survey of visual affordance [17], e.g., containment, sittable, support, openable, rollable, display, and wrapgrasp.…”
Section: Affordance Definition and Dataset Construction
confidence: 99%
“…Learning and evaluating PartAfford demands collections of 3D objects and their affordance labels for object parts. Prior work on visual affordance learning [17] either focuses on 2D objects and scenes or lacks part-based annotation [6]. Hence, we construct a part-level, cross-category 3D object affordance dataset annotated with 24 affordance categories shared among over 25,000 3D objects.…”
Section: Introduction
confidence: 99%
“…Another line of work explores the use of affordances in RL, robotics, and control, historically through the lens of perception [53], [19], [52]. Affordances have also been discussed previously in reinforcement learning in order to accelerate planning in model-based RL by planning over only a subset of relevant actions [1], [23], [51].…”
Section: Related Work
confidence: 99%
“…In more general terms, the way an agent interacts with a particular object is what defines its affordance, and this is strictly related to their relative motion. In recent decades, computer scientists have also been working on this general idea, trying to quantitatively implement it in the fields of computer vision and robotics (Ardón et al., 2020; Hassanin et al., 2021). As far as visual affordance is concerned, that is, extracting affordance information from still images and videos, different cognitive tasks have been considered so far, such as affordance recognition and affordance segmentation; see (Hassanin et al., 2021) for a recent review.…”
Section: Feature Extraction and Conjugate Velocities
confidence: 99%