2023
DOI: 10.48550/arxiv.2303.10437
Preprint

Grounding 3D Object Affordance from 2D Interactions in Images

Abstract: Grounding 3D object affordance seeks to locate objects' "action possibilities" regions in the 3D space, which serves as a link between perception and operation for embodied agents. Existing studies primarily focus on connecting visual affordances with geometry structures, e.g. relying on annotations to declare interactive regions of interest on the object and establishing a mapping between the regions and affordances. However, the essence of learning object affordance is to understand how to use it, and the ma…


Cited by: 1 publication (1 citation statement)
References: 72 publications
“…As manual affordance annotations are often costly to acquire, much subsequent research has shifted its focus to weak supervision such as keypoints [16,53,54] or image-level labels [36,43]. Recent work has explored a novel perspective on how to ground affordances from human-object interaction images [29,36,64] or human action videos [9,19,31,43]. In robotics, affordance learning enables robots to interact effectively and intelligently with complex and dynamic environments [2,63].…”
Section: Related Work
confidence: 99%