2015 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI: 10.1109/robio.2015.7419073

Affordance-map: Mapping human context in 3D scenes using cost-sensitive SVM and virtual human models

Abstract: Robots are often required to operate in environments where humans are not present, yet they still need human context information for better human-robot interaction. Even when humans are present, detecting them in cluttered environments can be challenging. As a solution to this problem, this paper presents the concept of the affordance-map, which learns human context from the geometric features of the environment. Instead of observing real humans to learn human context, it uses…
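The core step named in the abstract, a cost-sensitive SVM over geometric features at locations where virtual human models are posed, can be illustrated with a minimal sketch. The Python code below uses scikit-learn's class weighting as one way to realize cost sensitivity; the feature layout, data, and parameters are placeholder assumptions, not the authors' implementation.

# Hypothetical sketch of the cost-sensitive SVM idea: geometric feature
# vectors computed where a virtual human model is posed in the scene are
# classified as affording that pose or not. All data here is illustrative.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder training set: one feature vector per candidate location
# (e.g., support-plane height, surface normal, free-space clearance).
X_train = rng.random((200, 6))
y_train = (rng.random(200) < 0.1).astype(int)  # few positive (sittable) labels

# Cost-sensitive SVM: 'balanced' class weights penalize errors on the
# rare positive class more heavily, offsetting the label imbalance that
# arises when only a few scene locations afford a human pose.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_train, y_train)

# Score every cell of a discretized scene to form the affordance map.
scene_cells = rng.random((500, 6))
affordance_map = clf.predict(scene_cells)  # 1 = human pose afforded here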

Cited by 9 publications (10 citation statements)
References 14 publications
“…Using RGB-D images of indoor scenes, [24] perform segmentation for human actions such as walkable, sittable, lyable. Similarly, affordances are studied in [23, 8, 10] to map locations suitable for sitting or lying down; particularly, in these cases, using a human skeleton hallucinated on the different indoor scenes. Crucially, these previous methods are heavy in terms of requiring multiple learning examples, impose a particular parameterization such as detection of planes or shapes, and/or are highly specific to an object, e.g.…”
Section: Related Work (mentioning)
confidence: 99%
“…At test time, we are able to predict affordance location candidates by approximating the iT on a previously unseen input scene. The method allows us to use a model of, say, a humanoid skeleton and predict human affordances such as sitting, similarly to [8, 10, 23, 24]. But importantly, it also allows us to build these tensors more generally for any other pair of objects.…”
Section: Our Approach (mentioning)
confidence: 99%
“…We humans perceive and interact with the surrounding 3D world to accomplish everyday tasks. To equip AI systems with the same capabilities, researchers have done many works investigating shape functionality [Guan et al., 2020, Hu et al., 2016, Lai et al., 2021], grasp affordance [Brahmbhatt et al., 2019, Corona et al., 2020, Fang et al., 2018, Jiang et al., 2021, Kjellström et al., 2011, Kokic et al., 2020, Lenz et al., 2015, Mandikal & Grauman, 2021, Montesano & Lopes, 2009, Nagarajan et al., 2019, Redmon & Angelova, 2015, Yang et al., 2020], manipulation affordance [Do et al., 2018, Mo et al., 2021a, Nagarajan et al., 2019], scene affordance [Piyathilaka & Kodagoda, 2015, Rhinehart & Kitani, 2016], etc. While these works mostly study single-object/scene or agent-object interaction scenarios, our work explores inter-object relationships.…”
Section: Related Work (mentioning)
confidence: 99%
“…Affordance Reasoning. There is a growing interest in reasoning about object affordances in the field of computer vision [8]-[14] and robotics [15]-[20]. Stoytchev [15] introduces an approach to ground tool affordances via dynamically applying different behaviors from a behavioral repertoire.…”
Section: Related Work (mentioning)
confidence: 99%