2016
DOI: 10.1007/978-3-319-30285-0_18

Attribute Based Affordance Detection from Human-Object Interaction Images

Cited by 14 publications (8 citation statements)
References 23 publications
“…(See Section IV for details.) Hassan et al. [25] proposed a Bayesian-network-based affordance detection method that exploits attributes of the object, the actor, and the environment. Grabner et al. [26] used a 3D human skeleton model to learn the action of sitting on a chair and infer whether or not an object affords the "sitting" action.…”
Section: A. Affordance Learning
Citation type: mentioning
Confidence: 99%
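
The Bayesian-network formulation summarized in the statement above can be made concrete with a small sketch. Everything in it (the attribute names, affordance labels, probability tables, and the naive-Bayes-style factorization) is an illustrative assumption, not the actual model from the cited paper:

```python
# Minimal sketch of attribute-based affordance inference with a
# naive-Bayes-style factorization. All attribute names, affordance
# labels, and probabilities below are illustrative placeholders,
# not values from the cited paper.

priors = {"sittable": 0.3, "graspable": 0.7}

# P(attribute = value | affordance) for a few discrete observations.
likelihoods = {
    ("object_shape", "flat"):     {"sittable": 0.8, "graspable": 0.2},
    ("object_size", "large"):     {"sittable": 0.7, "graspable": 0.1},
    ("actor_pose", "bent_knees"): {"sittable": 0.9, "graspable": 0.3},
    ("environment", "indoor"):    {"sittable": 0.6, "graspable": 0.5},
}

def infer_affordance(observed):
    """Posterior over affordance labels given observed (attribute, value) pairs."""
    scores = dict(priors)
    for key in observed:
        table = likelihoods.get(key)
        if table is None:
            continue  # attribute not modeled: skip it
        for affordance in scores:
            scores[affordance] *= table[affordance]
    total = sum(scores.values())
    return {affordance: s / total for affordance, s in scores.items()}

print(infer_affordance([("object_shape", "flat"),
                        ("actor_pose", "bent_knees")]))
# -> roughly {'sittable': 0.84, 'graspable': 0.16}
```

The point of the factorization is the one the statement highlights: object, actor, and environment attributes all contribute evidence, so the same object can score differently under different actors or scenes.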
“…This task becomes complicated if one of those objects is unseen or occluded [37]. The presence of a human in a scene makes it more complex in terms of actions, events, and affordance learning in particular, because the affordance then depends on the attributes of the human.…”
Section: Challenges
Citation type: mentioning
Confidence: 99%
“…(the statement opens mid-table; a truncated first row ends with "racket head | detection")

Label | Refs | Description | Example objects | Task
obstruct | [59] | indicates the locations of obstructers | wall | detection
break | [59] | indicates break-sensitive objects | glass cups | detection
pinch-pull | [59] | indicates objects that should be pulled with a pinch | knob | detection
hook-pull | [59] | indicates objects that should be pulled by hooking | handle | detection
tip-push | [59] | indicates objects that perform actions after pushing | electricity buttons | detection
warmth | [59] | indicates warmth-giving objects | fireplaces | detection
illumination | [59] | indicates light-giving objects | lamps | detection
dry | [59] | indicates objects that absorb water | towels | detection
walk | [59], [63] | indicates places that allow walking | gardens | detection, segmentation
lyable | [63] | refers to long free space that allows a person to lie down | bed | segmentation
reachable | [63] | refers to an object in a scene that a person can reach and pick up | water bottle from the fridge | segmentation
movable | [64] | refers to objects that can be moved around | small objects like balls, mugs | segmentation

TABLE 1: Indoor affordance labels used to detect objects as in studies [57], [49], [58], [60], [61], [59], [63], [64], [65], [62]

Hassan and Dharmaratne [37] proposed an affordance detection method based on the object, the human, and the ambient environment. They used object attributes (physical, material, shape, etc.), human attributes (poses), and object-to-object relationships to train their scheme.…”
Section: Feature-engineering Approaches
Citation type: mentioning
Confidence: 99%
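
As a rough illustration of the feature engineering this statement describes (object physical/material/shape attributes, human pose attributes, and object-to-object relations combined into one training input), here is a hypothetical sketch; the vocabularies, field names, and pooling choices are assumptions, not the actual pipeline of [37]:

```python
# Illustrative sketch of assembling attribute features into a single
# vector for a classifier. Vocabularies and fields are hypothetical.
MATERIALS = ["wood", "metal", "fabric", "plastic"]
SHAPES = ["flat", "cylindrical", "box", "irregular"]
POSES = ["standing", "sitting", "reaching", "bending"]

def one_hot(value, vocab):
    return [1.0 if value == v else 0.0 for v in vocab]

def build_feature_vector(obj, human, relations):
    """obj: dict of object attributes; human: dict with a pose label;
    relations: list of (distance, overlap) pairs to nearby objects."""
    feats = []
    feats += one_hot(obj["material"], MATERIALS)   # material attribute
    feats += one_hot(obj["shape"], SHAPES)         # shape attribute
    feats += [obj["height_m"], obj["width_m"]]     # physical attributes
    feats += one_hot(human["pose"], POSES)         # human pose attribute
    # Pool object-to-object relations into fixed-size statistics.
    if relations:
        dists = [d for d, _ in relations]
        overlaps = [o for _, o in relations]
        feats += [min(dists), sum(overlaps) / len(overlaps)]
    else:
        feats += [0.0, 0.0]
    return feats

vec = build_feature_vector(
    {"material": "wood", "shape": "flat", "height_m": 0.45, "width_m": 0.5},
    {"pose": "sitting"},
    [(0.2, 0.1), (1.5, 0.0)],
)
print(len(vec), vec[:6])  # 16-dimensional vector in this toy setup
```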
“…Grabner et al. detected surfaces suitable for sitting in 3D data [9], Kjellstrom et al. classified the function of tools from video [10], and Zhu et al. proposed a task-oriented object model for learning construction patterns [11]. Hassan et al. integrated human, object, and environmental aspects for attribute modeling and achieved accurate functional detection [12]. Kemp et al. detected a tool tip that can be grasped by a robot [13].…”
Section: Related Work
Citation type: mentioning
Confidence: 99%