2013 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2013.6630785
Predicting human intention in visual observations of hand/object interactions

Abstract: The main contribution of this paper is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction. Predicting intention amounts to inferring the imminent manipulation task once the human hand is observed to have stably grasped the object. Inference is performed by means of a probabilistic graphical model that encodes object-grasping tasks over the 3D state of the observed scene. The 3D state is extracted from RGB-D image sequences by a novel vision-…
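The abstract describes inferring the imminent task from an observed grasp via a probabilistic graphical model. A minimal sketch of that idea, using a discrete Bayesian model with invented task names, grasp types, and probabilities (none of which are from the paper):

```python
import numpy as np

# Hypothetical tasks and prior P(task); values are illustrative only.
tasks = ["pour", "hand-over", "drink"]
prior = np.array([0.4, 0.3, 0.3])

# P(observed grasp type | task); rows: tasks, cols: grasp types.
# Grasp types: 0 = power grasp on the handle, 1 = precision grasp on the rim.
likelihood = np.array([
    [0.8, 0.2],   # pouring favours a power grasp on the handle
    [0.5, 0.5],   # hand-over is less constrained
    [0.3, 0.7],   # drinking here favours the rim grasp
])

def posterior(grasp_type: int) -> np.ndarray:
    """P(task | grasp) by Bayes' rule, normalised over tasks."""
    unnorm = prior * likelihood[:, grasp_type]
    return unnorm / unnorm.sum()

print(dict(zip(tasks, posterior(0))))
```

Observing a power grasp on the handle shifts the posterior toward "pour"; the paper's actual model conditions on a much richer 3D scene state, but the inference step is the same shape.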

Cited by 47 publications (30 citation statements)
References 30 publications
“…An important application of imitation learning is robotic grasp selection. Song et al. [6] instead proposed a robot grasp planning method that learns human intention and maps it to the robotic embodiment at this more abstract level, rather than directly mapping grasps from the human to the robot grasp space. However, their method requires a fixed set of high-level object and action parameters to be specified manually.…”
Section: Related Work
confidence: 99%
“…Moreover, as a topic model, LM-LDA represents the data in terms of "topics" in a latent space. Hence, there is potential to use the learned topics for transfer learning of action "intention" [6] to other robot configurations with different grasping state spaces.…”
Section: Contribution
confidence: 99%
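The citation above describes recovering a low-dimensional "topic" representation of grasp data that could transfer across embodiments. A rough sketch of that idea, substituting standard LDA from scikit-learn for the cited LM-LDA variant, on invented data:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# 20 hypothetical "grasp documents", each a count histogram over
# 12 discretised grasp features (entirely synthetic).
X = rng.integers(0, 5, size=(20, 12))

# Fit a 3-topic model; each document becomes a mixture over topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_mix = lda.fit_transform(X)   # shape (20, 3), rows sum to 1

print(topic_mix.shape)
```

The per-document topic proportions in `topic_mix` are the latent representation: a new robot with a different grasp state space could, in principle, be driven from these topic mixtures rather than from the raw human grasp features.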