2010 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2010.5650024
Prediction of user's grasping intentions based on eye-hand coordination

Abstract: Eye-hand coordination is a primordial reach-to-grasp action performed by a human hand when reaching for an object. This paper proposes the use of a visual sensor that allows the simultaneous analysis of hand and eye motions in order to recognize the reach-to-grasp movement, i.e. to predict the grasping gesture. This solution fuses two viewpoints taken from the user's perspective: first, from an eye-tracker device attached to the user's head; and second, from a wearable camera attached to the user's…

Cited by 5 publications (2 citation statements)
References 21 publications
“…The aim of this process is to continuously monitor the operator's actions in real time. The action identification can be driven by different cues, such as gestures (Carrasco and Clady 2010), scene objects being manipulated (Koppula and Saxena 2015), or environmental information (Casalino et al. 2018). Action recognition has also been extensively studied in terms of whole-body motion tracking and segmentation (Natola et al. 2015; Tome et al. 2017).…”
Section: Human-robot Cooperation In Assemblymentioning
confidence: 99%
“…We think that the same problem can be mitigated by also considering observations other than the gaze alone, for example the trajectory of one hand. This approach was followed in [26], where a hidden Markov model (HMM) was adopted. The HMM allows inference over a sliding temporal window of observations.…”
Section: A Generalitiesmentioning
confidence: 99%
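The inference scheme described in the statement above — an HMM queried over a sliding temporal window of observations — can be sketched with the standard forward (filtering) recursion. This is a minimal illustration, not the cited paper's implementation: the two-state "intention" model, the observation symbols, and all probabilities below are hypothetical.

```python
import numpy as np

def forward_window(A, B, pi, obs_window):
    """Forward-algorithm filtering over a sliding window of discrete
    observations; returns the posterior over hidden states given the window.

    A  : (N, N) state-transition matrix
    B  : (N, M) emission matrix (state x observation symbol)
    pi : (N,)   initial state distribution
    obs_window : sequence of observation indices (the sliding window)
    """
    alpha = pi * B[:, obs_window[0]]
    alpha /= alpha.sum()              # normalize at each step to avoid underflow
    for o in obs_window[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha                      # P(state | observation window)

# Toy example: two hidden intentions ("reach", "rest") and two observation
# symbols ("gaze-on-object", "gaze-away"). All numbers are illustrative.
A  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
B  = np.array([[0.8, 0.2],
               [0.3, 0.7]])
pi = np.array([0.5, 0.5])

# Three consecutive "gaze-on-object" frames in the window.
posterior = forward_window(A, B, pi, [0, 0, 0])
print(posterior)  # probability mass concentrates on the "reach" state
```

Because the recursion only carries the normalized state distribution forward, re-running it on each new window keeps the per-frame cost linear in the window length, which matches the sliding-window usage the excerpt describes.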