2011
DOI: 10.1007/s10514-011-9263-y
Two-handed gesture recognition and fusion with speech to command a robot

Cited by 79 publications (55 citation statements)
References 48 publications
“…For example, touchscreens could be smeared with residues, and remote controls could easily be damaged or misplaced. Examples of perceptual gesture-based interaction in a context of use resembling ours can be found in human-robot interaction [5][6][7]. However, to our knowledge, our work is one of the first user-centered studies of designing gesture-based control to operate factory automation systems.…”
Section: Gesture-Based Interaction
confidence: 93%
“…The same approach does not directly apply to reconstructing the absolute orientation of the rigid body, because Δr_t + Δr_{t+1} ≠ r_{t+1}. Having chosen H_{r,t}[4,4] = 0 in (32), we can extract the relative rotation vector…”
Section: Pose Reconstruction
confidence: 99%
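The pose-reconstruction statement above hinges on the fact that incremental rotation vectors do not compose by simple addition. The short sketch below illustrates this with SciPy's rotation utilities; the specific rotation vectors are made-up values for demonstration and are not taken from the cited paper.

```python
# Minimal sketch (illustrative values, not from the cited paper): rotation
# vectors do not compose by addition, so the relative orientation must be
# extracted from the properly composed rotation.
import numpy as np
from scipy.spatial.transform import Rotation as R

dr_t  = np.array([0.3, 0.0, 0.0])   # relative rotation vector at step t
dr_t1 = np.array([0.0, 0.4, 0.0])   # relative rotation vector at step t+1

# Compose the two incremental rotations properly (apply dr_t first, then dr_t1).
composed = R.from_rotvec(dr_t1) * R.from_rotvec(dr_t)

print("sum of rotation vectors:      ", dr_t + dr_t1)
print("rotation vector of composition:", composed.as_rotvec())
# The two results differ, illustrating that Δr_t + Δr_{t+1} ≠ r_{t+1}.
```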
“…To ensure smooth and effective cooperation, there is a need to represent human and robot actions in a compact and robust way that facilitates the understanding of performed motions as well as the generation of motions that flexibly adapt to different scenarios. Examples include recognizing human actions [44], gesture-based human-robot interaction [38,4,18], learning human activities [14], and task learning by demonstration [1,17,33]. The aforementioned applications usually adopt Cartesian trajectories as task descriptors.…”
Section: Introduction
confidence: 99%
“…However, because it is still difficult to extract these two small positions accurately, there are many alternative methods for estimating the pointing direction: head-hand line [2,6,8], head-finger line [4,6,13], forearm direction [1,6,8], and head orientation [8,11] methods. The head orientation approach does not use the hand position, so additional information is needed to identify the target object, for example speech recognition to obtain the object features [11].…”
Section: Related Work
confidence: 99%
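As a rough illustration of the head-hand line method mentioned in the statement above, the sketch below casts a ray from the head through the hand and intersects it with a horizontal plane at z = 0 to estimate the pointed-at location. The function name, coordinates, and choice of target plane are illustrative assumptions, not the formulation used in the cited papers.

```python
# Illustrative head-hand line pointing estimation (assumed geometry):
# a ray from the head position through the hand position is intersected
# with a horizontal target plane at z = plane_z.
import numpy as np

def pointing_target(head, hand, plane_z=0.0):
    """Return the point where the head-to-hand ray meets the plane z = plane_z."""
    head = np.asarray(head, dtype=float)
    hand = np.asarray(hand, dtype=float)
    direction = hand - head
    if abs(direction[2]) < 1e-9:
        raise ValueError("Pointing ray is parallel to the target plane")
    s = (plane_z - head[2]) / direction[2]
    if s < 0:
        raise ValueError("Target plane lies behind the user")
    return head + s * direction

# Example: head at 1.6 m height, hand at 1.2 m, pointing forward and down.
print(pointing_target(head=[0.0, 0.0, 1.6], hand=[0.2, 0.3, 1.2]))
# -> [0.8 1.2 0. ]
```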