2008
DOI: 10.1109/icpr.2008.4761609
Real-time detection and interpretation of 3D deictic gestures for interaction with an intelligent environment

Abstract: We present a system that enables pointing-based unconstrained interaction with a smart conference room using an arbitrary multi-camera setup. For each individual camera stream, areas exhibiting strong motion are identified. In these areas, face and hand hypotheses are detected. The detections of multiple cameras are then combined to 3D hypotheses from which deictic gestures are identified and a pointing direction is derived. This is then used to identify objects in the scene. Since we use a combination of simp…
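The abstract outlines fusing per-camera 2D face and hand detections into 3D hypotheses. The paper's own fusion code is not reproduced here; as a minimal sketch of the standard building block, linear (DLT) triangulation combines matching detections from two calibrated cameras into one 3D point. The function names, the synthetic camera matrices, and the example coordinates below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point.

    P1, P2 : 3x4 projection matrices of two calibrated cameras.
    x1, x2 : (u, v) pixel coordinates of the same detection
             (e.g. a hand hypothesis) in each camera image.
    Returns the 3D point in the shared world frame.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Synthetic example: two cameras 0.5 m apart observing the point (0.5, 0.2, 3.0).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))  # ~ [0.5, 0.2, 3.0]
```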

Cited by 8 publications (15 citation statements)
References 9 publications
“…An often utilized approach is to calculate the direction as the line-of-sight between the eyes and the pointing hand or finger (e.g. [37], [40]). In [39], 3 different possibilities were evaluated, and the line-of-sight model was reported to be the best.…”
Section: Related Work (mentioning)
confidence: 99%
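The line-of-sight model described in this excerpt defines the pointing direction as the ray from the eyes through the pointing hand. Below is a minimal sketch of how such a ray could be used to select a pointed-at object, assuming 3D eye, hand, and object positions are already available; the object list, positions, and angular threshold are illustrative, not taken from the cited papers.

```python
import numpy as np

def pointing_ray(eye_pos, hand_pos):
    """Line-of-sight model: the pointing ray starts at the eyes
    and passes through the pointing hand."""
    direction = hand_pos - eye_pos
    return eye_pos, direction / np.linalg.norm(direction)

def select_target(eye_pos, hand_pos, objects, max_angle_deg=15.0):
    """Return the object whose direction from the eyes is angularly
    closest to the pointing ray, if within the threshold."""
    origin, ray = pointing_ray(eye_pos, hand_pos)
    best, best_angle = None, np.radians(max_angle_deg)
    for name, pos in objects.items():
        to_obj = pos - origin
        to_obj /= np.linalg.norm(to_obj)
        angle = np.arccos(np.clip(ray @ to_obj, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

# Illustrative scene, coordinates in meters in a room frame.
objects = {"screen": np.array([2.0, 1.2, 3.0]),
           "lamp":   np.array([-1.5, 2.0, 2.5])}
eye  = np.array([0.0, 1.7, 0.0])
hand = np.array([0.4, 1.6, 0.6])   # roughly on the eye-to-screen line
print(select_target(eye, hand, objects))  # -> "screen"
```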
“…Saliency-based Object Detection and Selection 1) In order to detect pointing gestures, we use a modified version of the system presented in [40]. Most importantly, we replaced the face detector with a head-shoulder detector based on histograms of oriented gradients, which - in practice - is more robust and less view-dependent.…”
Section: Realization (mentioning)
confidence: 99%
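The head-shoulder detector mentioned in this excerpt is based on histograms of oriented gradients (HOG); its training data and classifier are not published with the excerpt. The following is a generic sketch of the HOG-plus-linear-SVM pattern such a detector typically follows, using scikit-image and scikit-learn; the window size, random placeholder patches, and labels are assumptions for illustration only.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = (64, 64)  # assumed head-shoulder window size (rows, cols)

def hog_features(patch):
    """HOG descriptor of one grayscale patch of size WIN."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder training data: in a real system these would be cropped
# head-shoulder patches (label 1) and background patches (label 0).
rng = np.random.default_rng(0)
patches = rng.random((20, *WIN))
labels = np.array([1] * 10 + [0] * 10)

clf = LinearSVC(C=0.01)
clf.fit([hog_features(p) for p in patches], labels)

def is_head_shoulder(patch):
    """Classify one candidate window, e.g. proposed by a
    motion-region stage, as head-shoulder or background."""
    return clf.decision_function([hog_features(patch)])[0] > 0

print(is_head_shoulder(rng.random(WIN)))
```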
“…Some solutions acknowledge this principle [13,29], but do not take into account whether the gestures are ambiguous regarding the target population, since ambiguity is intrinsically related to the cultural aspects of the population. As an example of the problem of not considering the interdependency of gestural ambiguity with cultural aspects, the application in [39] uses only deictic gestures (pointing gestures). Gestures for some commands for this application resemble a firearm, which is probably not desired in a home or in war- or conflict-ridden regions of the world.…”
Section: Socio-technical Aspects of Gestural Interaction: Framework A (mentioning)
confidence: 99%
“…Accordingly, visually recognizing pointing gestures and inferring a referent or target direction has been addressed by several authors; e.g., for interaction with smart environments (e.g. [39]), wearable visual interfaces (e.g. [22]), and robots (e.g.…”
Section: Pointing (mentioning)
confidence: 99%