Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction 2012
DOI: 10.1145/2401836.2401849

Eye gaze assisted human-computer interaction in a hand gesture controlled multi-display environment

Abstract: A human-computer interaction (HCI) framework for processing user input in a multi-display environment can detect and interpret dynamic hand-gesture input. In an environment equipped with large displays, this system enables fully contactless application control. The framework was extended with a new input modality that incorporates human gaze into the interaction. The main contribution of this work is the ability to combine arbitrary types of computer input and obtain a detailed view of the be…

Cited by 7 publications (2 citation statements) · References 7 publications
“…Cha and Maier proposed a combination of gaze and mid-air gestures for a multi-display use case [31]. These authors presented architectural implementation details of their system, but did not present any evaluation results or interaction design decisions.…”
Section: Gaze and Mid-air Gestures (mentioning)
confidence: 99%
“…The gaze-only input technique often suffers from the so-called “Midas touch” problem [32]. This problem can be relieved by recognizing intention through in-depth processing of gaze-tracking data, such as classifying intended versus unintended gaze movements [33], or by introducing an additional modality for confirmation, such as a hand gesture [34], a touch action [35–37], a joystick [38], or EEG (electroencephalography) [39]. As these previous studies aimed to use gaze as an input method, it is more important to recognize the user's intention accurately after the actual command action than before it.…”
Section: Introduction (mentioning)
confidence: 99%
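The confirmation-modality pattern described in the citation above can be illustrated with a minimal sketch. This is not the cited authors' implementation; it is a hypothetical toy in which gaze dwell only highlights a target, and an action fires solely when a separate confirmation event (e.g. a recognized hand gesture) arrives while the highlight is active, so gaze alone never triggers a command — the essence of avoiding the Midas-touch problem:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeConfirmSelector:
    """Toy gaze-plus-gesture selector (illustrative, not the cited system).

    Gaze dwelling on a target for `dwell_threshold` seconds highlights it;
    a confirmation gesture commits the highlighted target. Gaze alone
    never commits anything, which sidesteps the Midas-touch problem.
    """
    dwell_threshold: float = 0.5       # seconds of stable gaze to highlight
    highlighted: Optional[str] = None  # target ready to be confirmed
    _target: Optional[str] = None      # target currently under gaze
    _dwell_start: Optional[float] = None

    def on_gaze(self, target: Optional[str], now: float) -> None:
        """Feed a gaze sample: the looked-at target (or None) and a timestamp."""
        if target != self._target:
            # Gaze moved to a new target: restart the dwell timer.
            self._target = target
            self._dwell_start = now
            self.highlighted = None
        elif target is not None and self._dwell_start is not None:
            if now - self._dwell_start >= self.dwell_threshold:
                self.highlighted = target

    def on_confirm_gesture(self) -> Optional[str]:
        """A confirmation gesture arrived: commit and return the highlighted
        target, or None if nothing is highlighted (gesture is ignored)."""
        committed = self.highlighted
        self.highlighted = None
        return committed
```

A short walk-through: half a second of stable gaze highlights a button, after which a gesture commits it; a brief glance at another button, followed by the same gesture, commits nothing.

```python
sel = GazeConfirmSelector()
sel.on_gaze("button_A", 0.0)
sel.on_gaze("button_A", 0.6)              # 0.6 s dwell -> highlighted
print(sel.on_confirm_gesture())           # commits "button_A"
sel.on_gaze("button_B", 1.0)
sel.on_gaze("button_B", 1.1)              # only 0.1 s dwell -> no highlight
print(sel.on_confirm_gesture())           # commits nothing (None)
```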