2017
DOI: 10.1007/s12193-017-0242-2

Multi-modal user interface combining eye tracking and hand gesture recognition

Cited by 11 publications (5 citation statements)
References 14 publications
“…If the graphical representation is presented on a large display, as in our case, yet additional challenges arise as concerns how humans interact with it, including window management problems (Robertson et al., 2005). Closer to our interests, not much work exists on interpreting deictic gestures directed to large displays, especially as concerns recognizing the target at a semantic level (Kim et al., 2017).…”
Section: Related Work
confidence: 98%
“…In the 30 years since, not much work has been done on how to accommodate the creation of new entities (see (Wilson et al., 2016) for documents and (Li and Boyer, 2016) for tutoring dialogues about programming), and none in the visualization domain. Note we do not focus on multimodal reference resolution, another vast area (Navarretta, 2011; Qu and Chai, 2008; Eisenstein and Davis, 2006; Prasov and Chai, 2008; Iida et al., 2011; Kim et al., 2017; Sluÿters et al., 2022), even if we will briefly touch on deictic gestures in Section 3.…”
Section: Co-reference Resolution
confidence: 99%
“…Likewise, an eye tracker combined with haptic feedback is proposed for virtual reality-based learning games [21]. Multimodal systems have been advanced for communication functions, combining eye-tracking, gesture, and touch-and-voice input [22], [23], [24], [25]; however, these studies did not consider potential end users. Also, it is unclear how the choice of modality can impact performance, especially whether children with dyslexia perform differently.…”
Section: Introduction
confidence: 99%