2016 12th International Conference on Intelligent Environments (IE) 2016
DOI: 10.1109/ie.2016.42
Combining Speech, Gaze, and Micro-gestures for the Multimodal Control of In-Car Functions

Cited by 42 publications (21 citation statements) · References 10 publications
“…Furthermore, Roider et al. [28] and Nesselrath et al. [21] studied the selection of objects inside the vehicle using hand gestures, eye gaze, or speech commands separately. Similarly, Poitschke et al. [25] studied referencing objects inside the vehicle using eye-gaze gestures, while Sezgin et al. [31] studied selection using speech commands and facial recognition.…”
Section: Related Work
confidence: 99%
See 1 more Smart Citation
“…Furthermore, Roider et al [28] and Nesselrath et al [21] studied the selection of objects inside the vehicle using hand gestures, eye gaze or speech commands separately. Similarly, Poitschke et al [25] studied referencing objects inside the vehicle using eye gaze gestures while Sezgin et al [31] studied selection using speech commands and facial recognition.…”
Section: Related Workmentioning
confidence: 99%
“…There are several advantages to using hand gestures, eye gaze, head movements, and speech over traditional touch-based interaction methods, such as increased simplicity and naturalness when interacting with a relatively complicated machine like a modern car, in addition to a reduction in distraction during the primary task (i.e., driving) [7,21,24,28]. Thus, researchers have tried to incorporate these modalities to control various components inside the vehicle [18,19,23,27,31,38].…”
Section: Introduction
confidence: 99%
“…Lastly, researchers design multimodal interfaces by combining single modalities such as speech, gaze, and gesture for one command (e.g., gazing at an object and gesturing toward it to select) [36], rather than applying them to cascading steps. These studies focus on the synergy effect of single modalities in interaction, rather than on reducing overall driver distraction [26].…”
Section: E. Discussion
confidence: 99%
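The fusion pattern described in the excerpt above (gazing at an object while gesturing toward it to form a single selection command) can be sketched as a small temporal-alignment step. This is a minimal illustrative sketch, not the cited system's implementation; the `Event` type, the `fuse` function, and the one-second window are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str    # assumed labels: "gaze" or "gesture"
    target: str      # object identifier, e.g. "side_mirror"
    timestamp: float # seconds since session start

def fuse(events, window=1.0):
    """Return the selected target if a gaze fixation and a confirming
    gesture reference the same object within `window` seconds,
    otherwise None. Simplified: first match wins, no confidences."""
    gazes = [e for e in events if e.modality == "gaze"]
    gestures = [e for e in events if e.modality == "gesture"]
    for g in gazes:
        for h in gestures:
            if g.target == h.target and abs(g.timestamp - h.timestamp) <= window:
                return g.target
    return None
```

A gaze fixation on the side mirror followed 0.6 s later by a pointing gesture at the same object would yield a selection, whereas unmatched or out-of-window events would not.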
“…MMI can support education for disabled people using gestures and sound [13]. Using MMI in a car, the driver may choose his/her preferred modality from speech, gaze, and gestures, and can combine the respective system input with different modalities [14]. Another MMI system connected to the steering wheel of a car enables input via speech and gestures [15].…”
Section: Examples Of Multimodal Interaction Prototypes
confidence: 99%