2015 International Conference on Intelligent Environments
DOI: 10.1109/ie.2015.39
SiAM - Situation-Adaptive Multimodal Interaction for Innovative Mobility Concepts of the Future

Cited by 6 publications (5 citation statements)
References 3 publications
“…As research has shown, the use of multiple input modalities can surpass a single input modality in terms of performance [9,18,37]; multimodal user interaction therefore offers significant utility for in-vehicle applications. Mitrevska et al. demonstrate adaptive control of in-vehicle functions using an individual modality (speech, gaze, or gesture) or a combination of two or more [23]. Müller and Weinberg discuss methods for multimodal interaction using gaze, touch, and speech for in-vehicle tasks, presenting advantages and disadvantages of the individual modalities [26].…”
Section: Related Work
confidence: 99%
“…Multimodal user interaction has a wide variety of applications for in-vehicle functions. Mitrevska et al. demonstrate use cases for adaptive multimodal control of in-vehicle functions with an individual modality (speech, gaze, or gesture) or a combination of two or more modalities [15].…”
Section: Related Work
confidence: 99%
“…Participants of the evaluation study were much more comfortable speaking than nodding and were much more forgiving of recognition errors for speech than for head nods (Kousidis et al., 2014). C3 also enables the driver to choose between equivalent input modalities (Mitrevska et al., 2015). At the same time, the concept also uses complementary input of different modalities to build up a command.…”
Section: Providing Equivalent Alternatives
confidence: 99%
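The distinction the excerpt above draws between equivalent and complementary modalities can be sketched in code. The following is a minimal, hypothetical illustration (the class and function names are invented for this sketch, not taken from the cited systems): speech supplies the action, gaze supplies the target, and a complete command exists only when the two complementary inputs are fused.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechInput:
    action: str            # e.g. "increase", "decrease"

@dataclass
class GazeInput:
    target: Optional[str]  # device the driver is looking at, e.g. "radio"

def fuse(speech: SpeechInput, gaze: GazeInput) -> Optional[str]:
    """Fuse complementary inputs into one in-vehicle command.

    Neither modality alone forms a complete command: speech carries
    the action, gaze disambiguates the target device.
    """
    if gaze.target is None:
        return None  # command incomplete without a gaze target
    return f"{speech.action} {gaze.target}"

print(fuse(SpeechInput("increase"), GazeInput("radio")))  # increase radio
```

Equivalent alternatives, by contrast, would map several modalities (speech, gesture, or touch) onto the same complete command, letting the driver pick whichever is most convenient.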
“…C8 integrates gaze information for controlling multiple displays. Figure 2.12: Left: C5 uses complementary input of speech with touch gestures on the steering wheel (Pfleging et al., 2011, 2012). Right: Sensor setup for combining gaze input with gestures on the steering wheel in C3 (Mitrevska et al., 2015).…”
Section: Complementary Input
confidence: 99%