Multimodal Interaction With W3C Standards 2016
DOI: 10.1007/978-3-319-42816-1_19
Multimodal Fusion and Fission within the W3C MMI Architectural Pattern

Cited by 2 publications (3 citation statements) | References 18 publications
“…Modality fusion has attained maturity [18], but fusion engines are not yet integrated in MMI architectures. Our team recently presented a first proposal regarding how to integrate a fusion engine in a W3C-aligned MMI architecture [19], but to tackle fusion in the generic decoupled scenario of an MMI architecture, where a wide range of modalities can exist and, potentially, be fused, several challenges need to be addressed [20]. For instance, the semantics for the input modalities must be uniform: saying “left” or pressing the left arrow should result in the same semantic content, e.g., LEFT.…”
Section: Background and Related Work
confidence: 99%
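The uniformity requirement in this statement lends itself to a brief illustration. Below is a minimal sketch in Python with entirely hypothetical names (the W3C architecture itself exchanges life-cycle events carrying EMMA-annotated data rather than in-process objects) of how per-modality lexicons could normalize a spoken “left” and a left-arrow key press to the same semantic token before either reaches a fusion engine.

```python
# Minimal sketch (hypothetical names): normalizing events from different
# modality components onto one shared semantic vocabulary before fusion,
# so that saying "left" and pressing the left-arrow key both yield LEFT.
from enum import Enum, auto
from typing import Optional

class Semantic(Enum):
    LEFT = auto()
    RIGHT = auto()

# Per-modality lexicons mapping raw recognizer output to shared semantics.
SPEECH_LEXICON = {"left": Semantic.LEFT, "right": Semantic.RIGHT}
KEY_LEXICON = {"ArrowLeft": Semantic.LEFT, "ArrowRight": Semantic.RIGHT}
LEXICONS = {"speech": SPEECH_LEXICON, "keyboard": KEY_LEXICON}

def normalize(modality: str, raw: str) -> Optional[Semantic]:
    """Map a raw modality event onto the uniform semantic content."""
    return LEXICONS[modality].get(raw)

# Both inputs carry the same semantic content for the fusion engine.
assert normalize("speech", "left") is normalize("keyboard", "ArrowLeft")
```

The design point is that fusion then operates over a single shared vocabulary, so supporting an additional modality only requires adding a lexicon, not changing the fusion logic.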
“…The adaptive, context-aware use of output modalities (fission), while it has received far less attention than fusion, can play a very important role in how interactive systems adapt to different devices, users, and dynamic contexts [21]. In this regard, very interesting contributions have been made by [22,23], which divide the life cycle of a multimodal presentation into stages, as also proposed by the W3C (generation, styling, and rendering [13,20,24]), and propose a mathematical model to determine the best possible combination of input and output modalities based on the user’s profile and preferences; and by our group, which proposed the basis for an intelligent adaptation of output relying on context and user models, AdaptO [6].…”
Section: Background and Related Work
confidence: 99%
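For orientation, here is an illustrative sketch (hypothetical names and data structures, not taken from the cited papers or the W3C documents) of the three presentation stages the quote attributes to the W3C: generation decides what to present, styling decides how to present it given a user profile or context, and rendering dispatches the result to concrete output modalities.

```python
# Illustrative fission pipeline in three stages (all names hypothetical):
# generation -> styling -> rendering, as in the W3C staging of output.

def generate(message: dict) -> dict:
    # Generation: decide the abstract content of the presentation.
    return {"intent": "notify", "text": message["text"]}

def style(content: dict, profile: dict) -> list:
    # Styling: choose output modalities for this user/context, e.g.
    # prefer speech alone when the user's eyes are busy (driving).
    modalities = ["tts"] if profile.get("eyes_busy") else ["screen", "tts"]
    return [{"modality": m, "content": content} for m in modalities]

def render(plans: list) -> None:
    # Rendering: hand each styled chunk to its modality component.
    for plan in plans:
        print(f"[{plan['modality']}] {plan['content']['text']}")

# A user who cannot look at the screen receives speech output only.
render(style(generate({"text": "Turn left ahead"}), {"eyes_busy": True}))
```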
“…The main difference is the level of processing at which the fusion of the different sources takes place (Dumas et al., 2009). The following paragraphs give a brief summary based on Dumas et al. (2009) and Schnelle-Walka, Duarte, and Radomski (2016):…”
Section: Fusion Methods
confidence: 99%
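Surveys such as Dumas et al. (2009) commonly distinguish fusion at the data, feature, and decision levels. As a toy illustration of one of these, the sketch below (all names illustrative, not from the cited papers) performs decision-level, or late, fusion: each modality is interpreted on its own, and only the resulting semantic frames are merged, subject to a simple temporal window.

```python
# Toy sketch of decision-level (late) fusion: unimodal interpretations
# are produced independently and merged afterwards. All names here are
# illustrative; real engines use richer timing and confidence models.
from dataclasses import dataclass

@dataclass
class Interpretation:
    modality: str
    slot: str
    value: str
    timestamp: float  # seconds; used for a simple temporal window

def fuse(a: Interpretation, b: Interpretation, window: float = 2.0) -> dict:
    """Merge two unimodal interpretations if they are close in time."""
    if abs(a.timestamp - b.timestamp) > window:
        raise ValueError("events too far apart to fuse")
    return {a.slot: a.value, b.slot: b.value}

# Speech supplies the action, a pointing gesture supplies the target.
speech = Interpretation("speech", "action", "MOVE", timestamp=10.1)
gesture = Interpretation("pointing", "target", "lamp-3", timestamp=10.6)
print(fuse(speech, gesture))  # {'action': 'MOVE', 'target': 'lamp-3'}
```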