Proceedings of the 10th International Conference on Multimodal Interfaces 2008
DOI: 10.1145/1452392.1452421
A three-dimensional characterization space of software components for rapidly developing multimodal interfaces

Cited by 13 publications (20 citation statements)
References: 28 publications
“…For instance, when using the visual languages provided by HephaisTK (Dumas et al 2014) and OIDE (Serrano et al 2008), one has to describe multimodal interactions in terms of the CARE properties (Coutaz et al 1995). For the ICO (Navarre et al 2009) language, depicting interaction models requires a solid command of a formalism called Petri nets.…”
Section: The Warnings Seem To Be Overlooked (mentioning; confidence: 99%)
“…Some examples of the aforementioned UIMSs include Mudra (Hoste et al 2011), ICO (Navarre et al 2009), OIDE (Serrano et al 2008), HephaisTK (Dumas et al 2014), and NiMMiT (De Boeck et al 2007). They certainly accomplish their goal of facilitating the prototyping of multimodal systems, but they all share the same issue: Their domain-specific languages require the use of concepts that are unrelated with the event languages with which programmers used to implement interactive systems in real-world projects.…”
Section: Introduction (mentioning; confidence: 99%)
“…Fig. 2 presents an example of a graphically specified multimodal interaction using OIDE [13]: the example involves the combined usage of speech and gesture for performing a zoom task on a map displayed on an augmented table.…”
Section: II (mentioning; confidence: 99%)
“…Figure 2. Combined usage of speech and gesture for performing a zoom task on a map displayed on an augmented table: designer assembly for specifying a zoom task and screenshot of the same assembly in the OIDE (adapted from [13]).…”
Section: II (mentioning; confidence: 99%)
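For context on the two excerpts above, the cited approach assembles input components whose results are fused according to the CARE properties, with complementarity covering the speech-plus-gesture zoom example. The sketch below is a hypothetical illustration of that kind of complementarity fusion, not OIDE's actual API: a speech command and a pointing gesture arriving within a short time window are combined into a single zoom task. All names (Event, ComplementarityFusion, the two-second window) are assumptions made for the example.

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class Event:
    modality: str      # e.g. "speech" or "gesture"
    value: object      # recognized command or touched location
    timestamp: float   # seconds


class ComplementarityFusion:
    """Hypothetical CARE-style complementarity: both modalities are
    required, and they must arrive within `window` seconds of each other."""

    def __init__(self, window: float = 2.0):
        self.window = window
        self.pending: dict[str, Event] = {}

    def feed(self, event: Event) -> Optional[dict]:
        self.pending[event.modality] = event
        speech = self.pending.get("speech")
        gesture = self.pending.get("gesture")
        if speech and gesture and abs(speech.timestamp - gesture.timestamp) <= self.window:
            self.pending.clear()
            # Combine the complementary pieces of information into one task.
            return {"task": "zoom", "command": speech.value, "target": gesture.value}
        return None


# Example: "zoom in" spoken while pointing at a location on the map.
fusion = ComplementarityFusion()
fusion.feed(Event("speech", "zoom in", time.time()))
result = fusion.feed(Event("gesture", (120, 84), time.time()))
print(result)  # {'task': 'zoom', 'command': 'zoom in', 'target': (120, 84)}
```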
“…To do so, we define WoZ components as parts of the component-based approach for rapidly developing multimodal prototypes defined in [26]. These WoZ components are characterized according to the roles that a WoZ component can play in the data-flow of input multimodal interaction, from devices to tasks.…”
Citation type: mentioning (confidence: 99%)
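The WoZ components in this last excerpt sit in the data flow between devices and tasks so that a human "wizard" can stand in for, or override, a component during prototyping. The snippet below is a minimal sketch of that idea under the assumption of a simple callable pipeline; the names (WoZComponent, wizard_override, gesture_recognizer) are illustrative and do not reflect the actual component model of [26].

```python
from typing import Callable, Optional

# A component maps raw input data to interpreted data further along the
# device-to-task flow; a WoZ component wraps it so a wizard can intervene.
Component = Callable[[dict], dict]


class WoZComponent:
    def __init__(self, wrapped: Component):
        self.wrapped = wrapped
        self.wizard_output: Optional[dict] = None  # set from the wizard's control UI

    def wizard_override(self, data: dict) -> None:
        """Called from the wizard's interface to simulate the wrapped component."""
        self.wizard_output = data

    def __call__(self, data: dict) -> dict:
        # If the wizard has injected data, it replaces the component's output once.
        if self.wizard_output is not None:
            out, self.wizard_output = self.wizard_output, None
            return out
        return self.wrapped(data)


# Hypothetical usage: a gesture recognizer that the wizard can stand in for.
def gesture_recognizer(raw: dict) -> dict:
    return {"gesture": "unknown", "confidence": 0.1}


recognizer = WoZComponent(gesture_recognizer)
recognizer.wizard_override({"gesture": "point", "confidence": 1.0})
print(recognizer({"touch_points": [(120, 84)]}))  # wizard-supplied result
```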