In this paper we present an extensible software workbench that supports the effective and dynamic prototyping of multimodal interactive systems. We hypothesize that the construction of such applications is based on the assembly of several components: various, and sometimes interchangeable, input modalities; fusion-fission components; and several output modalities. The successful realization of advanced interactions benefits from early prototyping, and the iterative implementation of a design requires the easy integration, combination, replacement, or upgrade of components. We have designed and implemented a thin integration platform able to manage these key elements, thus providing the research community with a tool that bridges the gap in current support for implementing multimodal applications. The platform is included within a workbench offering visual editors, non-intrusive tools, components, and techniques to assemble various modalities implemented in different technologies, while keeping a high level of performance in the integrated system.
In this paper we focus on the software design of a multimodal driving simulator based on both detection of the driver's focus of attention and detection and prediction of the driver's fatigue state. Capturing and interpreting the driver's focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye tracking). While the input multimodal interface relies on passive modalities only (an approach also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and in the windshield, sounds, speech, and vibration (vibrating steering wheel). Active input modalities are added in the meta-user interface to let the user dynamically select the output modalities. The driving simulator serves as a case study of a software architecture based on multimodal signal processing and multimodal interaction components, considering two software platforms: OpenInterface and ICARE.
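The component assembly described in these abstracts, passive input modalities feeding a fusion component whose interpretation is rendered by user-selected output modalities, can be illustrated with a minimal sketch. All class and function names here (`InputModality`, `Fusion`, `run_pipeline`, etc.) are hypothetical placeholders for illustration only; they do not correspond to the actual OpenInterface or ICARE APIs.

```python
# Hypothetical sketch of a multimodal pipeline: interchangeable passive
# inputs, one fusion component, and meta-UI-selected output modalities.
# Names are illustrative, not the OpenInterface/ICARE APIs.

class InputModality:
    """Passive input component (e.g., eye tracker, head-movement sensor)."""
    def __init__(self, name):
        self.name = name

    def sample(self):
        # A real component would read sensor or video data here; this stub
        # flags low attention only for the eye-tracker, for demonstration.
        return {"modality": self.name, "attention_low": self.name == "eye_tracker"}

class Fusion:
    """Fusion component: combines input events into one interpretation."""
    def interpret(self, events):
        fatigued = any(e["attention_low"] for e in events)
        return "ALERT" if fatigued else "OK"

class OutputModality:
    """Active output component (e.g., speech, vibrating steering wheel)."""
    def __init__(self, name):
        self.name = name

    def render(self, message):
        return f"[{self.name}] {message}"

def run_pipeline(inputs, fusion, selected_outputs):
    """Assemble components; selected_outputs comes from the meta-UI choice."""
    events = [m.sample() for m in inputs]
    message = fusion.interpret(events)
    return [out.render(message) for out in selected_outputs]

inputs = [InputModality("eye_tracker"), InputModality("head_tracker")]
outputs = [OutputModality("speech"), OutputModality("vibrating_wheel")]
print(run_pipeline(inputs, Fusion(), outputs))
# → ['[speech] ALERT', '[vibrating_wheel] ALERT']
```

The point of the sketch is the decoupling: swapping a modality or changing the output set requires only changing the component lists handed to `run_pipeline`, which mirrors the easy integration, replacement, and upgrade of components that the workbench targets.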