Users require more effective and efficient means of interaction with increasingly complex information and new interactive devices. This document summarizes the results of the international Dagstuhl Seminar on Coordination and Fusion in Multimodal Interaction that took place at Schloss Dagstuhl in Germany, October 27 through November 2, 2001.¹ We first outline a research roadmap for the near and long term. Next we describe requirements and an abstract architecture for this class of systems. We then detail requirements for the semantic representations and languages necessary to enable these systems. Finally, we describe the data, annotation methodologies, and tools necessary to further advance the field. We conclude with a recommended action plan for forward progress in the community.

¹ Some slides are available at www.dfki.de/~wahlster/Dagstuhl_Multi_Modality/

1.0 ROADMAP

Figure 1 illustrates the roadmap in the near term, from 2002 to 2005, for the creation of mobile, human-centered intelligent multimodal interfaces. Three "lanes" in the road identify three areas of research and development: empirical and data-driven models of multimodality, advanced methods for multimodal communication, and toolkits for multimodal systems. The end of the roadmap indicates the outcomes in 2005, specifically multimodal corpora, computational models, and interface toolkits. Of course, there are a variety of interim outcomes along the way. For multimodal corpora these include annotated corpora of natural human phenomena (e.g., surveillance, meeting, or broadcast news video) as well as of human-machine interactions; such corpora can be used by systems for training or for testing and evaluation. In the methods lane, interim developments include multimodal mutual disambiguation, multiparty interaction, and multimodal barge-in. With respect to toolkits, developments include markup standards for multimodal phenomena (e.g., for combinations of speech, gesture, and facial expressions), reusable components for multimodal analysis and generation, and tools for universal and mobile multimodal access.
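To make the kind of markup and fusion these lanes envision more concrete, the following sketch shows one possible way to represent a time-aligned annotation that combines speech and pointing gestures, and to pair deictic words with temporally overlapping gestures. It is purely illustrative and not a standard or tool produced by the seminar; the class names, labels, and time stamps are hypothetical, and Python is used only for readability.

    # Illustrative sketch only: a toy time-aligned multimodal annotation and a
    # simple temporal-overlap pairing of deictic words with pointing gestures.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ModalityEvent:
        modality: str      # e.g., "speech", "gesture", "facial_expression"
        start_ms: int      # start time relative to the interaction
        end_ms: int        # end time
        value: str         # transcribed word(s) or recognized gesture label

    @dataclass
    class MultimodalAnnotation:
        events: List[ModalityEvent] = field(default_factory=list)

        def overlapping(self, a: ModalityEvent, b: ModalityEvent) -> bool:
            """True if two events overlap in time (a crude cue for fusion)."""
            return a.start_ms < b.end_ms and b.start_ms < a.end_ms

    # A user says "put that there" while pointing at an object and a location.
    annotation = MultimodalAnnotation(events=[
        ModalityEvent("speech", 0, 400, "put"),
        ModalityEvent("speech", 400, 700, "that"),
        ModalityEvent("gesture", 350, 650, "point@object_12"),
        ModalityEvent("speech", 700, 1000, "there"),
        ModalityEvent("gesture", 900, 1400, "point@location_(120,340)"),
    ])

    # Pair each deictic word with a temporally overlapping pointing gesture.
    for spoken in (e for e in annotation.events
                   if e.modality == "speech" and e.value in {"that", "there"}):
        for pointed in (e for e in annotation.events if e.modality == "gesture"):
            if annotation.overlapping(spoken, pointed):
                print(f'"{spoken.value}" resolves to {pointed.value}')

Real annotation schemes and markup standards of the kind the toolkits lane calls for are considerably richer (covering, for example, confidence scores, speaker identity, and facial expressions), and the simple overlap test above only hints at the mutual-disambiguation methods in the middle lane.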