Abstract. We introduce an approach to the multimodal generation of verbal and nonverbal contributions for virtual characters in a multiparty dialogue scenario. This approach addresses issues of turn-taking, synchronizes the different modalities in real time, and supports both fixed utterances and utterances assembled by a full-fledged tree-based text generation algorithm. A first version of the system is implemented as part of the second VirtualHuman demonstrator.
Natural multimodal interaction with realistic virtual characters provides rich opportunities for entertainment and education. In this paper we present the current VirtualHuman demonstrator system. It provides a knowledge-based framework for creating interactive applications in a multi-user, multi-agent setting. The behavior of the virtual humans and objects in the 3D environment is controlled by interacting affective conversational dialogue engines. An elaborate model of affective behavior gives the virtual humans natural emotional reactions and a stronger sense of presence. Actions are defined in an XML-based markup language that supports the incremental specification of synchronized multimodal output. The system was successfully demonstrated at CeBIT 2006.
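To make the idea concrete, the following is a minimal sketch of what such an action specification might look like; the element and attribute names (action, speech, sync, gesture, face) are illustrative assumptions and do not reflect the actual VirtualHuman schema.

    <!-- Hypothetical action specification; element and attribute
         names are illustrative, not the actual VirtualHuman markup. -->
    <action character="Sam">
      <speech id="s1">
        Welcome to the <sync id="w1"/> quiz show!
      </speech>
      <!-- The gesture stroke is anchored to sync point w1
           inside the running utterance. -->
      <gesture type="beat" start="w1"/>
      <!-- The facial expression spans the whole utterance. -->
      <face expression="smile" start="s1:begin" end="s1:end"/>
    </action>

In a scheme of this kind, cross-modal synchronization is expressed through named time references: the speech channel exposes sync points, and the nonverbal channels anchor their onsets and offsets to them, so that the player can align all modalities at runtime.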