In recent years, several paradigms have emerged for interactive storytelling. In character-based storytelling, plot generation is driven by the behaviour of autonomous characters. In this paper, we describe user interaction in a fully implemented prototype of an interactive storytelling system. We describe the planning techniques used to control the autonomous characters, which derive from HTN planning. The hierarchical task network representing a character's potential behaviour constitutes a target for user intervention, both in terms of narrative goals and in terms of physical actions carried out on stage. We introduce two different mechanisms for user interaction: direct physical interaction with virtual objects and interaction with synthetic characters through speech understanding. Physical intervention takes place on stage through an invisible avatar, which enables the user to remove or displace objects of narrative significance that serve as resources for characters' actions, thus causing those actions to fail. Through linguistic intervention, the user can influence the autonomous characters in various ways: by providing information that solves some of their narrative goals, by instructing them to take direct action, or by giving advice on the most appropriate behaviour. We illustrate these functionalities with examples of system-generated behaviour and conclude with a discussion of scalability issues.
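The abstract's core mechanism can be illustrated with a minimal sketch of HTN-style decomposition. The task names, methods, and the resource set below are hypothetical examples, not taken from the paper; the sketch only shows how a compound task decomposes through alternative methods, and how the user's removal of a stage object (a resource) makes a primitive action, and eventually the whole task, fail.

```python
# Minimal HTN planning sketch (hypothetical task names, not from the paper).
# A compound task decomposes via ordered methods; a primitive action fails
# when a required resource has been removed from the stage by the user.

def plan(task, resources, methods, primitives):
    """Return a list of primitive actions, or None if every method fails."""
    if task in primitives:                      # primitive: check resources
        needed = primitives[task]
        return [task] if needed <= resources else None
    for subtasks in methods.get(task, []):      # compound: try each method
        steps, ok = [], True
        for sub in subtasks:
            result = plan(sub, resources, methods, primitives)
            if result is None:
                ok = False
                break
            steps += result
        if ok:
            return steps
    return None                                 # all decompositions failed

# Hypothetical behaviour for a character whose goal is to offer a gift:
methods = {"offer_gift": [["acquire_flowers", "give_flowers"],
                          ["acquire_chocolates", "give_chocolates"]]}
primitives = {"acquire_flowers": {"flowers"},
              "give_flowers": set(),
              "acquire_chocolates": {"chocolates"},
              "give_chocolates": set()}

print(plan("offer_gift", {"flowers", "chocolates"}, methods, primitives))
# -> ['acquire_flowers', 'give_flowers']
print(plan("offer_gift", {"chocolates"}, methods, primitives))
# -> falls back to the second method: ['acquire_chocolates', 'give_chocolates']
print(plan("offer_gift", set(), methods, primitives))
# -> None: the user removed every resource, so the task fails
```

The fallback from one method to the next is what makes user intervention dramatically productive rather than merely destructive: removing one object steers the character toward an alternative course of action instead of halting the story.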
Interactive storytelling immerses users in fantasy worlds in which they play parts in evolving narratives that respond to their intervention. Implementing the interactive storytelling concept involves many computing technologies: virtual or mixed reality for creating the artificial world, and artificial intelligence techniques and formalisms for generating the narrative and characters in real time.

As a character in the narrative, the user communicates with virtual characters much like an actor communicates with other actors. This requirement introduces a novel context for multimodal communication as well as several technical challenges. Acting involves attitudes and body gestures that are highly significant for both dramatic presentation and communication. At the same time, spoken communication is essential to realistic interactive narratives. This kind of multimodal communication faces several difficulties in terms of real-time performance, coverage, and accuracy.

We've developed an experimental system that provides a small-scale but complete integration of multimodal communication in interactive storytelling. It uses a narrative's semantic context to focus multimodal input processing; that is, the system interprets users' acting (the multimodal input) on the mixed reality stage in terms of narrative functions representing users' contributions to the unfolding plot.

System overview: The mixed reality installation

Figure 1 shows the mixed reality system architecture. The system uses a "magic mirror" paradigm, which we derived from the Transfiction approach [1]. In our approach, a video camera captures the user's image in real time, and the Transfiction engine extracts the image from the background and mixes it with a 3D graphic model of a virtual stage, which includes the story's synthetic characters. The system projects the resulting image on a large screen facing the user, who sees his or her image embedded in the virtual stage with the synthetic actors.

We based the mixed reality world's graphic component on the Unreal Tournament 2003 game engine (http://www.unrealtournament.com). This engine not only renders graphics and animates characters but, most importantly, contains a sophisticated deve...
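The capture-extract-composite loop of the "magic mirror" can be sketched as follows. This is a minimal illustration under stated assumptions: OpenCV's MOG2 background subtractor stands in for the Transfiction extraction step, and render_stage() is a hypothetical placeholder for the frame the game engine would render; the paper does not describe the pipeline at this level of detail.

```python
# Sketch of the "magic mirror" loop: grab the camera image, cut the user
# out of the background, and composite the silhouette over a rendered stage.
# MOG2 is an assumed stand-in for the Transfiction extraction step, and
# render_stage() is a hypothetical hook for the game-engine frame.
import cv2
import numpy as np

def render_stage(height, width):
    """Placeholder for the 3D virtual-stage frame the game engine renders."""
    return np.zeros((height, width, 3), dtype=np.uint8)

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                  # user vs. background
    mask = cv2.medianBlur(mask, 5)                  # clean up mask noise
    stage = render_stage(*frame.shape[:2])
    fg = cv2.bitwise_and(frame, frame, mask=mask)   # user silhouette
    bg = cv2.bitwise_and(stage, stage, mask=cv2.bitwise_not(mask))
    cv2.imshow("magic mirror", cv2.add(fg, bg))     # mirror facing the user
    if cv2.waitKey(1) & 0xFF == 27:                 # Esc to quit
        break

camera.release()
cv2.destroyAllWindows()
```

In the actual installation the composite would be projected onto the large screen facing the user rather than shown in a desktop window, but the data flow (capture, extraction, mixing, display) is the same.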