Summary. The Behavior Expression Animation Toolkit (BEAT) allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain as output appropriate and synchronized non-verbal behaviors and synthesized speech in a form that can be sent to a number of different animation systems. The non-verbal behaviors are assigned on the basis of actual linguistic and contextual analysis of the typed text, relying on rules derived from extensive research into human conversational behavior. The toolkit is extensible, so that new rules can be quickly added. It is designed to plug into larger systems that may also assign personality profiles, motion characteristics, scene constraints, or the animation styles of particular animators.
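The abstract describes BEAT as a rule-based, extensible pipeline: typed text is linguistically analyzed, and rules map linguistic features to synchronized nonverbal behaviors. As a minimal sketch of that extensibility idea only (BEAT's real analysis is far richer; all names here — `Rule`, `register`, `annotate` — are hypothetical, not BEAT's actual API):

```python
# Toy rule registry illustrating the idea of an extensible, rule-based
# mapping from linguistic triggers to nonverbal behaviors. Not BEAT's API.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Rule:
    name: str
    matches: Callable[[str], bool]   # linguistic trigger on a single word
    behavior: str                    # nonverbal behavior to schedule

RULES: List[Rule] = []

def register(rule: Rule) -> None:
    """New rules can be added quickly, without touching the pipeline."""
    RULES.append(rule)

def annotate(utterance: str) -> List[Tuple[str, str]]:
    """Tag words in the utterance with any behaviors the rules trigger."""
    tagged = []
    for word in utterance.split():
        for rule in RULES:
            if rule.matches(word.lower()):
                tagged.append((word, rule.behavior))
    return tagged

# Example rules: size adjectives trigger an iconic gesture; a spatial
# deictic word triggers a pointing gesture.
register(Rule("iconic-size", lambda w: w in {"big", "huge", "tiny"}, "expansive_gesture"))
register(Rule("deictic-up", lambda w: w == "upstairs", "point_up"))

print(annotate("It's big and the bedroom is upstairs"))
# → [('big', 'expansive_gesture'), ('upstairs', 'point_up')]
```

A real system would operate on parsed linguistic structure (phrases, information structure, discourse context) rather than bare words, but the registry pattern is what makes new rules cheap to add.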
More than another friendly face, Rea knows how to have a conversation with living, breathing human users, with a wink, a nod, and a sidelong glance. (Communications of the ACM, April 2000, Vol. 43, No. 4.)

Rea: Hello. How can I help you?
Mike: I'm looking to buy a place near MIT.
(Rea nods, indicating that she is following.)
Rea: I have a house to show you.
(A picture of a house appears on the screen behind Rea, who blinks and looks at the house and then at Mike.)
Rea: It's in Somerville.
Mike: Tell me about it.
(Rea looks up and away while she plans what to say.)
Rea: It's big.
(Rea makes an expansive gesture with her hands. Mike brings his hands up as if he is about to speak, so Rea does not continue, waiting for him to speak.)
Mike: Tell me more about it.
Rea: Sure thing. It has a nice garden.
(Rea sketches a curved gesture with her hands, indicating that the garden extends along two sides of the house.)
Mike: How far is it?
Rea: It is five minutes to the Porter Square T station.
(Rea makes it clear that it is five minutes on foot from the T station by making a walking gesture with her fingers.)
Mike: How big is the house?
Rea: It has four bedrooms, three bathrooms...
(Mike interrupts Rea, who stops speaking immediately.)
Mike: Wait. Tell me, where is the master bedroom?
Rea: I'm sorry, I didn't catch that. What did you ask me?
Mike: Where is the master bedroom?
Rea: It's upstairs.
(Rea points up.)
Mike: Where is the master bathroom?
Rea: It's next to the bedroom.
(Rea brings her hands together to indicate the spatial relationship between the bedroom and the bathroom.)
And the house tour continues.
Adults and children spontaneously produce gestures while they speak, and such gestures appear to support and expand on the information communicated by the verbal channel. Little research, however, has examined the role played by gesture in the listener's representation of accumulating information. Do listeners attend to the gestures that accompany narrative speech? In what kinds of relationships between gesture and speech do listeners attend to the gestural channel? If listeners do attend to information received in gesture, how is this information represented: is it 'tagged' as originating in the gestural channel? This article describes research that addresses these questions. Results show that listeners do attend to information conveyed in gesture when that information supplements or even contradicts the information conveyed by speech, and that information received via gesture is available for retelling in speech. These results demonstrate that listeners do not treat gesture as epiphenomenal to the act of speaking, or as a simple manual translation of speech. They also suggest that the information conveyed in a discourse may be represented in a form that is neither gesture nor language, yet is accessible to both channels.

"Pantomime without discourse will leave you nearly tranquil; discourse without gestures will wring tears from you."
In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.
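The core architectural claim here is that behaviors (nods, glances, gestures) should be generated from the discourse-level *functions* they fulfill, interactional or propositional, rather than scripted directly. A minimal sketch of such a function-to-behavior mapping, with all function and behavior names being illustrative assumptions rather than Rea's actual vocabulary:

```python
# Hypothetical function-to-behavior table: interactional functions regulate
# the conversation itself; propositional functions convey content.
# These names are illustrative, not Rea's real architecture.

FUNCTION_TO_BEHAVIORS = {
    # interactional
    "give_feedback":   ["head_nod"],
    "take_turn":       ["glance_at_user", "raise_hands"],
    "give_turn":       ["look_at_user", "drop_hands"],
    # propositional
    "emphasize":       ["beat_gesture", "eyebrow_raise"],
    "describe_object": ["iconic_gesture"],
}

def realize(function: str) -> list:
    """Map a discourse-level conversational function to surface behaviors."""
    return FUNCTION_TO_BEHAVIORS.get(function, [])

print(realize("give_feedback"))   # → ['head_nod']
```

The indirection is the point: the same function can be realized differently per character or modality (a nod, a verbal "uh-huh"), and input behaviors can likewise be interpreted back into functions, giving a symmetric treatment of multimodal understanding and generation.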