Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has gained ground as an important component in many projects worldwide while continuing to undergo further refinement. This paper reports on the progress made in the last year in further developing BML. It discusses some of the key challenges the effort is facing, and reviews a number of projects that already make use of BML or support its use.
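To give a concrete sense of the markup discussed above, the following is a minimal sketch of a BML block, assembled here with Python's `xml.etree.ElementTree`. The element names (`bml`, `speech`, `gesture`, `sync`) and the cross-behavior sync-point reference follow the general BML conventions; the ids, the utterance text, and the sync-point name are invented for illustration.

```python
# Minimal illustrative BML block: a speech behavior with an embedded
# sync point, and a beat gesture whose stroke is aligned to that point.
import xml.etree.ElementTree as ET

bml = ET.Element("bml", id="bml1")

# Speech behavior; the <sync> marker splits the text into two segments.
speech = ET.SubElement(bml, "speech", id="s1")
text = ET.SubElement(speech, "text")
text.text = "Hello there, "
sync = ET.SubElement(text, "sync", id="w1")
sync.tail = "welcome!"

# Gesture behavior: its stroke phase is timed to sync point w1 of speech s1.
ET.SubElement(bml, "gesture", id="g1", lexeme="BEAT", stroke="s1:w1")

print(ET.tostring(bml, encoding="unicode"))
```

A realizer such as AsapRealizer would receive a block like this and resolve the `s1:w1` reference so that the gesture stroke coincides with the marked word in the synthesized speech.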
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or 'natural') and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the amount of control that can be exerted over the motion. We give an overview of these techniques, focusing on the exact trade-offs made. We show how to parameterize, combine (on different body parts) and concatenate motions to gain control. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
Abstract. Human conversations are highly dynamic, responsive interactions. To enter into flexible interactions with humans, a conversational agent must be capable of fluent incremental behavior generation. New utterance content must be integrated seamlessly with ongoing behavior, requiring dynamic application of co-articulation. The timing and shape of the agent's behavior must be adapted on-the-fly to the interlocutor, resulting in natural interpersonal coordination. We present AsapRealizer, a BML 1.0 behavior realizer that achieves these capabilities by building upon, and extending, two existing state-of-the-art realizers, as the result of a collaboration between two research groups.
Embodied conversational agents still do not achieve the fluidity and smoothness of natural conversational interaction. One main reason is that current systems often respond with high latencies and in inflexible ways. We argue that to overcome these problems, real-time conversational agents need to be based on an underlying architecture that provides two essential features for fast and fluent behavior adaptation: close bi-directional coordination between input processing and output generation, and incrementality of processing at both stages. We propose an architectural framework for conversational agents (ASAP) providing these two ingredients for fluid real-time conversation. The overall architectural concept is described, along with specific means of specifying incremental behavior in BML and technical implementations of different modules. We show how phenomena of fluid real-time conversation, like adapting to user feedback or smooth turn-keeping, can be realized with ASAP, and we describe in detail an example real-time interaction with the implemented system.