Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, this class of movement is very difficult to generate, even more so when a unique, individual movement style is required. We present a system that, with a focus on arm gestures, is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a person whose gesturing style we wish to animate. A tool-assisted annotation process is performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and input text tagged with theme, rheme and focus, our generation algorithm creates a gesture script. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with additional detail. It then generates either kinematic or physically simulated motion based on this description. The system is capable of generating gesture animations for novel text that are consistent with a given performer's style, as was successfully validated in an empirical user study.
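The pipeline described above (annotate performer video, build a per-performer statistical model, then turn tagged input text into a timed, continuous gesture stream) can be illustrated with a minimal sketch. All names here (Span, GestureModel, generate_gesture_script) are hypothetical stand-ins, and the model is reduced to a lookup of per-tag gesture probabilities; this is not the authors' actual representation.

```python
# A minimal sketch of a tagged-text-to-gesture-script pipeline.
# Names and data structures are illustrative assumptions only.
import random
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    tag: str      # "theme", "rheme", or "focus"
    start: float  # speech onset time in seconds
    end: float    # speech offset time in seconds

class GestureModel:
    """Per-performer statistics: P(gesture class | information-structure tag)."""
    def __init__(self, tag_to_gestures):
        # tag -> [(gesture_name, probability), ...]
        self.tag_to_gestures = tag_to_gestures

    def sample(self, tag):
        gestures, probs = zip(*self.tag_to_gestures[tag])
        return random.choices(gestures, weights=probs, k=1)[0]

def generate_gesture_script(spans, model):
    """Emit a stream of gestures whose strokes align with the tagged speech;
    a downstream animation layer would co-articulate adjacent gestures."""
    return [{"gesture": model.sample(s.tag),
             "stroke_start": s.start,
             "stroke_end": s.end}
            for s in spans]
```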
The implementation of a state-specific configuration-selective vibrational configuration interaction (cs-VCI) approach based on a polynomial representation of the potential energy surface is presented. Advantages over grid-based algorithms are discussed. A combination of a configuration selection criterion, the simultaneous exclusion of irrelevant configurations, and an internal contraction scheme allows large variational spaces to be handled. A modified version of the iterative Jacobi-Davidson diagonalization is used to determine the relevant interior eigenpairs of the cs-VCI matrices in the selected space. Benchmark calculations are provided for systems with up to 2×10^7 configurations and three-mode couplings in the expansion of the potential.
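The two core ingredients, selecting a compact configuration space and then solving for interior eigenpairs within it, can be sketched as follows. The paper uses a modified Jacobi-Davidson method; here shift-invert Lanczos via scipy.sparse.linalg.eigsh stands in for it, and the second-order perturbative selection criterion is a generic assumption, not necessarily the paper's criterion.

```python
# A minimal sketch, assuming: h_diag maps configuration index -> <j|H|j>,
# couplings maps index j -> <ref|H|j>, and h_sparse is a symmetric
# scipy sparse matrix over the selected space.
from scipy.sparse.linalg import eigsh

def select_configurations(h_diag, couplings, ref, e_ref, threshold=1e-6):
    """Keep configurations whose estimated 2nd-order energy contribution
    to the reference state exceeds `threshold`; exclude the rest."""
    selected = [ref]
    for j, h_rj in couplings.items():
        contrib = h_rj**2 / (abs(e_ref - h_diag[j]) + 1e-12)
        if contrib > threshold:
            selected.append(j)
    return sorted(set(selected))

def interior_eigenpairs(h_sparse, target_energy, k=3):
    """Eigenpairs nearest `target_energy` via shift-invert Lanczos,
    standing in for the modified Jacobi-Davidson solver of the paper."""
    vals, vecs = eigsh(h_sparse, k=k, sigma=target_energy, which="LM")
    return vals, vecs
```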
A significant goal in multi-modal virtual agent research is to determine how to vary the expressive qualities of a character so that it is perceived in a desired way. The "Big Five" model of personality offers a potential framework for organizing these expressive variations. In this work, we focus on one parameter in this model, extraversion, and demonstrate how both verbal and non-verbal factors impact its perception. Relevant findings from the psychology literature are summarized. Based on these, an experiment was conducted with a virtual agent demonstrating how language generation, gesture rate, and a set of movement performance parameters can be varied to increase or decrease perceived extraversion. Each of these factors was shown to be significant. These results offer guidance to agent designers on how best to create specific characters.
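One way to operationalize such guidance is to drive the factors named above from a single extraversion level. The sketch below assumes a scalar level in [-1, 1] and a linear mapping to gesture rate and movement performance parameters; the parameter names and numeric ranges are illustrative assumptions, not the study's measured values.

```python
# A hypothetical mapping from an extraversion level to movement parameters.
from dataclasses import dataclass

@dataclass
class PerformanceParams:
    gesture_rate: float  # gestures per minute
    amplitude: float     # spatial extent of movement, 0..1
    speed: float         # stroke velocity scale, 0..1

def params_for_extraversion(level: float) -> PerformanceParams:
    """Interpolate between introverted and extraverted extremes;
    a higher level yields more frequent, larger, faster gestures."""
    t = (max(-1.0, min(1.0, level)) + 1.0) / 2.0  # map [-1, 1] -> [0, 1]
    return PerformanceParams(
        gesture_rate=5.0 + t * 15.0,  # assumed range: 5..20 gestures/min
        amplitude=0.3 + t * 0.6,
        speed=0.4 + t * 0.5,
    )
```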
Embodied virtual reality faithfully renders users' movements onto an avatar in a virtual 3D environment, supporting nuanced nonverbal behavior alongside verbal communication. To investigate communication behavior within this medium, we had 30 dyads complete two tasks using a shared visual workspace: negotiating an apartment layout and placing model furniture on an apartment floor plan. Dyads completed both tasks under three different conditions: face-to-face, embodied VR with visible full-body avatars, and no-embodiment VR, in which participants shared a virtual space but had no visible avatars. Both subjective measures of users' experiences and detailed annotations of verbal and nonverbal behavior were used to understand how the media impact communication behavior. Embodied VR provided a high level of social presence, with conversation patterns very similar to those of face-to-face interaction. In contrast, providing only the shared environment was generally experienced as lonely and appeared to degrade communication.