This paper presents a new motion model, the deformable motion model, for human motion modeling and synthesis. Our key idea is to apply statistical analysis techniques to a set of precaptured human motion data and construct a low-dimensional deformable motion model of the form x = M(α, γ), where the deformable parameters α and γ control the motion's geometric and timing variations, respectively. To generate a desired animation, we continuously adjust the values of the deformable parameters to match various forms of user-specified constraints. Mathematically, we formulate the constraint-based motion synthesis problem in a maximum a posteriori (MAP) framework, estimating the most likely deformable parameters from the user's input. We demonstrate the power and flexibility of our approach by exploring two interactive and easy-to-use interfaces for human motion generation: direct manipulation interfaces and sketching interfaces.
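The MAP formulation above can be sketched in a few lines. The following is a minimal, illustrative toy, not the paper's system: it assumes a linear geometric model (timing variation omitted), Gaussian priors on the deformable parameters, and two point constraints; all names, dimensions, and the 0.1 prior weight are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Toy deformable model: x = M(alpha) = mean + B_g @ alpha (geometry only;
# the timing parameters gamma are omitted in this sketch).
rng = np.random.default_rng(0)
mean_pose = rng.standard_normal(10)      # mean motion vector (toy, 10-D)
B_g = rng.standard_normal((10, 3))       # geometric basis (3 deformable params)
sigma2 = np.ones(3)                      # prior variances, as if from training data

def model(alpha):
    return mean_pose + B_g @ alpha

# User constraints: entries 0 and 4 of the motion should hit target values.
idx = np.array([0, 4])
target = np.array([1.5, -0.5])

def neg_log_posterior(alpha):
    residual = model(alpha)[idx] - target          # likelihood (constraint) term
    prior = np.sum(alpha**2 / sigma2)              # Gaussian prior term
    return np.sum(residual**2) + 0.1 * prior

alpha_map = minimize(neg_log_posterior, np.zeros(3)).x
print(np.round(model(alpha_map)[idx], 2))  # should land near the targets
```

Continuously re-running this optimization as the user drags a constraint is what makes the direct-manipulation interface interactive.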
Figure 1: Motion style synthesis and retargeting: (top) after observing an unknown actor performing one walking style, we can synthesize other walking styles for the same actor; (bottom) we can transfer the walking style from one actor to another.

Abstract: This paper presents a generative human motion model for synthesis, retargeting, and editing of personalized human motion styles. We first record a human motion database from multiple actors performing a wide variety of motion styles for particular actions. We then apply multilinear analysis techniques to construct a generative motion model of the form x = g(a, e) for particular human actions, where the parameters a and e control the "identity" and "style" variations of the motion x, respectively. The new modular representation naturally supports motion generalization to new actors and/or styles. We demonstrate the power and flexibility of the multilinear motion models by synthesizing personalized stylistic human motion and transferring stylistic motions from one actor to another. We also show the effectiveness of our model by editing stylistic motion in style and/or identity space.
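The multilinear model x = g(a, e) can be illustrated with a small Tucker-style decomposition. This is a toy sketch under invented dimensions, not the paper's pipeline: real motion clips would first be time-warped and vectorized before being stacked into the (identity × style × motion) tensor.

```python
import numpy as np

# Toy motion tensor: (identities x styles x motion-vector dims). In the real
# setting each slice would be a time-aligned, vectorized motion clip.
rng = np.random.default_rng(1)
n_id, n_style, n_dim = 4, 3, 20
D = rng.standard_normal((n_id, n_style, n_dim))

# Factor matrices for the identity and style modes (left singular vectors of
# the corresponding mode-n unfoldings).
U_id, _, _ = np.linalg.svd(D.reshape(n_id, -1), full_matrices=False)
U_st, _, _ = np.linalg.svd(np.moveaxis(D, 1, 0).reshape(n_style, -1),
                           full_matrices=False)

# Core tensor: D multiplied by the transposed factors along modes 1 and 2.
core = np.einsum('ia,jb,ijk->abk', U_id, U_st, D)

def g(a, e):
    """Generate a motion x = g(a, e) from identity and style coefficients."""
    return np.einsum('a,b,abk->k', a, e, core)

# With full-rank orthonormal factors, reconstructing a training motion from
# its identity row and style row is exact.
x = g(U_id[2], U_st[1])
print(np.allclose(x, D[2, 1]))
```

Style transfer then amounts to pairing one actor's identity coefficients a with another clip's style coefficients e and evaluating g(a, e).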
Figure 1: Realtime generation of physics-based motion control for human grasping: (left) automatic grasping of objects with different shapes, weights, frictions, and spatial orientations; (right) performance interfaces: acting out the desired grasping motion in front of a single Kinect.

Abstract: This paper presents a robust physics-based motion control system for realtime synthesis of human grasping. Given an object to be grasped, our system automatically computes physics-based motion control that advances the simulation to achieve realistic manipulation of the object. Our solution leverages prerecorded motion data and physics-based simulation for human grasping. We first introduce a data-driven synthesis algorithm that utilizes large sets of prerecorded motion data to generate realistic motions for human grasping. Next, we present an online physics-based motion control algorithm to transform the synthesized kinematic motion into a physically realistic one. In addition, we develop a performance interface for human grasping that allows the user to act out the desired grasping motion in front of a single Kinect camera. We demonstrate the power of our approach by generating physics-based motion control for grasping objects with different properties such as shape, weight, spatial orientation, and friction. We show that our physics-based motion control for human grasping is robust to external perturbations and changes in physical quantities.
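A standard way to "transform a synthesized kinematic motion into a physically realistic one" is to track the kinematic reference with proportional-derivative (PD) torque control inside a simulator. The sketch below is a generic single-joint illustration of that idea, not the paper's controller; the gains, unit inertia, and absence of gravity are all simplifying assumptions.

```python
import numpy as np

# Illustrative PD tracking for one 1-DoF joint: convert a kinematic reference
# trajectory into torques, then integrate toy dynamics (inertia = 1, no gravity).
dt, kp, kd = 0.01, 400.0, 40.0           # gains chosen ad hoc (critically damped)
t = np.arange(0, 2, dt)
q_ref = 0.5 * np.sin(2 * np.pi * t)      # reference joint angle (kinematic motion)
qd_ref = np.pi * np.cos(2 * np.pi * t)   # its time derivative

q, qd = 0.0, 0.0                         # simulated state starts off the reference
errors = []
for i in range(len(t)):
    tau = kp * (q_ref[i] - q) + kd * (qd_ref[i] - qd)  # PD torque
    qdd = tau                            # unit inertia -> acceleration = torque
    qd += qdd * dt                       # semi-implicit Euler integration
    q += qd * dt
    errors.append(abs(q_ref[i] - q))

print(f"final tracking error: {errors[-1]:.4f}")
```

Because the torques come from simulated dynamics rather than keyframe playback, the tracked motion responds plausibly to external pushes, which is the robustness property the abstract highlights.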
This paper introduces a new generative statistical model that allows for human motion analysis and synthesis at both the semantic and kinematic levels. Our key idea is to decouple the complex variations of human movement into finite structural variations and continuous style variations, and to encode them with a concatenation of morphable functional models. This allows us to model not only a rich repertoire of behaviors but also an infinite number of style variations within the same action. Our models are appealing for motion analysis and synthesis because they are highly structured, contact-aware, and semantically embedded. We have constructed a compact generative motion model from a large, heterogeneous motion database (about two hours of mocap data spanning more than 15 different actions). We have demonstrated the power and effectiveness of our models across a wide variety of applications, ranging from automatic motion segmentation, recognition, and annotation, through online/offline motion synthesis at both the kinematic and behavioral levels, to semantic motion editing. We show the superiority of our model by comparing it with alternative methods.
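A morphable functional model of the kind mentioned above is often built as a mean trajectory plus weighted eigen-trajectories obtained from time-aligned training clips. The sketch below is a generic PCA-style toy on synthetic 1-D trajectories, offered only to make "continuous style variations" concrete; the data, dimensions, and basis size are invented.

```python
import numpy as np

# Toy morphable functional model: motions of one action, assumed time-aligned
# and vectorized, modeled as mean + weighted eigen-trajectories (PCA).
rng = np.random.default_rng(2)
n_clips, n_frames = 30, 50
basis_true = rng.standard_normal((2, n_frames))          # 2 hidden style modes
weights = rng.standard_normal((n_clips, 2))
X = weights @ basis_true + 0.01 * rng.standard_normal((n_clips, n_frames))

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2
B = Vt[:k]                                               # morphable basis

def synthesize(style):
    """Continuous style variation: any weight vector yields a new trajectory."""
    return mean + style @ B

# Interpolating two clips' weights morphs smoothly between their styles.
w = (X - mean) @ B.T
mid = synthesize(0.5 * (w[0] + w[3]))
```

Concatenating one such model per action, with contact annotations marking valid transition points, is the structural layer that lets a single generative model cover many behaviors.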
This paper describes a new method for acquiring physically realistic hand manipulation data from multiple video streams. The key idea of our approach is to introduce a composite motion control that simultaneously models hand articulation, object movement, and the subtle interaction between the hand and the object. We formulate video-based hand manipulation capture in an optimization framework by maximizing the consistency between the simulated motion and the observed image data: we search for an optimal motion control that drives the simulation to best match the observations. We demonstrate the effectiveness of our approach by capturing a wide range of high-fidelity dexterous manipulation data. We show the power of the recovered motion controllers by adapting the captured motion data to new objects with different properties. The system achieves superior performance against alternative methods such as marker-based motion capture and kinematic hand motion tracking.
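The capture-by-optimization idea, searching the control space so that a forward simulation reproduces the observations, can be illustrated on a deliberately tiny problem. This is a generic sketch, not the paper's method: a one-parameter "simulator", synthetic "observations", and derivative-free random search standing in for the sampling-based optimization one might use with a black-box physics engine.

```python
import numpy as np

# Toy capture-by-optimization: find the control parameter whose forward
# simulation best matches noisy "observed" measurements.
rng = np.random.default_rng(3)

def simulate(k):
    """Forward model: a damped response whose shape depends on control k."""
    t = np.linspace(0, 1, 50)
    return np.exp(-k * t)

observed = simulate(2.5) + 0.01 * rng.standard_normal(50)  # synthetic data

def consistency(k):
    return -np.sum((simulate(k) - observed) ** 2)  # higher = better match

# Derivative-free random search over the control space.
candidates = rng.uniform(0.0, 5.0, 2000)
best = max(candidates, key=consistency)
print(f"recovered control: {best:.2f}")   # should be near the true value 2.5
```

Because the recovered quantity is a controller rather than raw poses, it can be replayed in simulation against new objects, which is what enables the adaptation experiments described above.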