Generating visually appealing human motion from low-dimensional control signals is a major line of research in computer graphics. We propose a novel approach that reconstructs full-body human locomotion using a single inertial sensing device, a smartphone. Smartphones are among the most widely used devices and incorporate inertial sensors such as an accelerometer and a gyroscope. To find a mapping between a full-body pose and smartphone sensor data, we perform low-dimensional embedding of full-body motion capture data based on a Gaussian Process Latent Variable Model. Our system ensures temporal coherence between the reconstructed poses by using a state decomposition model for automatic phase segmentation. Finally, the proposed nonlinear regression algorithm finds a proper mapping between the latent space and the sensor data. Our framework effectively reconstructs plausible 3D locomotion sequences, and we compare the generated animation against ground truth data obtained with a commercial motion capture system.
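A minimal sketch of this kind of mapping pipeline is given below. It assumes a linear PCA embedding as a stand-in for the GPLVM and a scikit-learn Gaussian process regressor for the sensor-to-latent mapping; mocap_poses and imu_features are hypothetical training arrays, not part of the original work.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def train_mapping(mocap_poses, imu_features, latent_dim=3):
    # mocap_poses: (frames, dofs); imu_features: (frames, sensor_dims).
    # PCA is used here as a simple stand-in for the GPLVM embedding.
    embedding = PCA(n_components=latent_dim).fit(mocap_poses)
    latents = embedding.transform(mocap_poses)
    # Nonlinear regression from sensor features to latent coordinates.
    regressor = GaussianProcessRegressor(kernel=RBF()).fit(imu_features, latents)
    return embedding, regressor

def reconstruct(embedding, regressor, imu_frame):
    # Map a new sensor reading to the latent space, then back to a full pose.
    z = regressor.predict(imu_frame.reshape(1, -1))
    return embedding.inverse_transform(z)[0]
```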
This article presents a Model Predictive Control framework with a visuomotor system that synthesizes eye and head movements coupled with physics-based full-body motions while placing visual attention on objects of importance in the environment. As the engine of this framework, we propose a visuomotor system based on human visual perception and full-body dynamics with contacts. Because the system relies on partial, uncertain observations from a simulated visual sensor, the optimal control problem becomes a Partially Observable Markov Decision Process, which is difficult to solve directly. We approximate it as a deterministic belief Markov Decision Process for effective control. To solve the problem efficiently, we adopt differential dynamic programming, a powerful scheme for finding a locally optimal control policy for nonlinear system dynamics. Guided by a reference skeletal motion without any a priori gaze information, our system produces realistic eye and head movements together with full-body motions for various tasks such as catching a thrown ball, walking on stepping stones, balancing after being pushed, and avoiding moving obstacles.
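As an illustration of the deterministic belief MDP approximation described above, here is a minimal sketch, not taken from the paper, that propagates a Gaussian belief (mean and covariance) through linearized dynamics and a measurement update while assuming the observation equals its expectation; A, B, C, Q, and R are placeholder system and noise matrices.

```python
import numpy as np

def belief_step(mu, Sigma, u, A, B, C, Q, R):
    # Predict the belief through the (linearized) full-body dynamics.
    mu_pred = A @ mu + B @ u
    Sigma_pred = A @ Sigma @ A.T + Q

    # Kalman gain for the simulated visual sensor.
    S = C @ Sigma_pred @ C.T + R
    K = Sigma_pred @ C.T @ np.linalg.inv(S)

    # Deterministic belief MDP: assume the measurement equals its expectation,
    # so the mean is unchanged by the update while uncertainty still shrinks.
    mu_new = mu_pred
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred
    return mu_new, Sigma_new
```

Treating belief propagation this way makes the stochastic problem deterministic, so trajectory optimizers such as differential dynamic programming can be applied to the belief state directly.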
In this paper, we propose an efficient data-guided method based on Model Predictive Control (MPC) to synthesize a full-body motion. Guided by a reference motion, our method repeatedly plans the full-body motion to produce an optimal control policy for predictive control while sliding the fixed-span window along the time axis. Based on this policy, the method computes the joint torques of a character at every time step. Together with contact forces and external perturbations (if any), the joint torques are used to update the state of the character. Without including the contact forces in the control vector, our formulation of the trajectory optimization problem enables automatic adjustment of contact timings and positions for balancing in response to environmental changes and external perturbations. For efficiency, we adopt derivative-based trajectory optimization on top of state-of-the-art smoothed contact dynamics. The use of derivatives enables our method to run much faster than existing sampling-based methods. To further accelerate MPC, we propose efficient numerical differentiation of the system dynamics of a full-body character based on two schemes: data reuse and data interpolation. The former scheme exploits data dependency to reuse physical quantities of the system dynamics at nearby time points. The latter scheme uses derivatives at sparse sample points to interpolate those at other time points in the window. We further accelerate evaluation of the system dynamics by exploiting the sparsity of physical quantities such as the Jacobian matrix resulting from the tree-like structure of the articulated body. Through experiments, we show that the proposed method can efficiently synthesize realistic motions such as locomotion, dancing, gymnastic motions, and martial arts at interactive rates using moderate computing resources.
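The data-interpolation scheme can be illustrated with a small sketch, assuming simple central finite differences and linear interpolation between sparse sample points in the MPC window; finite_diff_jacobian and the stride value are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def finite_diff_jacobian(f, x, eps=1e-5):
    # Central-difference Jacobian of f at x, where f maps R^n to R^m.
    n = x.size
    fx = f(x)
    J = np.zeros((fx.size, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2.0 * eps)
    return J

def interpolated_jacobians(f, states, sample_stride=4, eps=1e-5):
    # Evaluate derivatives only at sparse sample points in the MPC window
    # and linearly interpolate them at the remaining time steps.
    T = len(states)
    samples = list(range(0, T, sample_stride))
    if samples[-1] != T - 1:
        samples.append(T - 1)
    J_samples = {t: finite_diff_jacobian(f, states[t], eps) for t in samples}

    jacobians = []
    for t in range(T):
        lo = max(s for s in samples if s <= t)
        hi = min(s for s in samples if s >= t)
        if lo == hi:
            jacobians.append(J_samples[lo])
        else:
            w = (t - lo) / (hi - lo)
            jacobians.append((1 - w) * J_samples[lo] + w * J_samples[hi])
    return jacobians
```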
We present SketchiMo, a novel approach for the expressive editing of articulated character motion. SketchiMo solves for the motion given a set of projective constraints that relate the sketch inputs to the unknown 3D poses. We introduce the concept of sketch space, a contextual geometric representation of sketch targets---motion properties that are editable via sketch input---that enhances, right on the viewport, different aspects of the motion. The combination of the proposed sketch targets and space allows for seamless editing of a wide range of properties, from simple joint trajectories to local parent-child spatiotemporal relationships and more abstract properties such as coordinated motions. This is made possible by interpreting the user's input through a new sketch-based optimization engine in a uniform way. In addition, our view-dependent sketch space also serves the purpose of disambiguating the user inputs by visualizing their range of effect and transparently defining the necessary constraints to set the temporal boundaries for the optimization.
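A minimal sketch of a projective sketch constraint is shown below, assuming a pinhole camera matrix and SciPy's least-squares solver in place of the paper's sketch-based optimization engine; project, joint_positions, and stroke_points are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def project(p3d, P):
    # Perspective projection of a 3D point with a 3x4 camera matrix P.
    q = P @ np.append(p3d, 1.0)
    return q[:2] / q[2]

def sketch_residuals(theta, joint_positions, stroke_points, P):
    # theta: pose parameters over the edited time window.
    # joint_positions(theta) -> (T, 3) trajectory of the constrained joint.
    # stroke_points: (T, 2) samples along the user's stroke in screen space.
    traj = joint_positions(theta)
    res = []
    for p3d, s2d in zip(traj, stroke_points):
        res.extend(project(p3d, P) - s2d)
    return np.asarray(res)

# Hypothetical usage: pull the joint's projected trajectory toward the stroke.
# solution = least_squares(sketch_residuals, theta0,
#                          args=(joint_positions, stroke_points, P))
```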
The processing of captured motion is an essential task for undertaking the synthesis of high-quality character animation. The motion decomposition techniques investigated in prior work extract meaningful motion primitives that help to facilitate this process. Carefully selected motion primitives can play a major role in various motion-synthesis tasks, such as interpolation, blending, warping, editing or the generation of new motions. Unfortunately, for a complex character motion, finding generic motion primitives by decomposition is an intractable problem due to the compound nature of the behaviours of such characters. Additionally, decomposed motion primitives tend to be too limited for the chosen model to cover a broad range of motion-synthesis tasks. To address these challenges, we propose a generative motion decomposition framework in which the decomposed motion primitives are applicable to a wide range of motion-synthesis tasks. Technically, the input motion is smoothly decomposed into three motion layers. These are base-level motion, a layer with controllable motion displacements and a layer with high-frequency residuals. The final motion can easily be synthesized simply by changing a single user parameter that is linked to the layer of controllable motion displacements or by imposing suitable temporal correspondences to the decomposition framework. Our experiments show that this decomposition provides a great deal of flexibility in several motion-synthesis scenarios: denoising, style modulation, upsampling and time warping.
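The three-layer idea can be sketched with per-channel Gaussian smoothing as a stand-in for the paper's actual decomposition; the cutoff parameters sigma_base and sigma_mid below are illustrative assumptions, not values from the work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def decompose(motion, sigma_base=20.0, sigma_mid=4.0):
    # motion: (frames, channels) array of joint angles or positions.
    base = gaussian_filter1d(motion, sigma_base, axis=0)                # base-level motion
    mid = gaussian_filter1d(motion, sigma_mid, axis=0) - base           # controllable displacements
    residual = motion - base - mid                                      # high-frequency residuals
    return base, mid, residual

def synthesize(base, mid, residual, gain=1.0):
    # A single user parameter scales the controllable displacement layer.
    return base + gain * mid + residual
```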