Figure 1: Realtime generation of physics-based motion control for human grasping: (left) automatic grasping of objects with different shapes, weights, friction properties, and spatial orientations; (right) performance interfaces: acting out the desired grasping motion in front of a single Kinect.

Abstract: This paper presents a robust physics-based motion control system for realtime synthesis of human grasping. Given an object to be grasped, our system automatically computes physics-based motion control that advances the simulation to achieve realistic manipulation of the object. Our solution leverages prerecorded motion data and physics-based simulation for human grasping. We first introduce a data-driven synthesis algorithm that uses large sets of prerecorded motion data to generate realistic kinematic motions for human grasping. Next, we present an online physics-based motion control algorithm that transforms the synthesized kinematic motion into a physically realistic one. In addition, we develop a performance interface for human grasping that allows the user to act out the desired grasping motion in front of a single Kinect camera. We demonstrate the power of our approach by generating physics-based motion control for grasping objects with different properties such as shape, weight, spatial orientation, and friction. We show that our physics-based motion control for human grasping is robust to external perturbations and to changes in physical quantities.
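To make the control step concrete, below is a minimal sketch of the standard technique the abstract describes at a high level: proportional-derivative (PD) tracking control that drives a simulated hand along a kinematically synthesized grasp trajectory. Every name here (pd_tracking_torques, kp, kd, inv_mass, q_traj) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

def pd_tracking_torques(q, qdot, q_ref, qdot_ref, kp=300.0, kd=20.0):
    """Joint torques pulling the simulated state toward the kinematic reference."""
    return kp * (q_ref - q) + kd * (qdot_ref - qdot)

def simulate_step(q, qdot, tau, inv_mass, dt=1.0 / 240.0):
    """One explicit-Euler step of a toy articulated model (gravity omitted)."""
    qddot = inv_mass @ tau          # simplified forward dynamics
    qdot = qdot + qddot * dt
    q = q + qdot * dt
    return q, qdot

# Usage: track a synthesized joint trajectory q_traj (T x n array),
# assuming a diagonal inverse mass matrix for this toy example.
n = 20
q_traj = np.zeros((100, n))         # placeholder kinematic grasp motion
inv_mass = np.eye(n)
q, qdot = q_traj[0].copy(), np.zeros(n)
for t in range(1, len(q_traj)):
    qdot_ref = (q_traj[t] - q_traj[t - 1]) * 240.0
    tau = pd_tracking_torques(q, qdot, q_traj[t], qdot_ref)
    q, qdot = simulate_step(q, qdot, tau, inv_mass)
```

The point of the sketch is the separation the abstract describes: the kinematic synthesizer supplies the reference trajectory, and the physics side only ever sees torques, so perturbations and changed object properties are absorbed by the simulation rather than breaking the motion.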
This paper describes a new method for acquiring physically realistic hand manipulation data from multiple video streams. The key idea of our approach is to introduce a composite motion control that simultaneously models hand articulation, object movement, and the subtle interaction between the hand and the object. We formulate video-based hand manipulation capture in an optimization framework that maximizes the consistency between the simulated motion and the observed image data: we search for an optimal motion control that drives the simulation to best match the observations. We demonstrate the effectiveness of our approach by capturing a wide range of high-fidelity dexterous manipulation data, and we show the power of the recovered motion controllers by adapting the captured motion data to new objects with different properties. The system achieves superior accuracy compared with alternative methods such as marker-based motion capture and kinematic hand motion tracking.
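A minimal sketch of the kind of search loop this abstract outlines: sample candidate motion controls, run the physics simulation, render the result, and keep the control whose rendered frames are most consistent with the observed video. The simple random search shown here, and the simulate/render callables, are assumptions for illustration; the paper's actual optimizer and consistency measure are not specified in this abstract.

```python
import numpy as np

def image_consistency(sim_frames, obs_frames):
    """Negative pixel-wise error between rendered simulation and video frames."""
    return -sum(np.mean((s - o) ** 2) for s, o in zip(sim_frames, obs_frames))

def optimize_control(simulate, render, obs_frames, dim, iters=200, sigma=0.1,
                     rng=np.random.default_rng(0)):
    """Random search over a dim-dimensional motion-control parameter vector."""
    u_best, f_best = np.zeros(dim), -np.inf
    for _ in range(iters):
        u = u_best + sigma * rng.standard_normal(dim)  # perturb current best
        frames = render(simulate(u))                   # run physics, render views
        f = image_consistency(frames, obs_frames)
        if f > f_best:
            u_best, f_best = u, f
    return u_best
```

Because the score is computed on simulated (rather than directly tracked) motion, any control returned by the loop is physically valid by construction, which is what allows the recovered controllers to be replayed on new objects.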
Figure 1: Our system automatically and accurately reconstructs full-body kinematics and dynamics data using input captured by three depth cameras and a pair of pressure-sensing shoes: (top) reference image data; (bottom) the reconstructed full-body poses with contact forces (red arrows) and torsional torques (yellow arrows) applied at the center of pressure.

Abstract: We present a new method for full-body motion capture that uses input data captured by three depth cameras and a pair of pressure-sensing shoes. Our system is appealing because it is low-cost, non-intrusive, and fully automatic, and it can accurately reconstruct both full-body kinematics and dynamics data. We first introduce a novel tracking process that automatically reconstructs 3D skeletal poses using input data captured by three Kinect cameras and wearable pressure sensors. We formulate the problem in an optimization framework and incrementally update the 3D skeletal poses with observed depth data and pressure data via iterative linear solvers. The system is highly accurate because it integrates depth data from multiple depth cameras, foot pressure data, detailed full-body geometry, and environmental contact constraints into a unified framework. In addition, we develop an efficient physics-based motion reconstruction algorithm that solves for internal joint torques and contact forces in a quadratic programming framework. During reconstruction, we leverage Newtonian physics, friction cone constraints, contact pressure information, and the 3D kinematic poses obtained from the tracking process to reconstruct full-body dynamics data. We demonstrate the power of our approach by capturing a wide range of human movements, achieving state-of-the-art accuracy in our comparison against alternative systems.
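Below is a hedged sketch of a quadratic program of the kind this abstract describes, written with the cvxpy modeling library: solve for joint torques and contact forces that satisfy the equations of motion and a linearized friction cone while staying close to the accelerations implied by the tracked kinematic poses. The matrices M, h, J, S, the desired accelerations qddot_des, and the friction coefficient mu are placeholders a real system would compute per frame; this is an illustration of the formulation, not the paper's code.

```python
import cvxpy as cp

def solve_dynamics(M, h, J, S, qddot_des, mu, n_dof, n_contacts):
    """One frame of inverse-dynamics reconstruction as a QP."""
    tau = cp.Variable(n_dof - 6)        # actuated joints (root is unactuated)
    f = cp.Variable(3 * n_contacts)     # one 3D force per contact point
    qddot = cp.Variable(n_dof)

    # Newton-Euler equations of motion: M qdd + h = S^T tau + J^T f
    constraints = [M @ qddot + h == S.T @ tau + J.T @ f]
    for c in range(n_contacts):
        fx, fy, fz = f[3 * c], f[3 * c + 1], f[3 * c + 2]
        constraints += [fz >= 0,                # contact pushes, never pulls
                        cp.abs(fx) <= mu * fz,  # linearized friction cone
                        cp.abs(fy) <= mu * fz]

    # Stay close to the accelerations implied by the kinematic tracking,
    # with a small regularizer discouraging excessive torques.
    objective = cp.Minimize(cp.sum_squares(qddot - qddot_des)
                            + 1e-3 * cp.sum_squares(tau))
    cp.Problem(objective, constraints).solve()
    return tau.value, f.value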