Motion retargeting is the process of transferring motion from one character (the source) to another (the target) whose body size and proportions (of arms, legs, torso, etc.) differ. Automatic motion retargeting has been studied for several decades; however, the motion produced by current approaches is at times unrealistic, because previous methods, which rely mainly on numerical optimization, generally do not incorporate prior knowledge of the details and nuances of human movement. To address this issue, we present a novel human motion retargeting system based on a deep learning framework trained with large-scale motion data to produce high-quality retargeted human motion. The system is a variational deep autoencoder that combines the deep convolutional inverse graphics network (DC-IGN) and the U-Net: the DC-IGN disentangles the motion of each body part, while the U-Net preserves the details of the original motion. Experiments validating the proposed system show that it achieves higher accuracy and a lower computational burden than a conventional motion retargeting approach and other neural network architectures.
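The abstract does not include implementation details, but the architecture it names can be illustrated with a minimal sketch: a convolutional autoencoder over motion sequences whose latent code is split into per-body-part groups (in the spirit of DC-IGN disentanglement) and whose decoder uses a U-Net-style skip connection to preserve detail. All layer sizes, the joint grouping, and the variational sampling below are illustrative assumptions, not the authors' released model.

```python
import torch
import torch.nn as nn

class MotionRetargetingAE(nn.Module):
    """Sketch: per-body-part latent groups + U-Net-style skip connection."""
    def __init__(self, in_channels=63, latent_per_part=16, num_parts=5):
        super().__init__()
        self.num_parts = num_parts
        latent = latent_per_part * num_parts
        # Encoder: temporal 1-D convolutions over flattened joint features.
        self.enc1 = nn.Sequential(nn.Conv1d(in_channels, 128, 5, padding=2), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(128, 256, 5, stride=2, padding=2), nn.ReLU())
        # Variational heads: mean and log-variance of the disentangled latent code.
        self.to_mu = nn.Conv1d(256, latent, 1)
        self.to_logvar = nn.Conv1d(256, latent, 1)
        # Decoder with a skip connection from the first encoder stage.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose1d(latent, 128, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Conv1d(128 + 128, in_channels, 5, padding=2)

    def forward(self, x):
        # x: (batch, in_channels, frames) -- e.g. 21 joints x 3 values per frame.
        h1 = self.enc1(x)
        h2 = self.enc2(h1)
        mu, logvar = self.to_mu(h2), self.to_logvar(h2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        # z can be read as num_parts chunks, one latent group per body part,
        # so individual parts can be edited or swapped during retargeting.
        d1 = self.dec1(z)
        d1 = d1[..., :h1.shape[-1]]              # crop to match the skip tensor
        out = self.dec2(torch.cat([d1, h1], 1))  # skip connection preserves detail
        return out, mu, logvar

if __name__ == "__main__":
    model = MotionRetargetingAE()
    motion = torch.randn(2, 63, 64)              # 2 clips, 63 channels, 64 frames
    recon, mu, logvar = model(motion)
    print(recon.shape)                            # torch.Size([2, 63, 64])
```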
Orientation and position estimates from inertial measurement units (IMUs) and magnetic, angular rate, and gravity (MARG) sensors are widely used in medicine, robotics, and other fields. In general, orientation is obtained by integrating the angular velocity data, and position is computed by double integration of the acceleration data. However, the acceleration and angular velocity measurements are often corrupted by errors that arise when the sensor moves quickly, so the estimated orientations and positions can differ significantly from the actual values. To address these issues, we propose techniques for accurately estimating IMU and MARG sensor orientation and position. An optimization method applied to the raw sensor data computes faithful orientations by stabilizing and accelerating the convergence of the optimization process, and a deep neural network built from 1D convolutional neural network (CNN) layers predicts the desired velocity from the raw acceleration data. The method is validated qualitatively and quantitatively against an optical motion capture (mocap) system, and the experimental results show that it significantly improves orientation and position estimation compared with other approaches.
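For readers unfamiliar with the pipeline, the following sketch shows the two estimation steps the abstract refers to under standard strapdown assumptions: orientation from quaternion integration of angular velocity and position from double integration of (gravity-free) acceleration, plus a small 1-D CNN of the kind that could map a window of raw acceleration samples to velocity. The layer sizes and window length are illustrative assumptions, not the authors' network.

```python
import numpy as np
import torch
import torch.nn as nn

def integrate_gyro(quat, gyro, dt):
    """One quaternion update q <- q + 0.5*dt*(q x [0, omega]), first-order."""
    w, x, y, z = quat
    gx, gy, gz = gyro
    dq = 0.5 * dt * np.array([
        -x * gx - y * gy - z * gz,
         w * gx + y * gz - z * gy,
         w * gy - x * gz + z * gx,
         w * gz + x * gy - y * gx,
    ])
    q = quat + dq
    return q / np.linalg.norm(q)

def integrate_accel(pos, vel, accel, dt):
    """Double integration: small acceleration errors grow quadratically in position."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

class VelocityCNN(nn.Module):
    """Tiny 1-D CNN mapping a window of 3-axis acceleration to a 3-axis velocity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 3),
        )

    def forward(self, acc_window):           # (batch, 3, window)
        return self.net(acc_window)          # (batch, 3) predicted velocity

if __name__ == "__main__":
    q = np.array([1.0, 0.0, 0.0, 0.0])
    q = integrate_gyro(q, np.array([0.1, 0.0, 0.0]), dt=0.01)
    pos, vel = integrate_accel(np.zeros(3), np.zeros(3), np.array([0.0, 0.0, 0.2]), dt=0.01)
    v_hat = VelocityCNN()(torch.randn(4, 3, 64))
    print(q, pos, v_hat.shape)
```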