Analysis of running mechanics has traditionally been limited to a gait laboratory, using either force plates or an instrumented treadmill in combination with a full-body optical motion capture system. With the introduction of inertial motion capture systems, it has become possible to measure kinematics in any environment. However, such technology does not provide kinetic information, and numerous body-worn sensors are required for a full-body motion analysis. The aim of this study is to examine the validity of a method to estimate sagittal knee joint angles and vertical ground reaction forces during running using an ambulatory, minimal body-worn sensor setup. Two concatenated artificial neural networks were trained (using data from eight healthy subjects) to estimate the kinematics and kinetics of the runners. The first artificial neural network maps the information (orientation and acceleration) of three inertial sensors (placed on the lower legs and pelvis) to lower-body joint angles. The estimated joint angles, in combination with measured vertical accelerations, are input to a second artificial neural network that estimates vertical ground reaction forces. To validate our approach, estimated joint angles were compared to both inertial and optical references, while kinetic output was compared to vertical ground reaction forces measured with an instrumented treadmill. Performance was evaluated in two scenarios: training and evaluating on a single subject, and training on multiple subjects while evaluating on a different subject. For single-subject training, the estimated kinematics and kinetics of most subjects show excellent agreement (ρ > 0.99) with the reference. Knee flexion/extension angles are estimated with a mean RMSE < 5°, and vertical ground reaction forces with a mean RMSE < 0.27 BW. Additionally, peak vertical ground reaction force, loading rate, and maximal knee flexion during stance were compared; no significant differences were found. With multiple-subject training, the accuracy of estimating discrete and continuous outcomes decreases; however, good agreement (ρ > 0.9) is still achieved for seven of the eight evaluated subjects. The performance of multiple-subject learning depends on the diversity of the training dataset, as accuracy differed across the evaluated subjects.
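The two-stage mapping described in this abstract can be sketched as follows. This is a minimal illustration only: the paper does not specify its network architecture, so scikit-learn's MLPRegressor is used as a stand-in, and the feature dimensions and placeholder data are assumptions rather than the study's actual setup.

```python
# Minimal sketch of two concatenated networks: IMU features -> joint angles,
# then joint angles + vertical accelerations -> vertical GRF.
# Assumptions (not from the paper): MLPRegressor as the network,
# hypothetical array shapes, and synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples = 5000

# Stage-1 input: orientation (quaternion) + 3D acceleration per IMU,
# for 3 IMUs (lower legs and pelvis).
imu_features = rng.standard_normal((n_samples, 3 * (4 + 3)))
joint_angles = rng.standard_normal((n_samples, 6))   # lower-body joint angles (target)

# Stage-2 input: estimated joint angles + measured vertical accelerations.
vertical_acc = rng.standard_normal((n_samples, 3))
vgrf = rng.standard_normal(n_samples)                # vertical GRF in BW (target)

net1 = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(imu_features, joint_angles)
est_angles = net1.predict(imu_features)

stage2_in = np.hstack([est_angles, vertical_acc])
net2 = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(stage2_in, vgrf)
est_vgrf = net2.predict(stage2_in)
```

In the study itself, both networks were trained against measured references (optical/inertial kinematics and instrumented-treadmill forces); the random arrays above only mark where that data would enter.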
Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, allowing measurements in any environment. Because high-quality motion capture data are now available in large quantities, hardware setups can be simplified further by using data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of artificial neural networks (eager learning) and nearest neighbor search (lazy learning) for this problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible, with an average joint position error of approximately 7 cm and an average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were investigated; nearest neighbor search showed better performance under such disturbances.
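A minimal sketch of the lazy-learning variant follows, assuming scikit-learn's NearestNeighbors as the search structure. The quaternion feature layout and the pose dimensionality are illustrative assumptions, not values taken from the paper.

```python
# Sketch of lazy learning: nearest neighbor search from sparse orientation
# features (5 IMUs) to stored full-body poses. Shapes are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_features = rng.standard_normal((10000, 5 * 4))  # 5 IMU orientations as quaternions
train_poses = rng.standard_normal((10000, 23 * 3))    # hypothetical 23-joint pose per frame

index = NearestNeighbors(n_neighbors=1).fit(train_features)

def lookup_pose(query_features):
    """Return the stored full-body pose whose sparse features are closest."""
    _, idx = index.kneighbors(query_features.reshape(1, -1))
    return train_poses[idx[0, 0]]

pose = lookup_pose(rng.standard_normal(5 * 4))
```

Because the output is always a pose that actually occurred in the database, this lookup cannot produce the implausible interpolated poses a regressor might emit under disturbed orientation input, which is consistent with the robustness result reported above.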
Full-body motion capture typically requires sensors/markers to be placed on each rigid body segment, which results in long setup times and is obtrusive. The number of sensors/markers can be reduced using deep learning or offline methods; however, these require large training datasets and/or sufficient computational resources. Therefore, we investigate the following research question: "What is the performance of a shallow approach, compared to a deep learning one, for estimating time-coherent full-body poses using only five inertial sensors?" We propose to incorporate past/future inertial sensor information into a stacked input vector, which is fed to a shallow neural network for estimating full-body poses. Shallow and deep learning approaches are compared using the same input vector configurations. Additionally, the inclusion of acceleration input is evaluated. The results show that a shallow learning approach can estimate full-body poses with an accuracy (~6 cm) similar to that of a deep learning approach (~7 cm). However, the jerk errors are smaller with the deep learning approach, which may be an effect of its explicit recurrent modelling. Furthermore, the delay of the shallow learning approach (72 ms) is smaller than that of the deep learning approach (117 ms).
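The stacked input vector can be illustrated with a short sketch. The window sizes below are hypothetical; the paper's exact past/future configuration is not reproduced here.

```python
# Sketch of the stacked input vector: concatenate past and future per-frame
# sensor features around each frame before feeding a shallow network.
import numpy as np

def stack_window(features, past, future):
    """features: (T, D) array of per-frame sensor features.
    Returns (T - past - future, D * (past + future + 1)) stacked vectors."""
    T, _ = features.shape
    rows = [features[t - past : t + future + 1].reshape(-1)
            for t in range(past, T - future)]
    return np.asarray(rows)

x = np.random.default_rng(0).standard_normal((100, 20))  # 100 frames, 20 features
stacked = stack_window(x, past=3, future=2)              # shape (95, 120)
```

Note that the future half of the window is what introduces estimation delay: the pose at frame t cannot be output until the future samples have arrived, which is one plausible source of the latencies reported above.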
An increasing diversity of available motion capture technologies allows human kinematics to be measured in various environments. However, little is known about the differences in quality of the kinematics measured by such technologies. Therefore, this work presents a comparison between three motion capture approaches, based on inertial-magnetic measurement units (processed with Xsens MVN Analyze) and optical markers (processed using Plug-In Gait and OpenSim Gait2392). Running was chosen to evaluate the different motion capture approaches, as running kinematics are preferably measured in the natural running environment and involve challenging dynamics. The evaluation used data from 8 subjects running on a treadmill at three different speeds: 10, 12 and 14 km/h. The sagittal plane results show excellent correlation (ρ > 0.96), and RMSDs are smaller than 5 degrees for 6 of the 8 subjects. However, results in the frontal and transversal planes were less correlated between the motion capture approaches. This shows that sagittal kinematics can be measured consistently using any of the three analyzed approaches, but ambiguities remain in the analysis of the frontal and transversal planes.
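The agreement metrics reported above (ρ and RMSD) reduce to a few lines of arithmetic; the sketch below uses synthetic angle traces in place of the actual measured kinematics.

```python
# Sketch of the agreement metrics: Pearson correlation (rho) and RMSD
# between two time-aligned joint-angle traces from different capture systems.
import numpy as np

def agreement(angles_a, angles_b):
    """Both inputs: (T,) joint angle traces in degrees, time-aligned."""
    rho = np.corrcoef(angles_a, angles_b)[0, 1]
    rmsd = np.sqrt(np.mean((angles_a - angles_b) ** 2))
    return rho, rmsd

t = np.linspace(0, 2 * np.pi, 500)
a = 40 * np.sin(t)                                      # e.g., sagittal knee angle, system A
b = a + np.random.default_rng(0).normal(0, 2, t.size)   # system B with ~2 degree noise
rho, rmsd = agreement(a, b)                             # rho close to 1, RMSD around 2 degrees
```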
Background: The foot progression angle (FPA) is an important measure used to help patients reduce their knee adduction moment. Current measurement systems are either lab-bound or do not function in all environments (e.g., magnetically distorted ones). This work proposes a novel approach to estimate the FPA using a single foot-worn inertial sensor (accelerometer and gyroscope).
Methods: The approach calculates the foot trajectory relative to a dynamic step frame that is recalculated for the stance phase of each step, which minimizes the effects of drift and eliminates the need for a magnetometer. The FPA is then calculated as the angle between the walking direction and the dynamic step frame. The approach was validated with gait measurements of five subjects walking with three gait types (normal, toe-in and toe-out).
Results: The FPA was estimated with a maximum mean error of ~2.6° over all gait conditions. Additionally, the proposed inertial approach can significantly differentiate between the three gait types.
Conclusion: The proposed approach can effectively estimate differences in FPA without requiring a heading reference (magnetometer). This enables FPA feedback applications for patients with gait disorders that function in any environment, i.e., outside a gait lab or in magnetically distorted environments.
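The final angle computation can be sketched as a signed horizontal-plane angle, as below. The per-step frame construction from drift-corrected accelerometer/gyroscope integration is omitted, and the 2D vector convention is an assumption for illustration, not the paper's exact formulation.

```python
# Sketch of the FPA as the signed angle between the walking direction and
# the foot's long axis, both expressed as 2D vectors in the horizontal
# plane of the per-step frame. Frame construction is omitted here.
import numpy as np

def foot_progression_angle(walking_dir, foot_axis):
    """Signed angle (degrees) from walking direction to foot axis,
    both 2D vectors in the horizontal plane."""
    cross = walking_dir[0] * foot_axis[1] - walking_dir[1] * foot_axis[0]
    dot = walking_dir @ foot_axis
    return np.degrees(np.arctan2(cross, dot))

# Example: foot axis rotated ~10 degrees from the walking direction,
# i.e., a toe-in or toe-out offset depending on the chosen sign convention.
fpa = foot_progression_angle(np.array([1.0, 0.0]),
                             np.array([np.cos(np.radians(10)), np.sin(np.radians(10))]))
```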