Human kinetics, specifically joint moments and ground reaction forces (GRFs), can provide important clinical information and can be used to control assistive devices. Traditionally, the collection of kinetics is mostly limited to the laboratory environment because it relies on data measured from a motion capture system and floor-embedded force plates to compute the dynamics via musculoskeletal models. This spatially constrained method makes it extremely challenging to measure kinetics outside the laboratory in a variety of walking conditions due to the expensive equipment and large space required. Recently, machine learning with IMU sensors has been suggested as an alternative method for biomechanical analyses. Although these methods enable the estimation of human kinetic data outside the laboratory by linking IMU sensor data to kinetics datasets, they have produced inaccurate kinetic estimates even in highly repeatable single walking conditions because they rely on generic deep learning algorithms. Thus, this paper proposes a novel deep learning model, Kinetics-FM-DLR-Ensemble-Net, to estimate the hip, knee, and ankle joint moments in the sagittal plane and the three-dimensional ground reaction forces (GRFs) using three IMU sensors on the thigh, shank, and foot under several walking conditions representative of daily living, such as treadmill, level-ground, stair, and ramp walking at different speeds. This is the first study to estimate both joint moments and GRFs in multiple walking conditions using IMU sensors via deep learning. Our deep learning model is versatile and accurate in identifying human kinetics across diverse subjects and walking conditions, and it outperforms the state-of-the-art deep learning model for kinetics estimation by a large margin.
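The abstract above does not detail the network internals, but the input/output interface of the estimation task is clear: windows of signals from three 6-axis IMUs map to six regression targets (three sagittal joint moments plus a 3D GRF vector). The following is a minimal sketch of that interface, assuming 200-sample gait windows and using a simple 1D-CNN backbone as an illustrative stand-in, not the paper's Kinetics-FM-DLR-Ensemble-Net architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of the IMU-to-kinetics regression interface described above.
# Assumptions (not from the paper): each of the 3 IMUs (thigh, shank, foot)
# provides 6 channels (3-axis accel + 3-axis gyro), windows are 200 samples,
# and a small 1D-CNN backbone stands in for the actual ensemble model.
class IMUKineticsRegressor(nn.Module):
    def __init__(self, n_sensors=3, channels_per_sensor=6, n_outputs=6):
        super().__init__()
        in_ch = n_sensors * channels_per_sensor  # 18 input channels
        self.backbone = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool features over the time axis
        )
        # 6 targets: hip/knee/ankle sagittal moments + 3D GRF components
        self.head = nn.Linear(128, n_outputs)

    def forward(self, x):  # x: (batch, 18, window_length)
        return self.head(self.backbone(x).squeeze(-1))

model = IMUKineticsRegressor()
window = torch.randn(8, 18, 200)  # batch of 8 gait windows
print(model(window).shape)        # -> torch.Size([8, 6])
```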
Measurement of human body movement is an essential step in biomechanical analysis. The current standard for human motion capture uses infrared cameras to track reflective markers placed on the subject. While these systems can accurately track joint kinematics, the analyses are spatially limited to the lab environment. Though Inertial Measurement Units (IMUs) can eliminate the spatial limitations of motion capture systems, such systems are impractical for use in daily living because they require many sensors, typically one per body segment. Motivated by the need for practical and accurate estimation of joint kinematics, this study uses a reduced number of IMU sensors and employs machine learning algorithms to map sensor data to joint angles. Our algorithm estimates hip, knee, and ankle angles in the sagittal plane using two shoe-mounted IMU sensors in different practical walking conditions: treadmill, level overground, stair, and slope conditions. Specifically, we propose five deep learning networks that use combinations of Convolutional Neural Networks (CNNs) and Gated Recurrent Unit (GRU) based Recurrent Neural Networks (RNNs) as base learners for our framework. Building on these five base models, we propose a novel framework, DeepBBWAE-Net, that applies ensemble techniques such as bagging, boosting, and weighted averaging to improve kinematic predictions. DeepBBWAE-Net predicts the three joint angles under all walking conditions with a Root Mean Square Error (RMSE) 6.93-29.0% lower than the individual base models. This is the first study to use a reduced number of IMU sensors to estimate kinematics in multiple walking environments.
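To make the ensemble idea concrete, here is a minimal sketch of one CNN-GRU base learner and a weighted-averaging combination step, assuming two shoe-mounted 6-axis IMUs (12 input channels) and three output joint angles. The layer sizes and the inverse-RMSE weighting scheme are illustrative assumptions, not the exact DeepBBWAE-Net configuration.

```python
import torch
import torch.nn as nn

# Sketch of a CNN-GRU base learner; dimensions are assumptions, not the
# published DeepBBWAE-Net configuration.
class CNNGRUBase(nn.Module):
    def __init__(self, in_ch=12, hidden=64, n_joints=3):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, 32, kernel_size=5, padding=2)
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, x):                              # x: (batch, 12, time)
        h = torch.relu(self.conv(x)).transpose(1, 2)   # -> (batch, time, 32)
        out, _ = self.gru(h)
        return self.head(out[:, -1])                   # angles at final step

def weighted_average(preds, val_rmse):
    """Combine base-learner outputs; lower validation RMSE -> higher weight.
    The inverse-RMSE weighting is an illustrative choice."""
    w = 1.0 / torch.tensor(val_rmse)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, preds))

learners = [CNNGRUBase() for _ in range(5)]  # five base models
x = torch.randn(4, 12, 200)                  # 4 windows of IMU data
preds = [m(x) for m in learners]
ensemble = weighted_average(preds, val_rmse=[5.1, 4.8, 6.0, 5.5, 4.9])
print(ensemble.shape)                        # -> torch.Size([4, 3])
```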
Identifying 3D human walking poses in unconstrained environments has many applications, such as enabling prosthetists and clinicians to assess amputees' walking function outside the clinic and helping amputees achieve an optimal walking condition through predictive control. We therefore pose the wearable motion capture problem of reconstructing and predicting 3D human poses from wearable IMU sensors and wearable cameras. To solve this challenging problem, we introduce a novel Attention-Oriented Recurrent Neural Network (AttRNet) that contains a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal attention-oriented recurrent decoder to reconstruct the current pose and predict future poses. To evaluate our approach, we collected a new WearableMotionCapture dataset using wearable IMUs and wearable video cameras, along with musculoskeletal joint-angle ground truth. The proposed AttRNet achieves high accuracy on the WearableMotionCapture dataset, and it also outperforms the current best methods on two public pose prediction datasets with IMU-only data: DIP-IMU and TotalCapture. The source code and the new dataset will be publicly available at https://github.com/MoniruzzamanMd/Wearable-Motion-Capture.
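The core of the sensor-wise attention idea is to score each IMU's feature vector at every time step and feed the re-weighted features to a recurrent encoder. Below is a minimal sketch of that mechanism; the dimensions and the linear scoring function are illustrative assumptions, not the published AttRNet implementation.

```python
import torch
import torch.nn as nn

# Sketch of a sensor-wise attention encoder: weight each sensor's features
# per time step, then encode the sequence with a GRU. Dimensions are assumed.
class SensorAttentionEncoder(nn.Module):
    def __init__(self, n_sensors=6, feat_dim=6, hidden=128):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # one attention score per sensor
        self.gru = nn.GRU(n_sensors * feat_dim, hidden, batch_first=True)

    def forward(self, x):  # x: (batch, time, n_sensors, feat_dim)
        attn = torch.softmax(self.score(x), dim=2)  # weights over sensors
        weighted = (attn * x).flatten(2)            # (batch, time, S*F)
        out, h = self.gru(weighted)
        return out, h  # per-step features for reconstruction/prediction heads

enc = SensorAttentionEncoder()
imu_seq = torch.randn(2, 100, 6, 6)  # 2 clips, 100 steps, 6 IMUs, 6 channels
features, _ = enc(imu_seq)
print(features.shape)                # -> torch.Size([2, 100, 128])
```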