Introduction: Human joint moment is a critical parameter in rehabilitation assessment and human-robot interaction, and it can be predicted using an artificial neural network (ANN) model. However, a challenge remains: there is no effective approach to determining the input variables of the ANN model for joint moment prediction, and this choice determines the number of input sensors and the complexity of the prediction. Methods: To address this research gap, this study develops a mathematical model based on the Hill muscle model to determine the online input variables of the ANN for joint moment prediction. In this method, the muscle activation, muscle-tendon length and contraction velocity of the Hill muscle model, and the muscle-tendon moment arm are translated into online measurable variables, i.e., muscle electromyography (EMG) and the angles and angular velocities of the joints spanned by the muscle. To test the predictive ability of these input variables, an ANN model is designed and trained to predict joint moments. The ANN model with the online measurable input variables is tested on experimental data collected from ten healthy subjects running at speeds of 2, 3, 4 and 5 m/s on a treadmill. The variance accounted for (VAF) between the predicted and inverse-dynamics moments is used to evaluate prediction accuracy. Results: The results suggest that the method predicts joint moments with higher accuracy (mean VAF = 89.67±5.56%) than that obtained using other joint angles and angular velocities as inputs (mean VAF = 86.27±6.6%), evaluated by jack-knife cross-validation. Conclusions: The proposed method provides a powerful tool for predicting joint moments from online measurable variables and establishes a theoretical basis for optimizing the input sensors and detection complexity of the prediction system. It may facilitate research on exoskeleton robot control and real-time gait analysis in motor rehabilitation.
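As a minimal sketch of the evaluation pipeline this abstract describes, the snippet below implements the VAF metric and trains a small multilayer-perceptron regressor on EMG, joint angle and angular velocity inputs. The channel layout, array sizes and network width are illustrative assumptions (and the data here are random placeholders), not the authors' configuration.

```python
# Sketch only: an MLP mapping EMG, joint angles and angular velocities to a joint
# moment, scored with the variance-accounted-for (VAF) metric. Shapes are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

def vaf(y_true, y_pred):
    """Variance accounted for between measured and predicted moments, in percent."""
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))

# Hypothetical inputs: EMG envelopes of the spanning muscles plus angles and
# angular velocities of the joints they span, concatenated per time sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 8))   # [emg_1..emg_4, angle_1, angle_2, vel_1, vel_2]
y = rng.standard_normal(5000)        # inverse-dynamics joint moment (training target)

ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X[:4000], y[:4000])
print(f"VAF = {vaf(y[4000:], ann.predict(X[4000:])):.2f} %")
```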
The surface electromyogram (sEMG) contains a wealth of motion information and can reflect a user's muscle motion intentions. Decoding based on sEMG has been widely used to provide a safe and effective human-computer interaction (HCI) method for neural prosthesis and exoskeleton robot control. Motor intention decoding based on low-sampling-frequency sEMG may promote the application of wearable, low-cost EMG sensors in HCI. Therefore, a motor intention decoding scheme suitable for low-frequency EMG signals is proposed in this paper, namely transfer learning based on AlexNet. Moreover, the effects of different feature extraction methods and of data augmentation with Gaussian white noise are analyzed in detail. The proposed algorithm is evaluated on the NinaPro database 5. The highest accuracy reaches 70.4% ± 4.36% for the identification of 53 gestures from 10 subjects. Classical machine learning algorithms such as support vector machine (SVM), linear discriminant analysis (LDA) and K-nearest neighbor (KNN) are chosen for comparison, where the SVM with a Gaussian kernel function reaches a maximum accuracy of 67.98% ± 4.56%. Two-way analysis of variance shows significant differences between the methods. The experimental results show that transfer learning is effective for decoding low-frequency sEMG for a large number of gestures.
INDEX TERMS: EMG, hand gesture recognition, low-frequency sEMG, machine learning, motor intention decoding.
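The following is a minimal sketch, under stated assumptions, of the kind of transfer-learning setup this abstract describes: a pretrained AlexNet whose final classifier layer is replaced for 53 gesture classes, a Gaussian-white-noise augmentation helper, and an illustrative mapping of a 16-channel NinaPro DB5 sEMG window to a 3x224x224 tensor. The preprocessing, frozen layers, noise scale and learning rate are assumptions, not the paper's exact pipeline.

```python
# Sketch only (PyTorch/torchvision): AlexNet transfer learning for 53-gesture sEMG
# classification with Gaussian white noise augmentation. Window-to-image mapping
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_GESTURES = 53

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():                 # freeze convolutional features
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, NUM_GESTURES)   # new 53-class output layer

def augment_with_noise(emg_window, noise_scale=0.05):
    """Add Gaussian white noise to an sEMG window (channels x samples)."""
    return emg_window + noise_scale * emg_window.std() * torch.randn_like(emg_window)

def window_to_image(emg_window):
    """Resize a (channels x samples) sEMG window into a 3x224x224 tensor (illustrative)."""
    img = torch.nn.functional.interpolate(
        emg_window.unsqueeze(0).unsqueeze(0), size=(224, 224),
        mode="bilinear", align_corners=False)
    return img.squeeze(0).repeat(3, 1, 1)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Example forward pass on one hypothetical window (16 channels x 200 samples):
emg = torch.randn(16, 200)
logits = model(window_to_image(augment_with_noise(emg)).unsqueeze(0))  # shape (1, 53)
```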
Joint moment is an important parameter for the quantitative assessment of human motor function. However, most existing joint moment prediction methods lack feature selection of an optimal input subset, which reduces prediction accuracy and output comprehensibility and increases the complexity of the input sensor structure, making portable prediction equipment impossible to achieve. To address this problem, this paper develops a novel method based on binary particle swarm optimization (BPSO) with the variance accounted for (VAF) as the fitness function to reduce the number of input variables while improving the accuracy of joint moment prediction. The proposed method is tested on experimental data collected from ten healthy subjects running on a treadmill at four different speeds of 2, 3, 4 and 5 m/s. The BPSO selects an optimal input subset from ten electromyography (EMG) channels and six joint angles, and the selected subset is then used to train an artificial neural network (ANN) that predicts the joint moments. Prediction accuracy is evaluated by the VAF between the predicted joint moment and the multi-body dynamics moment. Results show that the proposed method reduces the number of input variables for five joint moments from 16 to fewer than 11. Furthermore, the proposed method predicts joint moments better (mean VAF: 94.40±0.84%) than the state-of-the-art methods, i.e., Elastic Net (mean VAF: 93.38±0.96%) and mutual information (mean VAF: 86.27±1.41%). In conclusion, the proposed method reduces the number of input variables and improves the prediction accuracy, which may allow the future development of a portable, non-invasive system for joint moment prediction. As such, it may facilitate real-time assessment of human motor function.
INDEX TERMS: Joint moment prediction, artificial neural network, binary particle swarm optimization, feature selection.
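The sketch below illustrates one way a BPSO feature selector with a VAF fitness could be wired to an ANN, along the lines this abstract describes. The swarm size, inertia weight, acceleration constants and network architecture are assumed values, and the sigmoid transfer function is a standard BPSO choice rather than a detail confirmed by the paper.

```python
# Sketch only: binary PSO selecting a subset of 16 candidate inputs
# (10 EMG channels + 6 joint angles) with the VAF of an ANN as fitness.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_FEATURES, N_PARTICLES, N_ITERS = 16, 10, 20   # 10 EMG + 6 joint angles
W, C1, C2 = 0.7, 1.5, 1.5                       # inertia / acceleration constants (assumed)

def vaf(y_true, y_pred):
    """Variance accounted for, in percent."""
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))

def fitness(bits, X_tr, y_tr, X_te, y_te):
    """VAF of an ANN trained only on the inputs selected by the bit vector."""
    mask = bits.astype(bool)
    if not mask.any():
        return -np.inf
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
    ann.fit(X_tr[:, mask], y_tr)
    return vaf(y_te, ann.predict(X_te[:, mask]))

def bpso_select(X_tr, y_tr, X_te, y_te):
    pos = rng.integers(0, 2, size=(N_PARTICLES, N_FEATURES))   # 0/1 bit vectors
    vel = rng.standard_normal((N_PARTICLES, N_FEATURES))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X_tr, y_tr, X_te, y_te) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(N_ITERS):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
        # Sigmoid transfer: each bit is set with probability sigmoid(velocity).
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p, X_tr, y_tr, X_te, y_te) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest.astype(bool)   # mask over [EMG_1..EMG_10, angle_1..angle_6]
```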