Pattern recognition of time-series signals for movement and gesture analysis plays an important role in fields as diverse as healthcare, astronomy, industry and entertainment. In recent years, Deep Learning (DL) has made tremendous progress in computer vision and Natural Language Processing (NLP), but its performance on movement and gesture recognition from noisy multi-channel sensor signals remains largely unexplored. To address this gap, this study classified diverse movements and gestures using four DL models: a one-dimensional Convolutional Neural Network (1-D CNN), a Recurrent Neural Network with Long Short-Term Memory (LSTM), a basic hybrid model containing one convolutional layer and one recurrent layer (C-RNN), and an advanced hybrid model containing three convolutional layers and three recurrent layers (3+3 C-RNN). The models were applied to three databases (DB), and their performances were compared. DB1 is the HCL dataset, which comprises 6 human daily activities performed by 30 subjects, recorded as accelerometer and gyroscope signals. DB2 and DB3 are both based on surface electromyography (sEMG) signals for 17 diverse movements. The improvements and limitations of the models were evaluated and discussed in light of the results.
Surface electromyogram (sEMG) provides a promising means to develop a non-invasive prosthesis control system. For transradial amputees, it allows a limited but functionally useful return of hand function that can significantly improve patients' quality of life. Predicting a user's motion intention requires the ability to process the multichannel sEMG signals generated by muscle. We propose an attention-based Bidirectional Convolutional Gated Recurrent Unit (Bi-CGRU) deep neural network to analyse sEMG signals. Our work has two key novel aspects. First, a bi-directional sequential GRU is used to capture the inter-channel relationships across both prior and posterior time steps, enhancing the intra-channel features extracted by an initial one-dimensional CNN. Second, an attention component is employed at each GRU layer; this mechanism learns distinct intra-attention weights, enabling the model to focus on the vital parts of the signal and their dependencies, which increases robustness to feature noise and further improves accuracy. The attention-based Bi-CGRU is evaluated on the Ninapro benchmark dataset of sEMG hand gestures, using the electromyogram signals of 17 hand gestures from 10 subjects. The average accuracy reached 88.73%, outperforming state-of-the-art approaches on the same database. This demonstrates that the proposed attention-based Bi-CGRU model provides a promising bio-control solution for robotic prostheses.
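The intra-attention step described above can be sketched in plain Python: per-time-step scores are normalised with a softmax into weights, which then form a weighted sum (context vector) over the recurrent hidden states. This is a minimal illustration of the general mechanism, not the paper's implementation; `softmax` and `attend` are hypothetical names.

```python
import math

def softmax(scores):
    """Normalise raw attention scores into weights that sum to 1."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(hidden_states, scores):
    """Combine per-time-step hidden vectors into one context vector.

    hidden_states: list of T hidden vectors (each a list of floats)
    scores: list of T raw attention scores (one per time step)
    Returns (weights, context_vector).
    """
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, hidden_states))
               for i in range(dim)]
    return weights, context
```

In a trained model the scores themselves would be produced by a small learned layer applied to each GRU hidden state; here they are passed in directly to keep the sketch self-contained.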
This work explored the requirements for accurately and reliably predicting user intention with a deep learning methodology during fine-grained movements of the human hand. The focus was on combining a feature engineering process with deep learning's capability to identify salient characteristics in a biological input signal. Three time-domain features (root mean square, waveform length, and slope sign changes) were extracted from the surface electromyography (sEMG) signals of 17 hand and wrist movements performed by 40 subjects. The feature data was mapped to 6 sensor bend-resistance readings from a CyberGlove II system, representing the associated hand kinematic data. These sensors were located at specific joints of interest on the human hand (the thumb's metacarpophalangeal joint, the proximal interphalangeal joint of each finger, and the radiocarpal joint of the wrist). All datasets were taken from database 2 of the NinaPro online database repository. A 3-layer long short-term memory model with dropout was developed to predict the 6 glove sensor readings from a corresponding sEMG feature vector. Initial trials using test data from the 40 subjects produced an average mean squared error of 0.176. This indicates a viable pathway for this method of predicting hand movement data, although further work is needed to optimize the model and to analyze the data with a more detailed set of metrics.
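The three time-domain features named above have standard definitions in the sEMG literature, and can be sketched over a single-channel window of samples as follows. This is an illustrative sketch, not the paper's code; the function names and the zero threshold in `slope_sign_changes` are assumptions.

```python
import math

def rms(window):
    """Root mean square: average signal power over the window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def waveform_length(window):
    """Waveform length: cumulative absolute change between samples."""
    return sum(abs(window[i + 1] - window[i]) for i in range(len(window) - 1))

def slope_sign_changes(window):
    """Count samples where the slope of the signal changes sign
    (i.e. local extrema); a threshold is often added to reject noise,
    but is omitted here for simplicity."""
    count = 0
    for i in range(1, len(window) - 1):
        d1 = window[i] - window[i - 1]
        d2 = window[i] - window[i + 1]
        if d1 * d2 > 0:  # both neighbours lie on the same side: an extremum
            count += 1
    return count
```

In practice these features would be computed per channel over sliding windows of the raw sEMG stream, and the per-channel values concatenated into the feature vector fed to the LSTM.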