Hand gesture recognition based on surface electromyography (sEMG) plays an important role in biomedical and rehabilitation engineering. Recently, there has been remarkable progress in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. In contrast, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework, which improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate the proposed HVPN framework. Results showed that when 200 ms sliding windows were used to segment the data, the proposed HVPN framework achieved intrasubject gesture recognition accuracies of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and intersubject gesture recognition accuracies of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, outperforming state-of-the-art methods.
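To make the multiview idea concrete, the following is a minimal sketch (not the authors' HVPN implementation) of learning view-specific features per view of a segmented sEMG window and a view-shared representation obtained by pooling across views. The number of views, feature dimensions, class count, and the element-wise max pooling are illustrative assumptions.

# Minimal sketch of multiview feature learning with view pooling.
# Assumptions: 3 views per 200 ms sEMG window, 128-dimensional view inputs,
# 52 gesture classes, max pooling as the cross-view pooling operator.
import torch
import torch.nn as nn


class MultiViewPoolingNet(nn.Module):
    def __init__(self, num_views=3, in_dim=128, hidden_dim=64, num_classes=52):
        super().__init__()
        # One view-specific encoder per view.
        self.view_encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
             for _ in range(num_views)]
        )
        # Classifier over concatenated view-specific and view-shared features.
        self.classifier = nn.Linear(hidden_dim * (num_views + 1), num_classes)

    def forward(self, views):
        # views: list of tensors, one per view, each of shape (batch, in_dim)
        specific = [enc(v) for enc, v in zip(self.view_encoders, views)]
        # View-shared features via element-wise max pooling across views.
        shared = torch.stack(specific, dim=0).max(dim=0).values
        fused = torch.cat(specific + [shared], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    net = MultiViewPoolingNet()
    dummy_views = [torch.randn(8, 128) for _ in range(3)]
    print(net(dummy_views).shape)  # torch.Size([8, 52])

The actual HVPN pools features hierarchically at multiple depths of the network; this sketch shows only a single pooling stage to illustrate the separation of view-specific and view-shared features.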
Objective. Our study aims to investigate the feasibility of in-ear sensing for a human–computer interface. Approach. We first measured the agreement between in-ear biopotential and scalp-electroencephalogram (EEG) signals through channel correlation and power spectral density analysis. We then applied the compact EEG network (EEGNet) to classify a two-class motor task using in-ear electrophysiological signals. Main results. The best performance using in-ear biopotentials with a global reference reached an average accuracy of 70.22% (cf. 92.61% accuracy using scalp-EEG signals), whereas the performance of in-ear biopotentials with a near-ear reference was poor. Significance. Our results suggest that in-ear sensing could be a viable human–computer interface for movement prediction, but careful consideration should be given to the position of the reference electrode.
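The two agreement measures named in this abstract can be sketched as follows. This is an illustrative example rather than the authors' analysis code; the sampling rate, window length, and synthetic signals are assumptions for demonstration only.

# Sketch of channel correlation and power spectral density (PSD) agreement
# between an in-ear biopotential channel and a scalp-EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-ins: a shared 10 Hz alpha-like rhythm plus independent noise.
scalp_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
in_ear = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.8 * np.random.randn(t.size)

# Channel correlation: Pearson correlation between the two time series.
r = np.corrcoef(scalp_eeg, in_ear)[0, 1]

# PSD via Welch's method; spectral agreement assessed by correlating log-spectra.
f, psd_scalp = welch(scalp_eeg, fs=fs, nperseg=2 * fs)
_, psd_ear = welch(in_ear, fs=fs, nperseg=2 * fs)
psd_r = np.corrcoef(np.log10(psd_scalp), np.log10(psd_ear))[0, 1]

print(f"time-domain correlation: {r:.2f}, log-PSD correlation: {psd_r:.2f}")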