Surface electromyogram (sEMG) signals have been widely used for human motion intention recognition, with significant application prospects in rehabilitation medicine and cognitive science. However, valuable dynamic information about upper-limb motion is lost during feature extraction from sEMG signals; as a result, only a small variety of rehabilitation movements can be distinguished, and classification accuracy is easily degraded. To address these problems, a multiscale time–frequency information fusion representation (MTFIFR) method is first proposed to obtain the time–frequency features of multichannel sEMG signals. A multiple feature fusion network (MFFN) is then designed to strengthen feature extraction. Finally, a deep belief network (DBN) is introduced as the classification model of the MFFN to improve generalization across a larger set of upper-limb movements. In the experiments, 12 upper-limb rehabilitation actions were recognized using four sEMG sensors. The maximum identification accuracy was 86.10%, and the average classification accuracy of the proposed MFFN was 73.49%, indicating that the proposed time–frequency representation combined with the MFFN outperforms traditional machine learning and convolutional neural network baselines.
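As a minimal sketch of the kind of multiscale time–frequency representation the abstract describes: the snippet below computes magnitude spectrograms for each sEMG channel at several window sizes and stacks them per scale. The abstract does not specify the exact MTFIFR procedure, so the sampling rate, window sizes, and overlap here are illustrative assumptions.

```python
# Sketch of a multiscale time-frequency representation for multichannel
# sEMG, in the spirit of MTFIFR. Exact method not given in the abstract;
# fs, window sizes, and overlap are assumed values for illustration.
import numpy as np
from scipy.signal import stft

def multiscale_tf_features(emg, fs=1000, window_sizes=(64, 128, 256)):
    """emg: array of shape (n_channels, n_samples).
    Returns one spectrogram stack of shape (n_channels, freq, time)
    per window size (scale)."""
    scales = []
    for nper in window_sizes:
        chans = []
        for ch in emg:                        # one STFT per channel
            _, _, Z = stft(ch, fs=fs, nperseg=nper, noverlap=nper // 2)
            chans.append(np.abs(Z))           # magnitude spectrogram
        scales.append(np.stack(chans))
    return scales

# Example: 4 sEMG channels (as in the experiments), 2 s at an assumed 1 kHz
feats = multiscale_tf_features(np.random.randn(4, 2000))
for f in feats:
    print(f.shape)
```

The resulting per-scale stacks could then be fused and fed to a feature-extraction network such as the MFFN described above.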
Brain functional connectivity under the naturalistic paradigm has been shown to predict individual behaviors better than other brain states, such as rest and task. Nevertheless, state-of-the-art methods struggle to achieve desirable results from movie-watching paradigm fMRI (mfMRI) induced brain functional connectivity, especially when datasets are small. Incorporating other physical measurements into the prediction method may enhance accuracy. Eye tracking, which is becoming popular due to its portability and low cost, can provide abundant behavioral features related to the output of human cognition and thus might supplement mfMRI in observing subjects' subconscious behaviors. However, very few studies have addressed how to effectively integrate this multimodal information within a unified framework. To this end, a fusion approach combining mfMRI and eye tracking, based on Convolution with Edge-Node Switching in Graph Neural Networks (CensNet), is proposed in this article, with subjects taken as nodes, mfMRI-derived functional connectivity as node features, and different eye-tracking features used to compute inter-subject similarities that define the edges of heterogeneous graphs. Taking multiple graphs as different channels, we introduce a squeeze-and-excitation attention module into CensNet (A-CensNet) to integrate graph embeddings from multiple channels into one. The experiments demonstrate that the proposed model outperforms single-modality and single-channel variants as well as state-of-the-art methods. The results suggest that brain functional activities and eye behaviors might complement each other in interpreting trait-like phenotypes.
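To make the channel-fusion step concrete, the sketch below shows squeeze-and-excitation (SE) gating over embeddings from multiple graph channels, in the spirit of A-CensNet. The CensNet layers themselves are omitted, and the tensor shapes, reduction ratio, and module name are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of SE-style attention over multi-channel graph embeddings,
# as in the A-CensNet fusion step described in the abstract. The CensNet
# backbone is omitted; shapes and the reduction ratio are assumptions.
import torch
import torch.nn as nn

class ChannelSEFusion(nn.Module):
    """Fuses embeddings from several graph 'channels' (one per
    eye-tracking similarity graph) into one via SE gating."""
    def __init__(self, n_channels, reduction=2):
        super().__init__()
        hidden = max(n_channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(n_channels, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (n_nodes, n_channels, embed_dim), one embedding per channel
        squeezed = x.mean(dim=(0, 2))        # squeeze: per-channel summary
        weights = self.fc(squeezed)          # excitation: channel gates
        # weighted sum across channels -> single fused embedding per node
        return (x * weights[None, :, None]).sum(dim=1)

# Example: 100 subjects (nodes), 3 eye-tracking graphs, 64-dim embeddings
fusion = ChannelSEFusion(n_channels=3)
out = fusion(torch.randn(100, 3, 64))
print(out.shape)  # torch.Size([100, 64])
```

The learned gates let the model weight each eye-tracking-derived graph differently before the fused embedding is passed to the downstream predictor.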