Objective.
Accurate classification of electroencephalogram (EEG) signals is crucial for advancing brain-computer interface (BCI) technology. However, current methods face significant challenges in classifying hand-movement EEG signals: extracting spatial features effectively, capturing temporal dependencies, and representing the underlying signal dynamics.
Approach.
This paper introduces a novel multi-model fusion approach, FusionNet-LSTM, designed to address these issues. Specifically, it integrates a Convolutional Neural Network (CNN) for spatial feature extraction, Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) networks for capturing temporal dependencies, and an Autoregressive (AR) model for representing the underlying signal dynamics.
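A minimal sketch of such a fusion architecture is given below in PyTorch. All layer sizes, input shapes, the AR order, and the late-fusion-by-concatenation design are illustrative assumptions for a generic CNN+GRU+LSTM+AR pipeline, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class FusionNetLSTM(nn.Module):
    """Illustrative fusion of a CNN (spatial), GRU/LSTM (temporal),
    and AR coefficients (signal dynamics). Hypothetical sizes."""

    def __init__(self, n_channels=22, n_samples=256, n_classes=2, ar_order=6):
        super().__init__()
        # CNN branch: spatial filtering across all electrodes at once
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
        )
        # Recurrent branches: temporal dependencies over CNN features
        self.gru = nn.GRU(16, 32, batch_first=True)
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        # AR branch: per-channel autoregressive coefficients as features
        self.ar_fc = nn.Linear(n_channels * ar_order, 32)
        # Late fusion: concatenate the three branch embeddings
        self.classifier = nn.Linear(32 + 32 + 32, n_classes)

    def forward(self, x, ar_coeffs):
        # x: (batch, channels, samples); ar_coeffs: (batch, channels*order)
        h = self.cnn(x.unsqueeze(1))           # (batch, 16, 1, samples/4)
        h = h.squeeze(2).permute(0, 2, 1)      # (batch, time, 16)
        _, g = self.gru(h)                     # final GRU hidden state
        _, (l, _) = self.lstm(h)               # final LSTM hidden state
        a = torch.relu(self.ar_fc(ar_coeffs))  # AR dynamics embedding
        fused = torch.cat([g[-1], l[-1], a], dim=1)
        return self.classifier(fused)
```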
Main results.
Compared to single models and state-of-the-art methods, the fusion approach yields substantial improvements in classification accuracy. Experimental results show that the proposed model achieves an accuracy of 87.1% in cross-subject classification and 99.1% in within-subject classification. Additionally, Gradient Boosting Trees were employed to assess the contribution of individual EEG features to the model.
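The feature-significance step can be sketched with scikit-learn's impurity-based importances, as below. The feature matrix, labels, and feature names are placeholders; the paper's actual feature set and evaluation protocol are not specified here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical data: rows are trials, columns are EEG-derived features
# (e.g., band powers or AR coefficients); labels are movement classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))       # placeholder feature matrix
y = rng.integers(0, 2, size=200)         # placeholder class labels
feature_names = [f"feat_{i}" for i in range(12)]

gbt = GradientBoostingClassifier(n_estimators=100, random_state=0)
gbt.fit(X, y)

# Impurity-based importance: each feature's averaged contribution
# to split-quality gains across all trees, normalized to sum to 1.
ranking = sorted(zip(feature_names, gbt.feature_importances_),
                 key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```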
Significance.
This study highlights the advantages of integrating multiple models and introduces a superior classification model, which is pivotal for the advancement of BCI systems.