2020
DOI: 10.1088/1741-2552/abca16

Motor imagery recognition with automatic EEG channel selection and deep learning

Abstract: Objective. Modern motor imagery (MI)-based brain computer interface systems often entail a large number of electroencephalogram (EEG) recording channels. However, irrelevant or highly correlated channels would diminish the discriminatory ability, thus reducing the control capability of external devices. How to optimally select channels and extract associated features remains a big challenge. This study aims to propose and validate a deep learning-based approach to automatically recognize two different MI state…

Cited by 36 publications (41 citation statements) | References 61 publications
“…Compared with other channel selection methods, the mean classification performance of STECS on the three data sets was improved by up to 10.42%, 6.13%, and 3.72% respectively. Zhang et al [9] inserted an automatic channel selection (ACS) layer into a convolutional neural network for MI classification. By introducing the sparse regularization, the output of the ACS layer was constrained to be sparse and the channels corresponding to the nonzero coefficients were retained for MI classification.…”
Section: Introduction
confidence: 99%
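The ACS layer described in the statement above can be thought of as an elementwise channel-weighting layer whose coefficients are driven toward zero by an L1 penalty. The following is a minimal PyTorch sketch under that reading, not the authors' implementation; the class name ACSLayer, the initialization, and the penalty weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ACSLayer(nn.Module):
    """Sketch of an automatic channel selection layer: each EEG channel is
    scaled by a learnable coefficient; an L1 penalty encourages sparsity so
    that channels whose coefficients shrink to zero can be discarded."""
    def __init__(self, n_channels: int):
        super().__init__()
        # one learnable weight per EEG channel, initialized to 1 (all channels kept)
        self.channel_weights = nn.Parameter(torch.ones(n_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples) EEG epochs
        return x * self.channel_weights.view(1, -1, 1)

    def sparsity_penalty(self) -> torch.Tensor:
        # L1 regularization term to be added to the training loss
        return self.channel_weights.abs().sum()

# usage sketch (l1_weight is an illustrative hyperparameter):
# loss = cross_entropy(logits, labels) + l1_weight * acs.sparsity_penalty()
```

After training, channels whose learned coefficients are (near-)zero would be dropped and only the remaining channels used for MI classification.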
“…The outputs of SincNet were fed into the squeeze-and-excitation (SE) modules [24,25] for recalibration. The structure of the SE modules is shown in Fig.…”
Section: B. SincNet-Based Hybrid Neural Network
confidence: 99%
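The SE recalibration mentioned above follows the standard squeeze-and-excitation pattern: globally pool each feature map, pass the pooled vector through a small bottleneck MLP, and rescale the feature maps with the resulting per-channel gates. The sketch below is a generic SE block under that assumption, not the exact structure used in the cited network; the reduction ratio and tensor layout are illustrative.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation block: global-average-pool each feature
    map ("squeeze"), pass through a bottleneck MLP ("excitation"), and rescale
    the feature maps with the resulting per-channel gates."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature maps, e.g. SincNet outputs
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: (batch, channels)
        g = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel gates in (0, 1)
        return x * g                     # recalibrate the feature maps
```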
“…In this study, we used the cross-entropy loss to minimize the classification error between the predicted labels and the ground-truth labels. Moreover, the sparse loss [24] and center loss [26] were used to simplify the neural network and improve the discriminability of different class features, respectively. The objective functions of the cross-entropy loss L_ce, sparse loss L_sp, and center loss L_ct are given as follows:…”
Section: Loss Function
confidence: 99%
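The three losses named in that statement are commonly combined as a weighted sum. The sketch below shows one such combination under that assumption; the CenterLoss class, the sparse term (e.g., the ACS L1 penalty from the earlier sketch), and the weights lambda_sp and lambda_ct are illustrative placeholders, not values from the cited paper.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Sketch of the center loss: penalizes the squared distance between each
    feature vector and a learnable center of its class."""
    def __init__(self, n_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim) penultimate-layer features, labels: (batch,)
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

def total_loss(logits, feats, labels, sparse_term, center_loss_fn,
               lambda_sp=1e-4, lambda_ct=1e-2):
    # weighted sum of cross-entropy, sparse, and center losses;
    # lambda_sp and lambda_ct are illustrative, not taken from the paper
    l_ce = nn.functional.cross_entropy(logits, labels)
    l_ct = center_loss_fn(feats, labels)
    return l_ce + lambda_sp * sparse_term + lambda_ct * l_ct
```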
“…Zhang et al [10] developed and validated a DL-based algorithm for automatically recognizing two distinct MI states by selecting the relevant EEG channels. It employs an automatic channel selection (ACS) approach.…”
Section: Related Work
confidence: 99%