2015
DOI: 10.1371/journal.pone.0125039

A Neural Network-Based Optimal Spatial Filter Design Method for Motor Imagery Classification

Abstract: In this study, a novel spatial filter design method is introduced. Spatial filtering is an important processing step for feature extraction in motor imagery-based brain-computer interfaces. This paper introduces a new motor imagery signal classification method combined with spatial filter optimization. We simultaneously train the spatial filter and the classifier using a neural network approach. The proposed spatial filter network (SFN) is composed of two layers: a spatial filtering layer and a classifier layer…
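The abstract does not give implementation details, but the core idea (a spatial-filtering layer feeding a classifier layer, with both trained jointly by gradient descent) can be sketched as follows. This is a minimal illustration, not the published SFN: the channel count, number of spatial filters, log-variance feature, and training loop are all assumptions.

```python
# Minimal sketch (not the published SFN): a linear spatial-filtering layer
# followed by a classifier layer, trained jointly.  Channel count, number of
# spatial filters, the log-variance feature, and all hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

class SpatialFilterNet(nn.Module):
    def __init__(self, n_channels=22, n_filters=4, n_classes=2):
        super().__init__()
        # Spatial filtering layer: each output "virtual channel" is a
        # learned linear combination of the EEG channels.
        self.spatial = nn.Linear(n_channels, n_filters, bias=False)
        # Classifier layer operating on log-variance of the filtered signals.
        self.classifier = nn.Linear(n_filters, n_classes)

    def forward(self, x):               # x: (batch, time, channels)
        filtered = self.spatial(x)      # (batch, time, n_filters)
        logvar = filtered.var(dim=1).clamp_min(1e-6).log()  # (batch, n_filters)
        return self.classifier(logvar)  # class scores

# Joint training of the spatial filter and the classifier on synthetic data.
model = SpatialFilterNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 500, 22)            # 64 trials, 500 samples, 22 channels
y = torch.randint(0, 2, (64,))
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The point of the joint training is that the spatial filters receive gradients from the classification loss instead of being fixed in a separate step beforehand, which is the premise the abstract states.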

Cited by 32 publications (17 citation statements)
References 21 publications
“…Research is also being carried out to improve the CSP algorithm. In [65], a neural network-based optimal spatial filter design method has been proposed. The spatial filter and the classifier are trained simultaneously using a neural network named spatial filter network (SFN).…”
Section: Discussion (mentioning)
confidence: 99%
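For context on the baseline this line of work aims to improve, the sketch below computes classical CSP spatial filters from two-class trial covariances via a generalized eigenvalue problem. It is a textbook-style illustration, not code from [65]; trial shapes and the number of retained filters are arbitrary.

```python
# Bare-bones CSP sketch for context (not from the cited papers): spatial
# filters are the generalized eigenvectors of the two class-average
# covariance matrices.  Shapes and the number of kept filters are arbitrary.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: (n_trials, n_channels, n_samples) band-passed EEG."""
    cov = lambda trials: np.mean(
        [t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    # Keep filters from both ends of the spectrum (most discriminative).
    picks = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
    return eigvecs[:, picks].T            # (n_filters, n_channels)

filters = csp_filters(np.random.randn(20, 22, 500), np.random.randn(20, 22, 500))
```

Unlike the SFN above, these filters are computed once from class covariances and are not updated by any classification loss.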
“…Support vector machines with nonlinear kernels may be able to achieve higher classification accuracy than LDA classifiers. The more powerful classification abilities of neural networks may also prove beneficial for improving BCI performance, as has been explored recently with EEG-based BCIs [66–69].…”
Section: Discussion (mentioning)
confidence: 99%
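To make the comparison suggested in this statement concrete, the sketch below cross-validates an LDA classifier and an RBF-kernel SVM on the same feature matrix. The synthetic features, dimensions, and hyperparameters are placeholders, not values from the cited studies.

```python
# Illustrative comparison only: LDA vs. an RBF-kernel SVM on synthetic
# "CSP log-variance"-style features.  Data and dimensions are made up.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # 200 trials, 6 features per trial
y = rng.integers(0, 2, size=200)         # binary motor-imagery labels

lda = LinearDiscriminantAnalysis()
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

print("LDA accuracy:", cross_val_score(lda, X, y, cv=5).mean())
print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```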
“…End-to-end learning exhibits several advantages; for example, it can be implemented with minimal preprocessing procedures [e.g., centering (Wang Z. et al., 2013; Nurse et al., 2015b; Schirrmeister et al., 2017), scaling (Wang Z. et al., 2013; Schirrmeister et al., 2017), outlier removal (Nurse et al., 2016), or band-pass filtering (Yuksel and Olmez, 2015; Sturm et al., 2016; Tang et al., 2017)]. It additionally holds the promise of highly accurate decoding because of the joint optimization of feature extraction and decoding.…”
Section: Feature Extraction (mentioning)
confidence: 99%
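The "minimal preprocessing" mentioned above typically amounts to a band-pass filter plus per-channel centering and scaling before the network sees the data. The sketch below shows one such pipeline; the 8–30 Hz band, sampling rate, and array shapes are illustrative assumptions, not settings from the cited studies.

```python
# Minimal-preprocessing sketch for end-to-end EEG decoding: band-pass
# filtering plus per-channel centering and scaling.  The 8-30 Hz band,
# sampling rate, and array shapes are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250.0, band=(8.0, 30.0)):
    """eeg: (n_trials, n_channels, n_samples) raw signal."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=-1)                        # band-pass filtering
    centered = filtered - filtered.mean(axis=-1, keepdims=True)    # centering
    scaled = centered / (centered.std(axis=-1, keepdims=True) + 1e-8)  # scaling
    return scaled

eeg = np.random.randn(10, 22, 1000)   # 10 trials, 22 channels, 4 s at 250 Hz
x = preprocess(eeg)
```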
“…It additionally holds the promise of highly accurate decoding because of the joint optimization of feature extraction and decoding. While statistically significant performance improvements have been reported when comparing end-to-end models with approaches combining CSP-based feature extraction with generic classifiers (Yuksel and Olmez, 2015; Lu et al., 2017; Tang et al., 2017), end-to-end models have not yet clearly outperformed state-of-the-art methods (Nurse et al., 2015b, 2016; Schirrmeister et al., 2017). Some of the difficulties that may impair the efficiency of end-to-end approaches include the difficulty of fitting end-to-end models, such as gathering enough data and/or properly regularizing the models to avoid overfitting.…”
Section: Feature Extraction (mentioning)
confidence: 99%
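Regarding the regularization difficulty raised in this statement, two standard knobs for small end-to-end EEG models are dropout between layers and L2 weight decay in the optimizer, as in the sketch below; the layer sizes and values are arbitrary and not taken from the cited papers.

```python
# Two common regularizers for small end-to-end EEG models (values arbitrary):
# dropout between the spatial-filtering and classifier layers, and L2 weight
# decay in the optimizer.  This toy model takes a 22-dimensional feature
# vector per trial and is not a recipe from the cited papers.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(22, 4, bias=False),   # spatial-filtering layer
    nn.Dropout(p=0.5),              # dropout regularization
    nn.Linear(4, 2),                # classifier layer
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```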