2021
DOI: 10.1016/j.neunet.2020.12.013

Adaptive transfer learning for EEG motor imagery classification with deep Convolutional Neural Network

Cited by 201 publications (158 citation statements: 5 supporting, 153 mentioning, 0 contrasting)
References 80 publications
“…Slightly lower classification accuracies for continuous fist and feet motor execution and imagery from the EEGMMI dataset can be attributed to the leave-one-participant-out LDA classifier training scheme: for each participant, the classifier was trained with the data from the remaining participants, whereas for the finger-tapping dataset the classifier was trained for each participant on their own data with 10 × 10-fold cross-validation. The classification accuracies using broadband LRTC are comparable to the accuracies obtained in the BCI literature (Ibáñez et al., 2014; Lew et al., 2014; Lopez-Larraz et al., 2014; Xu et al., 2014; Padfield et al., 2019; Zhang et al., 2021). Thus, broadband LRTCs can be used independently as features for application in BCI.…”
Section: Discussion (supporting)
confidence: 71%
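The contrast drawn in this statement is the standard cross-participant versus within-participant evaluation split. Below is a minimal sketch, assuming synthetic placeholder data and scikit-learn, of how the two LDA training schemes differ; it is an illustration, not the cited study's pipeline.

```python
# Contrast the two evaluation schemes described above: leave-one-participant-out
# vs. within-participant repeated 10 x 10-fold cross-validation with LDA.
# X, y, and the participant IDs in `groups` are hypothetical placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import (LeaveOneGroupOut, RepeatedStratifiedKFold,
                                     cross_val_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))          # 400 trials x 16 features (placeholder)
y = np.tile([0, 1], 200)                # binary MI labels, balanced per subject
groups = np.repeat(np.arange(10), 40)   # 10 participants, 40 trials each

lda = LinearDiscriminantAnalysis()

# Scheme 1: leave-one-participant-out -- train on all other participants,
# test on the held-out participant (cross-subject generalisation).
loso = cross_val_score(lda, X, y, groups=groups, cv=LeaveOneGroupOut())
print("leave-one-participant-out accuracy: %.3f" % loso.mean())

# Scheme 2: within-participant 10 x 10-fold CV -- train and test on the
# same participant's own data, repeated 10 times.
per_subject = [
    cross_val_score(
        lda, X[groups == s], y[groups == s],
        cv=RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0),
    ).mean()
    for s in np.unique(groups)
]
print("within-participant accuracy: %.3f" % np.mean(per_subject))
```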
“…With the recent release of large-scale EEG datasets (e.g., Cho et al., 2017; Lee et al., 2019), there have been more attempts at employing DL models on signals from large numbers of participants (e.g., Stieger et al., 2020; Zhang et al., 2021; Ko et al., 2020; Mane et al., 2020), showing the relevance and timeliness of this study in the BCI field. Although these studies report the same conclusion regarding the superiority of the DL approach in MI-BCI classification, their methodology and approach to building the DL model differ from our study.…”
Section: Discussion (mentioning)
confidence: 92%
“…Mane et al. (2020) and Ko et al. (2020) focused on feature representations in the model: Mane et al. (2020) employed a Filter-Bank CNN to decompose data into multiple frequency bands and extract spatially discriminative patterns in each band, and Ko et al. (2020) applied a Multi-Scale Neural Network to exploit spatio-spectral-temporal features for all BCI paradigms. Zhang et al. (2021) focused on transfer learning and employed a CNN model to develop a subject-independent classifier. Therefore, while our study pursues a similar goal, it dissociates itself from past research by conducting a statistically supported subject-wise comparison between the DL and ML approaches and by providing evidence for the suitability of the DL approach for inefficient BCI users.…”
Section: Discussion (mentioning)
confidence: 99%
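As a rough illustration of the transfer-learning setup attributed to Zhang et al. (2021), the sketch below pretrains a compact CNN on pooled source subjects and then fine-tunes only the classifier head on target-subject data. The architecture, shapes, and hyperparameters are hypothetical assumptions, not the published model.

```python
# Minimal PyTorch sketch of CNN-based transfer learning for MI-EEG:
# pretrain on pooled source subjects (standard supervised training, not shown),
# then freeze the feature extractor and adapt only the head on the target
# subject. Illustration only; NOT Zhang et al.'s (2021) architecture.
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    def __init__(self, n_channels=22, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal conv
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial conv
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.5),
        )
        self.head = nn.Linear(32 * (n_samples // 8), n_classes)

    def forward(self, x):                 # x: (batch, 1, channels, samples)
        return self.head(self.features(x).flatten(1))

def finetune_on_target(model, target_loader, epochs=10):
    """Freeze the pretrained feature extractor; adapt only the classifier head."""
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in target_loader:      # small amount of target-subject data
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```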
“…To effectively classify patients with and without COVID-19 infection, the deep convolutional network [59, 60] includes seven convolution layers, three pooling layers, and three full connection layers. These comprise the input image layer, the convolution layers, the maximum pooling layers, the full connection layers with their feature maps, and the output result layer.…”
Section: Deep Learning Network Based on COVID-19 CT Image (mentioning)
confidence: 99%
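For concreteness, a hedged sketch of a network with that layer budget (seven convolution, three pooling, three fully connected layers) follows. The kernel sizes, channel widths, and 224×224 input resolution are assumptions, not the cited paper's configuration.

```python
# Hedged sketch of a CNN matching the stated layer budget for binary
# COVID-19 CT classification: 7 conv + 3 max-pool + 3 fully connected layers.
import torch.nn as nn

class CovidCTNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),     # conv 1
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),    # conv 2
            nn.MaxPool2d(2),                               # pool 1
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),    # conv 3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),    # conv 4
            nn.MaxPool2d(2),                               # pool 2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),   # conv 5
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),  # conv 6
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),  # conv 7
            nn.MaxPool2d(2),                               # pool 3
        )
        self.classifier = nn.Sequential(                   # three FC layers
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 512), nn.ReLU(),      # assumes 224x224 input
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_classes),                     # output result layer
        )

    def forward(self, x):                                  # x: (batch, 1, 224, 224)
        return self.classifier(self.features(x))
```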