2021
DOI: 10.1016/j.bspc.2021.103021

A channel-mixing convolutional neural network for motor imagery EEG decoding and feature visualization

Cited by 19 publications (9 citation statements) · References 36 publications
“…That is, fixing the sliding short-time window length parameter τ = 2 s with an overlapping step of 1 s results in Nτ = 5 EEG segments. For implementing the filter bank strategy, the following bandwidths of interest are used: ∆f ∈ {µ ∈ [8, 12], β ∈ [12, 30]} Hz. These bandwidths belong to the µ and β rhythms, commonly associated with the electrical brain activity evoked by MI tasks [53].…”
Section: Preprocessing and Feature Extraction of Image-Based Represen... · mentioning (confidence: 99%)
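As a concrete illustration of the sliding-window and filter-bank step this quote describes, the sketch below segments a trial and band-passes it into the µ and β bands. The sampling rate (128 Hz), trial length (6 s), and Butterworth filter order are assumptions made for the example, not values from the paper.

```python
# Minimal sketch of the sliding-window + filter-bank preprocessing quoted above.
# Assumed (not from the paper): 128 Hz sampling rate, 6 s trials, 4th-order filters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 128                      # assumed sampling rate (Hz)
TAU, STEP = 2.0, 1.0          # window length and overlap step (s), as in the quote
BANDS = {"mu": (8.0, 12.0), "beta": (12.0, 30.0)}  # Hz

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass along the time axis."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def sliding_windows(x, tau=TAU, step=STEP, fs=FS):
    """Split a (channels, time) trial into overlapping segments."""
    win, hop = int(tau * fs), int(step * fs)
    starts = range(0, x.shape[-1] - win + 1, hop)
    return np.stack([x[..., s:s + win] for s in starts])  # (N_tau, channels, win)

trial = np.random.randn(22, 6 * FS)                 # toy trial: 22 channels, 6 s
banded = {name: bandpass(trial, lo, hi) for name, (lo, hi) in BANDS.items()}
segments = {name: sliding_windows(x) for name, x in banded.items()}
print(segments["mu"].shape)                         # (5, 22, 256): N_tau = 5 segments
```

Filtering the whole trial before windowing, as above, avoids per-segment filter edge effects.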
“…Since the way the convolution kernels learn features within CNN frameworks directly influences the final performance, visualizing the inputs that most strongly excite the activation patterns of the weights learned at any layer of the model can aid interpretation [28]. Several post hoc interpretation approaches for EEG decoding models have been reported to provide explainable information about sensor-locked activity across BCI systems of diverse nature, including kernel visualizations, saliency maps, EEG decoding components, score-weighted maps, and ablation tests, among others [29][30][31]. Still, class activation mapping (CAM) visualizations have attracted growing interest in MI research; CAM performs a weighted sum of the feature maps of the last convolutional layer for each class and uses a structural regularizer to prevent overfitting during training [32,33].…”
Section: Introduction · mentioning (confidence: 99%)
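The CAM computation this quote describes (a class-weighted sum of the last convolutional layer's feature maps, with global average pooling ahead of the classifier) can be sketched as follows. The toy channel-mixing CNN, its layer shapes, and all names here are hypothetical illustrations, not the paper's architecture.

```python
# Minimal class activation mapping (CAM) sketch in PyTorch.
# ToyEEGCNN is a hypothetical stand-in, not the model from the cited paper.
import torch
import torch.nn as nn

class ToyEEGCNN(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal conv
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # channel-mixing (spatial) conv
            nn.ELU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling: CAM's structural regularizer
        self.fc = nn.Linear(32, n_classes)   # per-class weights w_k reused by CAM

    def forward(self, x):
        f = self.features(x)                 # feature maps: (B, 32, 1, T)
        return self.fc(self.gap(f).flatten(1)), f

def class_activation_map(model, x, target_class):
    """CAM_c(t) = sum_k w_k^c * F_k(t): class-weighted sum of last-conv feature maps."""
    _, fmaps = model(x)                              # fmaps: (B, K, 1, T)
    w = model.fc.weight[target_class]                # (K,)
    cam = torch.einsum("k,bkht->bht", w, fmaps)      # weighted sum over feature maps
    return torch.relu(cam)                           # keep class-positive evidence only

x = torch.randn(1, 1, 22, 256)                       # toy trial: 22 channels, 256 samples
cam = class_activation_map(ToyEEGCNN(), x, target_class=0)
print(cam.shape)                                     # (1, 1, 256): per-time-point relevance
```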
“…The most commonly used method (saliency) is also the simplest: the gradient w.r.t. the input is computed [31-38]. The next most commonly used method is plotting the convolutional filters directly, usually those whose kernel spans all EEG channels (the spatial convolutional layer weights) [39-42].…”
Section: Introduction · mentioning (confidence: 99%)
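The gradient-w.r.t.-input (saliency) method named in this quote reduces to a few lines, as sketched below. The tiny linear decoder is a hypothetical stand-in used only so the example runs; it is not any of the cited models.

```python
# Minimal input-gradient saliency sketch in PyTorch.
import torch
import torch.nn as nn

def saliency_map(model, x, target_class):
    """Return |d score_c / d x|: how strongly each (channel, time) point drives class c."""
    x = x.clone().requires_grad_(True)   # track gradients on the input itself
    score = model(x)[0, target_class]
    score.backward()                     # back-propagate the class score to the input
    return x.grad.abs().squeeze(0)       # (channels, time) relevance map

# toy stand-in decoder: flatten a (channels, time) trial and score 4 classes
model = nn.Sequential(nn.Flatten(), nn.Linear(22 * 256, 4))
x = torch.randn(1, 22, 256)
sal = saliency_map(model, x, target_class=0)
print(sal.shape)                         # torch.Size([22, 256])
```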
“…Correctly recognizing the neuronal activity associated with different motor imagery tasks yields brain commands that can help patients with severe motor neuron disease control external equipment such as wheelchairs. Motor imagery classification is also an important support for rehabilitation training [5].…”
Section: Introduction · mentioning (confidence: 99%)
“…Li et al. proposed an end-to-end spatial-temporal GCNN that simultaneously captures the spatial and temporal features of EEG signals to identify different motor imagery tasks [13]. Lun et al. [14] proposed a GCNN-based deep learning framework that incorporates the functional topological relationships among electrodes to improve the decoding performance of motor imagery EEG signals. Sun et al. proposed an adaptive spatial-temporal GCNN that makes full use of the time-domain characteristics of the EEG signal and the channel correlations in the spatial domain [15].…”
Section: Introduction · mentioning (confidence: 99%)