2022 26th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr56361.2022.9956610
Spatio-Temporal Analysis of Transformer based Architecture for Attention Estimation from EEG

Cited by 5 publications (3 citation statements). References 33 publications.
“…By leveraging attention mechanisms, transformer-based methods can achieve performance comparable or even superior to RNN-based approaches [32]. Consequently, researchers have started exploring the application of Transformer-based methods in drowsiness detection, aiming to overcome the limitations associated with long-term sequences often encountered by RNN-based methods [17], [18]. Furthermore, the graph convolutional network (GCN) has become a popular choice for EEG-based drowsiness detection [33].…”
Section: Related Work
confidence: 99%
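The citation statement above attributes the Transformers' advantage over RNNs to the attention mechanism, which compares every time step with every other directly instead of propagating state sequentially. As a generic illustration (not code from the cited paper), a minimal NumPy sketch of scaled dot-product self-attention over a toy EEG-like embedding follows; the shapes and variable names are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V are (time_steps, d_k)/(time_steps, d_v) arrays."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity, (T, T)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy input: 4 time steps of an EEG segment embedded in 8 dimensions
# (hypothetical shapes for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(X, X, X)         # self-attention: Q = K = V
print(out.shape)                                    # (4, 8)
```

Because every output row is a convex combination of all input rows, each time step can attend to arbitrarily distant steps in one operation, which is the property the cited works exploit to sidestep the long-sequence limitations of RNNs.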
“…To enhance performance of decoding drowsiness-related brain activities, many researchers have incorporated DL models into their studies. Four main DL models are used in most studies: CNN [13], [14], RNN [15], [16], Transformer [17], [18], and GCN [19], [20]. Although DL-based models have demonstrated significant improvements in decoding drowsy brain activities, most of them do not consider the relative change of brain activities, which is a crucial aspect when dealing with EEG signals.…”
Section: Introduction
confidence: 99%
“…Xie et al. built five Transformer-based deep learning models, and achieved the best accuracy rates of 83.31%, 74.44% and 64.22% in the two-class, three-class and four-class tasks, respectively [30]. Delvigne et al. processed the spatio-temporal information of EEG signals based on the Transformer model and achieved a classification accuracy of more than 74% in the comparison of commonly used models [31].…”
Section: Introduction
confidence: 99%