2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc53654.2022.9945517
EEG2Vec: Learning Affective EEG Representations via Variational Autoencoders

Cited by 21 publications (3 citation statements)
References 38 publications
“…[25] explored traditional and variational autoencoders (AEs) for EEG emotion recognition using the latent features learned by the AEs. [47] proposed a conditional variational AE that performs emotion recognition on the compressed latent representation, conditioning the decoder on the class label and participant ID. In this work, we aim to explore the use of AEs by learning a latent representation over the EEG channel dimension.…”
Section: A. Autoencoders
Citation type: mentioning; confidence: 99%
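The conditioning scheme described in the statement above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's actual architecture: the latent size, class count, and participant count are hypothetical, and the "decoder input" is simply the concatenation of a reparameterized latent sample with one-hot label and participant vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

# Hypothetical dimensions: 16-dim latent, 3 emotion classes, 10 participants.
latent_dim, n_classes, n_subjects = 16, 3, 10
mu = rng.standard_normal(latent_dim)
log_var = rng.standard_normal(latent_dim)

z = reparameterize(mu, log_var)

# Conditioning the decoder: concatenate z with the class-label and
# participant-ID one-hots, in the spirit of the conditional VAE in [47].
label, subject = 1, 4
decoder_input = np.concatenate(
    [z, one_hot(label, n_classes), one_hot(subject, n_subjects)])
assert decoder_input.shape == (latent_dim + n_classes + n_subjects,)
```

In such a setup, the decoder network would map `decoder_input` back to the EEG signal, so reconstruction quality depends on both the latent code and the supplied labels.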
“…To achieve further improvement and learn more general features that can reveal or separate the different factors of the phenomena entangled in the input data, unsupervised learning methods are introduced to make use of knowledge learned from a different EEG task [54]-[58]. For instance, autoencoders are first trained to reconstruct EEG time series before the encoder is fine-tuned on a classification task [59]-[63]. These methods indicate that downstream EEG tasks can also benefit, to a certain extent, from more general feature extractors.…”
Section: A. Pretrained Models for EEG Analysis
Citation type: mentioning; confidence: 99%
“…A drawback of the auto-encoder is its strong tendency to over-fit [26], as it is trained solely to encode and decode with as little loss as possible, regardless of how the latent space is organized [32]. The VAE has been developed as an effective solution [26,2]; e.g., VAEs have been used in EEG classification tasks to learn robust features [33,1,2,3].…”
Section: Related Work 1. VAE
Citation type: mentioning; confidence: 99%
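The distinction drawn above between a plain AE and a VAE comes down to one extra loss term. As a minimal sketch, the following computes the closed-form KL divergence between a diagonal-Gaussian posterior and the standard-normal prior; this is the term that organizes the VAE's latent space, and it is absent from a plain autoencoder's reconstruction-only objective.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior.

    Closed form: 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
    This penalty pulls latent codes toward N(0, I), regularizing the
    latent space in a way a plain AE's loss does not.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# When the posterior already equals N(0, I), the penalty vanishes.
assert kl_to_standard_normal(np.zeros(4), np.zeros(4)) == 0.0

# Any deviation from the prior is penalized, discouraging the
# unconstrained "encode with as little loss as possible" behaviour.
assert kl_to_standard_normal(np.ones(4), np.zeros(4)) > 0.0
```

In training, this KL term is added to the reconstruction loss (often with a weighting factor), trading reconstruction fidelity for a smoother, better-organized latent space.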