ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9414496
Subject-Invariant EEG Representation Learning for Emotion Recognition

Abstract: The discrepancies between the distributions of the training and test data, a.k.a. domain shift, result in lower generalization for emotion recognition methods. One of the main factors contributing to these discrepancies is human variability. Domain adaptation methods have been developed to alleviate the problem of domain shift; however, while these techniques reduce between-database variation, they fail to reduce between-subject variability. In this paper, we propose an adversarial deep domain adaptation approach for emo…
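The adversarial idea sketched in the abstract — learning features useful for emotion recognition while uninformative about subject identity — can be illustrated with a minimal NumPy sketch: a shared encoder feeds both an emotion head and a subject head, and the encoder's objective subtracts the subject loss (the gradient-reversal effect). All shapes, weights, and the trade-off weight `lam` are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

# toy batch: 8 EEG trials, 16-dim features, 2 emotion classes, 4 subjects (all assumed)
X = rng.standard_normal((8, 16))
y_emotion = rng.integers(0, 2, 8)
y_subject = rng.integers(0, 4, 8)

W_enc = 0.1 * rng.standard_normal((16, 8))  # shared encoder
W_emo = 0.1 * rng.standard_normal((8, 2))   # emotion classifier head
W_sub = 0.1 * rng.standard_normal((8, 4))   # subject classifier head (adversary)

H = np.tanh(X @ W_enc)                       # shared latent representation
loss_emo = cross_entropy(softmax(H @ W_emo), y_emotion)
loss_sub = cross_entropy(softmax(H @ W_sub), y_subject)

lam = 0.5  # adversarial trade-off weight (assumed)
# the encoder minimizes the emotion loss while *maximizing* the subject loss,
# so subject identity is pushed out of the shared representation
encoder_objective = loss_emo - lam * loss_sub
```

In a full implementation the subject head itself is trained to minimize `loss_sub`, while the reversed sign reaches the encoder (e.g. via a gradient-reversal layer in an autodiff framework).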

Cited by 10 publications (6 citation statements)
References 16 publications
“…Thus we have to employ DA methods from other fields. The maximum independence domain adaptation (MIDA) (Yan et al., 2017), model-agnostic learning of semantic features (MASF), conditional deep convolutional generative adversarial networks (C-DCGANs) (Zhang et al., 2021), and subject-invariant domain adaptation (SIDA) (Rayatdoost et al., 2021) are introduced to verify the advantage of our model. The sensitivity and FPR are provided in Tables 5, 6.…”
Section: Results
confidence: 99%
“…HIVE-CODAs include seven constituent modules: subject-invariant domain adaptation (SIDA) [37], conditional deep convolutional generative adversarial networks (C-DCGANs) [38], plug-and-play domain adaptation (PPDA) [39], maximum independence domain adaptation (MIDA) [40], maximum mean discrepancy-adversarial autoencoders (MMD-AAEs) [41], model-agnostic learning of semantic features (MASF) [42], and cone manifold domain adaptation (CMDA) [43]. The modular hierarchical structure is depicted in Figure 4.…”
Section: Modular Hierarchical Structure
confidence: 99%
“…2) SIDA: We also estimated the performance of SIDA on epileptic EEG, which combines power spectral density (PSD) features and adversarial learning [37]. SIDA focuses on the extraction of invariant representations among different domains.…”
Section: MMD-AAE
confidence: 99%
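The snippet above notes that SIDA combines power spectral density (PSD) features with adversarial learning. A common way to obtain such features is band power computed from a periodogram; the sketch below uses a plain FFT periodogram on a synthetic signal. The sampling rate, band edges, and test signal are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Mean power spectral density of `signal` within `band` (Hz), via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(psd[mask].mean())

fs = 128                      # sampling rate in Hz (assumed)
t = np.arange(fs * 4) / fs    # 4 seconds of samples
eeg = np.sin(2 * np.pi * 10 * t)  # synthetic 10 Hz "alpha" oscillation

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: bandpower(eeg, fs, b) for name, b in bands.items()}
# for a pure 10 Hz sine, the alpha band dominates the feature vector
```

In practice such band powers are computed per channel (often with Welch averaging over windowed segments) and concatenated into the feature vector fed to the adversarial network.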
“…In the related fields of EEG emotion recognition and EEG motor imagery, it has been shown that explicitly modeling subject-invariant features, by using an adversarial layer to remove all subject information from the latent features, improves the generalization of models and benefits classification accuracy across subjects [9,10,11]. In [12,13], the authors have proposed to include Variational Autoencoders (VAEs) [14] and have found that VAEs may improve subject-independent performance, as the latent space is conditioned to follow a Gaussian distribution, and are advantageous in the unsupervised modeling of EEG brain signals.…”
Section: Introduction
confidence: 99%
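The VAE property mentioned above — conditioning the latent space toward a Gaussian — comes from the reparameterization trick plus a KL penalty against a standard normal prior. A minimal NumPy sketch of both pieces follows; the batch size, latent dimensionality, and encoder outputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims, averaged over the batch."""
    return float(np.mean(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)))

# toy encoder outputs for a batch of 4 EEG trials with 8 latent dims (assumed sizes)
mu = 0.1 * rng.standard_normal((4, 8))
log_var = 0.1 * rng.standard_normal((4, 8))

z = reparameterize(mu, log_var)           # latent sample passed to the decoder
kl = kl_to_standard_normal(mu, log_var)   # added to the reconstruction loss in training
```

The KL term is zero exactly when the encoder outputs `mu = 0` and `log_var = 0`, i.e. when the posterior already matches the standard normal prior, which is what pulls the latent space toward a Gaussian.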