2021
DOI: 10.1007/s00521-020-05670-4
A deep multi-source adaptation transfer network for cross-subject electroencephalogram emotion recognition

Cited by 27 publications (4 citation statements)
References 20 publications
“…By simultaneously optimizing the MK-MMD and task loss functions, the DAN can reduce domain shift across domains while preserving domain-invariant and task-related features. A deep multi-source adaptation transfer network (DMATN) for cross-subject emotion recognition was proposed by Wang et al. [190], as shown in Figure 26(B). The mechanism of this model closely resembles that of the model proposed by Li et al. [193].…”
Section: Experiments Details
confidence: 99%
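The joint objective summarized in this excerpt, a classification loss on labeled source data plus a multi-kernel MMD penalty between source and target features, can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: the feature dimensions (310, as in SEED-style differential-entropy features), the kernel bandwidths, and the `trade_off` weight are all assumptions.

```python
import torch
import torch.nn as nn

def mk_mmd(source, target, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Biased MK-MMD estimate with a small bank of Gaussian (RBF) kernels.

    `source` and `target` are feature batches of shape (n, d). The bandwidth
    set is illustrative; DAN-style models typically use a larger kernel family.
    """
    xy = torch.cat([source, target], dim=0)
    d2 = torch.cdist(xy, xy).pow(2)  # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * b ** 2)) for b in bandwidths) / len(bandwidths)
    n = source.size(0)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

class DANLikeModel(nn.Module):
    """Shared feature extractor plus emotion classifier (hypothetical sizes)."""
    def __init__(self, in_dim=310, hidden=128, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        f = self.features(x)
        return f, self.classifier(f)

def joint_loss(model, x_src, y_src, x_tgt, trade_off=1.0):
    """Task loss on labeled source data plus MK-MMD between the two domains."""
    f_src, logits_src = model(x_src)
    f_tgt, _ = model(x_tgt)
    task = nn.functional.cross_entropy(logits_src, y_src)
    return task + trade_off * mk_mmd(f_src, f_tgt)
```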
“…A: The Wasserstein generative adversarial network-based domain adaptation approach [189]. B: The multi-source adaptation transfer network (DMATN)-based domain adaptation approach that integrates the deep adversarial network and the multi-kernel maximum mean discrepancies (MK-MMDs) measurement [190].…”
confidence: 99%
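As a rough illustration of the Wasserstein-style alternative mentioned in this caption, the sketch below computes a domain-critic loss with a gradient penalty over source and target feature batches. The critic architecture, the penalty weight, and the assumption of equal batch sizes are illustrative choices, not details taken from [189]; in an adversarial setup the feature extractor would in turn be updated to minimize the estimated Wasserstein distance.

```python
import torch
import torch.nn as nn

# Hypothetical critic over 128-dimensional features.
critic = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def wasserstein_critic_loss(f_src, f_tgt, gp_weight=10.0):
    """Critic objective written as a loss to minimize: the negative of
    E[critic(src)] - E[critic(tgt)], plus a gradient penalty that keeps the
    critic approximately 1-Lipschitz. Assumes f_src and f_tgt share a batch size.
    """
    w_dist = critic(f_src).mean() - critic(f_tgt).mean()

    # Gradient penalty on random interpolations of source/target features.
    alpha = torch.rand(f_src.size(0), 1, device=f_src.device)
    inter = (alpha * f_src + (1 - alpha) * f_tgt).detach().requires_grad_(True)
    grad = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()

    return -w_dist + gp_weight * gp
```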
“…Unfortunately, these traditional methods may fail to address cross-subject/dataset emotion recognition because of the mismatch in the feature distributions of EEG signals. To address this issue, many domain adaptation (DA) emotion recognition models for the AER problem have been proposed (Chu et al., 2017; Li et al., 2018a; Li et al., 2020a,b; Bao et al., 2021; Wang et al., 2021). In a DA emotion recognition system, one usually focuses on building an effective recognition model for a target domain with few or even no labeled data by borrowing positive knowledge from one or more source domains whose distributions differ slightly from that of the target domain (Bruzzone and Marconcini, 2010; Tao et al., 2012; Long et al., 2014; Zhang et al., 2019b).…”
Section: Introduction
confidence: 99%
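In the usual domain-adaptation notation (a standard textbook formulation, not taken verbatim from the cited works), the setting described in this excerpt reads:

$$\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}, \qquad \mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}, \qquad P_s(x, y) \neq P_t(x, y),$$

with the goal of learning a hypothesis $h$ that minimizes the target risk $\epsilon_t(h) = \Pr_{(x, y) \sim P_t}\left[h(x) \neq y\right]$ while relying mainly on the labeled source data.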
“…After private component extraction for unknown subjects, the private classifiers of more similar subjects are granted higher weights, which influence the joint classification of the private and shared classifiers; a result of 86.7 ± 7.1% was obtained on the SEED dataset [18]. In general, the main solutions to the domain differences caused by individual variability are transfer learning [7], [19]-[21] and deep learning [10], [15], [22]. Both kinds of methods can obtain excellent results, but both lack an analysis of the EEG information and of the interpretability of EEG individual differences, which, as the theoretical basis of affective brain-computer interfaces, is more important for EEG-based emotion recognition.…”
Section: Introduction
confidence: 99%
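The weighting scheme described in this excerpt can be pictured as a similarity-weighted ensemble of per-subject private classifiers blended with a shared classifier. The sketch below is only an illustration of that idea: the similarity measure (negative mean pairwise distance), the softmax normalization, and the equal mix with the shared classifier are assumptions, not the cited authors' exact formulation.

```python
import torch
import torch.nn as nn

def subject_weights(f_tgt, source_feature_sets):
    """Weight each source subject by how close its features are to the target
    features (negative mean pairwise distance, softmax-normalized). This is an
    assumed similarity measure, not the one used in [18]."""
    sims = torch.stack([-torch.cdist(f_tgt, f_s).mean() for f_s in source_feature_sets])
    return torch.softmax(sims, dim=0)

def joint_prediction(f_tgt, private_classifiers, shared_classifier, weights):
    """Blend similarity-weighted private classifier outputs with the shared one."""
    private = sum(w * clf(f_tgt) for w, clf in zip(weights, private_classifiers))
    shared = shared_classifier(f_tgt)
    return 0.5 * (private + shared)  # equal mix is an illustrative choice
```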