2018
DOI: 10.1609/aaai.v32i1.11767
Multi-Adversarial Domain Adaptation

Abstract: Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce the distribution discrepancy between the source and target domains. Existing adversarial domain adaptation methods based on a single domain discriminator only align the source and target data distributions as a whole, without exploiting their complex multimode structures. In this paper, we present a multi-adversarial domain adaptation (MADA) approach, which captures multimode str…
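The multimode alignment idea summarized in the abstract can be sketched concretely: rather than one domain discriminator, keep K discriminators (one per class) and weight each example's contribution to discriminator k by the classifier's predicted probability of class k. The sketch below is a minimal NumPy illustration of that weighting, not the paper's implementation; the function name and the use of linear logistic discriminators as stand-ins for small networks are assumptions for clarity.

```python
import numpy as np

def mada_discriminator_loss(features, class_probs, domain_labels, discriminators):
    """Class-weighted multi-discriminator domain loss (illustrative sketch).

    features:       (n, d) feature vectors from the shared extractor
    class_probs:    (n, K) softmax outputs of the label classifier
    domain_labels:  (n,)   1.0 for source examples, 0.0 for target examples
    discriminators: list of K weight vectors, each (d,); each acts as a
                    logistic domain discriminator, a stand-in for a small net
    """
    n, K = class_probs.shape
    total = 0.0
    for k, w in enumerate(discriminators):
        logits = features @ w                      # (n,) discriminator scores
        p_src = 1.0 / (1.0 + np.exp(-logits))      # predicted P(domain = source)
        # binary cross-entropy of the domain prediction per example
        bce = -(domain_labels * np.log(p_src + 1e-12)
                + (1.0 - domain_labels) * np.log(1.0 - p_src + 1e-12))
        # weight each example by its class-k probability, so discriminator k
        # concentrates on examples likely to belong to class k (the multimode
        # structure the single-discriminator approach ignores)
        total += np.mean(class_probs[:, k] * bce)
    return total / K
```

In training, this loss would be maximized with respect to the feature extractor (via a gradient-reversal layer) and minimized with respect to the discriminators, as in standard adversarial adaptation.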

Cited by 602 publications (113 citation statements)
References 15 publications
“…For deep learning methods EEGNet, AConvNet, FConvNet, DLSVM, and NRDNN, the batch size and number of epochs were set to 40 and 1,000, respectively. The learning rate was dynamically changed during optimization using the following formula (Pei et al, 2018): $\eta_p = \eta_0 / (1 + \alpha p)^{\beta}$, in which $p$ linearly changes from 0 to 1, $\eta_0 = 10^{-3}$, $\alpha = 10$, and $\beta = 0.75$. Besides, the parameter $r$ is set to 2 in the region-attention network.…”
Section: Methods
confidence: 99%
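The annealing schedule quoted above (the standard one used in adversarial domain adaptation) can be written as a one-line function; the function name here is illustrative, and the defaults are the values stated in the quote.

```python
def annealed_lr(p, eta0=1e-3, alpha=10.0, beta=0.75):
    """Learning-rate annealing: eta_p = eta0 / (1 + alpha * p) ** beta,
    where p is training progress, increasing linearly from 0 to 1."""
    return eta0 / (1.0 + alpha * p) ** beta
```

At p = 0 this returns exactly eta0, and the rate decays smoothly as training progresses, stabilizing adversarial optimization in its later stages.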
“…Other non-federated learning works also employ multiple generators or discriminators to improve learning stability (Hoang et al 2018; Durugkar, Gemp, and Mahadevan 2016) or to prevent mode collapse (Ghosh et al 2018). Pei et al (2018) proposed a multi-discriminator structure, with one domain discriminator per class, to improve classification performance across multiple domains. Liu and Tuzel (2016) proposed to learn the joint distribution of multi-domain images by using multiple pairs of GANs.…”
Section: Related Work
confidence: 99%
“…Competitive baselines: We not only explore state-of-the-art domain adaptation methods, including DAN (Long et al 2015), JAN (Long et al 2017), MADA (Pei et al 2018), and DADA (Tang, Chen, and Jia 2020), but also combine network backbones (CDAN and DANN) with multi-norm strategies, BN (Long et al 2018) and TN (Wang et al 2019a), as baselines. We follow the standard protocols used with CDAN and DANN to evaluate the effectiveness of our proposed normalization technique.…”
Section: Empirical Analysis
confidence: 99%