2018
DOI: 10.48550/arxiv.1809.02176
Preprint
Multi-Adversarial Domain Adaptation

Cited by 20 publications (25 citation statements)
References 14 publications
“…We extend domain adversarial neural network (DANN) [7] and conditional adversarial domain adaptation (CDAN) [16] with the proposed category-invariant feature enhancement (CIFE). We compare with a number of state-of-the-art methods: deep adaptation network (DAN) [15], domain adversarial neural network (DANN) [7], joint adaptation network (JAN) [17], multi-adversarial domain adaptation (MADA) [19], conditional adversarial domain adaptation (CDAN) [16], adversarial discriminative domain adaptation (ADDA) [23], cycle-consistent adversarial domain adaptation (CyCADA) [12], batch spectral penalization (BSP) [4], dynamic adversarial domain adaptation (DAAN) [30], batch nuclear-norm maximization (BNM) [5], enhanced transport distance (ETD) [14] and label propagation with augmented anchors (A²LP) [31].…”
Section: Comparison Methods
confidence: 99%
“…The adversarial discriminative domain adaptation (ADDA) uses asymmetric feature extractors for the two domains to conduct the alignment [23]. The multi-adversarial domain adaptation (MADA) captures multi-mode structures by re-weighting features with category predictions [19]. The cycle-consistent adversarial domain adaptation (CyCADA) implements domain adaptation at both the pixel level and the feature level by using cycle-consistent adversarial training [12].…”
Section: Related Work
confidence: 99%
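The category re-weighting attributed to MADA in the excerpt above can be illustrated with a minimal PyTorch-style sketch: one domain discriminator per class, each fed features scaled by the label classifier's softmax probability for that class. Module names, layer sizes, and the uniform averaging over discriminators are illustrative assumptions, not the reference implementation of MADA [19].

```python
import torch
import torch.nn as nn

class MultiAdversarialLoss(nn.Module):
    """Sketch of category-weighted domain discriminators: one discriminator
    per class, each receiving features weighted by the classifier's softmax
    probability for that class (the re-weighting idea described above)."""

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        # One small domain discriminator per class (illustrative architecture).
        self.discriminators = nn.ModuleList([
            nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(),
                          nn.Linear(256, 1))
            for _ in range(num_classes)
        ])
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, features, class_probs, domain_labels):
        # features:      (batch, feature_dim) from the shared feature extractor
        # class_probs:   (batch, num_classes) softmax output of the label classifier
        # domain_labels: (batch,) 1.0 for source samples, 0.0 for target samples
        loss = 0.0
        for k, disc in enumerate(self.discriminators):
            # Re-weight each sample's feature by its predicted probability of class k.
            weighted = class_probs[:, k:k + 1] * features
            logits = disc(weighted).squeeze(1)
            loss = loss + self.bce(logits, domain_labels.float())
        return loss / len(self.discriminators)
```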
“…Specifically, the domain-adversarial neural network (DANN) [17] first leverages adversarial learning between the domain classifier and the feature generator to learn domain-invariant representations by adding a simple gradient reversal layer (GRL). Further, to address the mode-collapse issue, multi-adversarial domain adaptation (MADA) [30] presents a multi-adversarial domain adaptation approach with the help of multiple domain classifiers. The adversarial discriminative domain adaptation (ADDA) [18] uses label learning in the source domain to distinguish representations, and then uses an asymmetric mapping (without weight sharing), learned by a standard generative adversarial network (GAN) loss, to map the target data onto a separate code in the same space.…”
Section: Unsupervised Domain Adversarial Adaptation
confidence: 99%
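The gradient reversal layer (GRL) mentioned for DANN in the excerpt above is small enough to sketch directly. The sketch below assumes PyTorch autograd; the `domain_classifier` and `features` names in the usage comment are hypothetical placeholders, not names from DANN's published code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, multiplies the
    gradient by -lambda on the backward pass (the GRL idea used by DANN)."""

    @staticmethod
    def forward(ctx, x, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor;
        # lambd itself receives no gradient.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical usage: features from a shared extractor pass through the GRL
# before the domain classifier, so minimizing the domain loss pushes the
# extractor toward domain-invariant features.
# domain_logits = domain_classifier(grad_reverse(features, lambd=0.5))
```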
“…The cycle-consistency loss designed in cycle-consistent adversarial domain adaptation (CyCADA) [31] strengthens the consistency of structure and semantics during adversarial domain adaptation (ADA). MADA [30] captures multi-modal structures to achieve fine-grained alignment of different data distributions based on multiple domain discriminators. Co-regularized domain alignment (Co-DA) [32] constructs a number of different feature spaces and aligns the source and target distributions in each feature space.…”
Section: Unsupervised Domain Adversarial Adaptation
confidence: 99%
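The cycle-consistency idea credited to CyCADA in the excerpt above can be written as a short loss term: translating a sample to the other domain and back should reconstruct the original. The generator names `G_st` and `G_ts` below are assumptions for illustration, not CyCADA's actual code.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(x_source, x_target, G_st, G_ts):
    """Sketch of a cycle-consistency term: translating an image to the other
    domain and back should reconstruct the original. G_st / G_ts are assumed
    source->target / target->source generator networks."""
    l1 = nn.L1Loss()
    # source -> target -> source should match the original source batch
    loss_s = l1(G_ts(G_st(x_source)), x_source)
    # target -> source -> target should match the original target batch
    loss_t = l1(G_st(G_ts(x_target)), x_target)
    return loss_s + loss_t
```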