2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00018

Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation

Abstract: Recent works on domain adaptation exploit adversarial training to obtain domain-invariant feature representations from the joint learning of feature extractor and domain discriminator networks. However, domain adversarial methods yield suboptimal performance because they attempt to match the distributions across domains without considering the task at hand. We propose Drop to Adapt (DTA), which leverages adversarial dropout to learn strongly discriminative features by enforcing the cluster assumption. Accor…
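The adversarial-dropout idea summarized in the abstract can be sketched in miniature: find the dropout mask that most perturbs the classifier's prediction, which the full method then penalizes to push decision boundaries away from dense feature regions (the cluster assumption). The sketch below is a toy, assuming a linear classifier and random mask search in place of the paper's gradient-based mask selection on deep networks; all names are illustrative, not from the paper's code.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL divergence between two discrete distributions (epsilon for stability).
    eps = 1e-12
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def predict(features, weights, mask=None):
    # Linear classifier logits; `mask` zeroes out dropped feature units.
    if mask is not None:
        features = [f * m for f, m in zip(features, mask)]
    return softmax([sum(w * f for w, f in zip(row, features)) for row in weights])

def adversarial_dropout(features, weights, drop_k=1, trials=200, seed=0):
    # Search random masks that drop `drop_k` units; keep the one whose
    # prediction diverges most (largest KL) from the clean prediction.
    rng = random.Random(seed)
    clean = predict(features, weights)
    best_mask, best_div = None, -1.0
    n = len(features)
    for _ in range(trials):
        drop = rng.sample(range(n), drop_k)
        mask = [0.0 if i in drop else 1.0 for i in range(n)]
        div = kl(clean, predict(features, weights, mask))
        if div > best_div:
            best_mask, best_div = mask, div
    return best_mask, best_div
```

In the full method, the divergence found this way becomes a training loss, so the network learns representations that remain consistent under the worst-case dropout.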

Cited by 174 publications (104 citation statements) · References 31 publications
“…The CDAN [31] conditioned the adversarial training on discriminative information conveyed in the classifier predictions. By enforcing the cluster assumption, the DTA [32] utilized adversarial dropout to learn discriminative features. The GSDA [33] took the class-wise alignment, group-wise alignment and global alignment into consideration during the feature learning.…”
Section: Related Work
confidence: 99%
“…We compare our method with class-agnostic discrepancy minimization methods: RevGrad [18], [46], DAN [13], and JAN [14]. Moreover, we compare our method with methods that explicitly or implicitly take class information or the decision boundary into consideration to learn more discriminative features: MADA [28], MCD [27], ADR [26], CDAN [31], DTA [32] and GSDA [33]. The descriptions of these methods can be found in Section 2.…”
Section: Datasets
confidence: 99%
“…The Adversarial Discriminative Domain Adaptation (ADDA) strategy [25] follows the idea of Generative Adversarial Networks, along with discriminative modeling and untied weight sharing to learn domain-invariant features, while keeping a useful representation for the discriminative task. Drop to Adapt (DTA) [26] makes use of adversarial dropout to enforce discriminative domain-invariant features. Damodaran et al [27] proposed the Deep Joint Distribution Optimal Transport (DeepJDOT) approach, which learns both the classifier and aligned data representations between the source and target domain in a single neural framework with a loss function based on Optimal Transport theory [28].…”
Section: Introduction
confidence: 99%
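The Optimal Transport machinery behind DeepJDOT rests on computing a coupling between source and target samples; the standard entropy-regularized solver (Sinkhorn iterations) can be sketched minimally as follows. This is the generic algorithm, not DeepJDOT itself, and the function name and parameters are illustrative.

```python
import math

def sinkhorn(cost, a, b, reg=0.1, iters=200):
    # Entropy-regularized optimal transport via Sinkhorn iterations:
    # returns a coupling matrix whose rows sum (approximately) to the
    # source marginal `a` and whose columns sum to the target marginal `b`.
    K = [[math.exp(-c / reg) for c in row] for row in cost]  # Gibbs kernel
    n, m = len(a), len(b)
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        for i in range(n):
            u[i] = a[i] / sum(K[i][j] * v[j] for j in range(m))
        for j in range(m):
            v[j] = b[j] / sum(K[i][j] * u[i] for i in range(n))
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

With a cost matrix that is cheap on the diagonal, the coupling concentrates mass there, which is the alignment effect DeepJDOT exploits between source and target representations.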
“…Another successful DA approach, unsupervised domain adversarial training (Ganin et al, 2016; Tzeng et al, 2015; Lee et al, 2019; Fernando et al, 2013; Kouw and Loog, 2019; Wang and Deng, 2018) relies on domain invariant features to achieve good domain adaptation. Several adversarial training methods have been proposed, including the recent ones based on discriminator framework (Tzeng et al, 2017), partial transfer learning (Cao et al, 2018) (assuming that the target domain dataset is a subset of the source domain) and using associations between the source and target domains (e.g.…”
Section: Introduction
confidence: 99%
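The domain adversarial training cited above (Ganin et al., 2016) hinges on a gradient reversal layer: identity on the forward pass, sign-flipped gradient on the backward pass, so the feature extractor ascends the domain loss that the discriminator descends. A toy scalar illustration with hand-derived gradients, using a squared-error domain loss as an illustrative stand-in for the paper's objective:

```python
# Toy 1-D setup: feature f = w * x (extractor), domain score d = v * f
# (discriminator). The discriminator descends its loss; the gradient
# reversal layer makes the extractor ascend it by negating the gradient
# that flows back through the feature.

def grl_updates(x, w, v, y_dom, lr=0.1, lam=1.0):
    f = w * x
    d = v * f
    dloss_dd = 2.0 * (d - y_dom)       # d/dd of squared-error loss (d - y)^2
    grad_v = dloss_dd * f              # discriminator gradient, used as-is
    grad_w = dloss_dd * v * x          # gradient reaching the extractor...
    grad_w_reversed = -lam * grad_w    # ...sign-flipped by the reversal layer
    return v - lr * grad_v, w - lr * grad_w_reversed
```

Starting from x = 1, w = 1, v = 1 and domain label 0, one step moves v down (reducing the domain loss) while the reversed gradient moves w up (increasing it), which is the minimax dynamic that drives the features toward domain invariance.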