2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00392

Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

Abstract: In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. […]

Cited by 1,884 publications (1,527 citation statements)
References 30 publications
“…• We test our approach on three domains from EPIC-Kitchens [8], trained end-to-end using I3D [6], and provide the first benchmark of UDA for fine-grained action recognition. Our results show that MM-SADA outperforms source-only generalisation as well as alternative domain adaptation strategies such as batch-based normalisation [29], distribution discrepancy minimisation [32] and classifier discrepancy [45].…”
Section: Introduction (citation type: mentioning)
confidence: 84%
“…We show that the self-supervision task of predicting the correspondence of multiple modalities is an effective domain adaptation method. On its own, this can outperform domain alignment methods [32,45] by jointly optimising for the self-supervised task over both domains. Together with adversarial training, the proposed approach outperforms non-adapted models by 4.8%.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
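For readers unfamiliar with the pretext task this excerpt refers to, a minimal sketch follows: a binary head predicts whether RGB and flow features come from the same clip, which needs no class labels and can therefore be optimised on both domains. This is an illustrative assumption of how such a correspondence task can be set up, not the citing paper's implementation; the module name, layer sizes, and negative-sampling scheme are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrespondenceHead(nn.Module):
    """Hypothetical binary classifier: do an RGB feature and a
    flow feature come from the same clip?"""
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # single logit: same clip vs. not
        )

    def forward(self, rgb_feat, flow_feat):
        return self.net(torch.cat([rgb_feat, flow_feat], dim=1))

def correspondence_loss(head, rgb_feat, flow_feat):
    # Positives: aligned RGB/flow pairs. Negatives: flow features
    # rolled by one position within the batch, so each RGB feature
    # is paired with flow from a different clip. No class labels
    # are needed, so the loss applies to source and target alike.
    pos = head(rgb_feat, flow_feat)
    neg = head(rgb_feat, torch.roll(flow_feat, shifts=1, dims=0))
    logits = torch.cat([pos, neg], dim=0)
    targets = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)], dim=0)
    return F.binary_cross_entropy_with_logits(logits, targets)
```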
“…Several works [26,9,41] exploit semi-supervised learning for domain adaptation. In addition, MCD [38] and ADR [37] use a minimax training method to push target feature distributions away from the decision boundary, where both methods consist of a feature extractor and a pair of classifiers. More precisely, in [37], two different classifiers are sampled via stochastic dropout.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
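The minimax scheme this excerpt describes is the one proposed by the indexed paper: the discrepancy is the mean L1 distance between the two classifiers' softmax outputs, and training alternates between (A) supervised source training, (B) maximising the discrepancy on target samples with the generator fixed, and (C) minimising it with the classifiers fixed. A condensed PyTorch sketch of that three-step procedure follows; the optimiser setup, batch tensors, and the number of inner generator steps `k` are assumed boilerplate, not prescribed details.

```python
import torch.nn.functional as F

def discrepancy(logits1, logits2):
    # Mean absolute difference between the two classifiers'
    # softmax outputs, as in the indexed paper.
    return (logits1.softmax(dim=1) - logits2.softmax(dim=1)).abs().mean()

def mcd_step(G, F1, F2, opt_g, opt_f, x_s, y_s, x_t, k=4):
    """One adaptation round. G: feature generator; F1, F2: the two
    classifiers; opt_g/opt_f optimise G's and F1+F2's parameters."""
    # Step A: supervised training of G, F1, F2 on source data.
    opt_g.zero_grad(); opt_f.zero_grad()
    feat_s = G(x_s)
    loss_a = F.cross_entropy(F1(feat_s), y_s) + F.cross_entropy(F2(feat_s), y_s)
    loss_a.backward(); opt_g.step(); opt_f.step()

    # Step B: fix G (detach features); train F1/F2 to MAXIMISE
    # their disagreement on target while staying accurate on source.
    opt_f.zero_grad()
    feat_s, feat_t = G(x_s).detach(), G(x_t).detach()
    loss_b = (F.cross_entropy(F1(feat_s), y_s)
              + F.cross_entropy(F2(feat_s), y_s)
              - discrepancy(F1(feat_t), F2(feat_t)))
    loss_b.backward(); opt_f.step()

    # Step C: fix F1/F2 (only opt_g steps); train G to MINIMISE the
    # discrepancy, pulling target features off the decision boundary.
    for _ in range(k):
        opt_g.zero_grad()
        feat_t = G(x_t)
        loss_c = discrepancy(F1(feat_t), F2(feat_t))
        loss_c.backward(); opt_g.step()
```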
“…Unlike the prior works [38,37], the proposed algorithm leverages a unified objective function to optimize all network parameters. The overall loss function is defined as a weighted sum of four objective functions:…”
Section: Drop To Adapt (citation type: mentioning)
confidence: 99%
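The quoted sentence breaks off before the formula itself. In generic form, such a unified objective is a weighted combination of a supervised task loss and auxiliary terms; the symbols below are placeholders only, since the excerpt does not name the four components.

```latex
% Hypothetical placeholder form; the excerpt does not name the terms.
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{task}}
  + \lambda_1 \mathcal{L}_{1} + \lambda_2 \mathcal{L}_{2} + \lambda_3 \mathcal{L}_{3}
```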