2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00278

Gotta Adapt 'Em All: Joint Pixel and Feature-Level Domain Adaptation for Recognition in the Wild

Abstract: Recent developments in deep domain adaptation have allowed knowledge transfer from a labeled source domain to an unlabeled target domain at the level of intermediate features or input pixels. We propose that advantages may be derived by combining them, in the form of different insights that lead to a novel design and complementary properties that result in better performance. At the feature level, inspired by insights from semi-supervised learning, we propose a classification-aware domain adversarial neural ne…

Cited by 46 publications (35 citation statements)
References 50 publications (149 reference statements)
“…A prominent approach towards domain adaptation for semantic segmentation is distribution alignment by adversarial learning [13,10], where the alignment may happen at different representation layers, such as pixel-level [17,49], feature-level [17,18] or output-level [40]. Despite these efforts, discovering all modes of the data distribution is a key challenge for domain adaptation [39], akin to difficulties also faced by generative tasks [2,27].…”
Section: Introduction
Confidence: 99%
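The adversarial distribution alignment quoted above can be made concrete with a minimal sketch (NumPy, with hypothetical function names; this is not code from any cited paper). A logistic domain discriminator is trained to separate source from target features, while the feature extractor at the chosen level (pixel, feature, or output) is trained to fool it:

```python
import numpy as np

def discriminator_loss(d_src, d_tgt):
    """Binary cross-entropy for a domain discriminator whose output is
    P(domain = source): source scores are pushed toward 1, target
    scores toward 0."""
    d_src = np.asarray(d_src, dtype=float)
    d_tgt = np.asarray(d_tgt, dtype=float)
    return -(np.mean(np.log(d_src)) + np.mean(np.log(1.0 - d_tgt)))

def alignment_loss(d_tgt):
    """The feature extractor is updated to fool the discriminator:
    target features should be scored as source (the non-saturating
    GAN-style objective)."""
    d_tgt = np.asarray(d_tgt, dtype=float)
    return -np.mean(np.log(d_tgt))
```

When the discriminator cannot do better than chance (all scores 0.5), its loss saturates at 2·log 2 and the two feature distributions are, to the discriminator, indistinguishable — which is the alignment criterion these methods optimize.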
“…Adversarial Discriminative Domain Adaptation (ADDA) (Tzeng et al. 2017) trains two feature extractors for source and target domains respectively, and produces embeddings fooling the discriminator. Other works optimize the performance on target domain by capturing complex multimode structures (Pei et al. 2018; Long et al. 2018), exploring task-specific decision boundaries (Saito et al. 2018b; Tran et al. 2019), aligning the attention regions (Kang et al. 2018) and applying structure-aware alignment (Ma, Zhang, and Xu 2019).…”
Section: Related Work
Confidence: 99%
“…Firstly, feature distributions can only be aligned to a certain level, since model capacity of the feature extractor could be large enough to compensate for the less aligned feature distributions. More importantly, given practical difficulties of aligning the source and target distributions with high granularity to the category level (especially for complex distributions with multi-mode structures), the task classifier obtained by minimizing the empirical source risk cannot well generalize to the target data due to an issue of mode collapse (Kurmi and Namboodiri 2019; Tran et al. 2019), i.e., the joint distributions of feature and category are not well aligned across the source and target domains.…”
Section: Introduction
Confidence: 99%
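One published remedy for the marginal-alignment failure described above is to condition the domain discriminator on the classifier's predictions, so that the joint feature/category distribution, not just the marginal feature distribution, is compared across domains. The sketch below (NumPy, illustrative names) shows a multilinear conditioning map in the spirit of conditional adversarial adaptation (Long et al. 2018, cited above as capturing multimode structures):

```python
import numpy as np

def conditioned_input(features, probs):
    """Flattened outer product f ⊗ p of features and softmax predictions.
    Feeding this to the domain discriminator lets it detect class-conditional
    misalignment that a discriminator on features alone would miss (sketch in
    the spirit of conditional adversarial adaptation, Long et al. 2018)."""
    f = np.asarray(features, dtype=float)  # shape (n, d) feature vectors
    p = np.asarray(probs, dtype=float)     # shape (n, k), rows sum to 1
    # Entry [n, d, k] = f[n, d] * p[n, k], flattened per example to d*k dims.
    return np.einsum('nd,nk->ndk', f, p).reshape(f.shape[0], -1)
```

Each example's d·k-dimensional input weights its feature vector by its predicted class probabilities, so the discriminator sees features grouped by (soft) category and its gradients push the extractor toward class-conditional, joint alignment.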