2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00154

Domain Adaptation for Structured Output via Discriminative Patch Representations

Abstract: Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks. However, models trained on one data domain may not generalize well to other domains without annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. We propose to learn discriminative feature representations of patches in the …
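One way to realize the patch-level representation described in the abstract is a lightweight classification head that maps dense features to a distribution over discrete "patch modes". The following is a minimal PyTorch sketch of that idea, not the authors' exact pipeline; the module name, the 1x1 head, and the number of modes K are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchModeHead(nn.Module):
    """1x1 conv head mapping dense features to a K-way distribution over
    discrete 'patch modes' (hypothetical stand-in for learned patch
    categories); each spatial location summarizes one patch."""
    def __init__(self, in_channels: int, num_modes: int):
        super().__init__()
        self.head = nn.Conv2d(in_channels, num_modes, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) -> (N, K, H, W) softmax over patch modes
        return F.softmax(self.head(feats), dim=1)

# On source data the head can be supervised with per-patch mode labels;
# on target data the K-way maps can be aligned adversarially (see the
# discriminator sketch further below).
patch_head = PatchModeHead(in_channels=2048, num_modes=50)
src_modes = patch_head(torch.randn(2, 2048, 32, 64))  # toy feature map
```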

Cited by 348 publications (221 citation statements). References 33 publications.
“…[35] performs the alignment on the predictions of the segmentation network, and [39] proposes to do it on the weighted self-information of the prediction probabilities. [36] and [23] extend the approach of [35] with patch-level alignment and a category-level adversarial loss, respectively. Another use of adversarial training for UDA is proposed in [30,31], where the discrepancy between two instances of the same input from the target domain is minimized while the classification layer tries to maximize it.…”
Section: Related Work
confidence: 99%
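The weighted self-information alignment attributed to [39] in the statement above has a simple closed form: per pixel and class, I = -p log p, where p is the softmax probability. A minimal sketch, assuming PyTorch and (N, C, H, W) logits; the function name and the eps stabilizer are ours:

```python
import torch
import torch.nn.functional as F

def weighted_self_information(logits: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Per-pixel, per-class weighted self-information I = -p * log(p).

    logits: (N, C, H, W) raw segmentation scores. The returned map has the
    same shape as the softmax output.
    """
    p = F.softmax(logits, dim=1)
    return -p * torch.log(p + eps)
```

The resulting map is typically passed to a fully convolutional discriminator in place of the raw probabilities.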
“…There has been notable recent interest in learning disentangled representations in various domains, such as computer vision [25], ML fairness [54,69], and domain adaptation [65,78], as they promise to enhance robustness, interpretability, and generalization to unseen examples on downstream tasks. The overall goal of disentangling is to improve the quality of the latent representations by explicitly separating the underlying factors of the observed data [38].…”
Section: Learning Disentangled Representations
confidence: 99%
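As a concrete toy illustration of what "explicitly separating the underlying factors" can look like architecturally, here is a minimal encoder whose latent code is split into two blocks; the block names, sizes, and the content/style split are illustrative assumptions, not a specific method from the cited works:

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Toy encoder that splits the latent code into two blocks intended to
    capture separate factors (e.g., content vs. style). Block semantics are
    only enforced by whatever losses are applied downstream."""
    def __init__(self, in_dim: int, content_dim: int = 16, style_dim: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.content_head = nn.Linear(64, content_dim)
        self.style_head = nn.Linear(64, style_dim)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.content_head(h), self.style_head(h)

content, style = DisentangledEncoder(in_dim=32)(torch.randn(8, 32))
```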
“…[28] employed an adversarial learning scheme that leverages this property and achieved a significant performance improvement over previous methods. Recently, this work was extended to also exploit the more powerful local similarities via patch matching [29]. Curriculum Model Adaptation adapts models from an easier task to a harder one in a step-by-step fashion and has achieved strong performance in multiple domain adaptation scenarios [34], [6], [5], [26].…”
Section: Domain Adaptation
confidence: 99%
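The adversarial scheme credited to [28] above is commonly instantiated as output-space alignment: a fully convolutional discriminator tries to tell source predictions from target ones, while the segmentation network learns to fool it. A minimal sketch under assumptions we introduce (the discriminator architecture, the source=1/target=0 labeling convention, the loss weight lam, and ignore_index=255):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputSpaceDiscriminator(nn.Module):
    """Fully convolutional discriminator over softmax segmentation maps."""
    def __init__(self, num_classes: int, ndf: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, 4, stride=2, padding=1),  # patch-wise score map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def segmentation_step(seg_net, disc, src_img, src_lbl, tgt_img, lam=0.001):
    """One update of the segmentation network (discriminator held fixed)."""
    # supervised loss on labeled source data; seg_net outputs (N, C, H, W) logits
    seg_loss = F.cross_entropy(seg_net(src_img), src_lbl, ignore_index=255)
    # adversarial loss: push target predictions to be scored as source (label 1)
    tgt_prob = F.softmax(seg_net(tgt_img), dim=1)
    d_out = disc(tgt_prob)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return seg_loss + lam * adv_loss
```

In the alternating step (not shown), the discriminator is trained to output 1 on source prediction maps and 0 on target ones.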