2021
DOI: 10.48550/arxiv.2105.02001
Preprint

Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation

Abstract: Deep convolutional neural networks have considerably improved state-of-the-art results for semantic segmentation. Nevertheless, even modern architectures lack the ability to generalize well to a test dataset that originates from a different domain. To avoid the costly annotation of training data for unseen domains, unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain. Previous work has mainly focused on minimizing the d…

Cited by 2 publications (5 citation statements) | References 46 publications
“…2) Self-Training: A very successful strategy, also found in several state-of-the-art methods for UDA, is self-training (ST) [7], [12], [17], [33], [35]. It converts the target predictions into pseudo-labels, which are then used to minimize the cross-entropy.…”
Section: B. Unsupervised Domain Adaptation (mentioning)
confidence: 99%
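
To make the self-training recipe in the excerpt above concrete, the following PyTorch-style sketch shows how target predictions can be converted into hard pseudo-labels and reused as cross-entropy targets. All names (model, target_images) are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def self_training_loss(model, target_images):
    # Forward pass on unlabeled target images: (B, C, H, W) segmentation logits.
    logits = model(target_images)
    # Convert the model's own predictions into hard pseudo-labels (B, H, W).
    with torch.no_grad():
        pseudo_labels = logits.argmax(dim=1)
    # Minimize cross-entropy between the current predictions and the pseudo-labels.
    return F.cross_entropy(logits, pseudo_labels)
```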
“…However, ST only works if high-quality pseudo-labels (PL) can be provided. One line of work attempts to improve their quality by using an ensemble of three models [7], a memory-efficient temporal ensemble [12], or an average of different outputs of the same network [34]. Nevertheless, most ST-based approaches try to filter out the noisy target predictions by using reliability measures.…”
Section: B. Unsupervised Domain Adaptation (mentioning)
confidence: 99%
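
One common reliability measure for filtering noisy pseudo-labels is the softmax confidence of the prediction. The sketch below illustrates that idea only; the threshold value, function name, and ignore_index convention are assumptions for illustration, not the filtering rule of any specific cited method.

```python
import torch
import torch.nn.functional as F

def filtered_pseudo_label_loss(logits, confidence_threshold=0.9, ignore_index=255):
    # Per-pixel softmax confidence and hard pseudo-labels.
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)              # (B, C, H, W)
        confidence, pseudo_labels = probs.max(dim=1)  # both (B, H, W)
        # Low-confidence pixels are marked so the loss ignores them.
        pseudo_labels[confidence < confidence_threshold] = ignore_index
    return F.cross_entropy(logits, pseudo_labels, ignore_index=ignore_index)
```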
“…These methods aim to minimize a series of adversarial losses to learn invariant representations across domains, thereby aligning source and target feature distributions. More recently, an alternative research line for reducing domain shift focuses on building schemes based on the self-training (ST) framework [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32]. These works iteratively train the model on both the labeled source-domain data and generated pseudo-labels for the unlabeled target-domain data, thus achieving alignment between the source and target domains.…”
Section: Introduction (mentioning)
confidence: 99%
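
The iterative ST loop described in the excerpt above can be pictured as a single training step that combines supervised cross-entropy on labeled source data with pseudo-label cross-entropy on target data. This is a minimal sketch under that assumption; the loss weighting lambda_tgt and all names are hypothetical rather than the exact recipe of any cited work.

```python
import torch.nn.functional as F

def st_training_step(model, optimizer, src_images, src_labels,
                     tgt_images, tgt_pseudo_labels,
                     lambda_tgt=1.0, ignore_index=255):
    optimizer.zero_grad()
    # Supervised cross-entropy on the labeled source domain.
    src_loss = F.cross_entropy(model(src_images), src_labels,
                               ignore_index=ignore_index)
    # Pseudo-label cross-entropy on the unlabeled target domain.
    tgt_loss = F.cross_entropy(model(tgt_images), tgt_pseudo_labels,
                               ignore_index=ignore_index)
    loss = src_loss + lambda_tgt * tgt_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```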