2020
DOI: 10.48550/arxiv.2003.08040
Preprint
Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation

Abstract: We consider the problem of unsupervised domain adaptation for semantic segmentation by easing the domain shift between the source domain (synthetic data) and the target domain (real data) in this work. State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue. Based on the observation that stuff categories usually share similar appearances across images of different domains while things (i.e. object instances) have much larger differences, we propo…

Cited by 8 publications (3 citation statements)
References 35 publications
“…Specifically, the model F could be for a semantic segmentation, object detection, or image classification task. The data could be labeled source data with supervised source losses, or unlabeled target data with unsupervised target losses such as adversarial loss [81,52,86,93,58,46,32,70], self-training loss [104,103,45,88,40,94], or entropy loss [86,18], etc. Below are a supervised loss and a self-training-based unsupervised loss for reference:…”
Section: Robust Domain Adaptation
Confidence: 99%
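The two losses this statement refers to can be illustrated with a minimal NumPy sketch. This is a hedged illustration, not the cited papers' exact formulation: `supervised_loss` is plain cross-entropy on labeled source data, and `self_training_loss` builds pseudo-labels from confident target predictions (the confidence threshold `0.9` is an assumed hyperparameter) and applies cross-entropy only where the prediction is confident.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def supervised_loss(logits, labels):
    # Standard cross-entropy against ground-truth source labels.
    probs = softmax(logits)
    n = labels.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

def self_training_loss(logits, threshold=0.9):
    # Pseudo-labels: argmax predictions, kept only where the max
    # probability exceeds the confidence threshold; the loss is
    # cross-entropy against those pseudo-labels.
    probs = softmax(logits)
    conf = probs.max(axis=-1)
    pseudo = probs.argmax(axis=-1)
    mask = conf > threshold
    if not mask.any():
        return 0.0  # no confident predictions, no self-training signal
    picked = probs[mask, pseudo[mask]]
    return -np.log(picked + 1e-12).mean()
```

In practice the pseudo-labels are recomputed periodically and the threshold is often scheduled per class, but the masking-plus-cross-entropy core shown here is the common pattern.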
“…Most existing methods take two typical approaches, namely, adversarial learning based [24,59,62,60,40,26,50,21] and self-training based [35,36,65,32,67,44]. The adversarial learning based methods perform domain alignment by adopting a discriminator that strives to differentiate the segmentations in the space of inputs [24,66,35,12,32], features [61,24,11,66,40], or outputs [59,62,41,60,26,42,63,50,21]. The self-training based methods exploit self-training to predict pseudo labels for target-domain data and then exploit the predicted pseudo labels to fine-tune the segmentation model iteratively.…”
Section: Domain Adaptive Image Segmentation
Confidence: 99%
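The adversarial-alignment idea described above can be sketched in a few lines. This is an assumed simplification: the discriminator output is reduced to a single probability per image, whereas real output-space discriminators (e.g. AdaptSegNet-style) are convolutional and operate per pixel. The discriminator is trained to label source predictions 1 and target predictions 0; the segmenter is then updated with an adversarial loss that pushes its target outputs to look source-like.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy for a scalar discriminator probability in (0, 1).
    eps = 1e-12
    return -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))

def discriminator_loss(d_src, d_tgt):
    # Discriminator step: learn to output 1 on source segmentations
    # and 0 on target segmentations.
    return bce(d_src, 1.0) + bce(d_tgt, 0.0)

def adversarial_loss(d_tgt):
    # Segmenter step: fool the discriminator by making target outputs
    # be classified as source (label 1), which aligns the two
    # output distributions.
    return bce(d_tgt, 1.0)
```

Training alternates the two losses: one discriminator update on detached predictions, then one segmenter update that backpropagates `adversarial_loss` through the target branch.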
“…[72], entropy space [77], patch space [73], and context space [33]. They can also be learnt through sample or class joint AT [47,79], multi-level AT [63], regularized AT [83], or intra-domain AT [52].…”
Section: Introduction
Confidence: 99%