2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01265
Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation

Cited by 242 publications (237 citation statements)
References 28 publications
“…They perform their experiments on HSR-RSIs and multimodal remote sensing datasets, showing that LUM performance improves over other state-of-the-art domain adaptation results. In addition, more recently explored methods [143]-[145] have adopted the adversarial training framework, in which the feature network generates domain-invariant features to fool a discriminator that operates at the image level. Another unsupervised-domain-adaptation approach to semantic segmentation is pseudo-label retraining [146], which fine-tunes the source-trained model by taking high-confidence predictions as pseudo ground truth for the unlabeled images.…”
Section: Summary of DL-based LUM Methods
confidence: 99%
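The pseudo-label retraining loop the excerpt describes can be written in a few lines of PyTorch. This is a minimal sketch assuming a segmentation model that returns per-pixel class logits; the function names and the confidence threshold are illustrative, not taken from [146].

```python
# Minimal sketch of pseudo-label retraining for UDA segmentation.
# `model` is assumed to return per-pixel logits of shape (B, C, H, W);
# names and the 0.9 threshold are hypothetical.
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(model, target_images, threshold=0.9, ignore_index=255):
    # Predict on unlabeled target images and keep only confident pixels.
    probs = F.softmax(model(target_images), dim=1)
    conf, pseudo = probs.max(dim=1)          # per-pixel confidence and class
    pseudo[conf < threshold] = ignore_index  # mask out low-confidence pixels
    return pseudo

def retrain_step(model, optimizer, target_images, ignore_index=255):
    # Fine-tune the source-trained model on its own confident predictions.
    pseudo = make_pseudo_labels(model, target_images)
    loss = F.cross_entropy(model(target_images), pseudo,
                           ignore_index=ignore_index)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```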
“…More recent approaches [33,16,19,57,23,14] employed adversarial training at the image level. [2,15,30,42,48,45] leveraged adversarial training to learn domain-invariant representations at the feature level. Some works [38,51,2,49,22] use self-training to mitigate the domain gap by assigning labels to the most confident samples in the target domain.…”
Section: Related Work
confidence: 99%
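The feature-level adversarial scheme these works share can be sketched as follows: a discriminator learns to tell source from target features, while the feature extractor is updated to fool it. All module and parameter names here (`features`, `classifier`, `discriminator`, `lam`) are placeholders, not the cited methods' exact components.

```python
# Hedged sketch of feature-level adversarial alignment in PyTorch.
import torch
import torch.nn.functional as F

def adversarial_step(features, classifier, discriminator,
                     opt_seg, opt_d, src_img, src_lbl, tgt_img, lam=0.001):
    # 1) Supervised segmentation loss on labeled source images.
    f_src = features(src_img)
    seg_loss = F.cross_entropy(classifier(f_src), src_lbl)

    # 2) Adversarial loss: label target features as "source" (0) so the
    #    feature extractor learns domain-invariant representations.
    f_tgt = features(tgt_img)
    d_tgt = discriminator(f_tgt)
    adv_loss = F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))

    opt_seg.zero_grad()
    (seg_loss + lam * adv_loss).backward()
    opt_seg.step()

    # 3) Train the discriminator to distinguish the two domains.
    d_src = discriminator(f_src.detach())
    d_tgt = discriminator(f_tgt.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_src, torch.zeros_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    return seg_loss.item(), d_loss.item()
```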
“…Adversarial-training-based methods usually learn domain-invariant feature representations to achieve adaptation, while self-training-based methods usually mitigate the domain gap iteratively through various strategies. Notably, many works [33,44,45,47] integrate both to achieve better performance.…”
Section: Introduction
confidence: 99%
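One common way to integrate the two, hedged as a sketch rather than any cited paper's exact formulation: the joint objective simply sums a supervised source term, an adversarial term, and a pseudo-label term. The loss weights `lam_adv` and `lam_st` are illustrative defaults.

```python
# Hypothetical joint objective combining adversarial training and
# self-training; weights are illustrative, not from [33,44,45,47].
import torch
import torch.nn.functional as F

def joint_loss(src_logits, src_labels, d_tgt_logits, tgt_logits, pseudo_labels,
               lam_adv=0.001, lam_st=1.0, ignore_index=255):
    # Supervised cross-entropy on labeled source pixels.
    seg = F.cross_entropy(src_logits, src_labels)
    # Adversarial term: push target features toward the "source" (0) label.
    adv = F.binary_cross_entropy_with_logits(
        d_tgt_logits, torch.zeros_like(d_tgt_logits))
    # Self-training term on confident target pseudo labels.
    st = F.cross_entropy(tgt_logits, pseudo_labels, ignore_index=ignore_index)
    return seg + lam_adv * adv + lam_st * st
```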