Instance Adaptive Self-training for Unsupervised Domain Adaptation
2020
DOI: 10.1007/978-3-030-58574-7_25

Cited by 220 publications (148 citation statements).
References 29 publications.

“…The goal of unsupervised domain adaptive segmentation is to train a segmentation network that achieves good performance in an unlabeled target domain when only the source domain data are annotated. Methodology-wise, existing methods build on three techniques: 1) adversarial learning [7,22,32,33,36,42,44,46], 2) image-to-image translation [2,10,12,14,17,25,31,40] and 3) self-training [1,15,18,19,24,36,43,47,49,53].…”
Section: Related Work
confidence: 99%
“…Versatility. As Table 6 shows, we justify the versatility of our method by testing its performance with a different pseudo-label thresholding scheme: instance adaptive selection (IAS) [24]. Instead …” (Table 6 caption: Justification of the versatility of our method on GTA→Cityscapes.)
Section: Training Details
confidence: 99%
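The IAS scheme cited above replaces a single global confidence cutoff with thresholds that adapt per image and per class. Below is a minimal PyTorch sketch of that general idea, not the authors' exact procedure: the kept fraction alpha, the momentum beta, the ignore index 255, and the function name are illustrative assumptions.

```python
import torch

def instance_adaptive_thresholds(probs, prev_thresh, alpha=0.2, beta=0.9):
    """Per-image, per-class pseudo-label thresholds (illustrative sketch).

    probs:       (C, H, W) softmax probabilities for one target image
    prev_thresh: (C,) running class-wise thresholds carried across images
    alpha:       fraction of most-confident pixels kept per class (assumed value)
    beta:        momentum blending running and image-level thresholds (assumed value)
    """
    conf, pred = probs.max(dim=0)                  # pixel-wise confidence and argmax class
    thresh = prev_thresh.clone()
    for c in range(probs.shape[0]):
        conf_c = conf[pred == c]
        if conf_c.numel() == 0:
            continue                               # class absent in this image: keep old threshold
        k = max(1, int(alpha * conf_c.numel()))
        img_thresh = conf_c.topk(k).values.min()   # confidence of the top-alpha fraction
        thresh[c] = beta * prev_thresh[c] + (1 - beta) * img_thresh
    ignore = torch.full_like(pred, 255)            # 255 marks pixels excluded from the loss
    pseudo = torch.where(conf >= thresh[pred], pred, ignore)
    return pseudo, thresh
```

Compared with one fixed threshold, blending a per-image percentile with a running class-wise threshold keeps some pseudo-labels even for rare classes, which is the motivation usually given for instance-adaptive selection.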
“…Recently, self-training-based UDA has emerged as a powerful means to counter unknown labels in the target domain [33], surpassing adversarial-learning-based methods on many discriminative UDA benchmarks, e.g., classification and segmentation (i.e., pixel-wise classification) [31,23,26]. The core idea behind deep self-training-based UDA is to iteratively generate a set of one-hot (or smoothed) pseudo-labels in the target domain and then retrain the network on the target data with these pseudo-labels [33].…”
Section: Introduction
confidence: 99%
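To make the self-training loop described in this quote concrete, here is a minimal PyTorch sketch of one pseudo-labeling and retraining round for segmentation. It is a hedged illustration rather than any cited paper's exact recipe: the fixed confidence threshold, the function name, and the assumption of a non-shuffled target loader yielding image tensors are all simplifications.

```python
import torch
import torch.nn.functional as F

def self_training_round(model, optimizer, target_loader, conf_thresh=0.9, ignore_index=255):
    """One round of pseudo-label self-training on unlabeled target images (sketch)."""
    # 1) Pseudo-label generation: run the current model on target data without gradients
    #    and keep only confident, one-hot predictions.
    model.eval()
    pseudo_labels = []
    with torch.no_grad():
        for images in target_loader:                 # assumes a deterministic, non-shuffled loader
            probs = F.softmax(model(images), dim=1)  # (B, C, H, W)
            conf, pred = probs.max(dim=1)
            pred[conf < conf_thresh] = ignore_index  # drop low-confidence pixels
            pseudo_labels.append(pred.cpu())

    # 2) Retraining: treat the retained pseudo-labels as ground truth on the target domain.
    model.train()
    for images, pseudo in zip(target_loader, pseudo_labels):
        logits = model(images)
        loss = F.cross_entropy(logits, pseudo.to(logits.device), ignore_index=ignore_index)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```

In practice these two steps are repeated for several rounds, and the "smoothed" variant mentioned in the quote would replace the hard argmax labels with soft targets; both refinements are omitted here for brevity.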