2020
DOI: 10.1007/978-3-030-58601-0_32

Two-Phase Pseudo Label Densification for Self-training Based Domain Adaptation

Abstract: Recently, deep self-training approaches have emerged as a powerful solution to unsupervised domain adaptation. The self-training scheme involves iterative processing of target data; it generates target pseudo labels and retrains the network. However, since only the confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice. We see this as critical because the resulting insufficient training signals lead to a suboptimal, error-prone mod…
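The step the abstract describes, keeping only confident predictions as target pseudo labels, can be illustrated with a short sketch. This is a minimal PyTorch example, not the paper's implementation; the tensor shapes (semantic segmentation), the threshold value, and the ignore index 255 are illustrative assumptions.

```python
import torch

def generate_pseudo_labels(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Confidence-thresholded pseudo-labelling (illustrative sketch).

    logits: (N, C, H, W) network outputs on target images.
    Pixels whose maximum softmax probability falls below `threshold`
    receive the ignore index 255 and contribute no training signal,
    which is why the resulting pseudo-label maps are sparse.
    """
    probs = torch.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)        # conf, labels: (N, H, W)
    labels[conf < threshold] = 255         # drop low-confidence pixels
    return labels
```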

Cited by 89 publications (47 citation statements)
References 31 publications
“…Many works have focused on the first step: how to obtain believable pseudo labels. These include weakly-supervised learning via image-level domain transfer and annotation information [66], exploiting temporal cues with a tracker on unlabeled videos [67], relying on pseudo labels of an intermediate domain and the ground truth of the source domain in an imbalanced sampling way [68], using k-reciprocal encoding and a clustering algorithm in feature space [69], and treating easy and hard pseudo labels differently [70].…”
Section: Related Work (mentioning)
confidence: 99%
“…Like other computer vision problems, this cut-and-paste approach also suffers from performance degradation when there is a distribution mismatch between the training domain (or source domain) and the test domain (or target domain). Domain adaptation methods have been introduced to tackle this domain shift problem by aligning features [39]-[46], [61]-[65] and by self-training schemes [66]-[70]. However, these approaches require knowledge of the target domain.…”
Section: Introduction (mentioning)
confidence: 99%
“…Semantic segmentation adaptation. We divide the existing UDA semantic segmentation methods into two categories: domain alignment [6,7,18,19] and self-training [11,12,15,24,25]; existing state-of-the-art approaches are usually a combination of the two. The main motivation of domain alignment is to reduce the discrepancy between the two domains.…”
Section: Related Work (mentioning)
confidence: 99%
“…In CRST [25], a confidence regularized self-training method is proposed to address the problem of overconfident wrong labels. [15] presents a two-phase pseudo label densification framework based on voting and easy-hard classification. In [12], weak labels are explored to enhance pseudo label learning.…”
Section: Related Work (mentioning)
confidence: 99%
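The excerpt above only names the two phases of [15]'s densification framework. As a loose illustration of what a voting-based first phase could look like (purely an assumption, not the paper's actual algorithm), neighbouring confident pixels could vote on the class of each unlabeled pixel; the window size, vote threshold, and function name below are hypothetical.

```python
import torch
import torch.nn.functional as F

def vote_densify(labels: torch.Tensor, num_classes: int,
                 window: int = 7, min_votes: int = 25) -> torch.Tensor:
    """Hypothetical spatial-voting densification of a sparse pseudo-label map.

    labels: (N, H, W) LongTensor where 255 marks unlabeled pixels.
    Each unlabeled pixel adopts the majority class of the confident pixels in
    a window x window neighbourhood if that class collects >= min_votes votes.
    """
    valid = labels != 255
    one_hot = F.one_hot(labels.clamp(max=num_classes - 1), num_classes)
    one_hot = one_hot.permute(0, 3, 1, 2).float() * valid.unsqueeze(1).float()
    # Per-class vote counts in each window (avg_pool * window**2 == windowed sum).
    votes = F.avg_pool2d(one_hot, window, stride=1, padding=window // 2) * window ** 2
    top_votes, top_class = votes.max(dim=1)
    densified = labels.clone()
    fill = (~valid) & (top_votes >= min_votes)   # only fill unlabeled, well-supported pixels
    densified[fill] = top_class[fill]
    return densified
```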
“…Recently, self-training based UDA has emerged as a powerful means to counter unknown labels in the target domain [33], surpassing adversarial learning-based methods on many discriminative UDA benchmarks, e.g., classification and segmentation (i.e., pixel-wise classification) [31,23,26]. The core idea behind deep self-training based UDA is to iteratively generate a set of one-hot (or smoothed) pseudo-labels in the target domain, and then retrain the network on these pseudo-labels with target data [33].…”
Section: Introduction (mentioning)
confidence: 99%
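The iterative loop this excerpt describes, regenerating pseudo-labels and then retraining on them, can be outlined as below. The sketch reuses the hypothetical generate_pseudo_labels helper from the earlier snippet; the model/optimizer interfaces, the cross-entropy loss with ignore index 255, and caching all batches in memory are simplifying assumptions, not the cited works' code.

```python
import torch

def self_training_round(model, target_loader, optimizer, threshold=0.9):
    """One round of the generic self-training loop: pseudo-label, then retrain."""
    # Phase 1: regenerate pseudo-labels on the target data with the current model.
    model.eval()
    cached = []
    with torch.no_grad():
        for images in target_loader:  # images: (N, 3, H, W) target batches
            cached.append((images, generate_pseudo_labels(model(images), threshold)))

    # Phase 2: retrain the network on its own pseudo-labels; ignored pixels (255)
    # contribute no loss, which is where the sparsity problem bites.
    model.train()
    criterion = torch.nn.CrossEntropyLoss(ignore_index=255)
    for images, labels in cached:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Repeating self_training_round for several rounds gives the iterative scheme;
# in practice each round would also mix in labeled source data.
```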