2020
DOI: 10.48550/arxiv.2008.12197
Preprint

Instance Adaptive Self-Training for Unsupervised Domain Adaptation

Abstract: The divergence between labeled training data and unlabeled testing data is a significant challenge for recent deep learning models. Unsupervised domain adaptation (UDA) attempts to solve this problem. Recent works show that self-training is a powerful approach to UDA. However, existing methods have difficulty balancing scalability and performance. In this paper, we propose an instance adaptive self-training framework for UDA on the task of semantic segmentation. To effectively improve the quality of pseud…
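
For readers unfamiliar with the approach the abstract refers to: in self-training for UDA, a model trained on labeled source data predicts labels for the unlabeled target images, high-confidence predictions are kept as pseudo labels, and the model is re-trained on them. The sketch below shows only that generic recipe, not the paper's instance adaptive method; the fixed 0.9 threshold, the ignore index, and the function names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255  # pixels dropped from the loss

@torch.no_grad()
def generate_pseudo_labels(model, image, threshold=0.9):
    """Confidence-thresholded pseudo labels for one target image.

    `model` is any segmentation network returning per-pixel class logits of
    shape (1, C, H, W). A single global threshold is used here; the paper's
    instance adaptive selection is *not* implemented.
    """
    probs = F.softmax(model(image), dim=1)        # (1, C, H, W)
    confidence, label = probs.max(dim=1)          # both (1, H, W)
    label[confidence < threshold] = IGNORE_INDEX  # discard uncertain pixels
    return label

def self_training_step(model, optimizer, image, pseudo_label):
    """One re-training step on a target image and its pseudo label."""
    loss = F.cross_entropy(model(image), pseudo_label,
                           ignore_index=IGNORE_INDEX)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```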

Cited by 17 publications (53 citation statements).
References 33 publications (67 reference statements).
“…In MFA, the ensemble of two warm-up models further improves the accuracy of the pseudo labels. However, the offline manner is prone to ignoring hard samples [10] and to longer exposure to potentially noisy labels. We thus introduce online pseudo labels as a supplement.…”
Section: Offline Pseudo Label
confidence: 99%
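
For context, the excerpt contrasts "offline" pseudo labels, generated once by warm-up models and kept fixed during re-training, with "online" pseudo labels recomputed from the current model at every step. A minimal sketch of one way the two could be combined follows, assuming the offline map marks discarded pixels with an ignore index; the fusion rule, threshold, and names are illustrative assumptions, not the cited paper's actual formulation.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255

@torch.no_grad()
def combined_pseudo_label(model, image, offline_label, online_threshold=0.8):
    """Fill pixels the offline pass discarded with online predictions.

    `offline_label` (1, H, W) was produced before re-training and stays
    fixed; the online part is recomputed from the current `model`, so pixels
    the offline selection dropped can still receive a (noisier) label.
    """
    probs = F.softmax(model(image), dim=1)
    confidence, online_label = probs.max(dim=1)
    online_label[confidence < online_threshold] = IGNORE_INDEX
    fused = offline_label.clone()
    missing = offline_label == IGNORE_INDEX       # pixels with no offline label
    fused[missing] = online_label[missing]
    return fused
```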
“…Across the alternation between "assigning pseudo labels" and "re-training", the pseudo labels are updated only after several training epochs, yielding the "offline" manner. While the offline manner allows additional post-processing and has the advantage of balanced pseudo labels [24], it is prone to ignoring hard and informative samples [10]. This is because the offline manner preserves only the most confident predictions over all the target-domain data, which are relatively easy.…”
confidence: 99%
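
The excerpt above attributes the bias toward easy samples to keeping only the most confident predictions for each class. A hedged sketch of one common per-class thresholding scheme for offline pseudo-label selection follows; the 50% keep ratio and the helper name are assumptions, not the exact procedure of the cited works.

```python
import torch

def class_balanced_thresholds(confidences, labels, num_classes, keep_ratio=0.5):
    """Per-class confidence thresholds for offline pseudo-label selection.

    `confidences` and `labels` are the flattened max-softmax scores and argmax
    classes collected over the whole target set. Keeping a fixed fraction per
    class balances the pseudo labels across classes, but every kept pixel is
    still among the most confident (i.e. easiest) for its class.
    """
    thresholds = torch.ones(num_classes)          # default: select nothing
    for c in range(num_classes):
        scores = confidences[labels == c]
        if scores.numel() > 0:
            # keep the top `keep_ratio` fraction of pixels predicted as class c
            thresholds[c] = torch.quantile(scores, 1.0 - keep_ratio)
    return thresholds
```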
“…Therefore, unsupervised domain adaptation (UDA) [15] addresses this problem by utilizing and transferring the knowledge of label-rich data (source data) to unlabeled data (target data), which can reduce the labeling cost dramatically [38]. According to the adaptation methodology, UDA can be largely divided into adversarial-learning-based DA [27,49,50,52] and self-training-based DA [31,34,45,55,57]. While the former focuses on minimizing task-specific loss for […] Our novel human-in-the-loop framework, LabOR (PPL and SPL), significantly outperforms not only previous UDA state-of-the-art models (e.g., IAST [31]) but also a DA model with few labels (e.g., WDA [36]).…”
Section: Introduction
confidence: 99%
“…According to the adaptation methodology, UDA can be largely divided into adversarial-learning-based DA [27,49,50,52] and self-training-based DA [31,34,45,55,57]. While the former focuses on minimizing task-specific loss for […] Our novel human-in-the-loop framework, LabOR (PPL and SPL), significantly outperforms not only previous UDA state-of-the-art models (e.g., IAST [31]) but also a DA model with few labels (e.g., WDA [36]). Note that our PPL requires a negligible number of labels to achieve such performance improvements (25 labeled points per image), and our SPL performs comparably to fully supervised learning (0.1% mIoU gap).…”
Section: Introduction
confidence: 99%