2021
DOI: 10.48550/arXiv.2106.00417
Preprint

Semi-supervised Models are Strong Unsupervised Domain Adaptation Learners

Yabin Zhang,
Haojian Zhang,
Bin Deng
et al.

Abstract: Unsupervised domain adaptation (UDA) and semi-supervised learning (SSL) are two typical strategies to reduce expensive manual annotations in machine learning. In order to learn effective models for a target task, UDA utilizes the available labeled source data, which may have different distributions from unlabeled samples in the target domain, while SSL employs few manually annotated target samples. Although UDA and SSL are seemingly very different strategies, we find that they are closely related in terms of t…

Cited by 7 publications (9 citation statements)
References 32 publications

“…Inspired by some previous semi-supervised domain adaptation methods [33,35], we also add FixMatch to GVB, and it can achieve a remarkable performance improvement. Such results are consistent with [77], which reveals that semi-supervised models are strong unsupervised domain adaptation learners. We evaluate our proposed method by applying PCL to GVB and to GVB with FixMatch.…”
Section: Results on Unsupervised Domain Adaptation (supporting)
confidence: 88%
“…It contains 152,397 synthetic images for the source domain and 55,388 real-world images for the target domain. Following the standard transductive setting [77] for UDA, we use all labeled source data and all unlabeled target data, and test on the same unlabeled target data. Experimental Details.…”
Section: Results on Unsupervised Domain Adaptation (mentioning)
confidence: 99%
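For readers unfamiliar with the transductive setting this excerpt refers to, a minimal sketch of the data flow follows: training consumes all labeled source data plus all unlabeled target data, and evaluation runs on those same target samples. The tensor shapes, class count, and batch size are hypothetical placeholders, not the VisDA specifics.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for a labeled source domain and an unlabeled target domain.
source_data = TensorDataset(torch.randn(512, 3, 32, 32),
                            torch.randint(0, 12, (512,)))
target_images = torch.randn(256, 3, 32, 32)  # labels withheld during training

# Training sees all labeled source data and all unlabeled target data ...
source_loader = DataLoader(source_data, batch_size=64, shuffle=True)
target_loader = DataLoader(TensorDataset(target_images), batch_size=64, shuffle=True)

# ... and evaluation then runs on the very same target images (transductive protocol).
eval_loader = DataLoader(TensorDataset(target_images), batch_size=64, shuffle=False)
```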
“…In fact, top-performing approaches from the VisDA2018 and VisDA2019 competitions use semi-supervised learning techniques explicitly in their UDA approaches. 11,12 Most similar to our work, Zhang et al. 13 recently demonstrated that SSL algorithms on their own and in combination with modern UDA techniques are strong baselines for UDA problems, when compared to UDA-specific algorithms. Berthelot et al. 10 explicitly unify SSL and UDA with the AdaMatch algorithm, which extends prior SSL work, FixMatch, 14 to SSDA applications by adding additional steps to the FixMatch training process, including "logit interpolation, distribution alignment, and a relative confidence threshold".…”
Section: Background and Related Work (supporting)
confidence: 76%
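Of the AdaMatch additions named in the excerpt above, the relative confidence threshold is compact enough to sketch: instead of a fixed cutoff as in FixMatch, the pseudo-labeling cutoff is scaled by the model's mean confidence on the labeled source batch. The random logits, class count, and 0.9 base threshold below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

# Placeholder logits standing in for model outputs on one batch per domain.
source_logits = torch.randn(64, 12)  # weakly augmented labeled source images
target_logits = torch.randn(64, 12)  # weakly augmented unlabeled target images

# Mean top-class confidence on the source batch sets the reference level.
source_conf = F.softmax(source_logits, dim=-1).max(dim=-1).values.mean()
target_conf = F.softmax(target_logits, dim=-1).max(dim=-1).values

# Keep only target predictions that are confident *relative to* the source.
mask = target_conf >= 0.9 * source_conf
```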
“…Inspired by previous works [32,88,93] in domain adaptation, we add FixMatch [63] to the existing method to construct a stronger baseline. Specifically, let T(x) and T′(x) denote the weakly and strongly augmented views for x ∈ D_tu, respectively.…”
Section: Methods 4.1 A Stronger Baseline (mentioning)
confidence: 99%
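The FixMatch step quoted above is simple enough to render directly. The following is a minimal, hedged PyTorch sketch of the consistency term on unlabeled target data: pseudo-labels come from the weakly augmented view T(x), and the strongly augmented view T′(x) is trained to match them. The model argument, batch names, and the 0.95 confidence threshold are illustrative assumptions, not values taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_weak, x_strong, threshold=0.95):
    """FixMatch-style consistency loss on unlabeled target images.

    x_weak / x_strong correspond to T(x) and T'(x) above: two augmented
    views of the same batch. The 0.95 threshold is an assumed default.
    """
    with torch.no_grad():
        # Pseudo-labels are taken from the weakly augmented view.
        probs = F.softmax(model(x_weak), dim=-1)
        max_probs, pseudo_labels = probs.max(dim=-1)
        mask = (max_probs >= threshold).float()  # keep confident predictions only

    # The prediction on the strongly augmented view must match the pseudo-label.
    loss = F.cross_entropy(model(x_strong), pseudo_labels, reduction="none")
    return (loss * mask).mean()
```

In a stronger-baseline setup of the kind the excerpt describes, this term would simply be added to the existing method's loss on each unlabeled target batch.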