2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00115
FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation

Abstract: Unsupervised domain adaptation (UDA) methods for learning domain invariant representations have achieved remarkable progress. However, most of the studies were based on direct adaptation from the source domain to the target domain and have suffered from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain. From the augmented…
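The fixed ratio-based mixup described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name, the stand-in batches, and the particular ratios (a source-dominant and a target-dominant mix) are assumptions chosen for clarity.

```python
import numpy as np

def fixed_ratio_mixup(x_source, x_target, ratio=0.7):
    """Mix a source and a target sample with a fixed ratio.

    Hypothetical helper: unlike standard mixup, which samples the
    mixing coefficient randomly, the ratio here is held fixed so that
    each choice of `ratio` defines one intermediate domain between
    source and target.
    """
    return ratio * x_source + (1.0 - ratio) * x_target

# Stand-in batches (real inputs would be image tensors).
xs = np.ones((2, 2))   # source samples
xt = np.zeros((2, 2))  # target samples

# Two complementary intermediate domains:
source_dominant = fixed_ratio_mixup(xs, xt, ratio=0.7)
target_dominant = fixed_ratio_mixup(xs, xt, ratio=0.3)
```

Training one model on the source-dominant mix and another on the target-dominant mix gives two views that bridge the domain gap from opposite sides, which is the intuition behind augmenting intermediate domains.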

Cited by 177 publications (62 citation statements)
References 25 publications
“…To increase discriminability, more recent DA methods attempt to investigate the data structure in the unlabeled target domain. Self-training, a typical approach, generates target-domain pseudo labels [12,26,29,36,39,43,47,49,58,89,92,96,97]. Another category is to construct prototypes [4,50,77,78,88,94] or cluster centers [10,24,66] across domains and then perform class-wise alignment.…”
Section: Unsupervised Domain Adaptation
confidence: 99%
“…Another category is to construct prototypes [4,50,77,78,88,94] or cluster centers [10,24,66] across domains and then perform class-wise alignment. Most of these approaches use a fixed probability threshold [12,36,50,58,78], a dynamic probability threshold [47,89], a fixed sample ratio [29,49,92], a dynamic sample ratio [58,96,97], or a threshold on other metrics [4,14,26,39] to choose trustworthy samples (i.e., high-confidence samples) and reject other low-confidence samples. To mitigate the harmful effect of noisy labels, it is reasonable to utilize only the reliable samples.…”
Section: Unsupervised Domain Adaptation
confidence: 99%
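The fixed-probability-threshold selection scheme surveyed in the statement above can be sketched as follows. This is an illustrative example, not code from any of the cited works: the function name and the threshold value 0.95 are assumptions.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep only predictions whose maximum class probability exceeds a
    fixed threshold; low-confidence samples are rejected as potentially
    noisy pseudo labels."""
    confidence = probs.max(axis=1)     # per-sample max softmax probability
    labels = probs.argmax(axis=1)      # predicted class per sample
    mask = confidence >= threshold     # trustworthy-sample indicator
    return labels[mask], mask

# Three target-domain samples with softmax outputs over two classes:
probs = np.array([[0.97, 0.03],
                  [0.60, 0.40],
                  [0.02, 0.98]])
labels, mask = select_pseudo_labels(probs, threshold=0.95)
# labels -> [0, 1]; only the first and third samples pass the threshold
```

A dynamic threshold or a fixed sample ratio (also mentioned above) would replace the constant `threshold` with a schedule over training epochs or with a top-k selection per class, respectively.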