2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.00966

Model Adaptation: Unsupervised Domain Adaptation Without Source Data

Cited by 392 publications (245 citation statements). References 28 publications.
“…Based on hypothesis transfer learning, [21] proposes a self-training framework with mutual information maximization and a pseudo-labeling strategy. In [20], a collaborative class-conditional generative adversarial network is employed to avoid the use of source data, through target data generation and model adaptation. In this paper, instead of using entropy minimization, we leverage contrastive learning for cross-domain alignment.…”
Section: Related Work
confidence: 99%
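The mutual information maximization objective mentioned in the excerpt above is commonly implemented as two entropy terms: minimizing the per-sample prediction entropy on unlabeled target data while maximizing the entropy of the batch-mean prediction to prevent class collapse. The following is a minimal NumPy sketch of that generic objective (function and variable names are our own, not from the cited papers):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def information_maximization_loss(logits, eps=1e-6):
    """Sketch of a mutual-information objective on unlabeled target data:
    low per-sample entropy (confident predictions) plus high marginal
    entropy (diverse predictions across the batch)."""
    probs = softmax(logits)
    # Mean per-sample entropy: pushed down -> confident predictions.
    ent = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    # Entropy of the batch-mean prediction: pushed up -> avoids collapse
    # to a single class; subtracted so minimizing the loss maximizes it.
    mean_p = probs.mean(axis=0)
    div = -(mean_p * np.log(mean_p + eps)).sum()
    return ent - div
```

Confident, class-balanced predictions drive this loss negative, while a uniform (maximally uncertain) batch leaves it near zero, which is why minimizing it encourages confident yet diverse target predictions.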
“…Model Adaptation Most of the previously mentioned methods also require explicit access to source domain data during adaptation, and have made tremendous strides in improving segmentation performance in that setting. A few recent papers tackle model adaptation for classification problems [44,42,16]. [39] proposes source-free domain adaptation for the case where label knowledge of the target domain is not available, and shows its effectiveness on a set of classification problems with varying levels of label overlap.…”
Section: Related Work
confidence: 99%
“…Different from the above methods, 3C-GAN 28 first generates images of each category in the style of the target domain to adjust the source network, and then also minimizes entropy. DAAS 29 trains a transformation network from the target domain to the source domain so that the entropy of the transformed target-domain inputs to the classifier is as small as possible.…”
Section: Related Work
confidence: 99%
“…Only a few recent works have studied source data-free domain adaptation, falling into the following three categories: the first adjusts the source model by self-supervised learning based on pseudo labels [25][26][27]; the second generates target-domain-style samples that can be correctly classified by the source model 28; and the third transforms target-domain samples into source-domain-style samples. 29 First of all, without source domain data, it is not easy to achieve satisfactory image translation between the target and source domains.…”
Section: Introduction
confidence: 99%
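The first category above, self-supervised pseudo-labeling, is often realized by computing class centroids in feature space from the model's soft predictions and relabeling each target sample by its nearest centroid. The sketch below illustrates one such centroid scheme in NumPy; the function name and the use of cosine similarity are illustrative assumptions, not the exact procedure of any cited paper:

```python
import numpy as np

def centroid_pseudo_labels(features, probs, eps=1e-8):
    """Assign pseudo labels to target samples via class centroids.

    features: (N, D) target feature vectors from the source model.
    probs:    (N, C) soft predictions for the same samples.
    Returns an (N,) array of pseudo labels (nearest-centroid classes).
    """
    # Prediction-weighted class centroids in feature space, shape (C, D).
    centroids = (probs.T @ features) / (probs.sum(axis=0)[:, None] + eps)
    # L2-normalize so the dot product below is cosine similarity.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    c = centroids / (np.linalg.norm(centroids, axis=1, keepdims=True) + eps)
    # Pseudo label = class of the most similar centroid.
    return (f @ c.T).argmax(axis=1)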