2021
DOI: 10.48550/arxiv.2103.12371
Preprint

Unsupervised domain adaptation via coarse-to-fine feature alignment method using contrastive learning

Abstract: Previous feature alignment methods in unsupervised domain adaptation (UDA) mostly align only global features, without considering the mismatch between class-wise features. In this work, we propose a new coarse-to-fine feature alignment method using contrastive learning, called CFContra. It draws class-wise features closer than coarse feature alignment or class-wise feature alignment alone, and therefore improves the model's performance to a great extent. We build it upon one of the most effective methods of UDA called …
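The abstract's core idea, drawing class-wise features closer via contrastive learning, can be illustrated with a minimal numpy sketch. This is not the paper's exact loss (which is truncated above); it is a generic class-wise contrastive loss in which same-class features are positives and all others are negatives, a common formulation in UDA:

```python
import numpy as np

def class_contrastive_loss(feats, labels, tau=0.1):
    """Toy class-wise contrastive loss: features with the same label are
    positives, all others negatives (a generic formulation; the paper's
    exact loss may differ)."""
    # L2-normalise so dot products are cosine similarities
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau              # pairwise scaled similarities
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        logits = np.delete(sim[i], i)        # drop self-similarity
        log_denom = np.log(np.exp(logits).sum())
        for j in pos:
            loss += log_denom - sim[i, j]    # -log softmax over non-self pairs
            count += 1
    return loss / count

labels = np.array([0, 0, 1, 1])
# Tightly clustered classes yield a small loss
tight = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0], [0.01, 0.99]])
print(class_contrastive_loss(tight, labels))
```

Minimising such a loss pulls same-class features together across domains, which is the "fine" alignment the abstract contrasts with global (coarse) alignment.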

Cited by 2 publications (7 citation statements)
References 44 publications
“…The style transfer is performed using the standard AdaIN method. Also, the CFContra method by Tang et al. [87] employs an encoder-decoder network with standard AdaIN layers for style transfer.…”
Section: B: Normalization Methods (mentioning; confidence: 99%)
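The "standard AdaIN" mentioned in this citation statement (Huang and Belongie's adaptive instance normalization) re-normalises a content feature map to the channel-wise mean and standard deviation of a style feature map. A minimal numpy version, assuming feature maps of shape (C, H, W):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: shift/scale the content features
    to match the style features' per-channel statistics."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```

In an encoder-decoder setup like the one described, AdaIN is applied to encoder features and the decoder reconstructs a stylised image from the result.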
“…Apart from the implicit adaptation through self-supervised learning and the construction of semantic pairs in the source and target domains, one can identify a third class of self-supervised domain adaptation approaches: semantic self-supervised approaches, as presented in DANCE [178], CAM [166], CFContra [87], SCDA [167], BAPA-Net [74], SWLS [53], and SSS+ST [165], which all aim to cluster the pre-logit feature space towards so-called class prototypes directly. These class prototypes are vectors that represent the pre-logit feature representations of their respective class.…”
Section: C: Semantic Clustering (mentioning; confidence: 99%)
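The class prototypes described here, vectors representing the pre-logit features of each class, are typically computed as per-class feature means, with a loss pulling features toward their own prototype. A sketch of that shared idea (the cited approaches differ in the details):

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Class prototype = mean pre-logit feature of each class."""
    return np.stack([feats[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def prototype_pull_loss(feats, labels, protos):
    """Mean squared distance of each feature to its class prototype;
    minimising this clusters the feature space around the prototypes."""
    return np.mean(np.sum((feats - protos[labels]) ** 2, axis=1))
```

In the UDA setting, prototypes are usually estimated from labelled source features (or pseudo-labelled target features) and the pull loss is applied to both domains.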
“…Great progress in contrastive learning [1, 8, 31, 40, 51, 59, 68, 71-73] has been achieved by encouraging positive pairs to move closer and pulling negative pairs apart. For semantic segmentation tasks, [1, 68, 73] are proposed to fit dense pixel-prediction requirements.…”
Section: Contrastive Learning (mentioning; confidence: 99%)
“…For semantic segmentation tasks, [1, 68, 73] are proposed to fit dense pixel-prediction requirements. The definition of positive and negative pairs varies: [59, 68] treated samples of the same category as positive pairs and all others as negative pairs, while [40] divided positive and negative pairs according to the label-distribution similarity between different patches.…”
Section: Contrastive Learning (mentioning; confidence: 99%)
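The patch-pairing scheme attributed to [40], pairing by label-distribution similarity, can be sketched as follows. Each patch's label histogram is compared by cosine similarity; pairs above a threshold are positives, the rest negatives. The threshold and histogram details are assumptions for illustration, not taken from [40]:

```python
import numpy as np

def patch_pairs(label_patches, num_classes, thresh=0.9):
    """Pair patches by label-distribution similarity (illustrative only:
    the threshold and exact similarity measure are assumptions)."""
    hists = []
    for p in label_patches:
        h = np.bincount(p.ravel(), minlength=num_classes).astype(float)
        hists.append(h / h.sum())                 # normalised label histogram
    hists = np.array(hists)
    hists = hists / np.linalg.norm(hists, axis=1, keepdims=True)
    sim = hists @ hists.T                         # cosine similarity of histograms
    idx = range(len(label_patches))
    pos = [(i, j) for i in idx for j in idx if j > i and sim[i, j] >= thresh]
    neg = [(i, j) for i in idx for j in idx if j > i and sim[i, j] < thresh]
    return pos, neg
```

Patches dominated by the same classes become positives even without pixel-level correspondence, which suits dense prediction where exact matches across images are rare.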