2021
DOI: 10.48550/arxiv.2103.16765
Preprint

Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation

Abstract: Unsupervised Domain Adaptation (UDA) transfers predictive models from a fully-labeled source domain to an unlabeled target domain. In some applications, however, it is expensive even to collect labels in the source domain, making most previous works impractical. To cope with this problem, recent work performed instance-wise cross-domain self-supervised learning, followed by an additional fine-tuning stage. However, the instance-wise self-supervised learning only learns and aligns low-level discriminative features…

Cited by 5 publications (18 citation statements)
References 64 publications
“…To address this task, we propose to learn the latent feature-space clustering of each source and target domain, and align clusters with the same category across different domains in a self-supervised manner. Specifically, we use a ProtoNCE [39] loss to learn the semantic feature of a single domain as it has been shown to semantically align data across a single source and target domain [33]. We further extend it into the multi-source adaptation scenario to learn better discriminative and domain-invariant features across all domains.…”
Section: Multi-domain Prototypical Self-supervised Learning
confidence: 99%
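As a rough illustration of the prototype-level term that a ProtoNCE-style loss [39] adds on top of instance-wise contrastive learning, here is a minimal PyTorch sketch. The function name, tensor shapes, and the per-cluster concentration `phi` are assumptions for exposition; the instance-level InfoNCE term and the clustering step that produces the prototypes are omitted.

```python
import torch.nn.functional as F

def protonce_prototype_term(features, prototypes, assignments, phi):
    """Prototype-level term of a ProtoNCE-style loss (sketch).

    features:    (N, D) L2-normalized embeddings of one domain
    prototypes:  (K, D) L2-normalized cluster centroids of that domain
    assignments: (N,)   cluster index assigned to each embedding
    phi:         (K,)   per-cluster concentration estimates (temperatures)
    """
    # Similarity of each embedding to every prototype, scaled by the
    # concentration of the corresponding cluster.
    logits = (features @ prototypes.t()) / phi.unsqueeze(0)  # (N, K)
    # Pull each embedding toward its own prototype, push it away from the rest.
    return F.cross_entropy(logits, assignments)
```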
“…To learn a well-clustered semantic structure in the feature space, it is problematic to apply ProtoNCE to a mixed dataset with different distributions, because images of different categories from different domains may be incorrectly aggregated into the same cluster. As a result, due to the domain shift between the sources and the target, we cannot directly apply ProtoNCE to $\bigcup_{i=1}^{M}(S_i \cup S_i^u) \cup T$ as in [39], and due to the domain shift among different sources, we cannot apply ProtoNCE to the source $\bigcup_{i=1}^{M}(S_i \cup S_i^u)$ and target $T$ separately as in [33]. Instead, we perform prototypical contrastive learning in each source $S_i \cup S_i^u$ and target $T$.…”
Section: Multi-domain Prototypical Self-supervised Learning
confidence: 99%
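A minimal sketch of that per-domain design choice, assuming a generic `encoder` and one batch of images per domain; the fixed temperature `tau`, the k-means clustering step, and the helper name are illustrative stand-ins rather than the cited authors' exact procedure.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def per_domain_prototypical_loss(encoder, domain_batches, num_clusters=50, tau=0.07):
    """Sketch: run prototypical contrastive learning within each domain
    separately (each source S_i with its unlabeled part, and the target T),
    instead of clustering the pooled mixture of all domains."""
    total = 0.0
    for x in domain_batches:                      # one tensor of images per domain
        z = F.normalize(encoder(x), dim=1)        # (N_d, D) embeddings
        # Cluster this domain on its own, so domain shift cannot merge
        # different categories from different domains into one cluster.
        km = KMeans(n_clusters=num_clusters, n_init=10).fit(z.detach().cpu().numpy())
        protos = F.normalize(
            torch.as_tensor(km.cluster_centers_, dtype=z.dtype, device=z.device), dim=1)
        labels = torch.as_tensor(km.labels_, device=z.device, dtype=torch.long)
        total = total + F.cross_entropy(z @ protos.t() / tau, labels)
    return total
```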