2022
DOI: 10.1109/tpami.2020.3029948
Contrastive Adaptation Network for Single- and Multi-Source Domain Adaptation

Abstract: Unsupervised Domain Adaptation (UDA) makes predictions for target-domain data while manual annotations are available only in the source domain. Previous methods minimize the domain discrepancy while neglecting class information, which may lead to misalignment and poor generalization. To tackle this issue, this paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric named Contrastive Domain Discrepancy, explicitly modeling the intra-class and inter-class domain discrepancy…
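The Contrastive Domain Discrepancy named in the abstract can be read as a class-aware extension of MMD. The following is a minimal sketch under assumptions the truncated abstract does not spell out (an RBF kernel, hard pseudo-labels for the target domain): intra-class terms (same class across domains) are minimized, while inter-class terms (different classes across domains) are maximized.

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel values k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    # Biased squared-MMD estimate between two feature sets; zero if a set is too small.
    if len(xs) < 2 or len(xt) < 2:
        return xs.new_tensor(0.0)
    return (rbf_kernel(xs, xs, sigma).mean()
            + rbf_kernel(xt, xt, sigma).mean()
            - 2 * rbf_kernel(xs, xt, sigma).mean())

def contrastive_domain_discrepancy(fs, ys, ft, yt_pseudo, num_classes, sigma=1.0):
    # Class-aware discrepancy: minimize intra-class MMD across domains and
    # maximize inter-class MMD (hence the minus sign on the second term).
    intra, inter, pairs = 0.0, 0.0, 0
    for c1 in range(num_classes):
        for c2 in range(num_classes):
            d = mmd2(fs[ys == c1], ft[yt_pseudo == c2], sigma)
            if c1 == c2:
                intra = intra + d
            else:
                inter = inter + d
                pairs += 1
    return intra / num_classes - inter / max(pairs, 1)
```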

Cited by 79 publications (29 citation statements) · References 34 publications
“…Domain shifts may exist not only between D_{s,e} and D_t, but also between the source domains. Hence, we assign each source domain D_{s,e} its own domain translator D_e(·) and classifier L_e(·), whereas the feature extractor F(·) is shared across all domains to learn domain-invariant features, as in [51]–[53]. For the target domain, data X_u is forwarded through F(·) to extract the feature Z_u, which is then fed into the multiple classifiers {L_e(·)}.…”
Section: F. Extension to Multi-Source Domain Adaptation
Confidence: 99%
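A minimal PyTorch sketch of the architecture this excerpt describes; the module shapes and layer choices are illustrative assumptions, not details from the cited paper.

```python
import torch.nn as nn

class MultiSourceDA(nn.Module):
    # Shared feature extractor F(.) with one domain translator D_e(.) and
    # one classifier L_e(.) per source domain, as in the excerpt above.
    def __init__(self, num_sources, feat_dim=256, num_classes=10):
        super().__init__()
        self.F = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.D = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(num_sources)])
        self.L = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(num_sources)])

    def forward_source(self, x, e):
        # Source branch e: shared features, then domain-specific translate + classify.
        z = self.F(x)
        return self.L[e](self.D[e](z))

    def forward_target(self, x_u):
        # Target data X_u -> shared feature Z_u -> all classifiers {L_e(.)}.
        z_u = self.F(x_u)
        return [L(z_u) for L in self.L]
```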
“…Table VIII compares M-FLARE with several recent methods on the multi-source DA task (S1,S2,S3)→S1. Following [51]–[53], we introduce two standards: 1) single-best domain adaptation, which reports the best-performing single-source adaptation result on the test set, and 2) source-combine domain adaptation, which reports the single-source adaptation result obtained by transferring the combination of multiple source domains to a target domain. The first standard evaluates whether single-source adaptation performance can be improved by introducing other source domains; the second verifies the need for exploiting a multi-source DA…”
Section: G. Evaluation on Multi-Source Domain Adaptation
Confidence: 99%
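To make the two comparison standards concrete, here is a hypothetical sketch; the `adapt` and `concat` callables are placeholders for a full UDA training-and-evaluation pipeline, not functions from the cited work.

```python
def single_best(sources, target, adapt):
    # Standard 1: best result among the individual single-source adaptations.
    return max(adapt(s, target) for s in sources)

def source_combine(sources, target, adapt, concat):
    # Standard 2: pool all source domains into one, then run single-source adaptation.
    return adapt(concat(sources), target)
```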
“…Maximum Mean Discrepancy (MMD): We replaced the adversarial loss with a statistical test that minimizes the distributional discrepancy between different domains [67]. Similar to previous work, we applied MMD only to the representations before the projection layer, independently for both modalities [35,50]. As with the GLR baseline, we first trained for 10 epochs using only the contrastive loss, then trained with the combined loss L_NCE + λ_MMD · L_MMD for the remainder.…”
Section: Regression (Convolutional)
Confidence: 99%
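A minimal sketch of the MMD penalty and the two-stage schedule this excerpt describes; the RBF kernel, its bandwidth, the λ_MMD value, and the 10-epoch threshold placement are assumptions for illustration.

```python
import torch

def gaussian_mmd(zs, zt, sigma=1.0):
    # Biased squared-MMD estimate with an RBF kernel (bandwidth is an assumption).
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(zs, zs).mean() + k(zt, zt).mean() - 2 * k(zs, zt).mean()

def total_loss(l_nce, z_src, z_tgt, epoch, lambda_mmd=0.1):
    # Two-stage schedule from the excerpt: contrastive loss only for the
    # first 10 epochs, then L_NCE + lambda_mmd * L_MMD for the remainder.
    if epoch < 10:
        return l_nce
    return l_nce + lambda_mmd * gaussian_mmd(z_src, z_tgt)
```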
“…However, labeling behavioral-neural datasets requires expensive and arduous manual labor by trained scientists, often leaving the vast majority of data unlabeled. Similarly, it is non-trivial to generalize few-shot domain adaptation methods to multimodal tasks [50,51]. The field of neuroscience therefore needs new computational approaches that can extract information from ever-increasing amounts of unlabeled multimodal data that also suffer from extensive domain gaps across subjects.…”
Section: Introduction
Confidence: 99%
“…This problem has persisted in the deep learning era, even as deep convolutional neural networks (CNNs) have demonstrated great success on many recognition tasks [32,50]. As a key solution to the domain shift problem, unsupervised domain adaptation (UDA) has been extensively studied [2,8,9,16,23,33,49]. UDA aims to transfer knowledge learned from one or more labeled source domains to a target domain in which only unlabeled data are available for model adaptation.…”
Section: Introduction
Confidence: 99%