Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.217

Domain Confused Contrastive Learning for Unsupervised Domain Adaptation

Abstract: In this work, we study Unsupervised Domain Adaptation (UDA) in a challenging self-supervised approach. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature which directly aligns cross-domain distributions or leverages reverse gradient, we propose Domain Confused Contrastive Learning (DCCL) to bridge the source and the target domains via domain puzzles, and retain discriminative representations after adaptation. Technically, DCCL searches for a m…
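The abstract is truncated, but the mechanism it names, contrastively pulling source representations toward domain-confused augmentations, can be illustrated. Below is a minimal, hedged InfoNCE-style sketch in PyTorch; the loss shape, the `encoder`, and the `domain_confuse` augmentation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dccl_style_loss(anchor, positive, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch, not the
    authors' code): each anchor is pulled toward its domain-confused
    positive and pushed away from the other examples in the batch."""
    # Normalize so dot products are cosine similarities.
    anchor = F.normalize(anchor, dim=-1)      # (B, D) source representations
    positive = F.normalize(positive, dim=-1)  # (B, D) domain-confused views
    logits = anchor @ positive.t() / temperature  # (B, B) similarity matrix
    # The matching positive for each anchor sits on the diagonal.
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Hypothetical usage: `encoder` and `domain_confuse` stand in for the
# paper's encoder and domain-puzzle augmentation, which the truncated
# abstract does not fully specify.
# z_src = encoder(source_batch)
# z_pos = encoder(domain_confuse(source_batch))
# loss = dccl_style_loss(z_src, z_pos)
```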

Cited by 11 publications (10 citation statements) | References 28 publications
“…Data We use the Amazon Reviews dataset (Ni et al., 2019), which facilitates research in tasks like sentiment analysis (Zhang et al., 2020), aspect-based sentiment analysis, and recommendation systems. Its different product categories serve as domains, which makes it a natural testbed for many multi-domain studies. A noteworthy example of a research field that heavily relies on this dataset is domain adaptation (Blitzer et al., 2007; Ziser and Reichart, 2018; Du et al., 2020; Lekhtman et al., 2021; Long et al., 2022), which is the task of learning robust models across different domains, closely related to our research. We sort the domains by their review counts and pick the top five, which results in: Books, Clothing Shoes and Jewelry, Electronics, Home and Kitchen, and Movies and TV domains.…”
Section: Methods
Mentioning confidence: 76%
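The domain-selection rule quoted above (sort product categories by review count, keep the top five) is straightforward to reproduce. Here is a hedged sketch; the function name and dict-based input are assumptions for illustration, not code from the citing paper.

```python
def top_domains(review_counts, k=5):
    """Return the k categories with the most reviews.

    `review_counts` maps category name -> number of reviews; the values
    would come from the Amazon Reviews dataset (Ni et al., 2019).
    """
    return sorted(review_counts, key=review_counts.get, reverse=True)[:k]

# Hypothetical usage with placeholder counts (not real dataset figures):
# top_domains({"Books": 3, "Electronics": 2, "Movies and TV": 1}, k=2)
```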
“…For instance, in computer vision, it can be used for tasks such as image recognition [21], object detection [22], and image segmentation [23]. In natural language processing, it can be used for tasks such as text classification [24], machine translation [25], and language modeling [26].…”
Section: Unsupervised Transfer Learning
Mentioning confidence: 99%
“…• Model adaptation for recommendation: Given large pretrained models, it is often necessary to adapt the models to a recommendation task with domain-specific data. We will review the common paradigms for model adaptation, including representation-based transfer, fine-tuning, adapter tuning [26], prompt tuning [13], and retrieval-augmented adaptation [33].…”
Section: Multimodal Pretraining For Recommendation
Mentioning confidence: 99%
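Of the adaptation paradigms listed in that statement, adapter tuning is the easiest to make concrete. Below is a hedged sketch of a bottleneck adapter in the style of Houlsby et al. (2019): a small residual module trained while the pretrained backbone stays frozen. The class, dimensions, and usage lines are illustrative assumptions, not code from the cited survey.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter (illustrative sketch): project down, apply a
    nonlinearity, project back up, and add a residual connection."""
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x):
        # Residual connection keeps the frozen backbone's features intact.
        return x + self.up(self.act(self.down(x)))

# Hypothetical usage: freeze the pretrained backbone and train only the
# adapters inserted after each layer.
# for p in backbone.parameters():
#     p.requires_grad = False
# adapters = nn.ModuleList(Adapter() for _ in backbone.layers)
```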