Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022
DOI: 10.1145/3534678.3539125

Contrastive Cross-domain Recommendation in Matching

Cited by 55 publications (8 citation statements)
References 34 publications
“…The basic idea is to extract valuable and transferable knowledge from large amounts of unlabeled data through a self-supervised task. Contrastive learning has been used to solve problems such as cross-domain recommendation [33], social recommendation [34], and mitigating the long-tail effect in recommendation systems [35]. Xie et al. [33] designed intra-domain and inter-domain contrastive tasks to learn user representations and enable cross-domain knowledge transfer.…”
Section: Contrastive Learning
confidence: 99%
“…Contrastive learning has been used to solve problems such as cross-domain recommendation [33], social recommendation [34], and mitigating the long-tail effect in recommendation systems [35]. Xie et al. [33] designed intra-domain and inter-domain contrastive tasks to learn user representations and enable cross-domain knowledge transfer. We refer to these cross-domain recommendation methods.…”
Section: Contrastive Learning
confidence: 99%
“…The Self-Supervised Learning (SSL) paradigm has been widely studied in multiple research communities, e.g., Computer Vision [3, 5, 12, 15, 33, 34], Natural Language Processing [7, 23], and non-sequential recommendation systems [2, 51, 56, 62-64]. SSL has recently been introduced into sequential recommendation systems to alleviate the data sparsity issue.…”
Section: Self-Supervised Learning in SR
confidence: 99%
“…Given word embeddings R^H_u and R^H_i from each domain, we aim to extract useful information. Several feature extraction strategies have been proposed, including domain adaptation [71], variational reconstruction [41], personalized transfer [78], contrastive learning [64], and domain disentanglement [49]. Among them, we adopt the domain disentanglement algorithm, which does not require users to overlap across the two domains [8].…”
Section: Feature Extraction
confidence: 99%
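A minimal sketch of the disentanglement idea referenced here, assuming a split into a domain-shared and a domain-specific projection held apart by an orthogonality penalty; the module names and the penalty are illustrative stand-ins, not the cited algorithm [49].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Splits an input embedding into a domain-shared part and a
    domain-specific part, encouraged to carry different information."""
    def __init__(self, dim=64):
        super().__init__()
        self.shared = nn.Linear(dim, dim)    # transferable factors
        self.specific = nn.Linear(dim, dim)  # domain-private factors

    def forward(self, x):
        s, p = self.shared(x), self.specific(x)
        # Orthogonality penalty pushes the two parts apart; a simple
        # stand-in for full adversarial disentanglement.
        penalty = (F.cosine_similarity(s, p, dim=-1) ** 2).mean()
        return s, p, penalty

enc = DisentangledEncoder()
x = torch.randn(16, 64)                  # embeddings from one domain
shared, private, ortho_loss = enc(x)     # shared part transfers across domains
```

Because only the shared part is aligned across domains, this style of extractor avoids requiring the same users to appear in both domains, which is the property highlighted in the statement above.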