Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD 2021)
DOI: 10.1145/3447548.3467340

Socially-Aware Self-Supervised Tri-Training for Recommendation

Abstract: Self-supervised learning (SSL), which can automatically generate ground-truth samples from raw data, holds vast potential to improve recommender systems. Most existing SSL-based methods perturb the raw data graph with uniform node/edge dropout to generate new data views and then conduct self-discrimination-based contrastive learning over the different views to learn generalizable representations. Under this scheme, only a bijective mapping is built between nodes in two different views, which means that the sel…
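To make the scheme the abstract criticizes concrete, here is a minimal sketch (not the paper's code) of uniform edge dropout producing two views, followed by self-discrimination contrastive learning in which node i in one view is matched only to node i in the other, i.e. the bijective mapping. The function names, the drop rate, and the stand-in encoder are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def edge_dropout(edge_index: torch.Tensor, drop_rate: float) -> torch.Tensor:
    """Uniformly drop edges from a 2 x |E| edge index to form a new data view."""
    keep = torch.rand(edge_index.size(1)) >= drop_rate
    return edge_index[:, keep]

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Self-discrimination loss: node i in view 1 is positive only with node i in view 2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # pairwise similarities between the views
    labels = torch.arange(z1.size(0))   # the bijective (identity) mapping
    return F.cross_entropy(logits, labels)

# Toy usage with a placeholder encoder (assumption: any GNN encoder fits here).
edge_index = torch.randint(0, 100, (2, 500))
encoder = lambda emb, ei: emb           # stand-in for a real graph encoder
emb = torch.randn(100, 64, requires_grad=True)
z1 = encoder(emb, edge_dropout(edge_index, 0.1))
z2 = encoder(emb, edge_dropout(edge_index, 0.1))
loss = info_nce(z1, z2)
loss.backward()
```

Because the labels are the identity mapping, every other node is treated purely as a negative, which is exactly the limitation the abstract raises: self-supervision signals from other (e.g., socially related) nodes are not exploited.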

Cited by 143 publications (48 citation statements)
References 41 publications
“…Following [28], it removes redundant operations such as the transformation matrices and the nonlinear activation functions. Such a design has proved efficient and effective, and has inspired many follow-up CL-based recommendation models such as SGL [29] and MHCN [39].…”
Section: Performance Comparison With Different Types Of Noises (mentioning)
confidence: 99%
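The simplification this excerpt describes, following [28] (LightGCN), reduces each graph convolution layer to plain normalized neighborhood aggregation. Below is a minimal sketch under the assumption of a dense adjacency matrix; it is not the cited models' actual code.

```python
import torch

def light_propagate(adj: torch.Tensor, emb: torch.Tensor, n_layers: int = 3) -> torch.Tensor:
    """Propagation with no weight matrix and no nonlinearity: E^(k+1) = A_hat @ E^(k).

    The final embedding is the mean over all layers' outputs.
    """
    deg = adj.sum(dim=1).clamp(min=1.0)
    a_hat = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))  # D^-1/2 A D^-1/2
    layers = [emb]
    for _ in range(n_layers):
        layers.append(a_hat @ layers[-1])
    return torch.stack(layers).mean(dim=0)

# Toy usage on a random graph with 100 nodes and 64-dim embeddings.
adj = (torch.rand(100, 100) < 0.05).float()
out = light_propagate(adj, torch.randn(100, 64))
```

Dropping the per-layer transformation and activation removes most trainable parameters from the propagation step, which is why the excerpt calls the design both efficient and effective.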
“…Inspired by the achievements of CL in other fields, there has been a wave of new research that integrates CL with recommendation [19,29,33,39,41,45]. Zhou et al. [45] adopted random masking on attributes and items to create sequence augmentations for sequential model pretraining with mutual information maximization.…”
Section: Contrastive Learning In Recommendation (mentioning)
confidence: 99%
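The random-masking augmentation this excerpt attributes to Zhou et al. [45] can be sketched in a few lines. This is a hedged illustration, not the cited paper's implementation; the mask token id and mask rate are assumptions.

```python
import random

MASK_TOKEN = 0  # assumption: id 0 is reserved as the [mask] token

def mask_sequence(items: list[int], mask_rate: float = 0.2) -> list[int]:
    """Replace a random subset of item ids in an interaction sequence with the mask token."""
    return [MASK_TOKEN if random.random() < mask_rate else it for it in items]

# Two independently masked views of one sequence can serve as a positive pair
# for a mutual-information-maximization (contrastive) pretraining objective.
seq = [17, 4, 42, 8, 99, 23]
view_a, view_b = mask_sequence(seq), mask_sequence(seq)
```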