Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512104

Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning

Cited by 279 publications (97 citation statements). References 23 publications.
“…In this way, the prediction is not only based on behavior data. On the other hand, as contrastive learning has been shown effective on sparse data [20], the proposed dual-perspective contrastive learning manner (Eqn. (9)) can also benefit recommendation for those users with sparse interactions.…”
Section: Data Sparsity Levels (RQ3)
confidence: 99%
“…The sequence-item contrastive task aims to capture the intrinsic correlation between sequential contexts (i.e., the observed subsequence) and potential next items in an interaction sequence. Unlike the previous next-item prediction task [10, 15], which uses in-domain negatives, for a given sequence we adopt cross-domain items as negatives. This approach enhances both semantic fusion and adaptation across domains, which helps to learn universal sequence representations.…”
Section: Multi-domain Sequential
confidence: 99%
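The cross-domain negative sampling described in this excerpt can be sketched as an InfoNCE-style loss where the single positive is the in-domain next item and the negatives come from another domain. This is a minimal illustration, not the cited paper's implementation; the function name, shapes, and temperature value are assumptions.

```python
import numpy as np

def seq_item_contrastive_loss(seq_repr, pos_item, cross_domain_negs, tau=0.07):
    """InfoNCE-style sequence-item contrastive loss: the sequence context is
    contrasted against its true next item (positive) and against items drawn
    from a different domain (negatives)."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    s = norm(seq_repr)                      # (d,) sequence representation
    p = norm(pos_item)                      # (d,) in-domain next item
    negs = norm(cross_domain_negs)          # (n, d) cross-domain negatives
    logits = np.concatenate(([s @ p], negs @ s)) / tau
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                # positive sits at index 0
```

Because the negatives are sampled from a different domain, minimizing this loss pushes sequence representations toward a space shared across domains rather than one specialized to a single item catalog.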
“…𝑗+ indicates an observed interaction between user 𝑢_𝑖 and item 𝑣_{𝑗+}, and 𝑗− indicates an unobserved one. As high-order neighboring relations within contributors are also useful for recommendations, we enforce users to have representations similar to those of their structural neighbors through the structure-contrastive learning objective [28]:…”
Section: Optimization
confidence: 99%
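The structure-contrastive objective referenced here (from [28]) can be sketched as an InfoNCE term that treats a user's structural-neighbor representation as the positive and all user embeddings in the batch as negatives. This is a simplified sketch, assuming cosine similarity and a hypothetical temperature of 0.1; it is not the exact formulation from the cited paper.

```python
import numpy as np

def structure_contrastive_loss(z_user, z_neighbor, z_all_users, tau=0.1):
    """InfoNCE-style objective pulling a user's embedding toward the
    representation of its structural neighbors (positive) while pushing it
    away from all user embeddings in the batch (negatives)."""
    u = z_user / np.linalg.norm(z_user)
    nbr = z_neighbor / np.linalg.norm(z_neighbor)
    all_u = z_all_users / np.linalg.norm(z_all_users, axis=1, keepdims=True)
    pos = np.exp(u @ nbr / tau)
    denom = np.exp(all_u @ u / tau).sum()   # batch negatives, user included
    return -np.log(pos / denom)
```

In practice the neighbor representation is typically taken from an even-numbered GNN layer, so that it aggregates same-type (user-user) structural neighbors.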
“…For Node Semantic Modeling (Sec. ) [28], we adopt the hyperparameter setting from the original implementation and set 𝜆_2 = 1e−6, 𝜂 = 2 without further tuning. For the baseline models, the hyperparameters are set to the optimal settings reported in their original papers.…”
Section: Implementation Details
confidence: 99%