Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022
DOI: 10.1145/3477495.3532059
Self-Guided Learning to Denoise for Robust Recommendation

Cited by 42 publications (9 citation statements) | References 23 publications

Citation statements, ordered by relevance:
“…The data sets were randomly divided into training, validation, and test sets in an 8:1:1 ratio. Meanwhile, we employed two advanced and effective evaluation metrics that are widely used in top conferences [31,33,36,43], namely Recall@K and NDCG@K (normalized discounted cumulative gain), to assess the model performance. In our SSGCL model, we stacked three GNN layers for the influence diffusion, and the embedding size for both users and items was set to 64.…”
Section: Methods
Confidence: 99%
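For reference, Recall@K and NDCG@K are the standard top-K ranking metrics named in this excerpt. Below is a minimal binary-relevance sketch of how they are commonly computed; it is illustrative only and not tied to the SSGCL implementation.

```python
import numpy as np

def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of the user's relevant items that appear in the top-k ranking."""
    top_k = set(ranked_items[:k])
    return len(top_k & relevant_items) / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k):
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1.0 / np.log2(i + 2)      # hit at 0-based rank i is discounted by log2(i + 2)
              for i, item in enumerate(ranked_items[:k])
              if item in relevant_items)
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Example: top-5 ranking for one user with three relevant items.
ranked = [42, 7, 19, 3, 11]
relevant = {7, 3, 99}
print(recall_at_k(ranked, relevant, 5))  # 2 of 3 relevant items retrieved -> 0.667
print(ndcg_at_k(ranked, relevant, 5))    # hits at ranks 2 and 4 -> ~0.498
```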
“…There are also some works that address this problem through causal inference [16]. To cope with learning from noisy labels [37], existing works adopt techniques such as adding graph-based priors [39], cross-model agreement [48], and graph sparsification [17] to denoise observed user-item interaction data. Different from the above works, we focus on denoising the social network, i.e., user-user relation data, so as to leverage the power of social homophily and social influence in a more effective and efficient way.…”
Section: Denoising for Recommendation
Confidence: 99%
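As a hedged illustration of the graph-sparsification idea mentioned in this excerpt, the sketch below prunes low-confidence user-user edges by the cosine similarity of their endpoint embeddings. The function name, scoring rule, and quantile threshold are assumptions made for exposition, not the method of [17] or of the citing paper.

```python
import numpy as np

def sparsify_social_graph(edges, user_emb, keep_ratio=0.8):
    """Keep the `keep_ratio` fraction of edges whose endpoint embeddings
    agree most (cosine similarity), dropping likely-noisy relations."""
    u, v = edges[:, 0], edges[:, 1]
    eu = user_emb[u] / np.linalg.norm(user_emb[u], axis=1, keepdims=True)
    ev = user_emb[v] / np.linalg.norm(user_emb[v], axis=1, keepdims=True)
    scores = (eu * ev).sum(axis=1)                     # cosine similarity per edge
    threshold = np.quantile(scores, 1.0 - keep_ratio)  # score cutoff for pruning
    return edges[scores >= threshold]                  # retain high-confidence edges

# Example: 5 random user-user edges over 10 users with 16-dim embeddings.
rng = np.random.default_rng(0)
edges = rng.integers(0, 10, size=(5, 2))
emb = rng.normal(size=(10, 16))
print(sparsify_social_graph(edges, emb, keep_ratio=0.6).shape)
```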
“…Wang et al. [43] assume that predictions on noisy items vary across different recommendation models and propose an ensemble method that minimizes the KL-divergence between the two models' predictions. Gao et al. [8] argue that models are prone to memorizing easy, clean patterns at the early stage of training, so they collect the interactions memorized at that stage as guidance for the subsequent training.…”
Section: Denoising Recommendation
Confidence: 99%
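To make the cross-model agreement idea concrete, here is a hedged sketch of a KL-divergence loss between two models' interaction probabilities, in the spirit of the approach attributed to Wang et al. [43]. The symmetrization, clamping constant, and function name are assumptions for illustration, not the paper's exact objective.

```python
import torch

def agreement_loss(logits_a, logits_b):
    """Symmetric KL between two models' per-interaction click probabilities,
    each treated as a two-class (click / no-click) distribution."""
    pa = torch.sigmoid(logits_a)
    pb = torch.sigmoid(logits_b)
    dist_a = torch.stack([pa, 1 - pa], dim=-1)
    dist_b = torch.stack([pb, 1 - pb], dim=-1)
    # Clamp before log to avoid log(0) on saturated probabilities.
    kl_ab = (dist_a * (dist_a.clamp_min(1e-8).log() - dist_b.clamp_min(1e-8).log())).sum(-1)
    kl_ba = (dist_b * (dist_b.clamp_min(1e-8).log() - dist_a.clamp_min(1e-8).log())).sum(-1)
    return (kl_ab + kl_ba).mean() / 2

# Example: two models scoring the same batch of 4 user-item pairs.
la = torch.tensor([2.0, -1.0, 0.5, 3.0])
lb = torch.tensor([1.5, -0.5, 0.0, 2.5])
print(agreement_loss(la, lb))  # small value -> the two models largely agree
```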