Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512075
Learning Recommenders for Implicit Feedback with Importance Resampling

Citation Types: 0 supporting, 15 mentioning, 0 contrasting

Cited by 21 publications (15 citation statements)
References 26 publications
“…However, these solutions are only applicable in certain scenarios and cannot produce hard negative samples at scale based on current user interest representations. Finally, the most related hard negative sampling strategies are those that produce negative samples according to the current user representation [5,6,11,33,51]. For example, the work [51] uses a dynamic rejection sampling strategy according to item ranking, the work [11] leverages score-based memory update and variance-based sampling, and a concurrent work [6] generates hard negative samples for sequential recommendation using the Next Negative Item (NNI) Sampler with Pre-selection and Post-selection.…”
Section: Negative Sampling in Recommender Systems (citation type: mentioning; confidence: 99%)
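For context, a minimal Python sketch of the rank-aware rejection idea attributed to [51] above: candidates are drawn uniformly and accepted with probability that grows with their score rank under the current user representation, so harder negatives survive more often. The embedding variables, acceptance rule, and function name are illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

def rejection_sample_negative(user_emb, item_embs, interacted, rng, max_tries=100):
    """Illustrative rank-aware rejection sampling (an assumption, not the
    exact procedure of [51]): uniform candidates are accepted with a
    probability equal to their score-rank percentile, so items the
    current model ranks higher (harder negatives) are kept more often."""
    n_items = item_embs.shape[0]
    scores = item_embs @ user_emb                            # current model scores
    percentile = scores.argsort().argsort() / (n_items - 1)  # 0 = lowest-ranked item
    candidate = int(rng.integers(n_items))
    for _ in range(max_tries):
        candidate = int(rng.integers(n_items))
        if candidate in interacted:
            continue
        if rng.random() < percentile[candidate]:  # favor high-ranked candidates
            return candidate
    return candidate  # fallback: last candidate after max_tries draws

# Example: rng = np.random.default_rng(0), then call per training pair.
```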
“…More details can be found in [2,4]. We adopt the widely used metrics, i.e., Recall, Hit Rate, and NDCG (Normalized Discounted Cumulative Gain), to evaluate our proposed solution. The metrics are computed with the top 20/50 matched candidates.…”
Section: Training and Evaluation (citation type: mentioning; confidence: 99%)
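As a concrete companion to this statement, a minimal sketch of how Recall, Hit Rate, and NDCG are typically computed over the top-k candidates for a single user with binary relevance; the function name and data layout are assumptions:

```python
import numpy as np

def eval_top_k(ranked_items, relevant, k=20):
    """Recall@k, Hit Rate@k, and NDCG@k for one user. `ranked_items`
    lists item ids by descending predicted score; `relevant` is the
    set of held-out positive item ids."""
    top_k = ranked_items[:k]
    hits = [1.0 if item in relevant else 0.0 for item in top_k]
    recall = sum(hits) / max(len(relevant), 1)
    hit_rate = 1.0 if any(hits) else 0.0
    dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, hit_rate, ndcg

# Example: per-user metrics at k=20, averaged over all test users.
# recall, hr, ndcg = eval_top_k(model_ranking, {42, 7}, k=20)
```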
“…We include projects with contribution history of at least 3 months according to their commit history. To ensure that our model generalizes on a wide range of topics, popularity, and project scales, we first select 3 subsets of repositories using their GitHub topics, which are project labels created by the project owners. Then, we randomly sample 300 repositories from each subset considering their numbers of project files and stars.…”
Section: Experiments 4.1 Experimental Settings (citation type: mentioning; confidence: 99%)
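A rough sketch of the sampling step this statement describes, under stated assumptions: `repos_by_topic` is a hypothetical mapping from a GitHub topic to repositories that already passed the 3-month contribution-history filter, and the statement's weighting by project files and stars is not modeled here.

```python
import random

def sample_repositories(repos_by_topic, n_per_topic=300, seed=0):
    """Illustrative selection: for each GitHub-topic subset, randomly
    sample up to `n_per_topic` repositories. Input is a hypothetical
    dict of topic -> candidate repos that already satisfy the
    >= 3 months contribution-history filter."""
    rng = random.Random(seed)
    return {topic: rng.sample(repos, min(n_per_topic, len(repos)))
            for topic, repos in repos_by_topic.items()}
```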
“…The advances in deep learning have greatly facilitated the evolution of recommender systems [3,16,49,64,65]. In particular, motivated by the success of Graph Neural Networks (GNN) [15,26,70,71], a series of graph-based recommender systems [27,49,52] are proposed, which organize user behaviors into heterogeneous interaction graphs.…”
Section: Recommender Systems (citation type: mentioning; confidence: 99%)
“…However, uniformly sampled negative items may not be informative, contributing little to the gradients and the convergence [28,40]. To overcome this obstacle, researchers have proposed many Hard Negative Sampling (HNS) methods, such as Dynamic Negative Sampling (DNS) [40] and Softmax-based Sampling methods [9,21,33]. Superior to uniform sampling, HNS methods oversample high-scored negative items, which are more informative with large gradients and thus accelerate the convergence [8].…”
Section: Introduction (citation type: mentioning; confidence: 99%)
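For illustration, a minimal sketch of the DNS idea [40] summarized here: draw a small uniform candidate pool and keep the item the current model scores highest, i.e. the hardest negative in the pool. Variable names and the pool size are assumptions:

```python
import numpy as np

def dns_negative(user_emb, item_embs, interacted, rng, pool_size=16):
    """Illustrative Dynamic Negative Sampling (DNS): uniformly draw a
    small candidate pool, drop observed positives, and return the
    highest-scored remaining item under the current model."""
    n_items = item_embs.shape[0]
    pool = [int(j) for j in rng.integers(0, n_items, size=pool_size)
            if int(j) not in interacted]
    if not pool:                              # rare: fall back to uniform
        return int(rng.integers(n_items))
    scores = item_embs[pool] @ user_emb       # scores under current model
    return pool[int(np.argmax(scores))]
```

Oversampling the pool's top-scored item is what makes the selected negatives "hard": they carry larger gradients than uniform draws, which is the convergence benefit the statement attributes to HNS methods [8].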