Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval 2013
DOI: 10.1145/2484028.2484126
Optimizing top-n collaborative filtering via dynamic negative item sampling

Abstract: Collaborative filtering techniques rely on aggregated user preference data to make personalized predictions. In many cases, users are reluctant to express their preferences explicitly, and many recommender systems have to infer them from implicit user behaviors, such as clicking a link on a webpage or playing a music track. Clicks and plays are good indicators of the items a user liked (i.e., positive training examples), but the items a user did not like (negative training examples) are not directly ob…
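To make the paper's core idea concrete: rather than drawing negatives uniformly from the unobserved items, dynamic sampling scores a small pool of random candidates with the current model and keeps the highest-scored one, since top-ranked unobserved items are the most informative negatives. Below is a minimal sketch of that scheme; the matrix-factorization scorer, sizes, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix-factorization scorer; sizes and data are illustrative only.
n_users, n_items, n_factors = 100, 1000, 16
U = rng.normal(scale=0.1, size=(n_users, n_factors))  # user factors
V = rng.normal(scale=0.1, size=(n_items, n_factors))  # item factors

def dynamic_negative_sample(user, observed, n_candidates=5):
    """Return the highest-scored unobserved item from a small random pool.

    Scoring a few candidates with the current model and keeping the best
    cheaply approximates rank-aware sampling: the chosen negative is the
    one the model currently (and wrongly) ranks highest.
    """
    candidates = []
    while len(candidates) < n_candidates:
        j = int(rng.integers(n_items))
        if j not in observed and j not in candidates:
            candidates.append(j)
    scores = V[candidates] @ U[user]
    return candidates[int(np.argmax(scores))]

# Usage: draw a negative for user 3, whose observed items are {1, 2, 7}.
neg = dynamic_negative_sample(user=3, observed={1, 2, 7})
```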

Cited by 201 publications (169 citation statements) · References 7 publications
“…This is because top-ranked unobserved items hurt ranking performance more than those at lower-ranked positions [27,28]. With this intuitive observation and the above analysis, we devise two simple yet effective sampling schemes to further optimize RankingFM for top-N item ranking.…”
Section: Sampling Analysis
confidence: 95%
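The position-dependence this quote appeals to is visible directly in NDCG's rank discount; the standard form below (an assumption about which discount [27,28] have in mind) shows why a misranked unobserved item near the top of the list costs far more than one further down:

```latex
% Standard NDCG discount at rank r:
D(r) = \frac{1}{\log_2(r + 1)}
% e.g. D(1) = 1.0 but D(10) \approx 0.289, so a false positive at the
% top of the list removes roughly 3.5x more discounted gain than one
% at rank 10.
```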
“…(16). For typical IR tasks, the candidate documents for a query in training datasets have usually been limited to a small size (e.g., 1000) because of query filtering [27]. However, for recommendation with implicit feedback, the set of candidate items is usually huge (e.g., 10 million), as all unobserved items should be considered as candidates.…”
Section: Sampling Analysis
confidence: 99%
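To illustrate the scale gap the quote describes, here is a minimal sketch contrasting the O(|I|) cost of an exact rank with a constant-cost sampled estimate; the array size and the WARP-style estimator are assumptions for illustration, not something the quote prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)
# One model score per candidate item; with implicit feedback this set is
# the whole catalogue (the quote cites ~10 million; smaller here).
scores = rng.normal(size=1_000_000)

def exact_rank(i):
    # O(|I|): compares item i against every other candidate item.
    return int(np.sum(scores > scores[i])) + 1

def sampled_rank(i, max_draws=100):
    # WARP-style estimate: draw random items until one outscores item i;
    # if the first violation occurs at draw n, the rank is roughly |I|/n.
    for n in range(1, max_draws + 1):
        j = int(rng.integers(len(scores)))
        if scores[j] > scores[i]:
            return len(scores) // n
    # Survived max_draws draws without a violator: rank <= |I|/max_draws.
    return len(scores) // max_draws
```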
“…Thus, it is natural to leverage GAN to distil a generative retrieval model to fit such an unknown conditional data distribution using the observed user feedback data. Second, the unified training scheme of two schools of IR models offers the potential of obtaining better retrieval models because (i) the generative retrieval adaptively provides different negative samples to the discriminative retrieval training, which is strategically diverse compared with the static negative sampling [3,35] or dynamic negative sampling using the discriminative retrieval model itself [4,49]; and (ii) the reward signal from the discriminative retrieval model provides strategic guidance for training the generative retrieval model, which is otherwise unavailable in traditional generative retrieval model training. From the generative retrieval's perspective, IRGAN is superior to traditional maximum likelihood estimation [18].…”
Section: MLE-Based Retrieval Models
confidence: 99%
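For reference, the unified scheme this quote describes is IRGAN's minimax game between a generative retrieval model p_θ and a discriminative scorer D_φ; the objective below is reproduced in its standard published form, so treat notational details as approximate:

```latex
J^{G^*, D^*} = \min_{\theta} \max_{\phi} \sum_{n=1}^{N}
  \Big( \mathbb{E}_{d \sim p_{\mathrm{true}}(d \mid q_n)} \big[ \log D_{\phi}(d \mid q_n) \big]
      + \mathbb{E}_{d \sim p_{\theta}(d \mid q_n)} \big[ \log \big( 1 - D_{\phi}(d \mid q_n) \big) \big] \Big)
```

Because sampling documents or items is discrete, the generator is trained with policy gradient (REINFORCE), with the discriminator's score acting as the reward signal the quote mentions.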
“…It is proved that the gradient of SCE matches that of MLE, with no prerequisite of infinite noise samples, which is a very promising property of SCE learning. Dynamic negative item sampling work [34,45,49] in top-N item recommendation with implicit feedback turns out to be a practical use case of SCE, with specific solutions of efficient sampling strategies.…”
Section: Noise-Contrastive Estimation
confidence: 99%
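For context, the standard NCE objective with noise distribution q and k noise samples per observation is reproduced below from memory, so treat it as approximate. Self-contrastive estimation (SCE) is the special case q = p_θ, i.e., the model serves as its own noise distribution, which is exactly what sampling negatives from the model's current ranking amounts to:

```latex
J_{\mathrm{NCE}}(\theta) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}} \big[ \log \sigma\big( s_{\theta}(x) - \log k\, q(x) \big) \big]
  + k\, \mathbb{E}_{x' \sim q} \big[ \log \big( 1 - \sigma\big( s_{\theta}(x') - \log k\, q(x') \big) \big) \big]
```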
“…The reason is that, to calculate ΔNDCG^(c)_ij for different item pairs, the recommender has to score all the items in ζ_c to find the ranks of i and j. In IR tasks, the candidate documents returned by the retrieval model have already been reduced to a small size [34]. However, in the recommendation scenario, since there is no query to filter candidate items, all unobserved items have to be considered as candidates (i.e., |ζ_c| = |I|).…”
Section: S = {(c, i, j) | i ∈ I…
confidence: 99%
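The quote's cost argument follows from how the pairwise NDCG change is usually defined; the LambdaRank-style swap delta below is an assumption about what ΔNDCG^(c)_ij denotes, and it makes clear that both current ranks r_i and r_j are needed, which in turn requires scoring every item in ζ_c:

```latex
\Delta \mathrm{NDCG}^{(c)}_{ij} =
  \frac{\left| 2^{l_i} - 2^{l_j} \right|}{\mathrm{IDCG}_c}
  \left| \frac{1}{\log_2(1 + r_i)} - \frac{1}{\log_2(1 + r_j)} \right|
```

Here l_i is the relevance label of item i and r_i its rank under the current model; obtaining r_i and r_j for an arbitrary pair requires a full scoring pass over the candidate set.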