Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2020
DOI: 10.1145/3397271.3401233

A Re-visit of the Popularity Baseline in Recommender Systems

Abstract: Popularity is often included in experimental evaluation to provide a reference performance for a recommendation task. To understand how the popularity baseline is defined and evaluated, we sample 12 papers from top-tier conferences including KDD, WWW, SIGIR, and RecSys, and 6 open-source toolkits. We note that the widely adopted MostPop baseline simply ranks items based on the number of interactions in the training data. We argue that the current evaluation of popularity (i) does not reflect the popular items at t…
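The MostPop baseline described in the abstract (ranking all items by their interaction count in the training data) can be written in a few lines. A minimal sketch in Python follows, assuming a pandas DataFrame with one row per user-item interaction; the column name and usage line are illustrative assumptions, not taken from the paper or the surveyed toolkits.

import pandas as pd

def most_pop(train: pd.DataFrame, k: int = 10) -> list:
    """MostPop: rank items by their number of interactions in the training data.

    Assumes `train` has one row per user-item interaction and an 'item' column;
    the column name is an assumption for illustration, not from the paper.
    """
    counts = train["item"].value_counts()   # interactions per item, sorted descending
    return counts.head(k).index.tolist()    # the k most-interacted items

# Every user receives the same non-personalized top-k list, e.g.:
# recs = most_pop(train_interactions, k=10)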

Cited by 43 publications (25 citation statements); references 18 publications.
“…Positivity bias leads to an over-representation of positive feedback because users rate the items they like more often [37]. The effect of these biases is generally dynamic: they can change drastically over time [9,26,58]. For instance, items are rarely popular for very extended periods of time, and therefore, we may expect a dynamic effect between the age of items and popularity bias.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…While the impact of dynamic bias has previously been pointed out [26,58], no prior debiasing method considers a scenario in which both selection bias and user preferences change over time. All existing debiased recommendation methods assume a static effect of selection bias regardless of whether they model dynamic user preferences.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…We compare PP-Rec with two groups of baselines. The first group is popularity-based news recommendation methods, including: (1) ViewNum (Yang, 2016): using the number of news view to measure news popularity; (2) RecentPop (Ji et al, 2020): using the number of news view in recent time to measure news popularity; (3) SCENE (Li et al, 2011): using view frequency to measure news popularity and adjusting the ranking of news with same topics based on their popularity; (4) CTR (Ji et al, 2020): using news CTR to measure news popularity. The second group is personalized news recommendation methods, containing: (1) EBNR (Okura et al, 2017): utilizing an auto-encoder to learn news representations and a GRU network to learn user representations; (2) DKN (Wang et al, 2018): utilizing a knowledge-aware CNN network to learn news representations from news titles and entities;…”
Section: Performance Evaluation (citation type: mentioning, confidence: 99%)
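For context, the popularity measures contrasted in the quote above (ViewNum, RecentPop, CTR) differ only in how the per-item popularity score is computed. A minimal sketch follows, assuming a log of timestamped impressions with a binary click column; all column names and the window length are illustrative assumptions, not from the cited papers.

import pandas as pd

def popularity_scores(log: pd.DataFrame, now: pd.Timestamp,
                      recent_window: pd.Timedelta) -> pd.DataFrame:
    """Illustrative per-item popularity scores (column names are assumptions).

    Assumes `log` holds one row per impression with columns
    'item', 'timestamp', and 'clicked' (0/1).
    """
    view_num = log.groupby("item").size()                        # ViewNum: total views per item
    recent = log[log["timestamp"] >= now - recent_window]
    recent_pop = recent.groupby("item").size()                   # RecentPop: views in a recent window
    ctr = log.groupby("item")["clicked"].mean()                  # CTR: clicks per view
    return pd.DataFrame({"view_num": view_num,
                         "recent_pop": recent_pop,
                         "ctr": ctr}).fillna(0)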