Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512010

Cross Pairwise Ranking for Unbiased Item Recommendation

Abstract: Most recommender systems optimize the model on observed interaction data, which is shaped by the previous exposure mechanism and exhibits many biases, such as popularity bias. Commonly used loss functions, such as the pointwise Binary Cross-Entropy and the pairwise Bayesian Personalized Ranking, are not designed to account for these biases in the observed data. As a result, a model optimized with such a loss inherits the data biases, or even worse, amplifies them. For example, a few popular items take up more and mor…
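Since the abstract contrasts the two standard objectives, here is a minimal sketch (my own illustration, not code from the paper) of pointwise Binary Cross-Entropy and pairwise Bayesian Personalized Ranking on a single (positive, sampled negative) pair. Both treat an unobserved item purely as a negative signal, which is how a biased exposure mechanism in the observed data leaks into the trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(score_pos, score_neg):
    """Pointwise BCE: observed interaction labelled 1, sampled unobserved item labelled 0."""
    return -(np.log(sigmoid(score_pos)) + np.log(1.0 - sigmoid(score_neg)))

def bpr_loss(score_pos, score_neg):
    """Pairwise BPR: the observed item should be ranked above the sampled unobserved one."""
    return -np.log(sigmoid(score_pos - score_neg))

# Toy usage: model scores for one user's positive item and one sampled negative.
print(bce_loss(2.0, -1.0), bpr_loss(2.0, -1.0))
```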

Cited by 28 publications (5 citation statements). References: 33 publications.
“…They assign high sampling probability to top-ranked negative items, accounting for model status. There are also some fine-grained negative sampling methods [23,32,34,42]. Empirical experiments verify the effectiveness and efficiency of HNS.…”
Section: Related Work, Negative Sampling for Recommendation (mentioning)
confidence: 99%
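As a rough illustration of the hard negative sampling (HNS) idea in the statement above: candidate negatives that the current model scores highly are sampled with higher probability, so the sampler adapts to the model's status. The softmax temperature and candidate-pool size below are illustrative choices, not values from the cited works.

```python
import numpy as np

def sample_hard_negative(neg_scores, temperature=1.0, rng=None):
    """Pick one negative index, favouring high-scoring (hard) candidates."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(neg_scores, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # softmax over the candidate negatives
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy usage: model scores for five candidate negatives of one user.
print(sample_hard_negative([0.1, 2.3, -0.4, 1.8, 0.0]))
```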
“…The following representative sequential recommendation models were chosen as the baselines: STAMP [18], which models the long- and short-term preferences of users; GRU4Rec+ [32] is an improved version of GRU4Rec with data augmentation and accounting for shifts in the inputs; BERT4Rec [31] employs an attention module to model user behaviors and trains in an unsupervised style; FPMC [25] captures users' preferences by combining matrix factorization with first-order Markov chains; DIN [43] applies an attention module to adaptively learn user interests from their historical behaviors; BST [7] applies the transformer architecture to adaptively learn user interests from historical behaviors and the side information of users and items; LightSANs [11] is a low-rank decomposed SANs-based recommender model. We also chose the following unbiased recommendation models as baselines: UIR [29] is an unbiased recommendation model that estimates the propensity score using heuristics; CPR [33] is a pairwise debiasing approach for exposure bias; UBPR [27] is an IPS method for a nonnegative pairwise loss; DICE [41] is a debiasing model focused on user communities.…”
Section: Experimental Settings (mentioning)
confidence: 99%
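The unbiased baselines mentioned in this statement (UIR, UBPR) build on inverse propensity scoring (IPS). The sketch below shows the general propensity-weighted pointwise estimator with a popularity-based propensity heuristic; it is my illustration of the idea, and the exact formulations in UIR and UBPR differ in detail (UBPR, for instance, addresses the fact that the plain estimator's weights can go negative).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def popularity_propensity(item_counts, eta=0.5):
    """Heuristic propensity: more popular items are assumed more likely to be exposed."""
    counts = np.asarray(item_counts, dtype=float)
    return np.clip((counts / counts.max()) ** eta, 1e-3, 1.0)

def ips_pointwise_loss(scores, clicks, propensities):
    """Propensity-weighted pointwise loss over (user, item) examples with click labels."""
    pos_part = clicks / propensities * -np.log(sigmoid(scores))
    # Note: (1 - clicks/propensity) can be negative, which nonnegative variants correct for.
    neg_part = (1.0 - clicks / propensities) * -np.log(1.0 - sigmoid(scores))
    return np.mean(pos_part + neg_part)

# Toy usage: three items with different popularity counts and click labels.
prop = popularity_propensity([500, 50, 5])
print(ips_pointwise_loss(np.array([1.2, 0.3, -0.5]), np.array([1.0, 1.0, 0.0]), prop))
```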
“…Unbiased, and thus more accurate, recommendations are also the focus in [128]. In this work, Wan et al. propose a modified loss function, named the "cross pairwise" loss.…”
Section: Bias Mitigation Approaches (mentioning)
confidence: 99%
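For context on what "cross pairwise" means here, the following is a rough sketch of the idea as I read it: take two observed (user, item) pairs and rank the sum of their matched scores above the sum of the crossed scores, so that a purely item-side exposure term in the score appears on both sides and cancels out of the comparison. The actual objective in the paper may use more pairs per instance and a different formulation; treat this as an illustration, not the reference implementation.

```python
import numpy as np

def cross_pairwise_loss(s_u1_i1, s_u2_i2, s_u1_i2, s_u2_i1):
    """Rank the matched score sum above the crossed score sum for two observed pairs."""
    margin = (s_u1_i1 + s_u2_i2) - (s_u1_i2 + s_u2_i1)
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

# If a score decomposes as relevance(u, i) + exposure_bias(i), the exposure_bias(i)
# terms occur once on each side of the margin and cancel.
print(cross_pairwise_loss(2.0, 1.5, 0.3, 0.1))
```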