2022
DOI: 10.1609/aaai.v36i4.20326

Obtaining Calibrated Probabilities with Personalized Ranking Models

Abstract: For personalized ranking models, the well-calibrated probability of an item being preferred by a user has great practical value. While existing work shows promising results in image classification, probability calibration has not been much explored for personalized ranking. In this paper, we aim to estimate the calibrated probability of how likely a user will prefer an item. We investigate various parametric distributions and propose two parametric calibration methods, namely Gaussian calibration and Gamma calibration …
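As a rough illustration of the parametric-calibration idea summarized in the truncated abstract, the sketch below fits a post-hoc calibration map from raw ranking scores to preference probabilities on a held-out split. The quadratic-in-score form σ(a·s² + b·s + c) is an assumption standing in for the paper's Gaussian calibration (it is what follows from modeling class-conditional scores as Gaussians); the function names, optimizer choice, and synthetic data are illustrative, not the authors' reference implementation.

```python
# Sketch of a parametric calibration map for ranking scores, assuming the
# form sigma(a*s^2 + b*s + c) that arises when class-conditional score
# distributions are modeled as Gaussians. Illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # numerically stable sigmoid

def gaussian_calibration_fit(scores, labels):
    """Fit (a, b, c) by minimizing negative log-likelihood on held-out pairs."""
    def nll(params):
        a, b, c = params
        p = expit(a * scores**2 + b * scores + c)
        eps = 1e-12
        return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
    return minimize(nll, x0=np.zeros(3), method="L-BFGS-B").x

def gaussian_calibration_predict(scores, params):
    a, b, c = params
    return expit(a * scores**2 + b * scores + c)

# Usage with synthetic data: unbounded raw ranking scores and binary
# preference labels from a held-out calibration split.
rng = np.random.default_rng(0)
scores = rng.normal(size=1000)
labels = (rng.random(1000) < expit(2.0 * scores)).astype(float)
params = gaussian_calibration_fit(scores, labels)
probs = gaussian_calibration_predict(scores, params)
```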

Cited by 10 publications (13 citation statements); references 9 publications.
“…They often output the ranking score that can have any value of an unbounded real number [12,50,64], making it difficult to treat it as a probability. Furthermore, even when a model is trained to output probabilities [13,35], it has been demonstrated that these probabilities may not accurately reflect the true likelihood (i.e., model miscalibration) [11,29].…”
Section: Calibrated Interaction Probability (citation type: mentioning, confidence: 99%)
“…We adopt Platt scaling g_φ(s) = σ(a·s + b) [46], a generalized form of temperature scaling [11]. This calibration function has been deployed effectively for model calibration in computer vision [10,40], natural language processing [9], and recommender system [29,30]. The key difference is that PerK instantiates the calibration function for each user, while previous calibration work [29] deploys one global calibration function covering all users.…”
Section: Calibrated Interaction Probability (citation type: mentioning, confidence: 99%)
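As a concrete reading of the quoted passage, the sketch below fits the Platt-scaling map g_φ(s) = σ(a·s + b) by minimizing negative log-likelihood on held-out interactions, either once globally or separately per user as described for PerK. The data layout (dicts of per-user scores and labels) and function names are assumptions for illustration, not the cited implementation.

```python
# Minimal sketch of Platt scaling g(s) = sigmoid(a*s + b), fit either
# globally or per user as the quoted passage describes for PerK.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_platt(scores, labels):
    """Fit (a, b) of sigmoid(a*s + b) by minimizing the calibration NLL."""
    def nll(params):
        a, b = params
        p = expit(a * scores + b)
        eps = 1e-12
        return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
    return minimize(nll, x0=np.array([1.0, 0.0]), method="L-BFGS-B").x

# Global calibration: one (a, b) shared by every user.
# Per-user calibration: a separate (a, b) per user, as in the quoted setup.
def fit_per_user(user_scores, user_labels):
    return {u: fit_platt(user_scores[u], user_labels[u]) for u in user_scores}
```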
“…The uncertainty quantification can be characterized by using Bayesian methods, ensemble methods or calibration with binning and scaling [1]. The uncertainty of personalized ranking probabilities is learned by uncertainty calibration methods [23,15] and later applied in the online advertising systems [40,43].…”
Section: Introduction (citation type: mentioning, confidence: 99%)
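For the binning-style calibrators mentioned in the quoted passage, a minimal histogram-binning sketch is shown below: each held-out score is replaced by the empirical positive rate of its bin. The bin count and equal-width binning scheme are illustrative choices, not taken from the cited work.

```python
# Sketch of histogram binning, a binning-style calibrator: map each score
# to the empirical positive rate of the bin it falls into.
import numpy as np

def histogram_binning_fit(scores, labels, n_bins=15):
    """Compute equal-width bin edges and per-bin empirical positive rates."""
    edges = np.linspace(scores.min(), scores.max(), n_bins + 1)
    bin_ids = np.digitize(scores, edges[1:-1])  # values in 0 .. n_bins-1
    bin_probs = np.array([
        labels[bin_ids == b].mean() if np.any(bin_ids == b) else 0.5
        for b in range(n_bins)
    ])
    return edges, bin_probs

def histogram_binning_predict(scores, edges, bin_probs):
    bin_ids = np.digitize(scores, edges[1:-1])
    return bin_probs[bin_ids]
```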