2021
DOI: 10.48550/arxiv.2112.10327
Preprint

Classifier Calibration: How to assess and improve predicted class probabilities: a survey

Abstract: This paper provides both an introduction to and a detailed overview of the principles and practice of classifier calibration. A well-calibrated classifier correctly quantifies the level of uncertainty or confidence associated with its instance-wise predictions. This is essential for critical applications, optimal decision making, cost-sensitive classification, and for some types of context change. Calibration research has a rich history which predates the birth of machine learning as an academic field by decades. […]

Cited by 4 publications (4 citation statements) | References 33 publications

Citation statements (ordered by relevance):
“…Imbalanced class data can result in models for which the overall performance is not representative of the performance for the underrepresented classes. Post hoc Dirichlet calibration (DC) was used to adjust the model’s output probabilities and address potential problems with overconfidence in the predictions [ 13 ].…”
Section: Methods (mentioning, confidence: 99%)
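To make the Dirichlet calibration step in this excerpt concrete, below is a minimal sketch of the standard construction (a multinomial logistic regression fitted on the log of the uncalibrated probability vectors). The helper names `fit_dirichlet_calibrator` and `apply_calibrator`, and the use of scikit-learn's `LogisticRegression` with a large `C` to approximate an unregularized maximum-likelihood fit, are illustrative assumptions, not code from the cited papers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EPS = 1e-12  # guards log(0) when a model outputs hard 0/1 probabilities

def fit_dirichlet_calibrator(probs_val, y_val):
    """Dirichlet calibration sketch: multinomial logistic regression on the
    log of the uncalibrated class-probability vectors.
    probs_val: array of shape (n_samples, n_classes); y_val: class labels."""
    log_probs = np.log(np.clip(probs_val, EPS, 1.0))
    # Large C ~ nearly unregularized fit; the default lbfgs solver yields a
    # multinomial model for multiclass targets.
    return LogisticRegression(C=1e6, max_iter=1000).fit(log_probs, y_val)

def apply_calibrator(calibrator, probs):
    """Map raw probability vectors to calibrated probability vectors."""
    return calibrator.predict_proba(np.log(np.clip(probs, EPS, 1.0)))
```

Fitting the calibrator on a held-out validation split (rather than the training data) is what makes this a post hoc adjustment of an already-trained model's outputs.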
“…Probability calibration can be used to reduce the false positive rates of multiclass classifiers. In accordance with the main concept of calibration [ 13 ], a multiclass probabilistic classifier should only be considered well-calibrated if instances of a particular class receive probabilities in accordance with the actual class distribution of the data. For example, if we have amongst the test instances a predicted probability vector s = [0.1, 0.2, 0.7], the class distribution of s should be approximately 10%, 20%, and 70% for the first, second, and third classes, respectively.…”
Section: Introduction (mentioning, confidence: 99%)
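The notion of multiclass calibration quoted above can be checked empirically. The sketch below is one common way to do so, a classwise expected calibration error: bin each class's predicted probabilities and compare the average prediction in each bin with the observed frequency of that class. The function name `classwise_ece` and the equal-width binning are assumptions made for illustration:

```python
import numpy as np

def classwise_ece(probs, y_true, n_bins=10):
    """Classwise ECE sketch: for each class, bin its predicted probabilities
    and accumulate the weighted gap between mean predicted probability and
    empirical class frequency in each bin; average over classes."""
    n, k = probs.shape
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for c in range(k):
        p_c = probs[:, c]
        is_c = (y_true == c).astype(float)
        for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
            # include the right edge in the last bin
            mask = (p_c >= lo) & ((p_c < hi) if i < n_bins - 1 else (p_c <= hi))
            if mask.any():
                gap = abs(p_c[mask].mean() - is_c[mask].mean())
                total += mask.mean() * gap  # weight by bin occupancy
    return total / k
```

For the quoted example, instances scored like s = [0.1, 0.2, 0.7] contribute to one bin per class; a well-calibrated model drives the per-bin gaps, and hence this score, toward zero.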
“…where w ∈ R is the shape parameter and b ∈ R is the location parameter. The parameters are estimated by maximizing the log-likelihood on the validation set [32], [33].…”
Section: Calibration (mentioning, confidence: 99%)
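The two-parameter fit described in this excerpt can be sketched as follows. The excerpt does not spell out the functional form, so this sketch assumes the standard Platt-scaling sigmoid p(y=1|z) = σ(wz + b) over a binary score z, with w and b playing the roles of the quoted shape and location parameters; they are found by minimizing the negative Bernoulli log-likelihood on held-out validation scores:

```python
import numpy as np
from scipy.optimize import minimize

def fit_sigmoid_calibrator(scores_val, y_val):
    """Platt-style sketch: fit p(y=1|z) = sigmoid(w*z + b) by maximizing the
    Bernoulli log-likelihood on a validation set (labels in {0, 1})."""
    z = np.asarray(scores_val, dtype=float)
    y = np.asarray(y_val, dtype=float)

    def neg_log_lik(params):
        w, b = params
        logits = w * z + b
        # NLL = sum(log(1 + exp(logits))) - sum(y * logits), computed stably
        return np.sum(np.logaddexp(0.0, logits)) - np.sum(y * logits)

    res = minimize(neg_log_lik, x0=np.array([1.0, 0.0]), method="BFGS")
    return res.x  # fitted (w, b)
```

As with the Dirichlet sketch above, estimating w and b on validation data rather than training data avoids reusing the scores the model was already optimized on.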