2021
DOI: 10.1007/978-3-030-86380-7_12
Estimating Expected Calibration Errors

Cited by 10 publications (6 citation statements) · References 8 publications
“…Model calibration is a crucial aspect of ML: it ensures that predicted probabilities align with the true probabilities of events, and poorly calibrated models may provide misleading confidence scores, impacting the interpretability and trustworthiness of predictions [38, 39]. However, few studies have focused on evaluating the calibration of the classification models investigated in clinical settings [36, 38, 40]. In our study, we generated calibration curves for the four models to investigate the relationship between the mean predicted probability of the positive class and the observed fraction of positive instances.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
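The calibration-curve construction described in this statement (binning predictions by confidence, then comparing the mean predicted probability with the observed positive fraction in each bin) can be sketched in plain NumPy. This is an illustrative sketch, not the cited study's code; the function name and the choice of equal-width bins are our assumptions (scikit-learn's `sklearn.calibration.calibration_curve` provides an equivalent utility).

```python
import numpy as np

def calibration_curve(y_true, y_prob, n_bins=10):
    """For each equal-width confidence bin, return the mean predicted
    probability and the observed fraction of positive labels."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Bin index per prediction; inner edges only, so values land in 0..n_bins-1.
    ids = np.digitize(y_prob, edges[1:-1])
    mean_pred, frac_pos = [], []
    for b in range(n_bins):
        mask = ids == b
        if mask.any():  # skip empty bins
            mean_pred.append(y_prob[mask].mean())
            frac_pos.append(y_true[mask].mean())
    return np.array(mean_pred), np.array(frac_pos)
```

Plotting `frac_pos` against `mean_pred` gives the calibration (reliability) curve; a perfectly calibrated model places every point on the diagonal.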
“…In our study, we generated the calibration curves for the four models to investigate the relationship between the mean predicted probabilities of the positive class and the observed fraction of positive instances. Additionally, we calculated the Expected Calibration Error (ECE) as a key metric used to quantify the calibration performance of a model [36]. Our ECE results revealed good calibration of the four models we investigated for prediction of acute pain intensity, MEDD, and analgesic efficacy in patients with OC/OPC receiving RT.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
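The Expected Calibration Error mentioned here is commonly computed as the sample-weighted average, over confidence bins, of the absolute gap between the per-bin observed positive rate and the per-bin mean confidence. A minimal sketch, assuming equal-width binning (the binning scheme used by the cited study is not specified here):

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """ECE = sum over bins of (bin size / n) * |observed positive rate
    in bin - mean predicted probability in bin|."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.digitize(y_prob, edges[1:-1])  # bin index per prediction
    n = len(y_prob)
    ece = 0.0
    for b in range(n_bins):
        mask = ids == b
        if mask.any():
            gap = abs(y_true[mask].mean() - y_prob[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece
```

A value near 0 indicates good calibration; the maximum possible value is 1.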
“…They further introduce another binning-based estimator, which is also biased. Consequently, [28] concluded that current calibration error estimators are unfit for the low-data regime. In later sections, we will further demonstrate that, even if there exists a perfect estimator, the TCE_p and CWCE_p fail to quantify the extent to which a model violates Condition 2.1 of being strongly calibrated.…”
Section: Calibration Errors · Citation type: mentioning · Confidence: 99%
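The finite-sample bias of binning-based estimators discussed in this statement can be seen directly in simulation: if labels are drawn from the model's own predicted probabilities, the model is perfectly calibrated by construction (true ECE is 0), yet the plug-in binned estimate is still strictly positive on a finite sample. A small sketch (estimator name, seed, and sample size are our own choices, not from the cited work):

```python
import numpy as np

def binned_ece(y_true, y_prob, n_bins=10):
    """Plug-in binned ECE estimate with equal-width bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.digitize(y_prob, edges[1:-1])
    n = len(y_prob)
    return sum(
        ((ids == b).sum() / n)
        * abs(y_true[ids == b].mean() - y_prob[ids == b].mean())
        for b in range(n_bins) if (ids == b).any()
    )

rng = np.random.default_rng(0)
p = rng.uniform(size=50)                      # predicted probabilities
y = (rng.uniform(size=50) < p).astype(float)  # labels drawn from p: true ECE = 0
est = binned_ece(y, p)                         # positive despite perfect calibration
```

The gap `est - 0` is the estimator's error on this sample; averaged over many draws it stays positive, which is the upward bias the citing authors refer to.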