2022
DOI: 10.48550/arxiv.2202.12780
Preprint

Model Comparison and Calibration Assessment: User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice

Abstract: One of the main tasks of actuaries and data scientists is to build good predictive models for certain phenomena such as the claim size or the number of claims in insurance. These models ideally exploit given feature information to enhance the accuracy of prediction. This user guide revisits and clarifies statistical techniques to assess the calibration or adequacy of a model on the one hand, and to compare and rank different models on the other hand. In doing so, it emphasises the importance of specifying the …
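As a rough illustration of the workflow the abstract describes (choose a scoring function that is strictly consistent for the target functional, average it on held-out data, and prefer the model with the lower score), here is a minimal Python sketch. It is not taken from the paper: the data are synthetic, models A and B are hypothetical placeholders, and the Poisson deviance serves as one example of a scoring function that is consistent for the mean of a claim count.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)            # a single synthetic rating feature
lam = np.exp(0.2 + 0.3 * x)       # true conditional mean of the claim count
y = rng.poisson(lam)              # observed claim counts

pred_a = np.full(n, y.mean())     # model A: intercept-only, ignores the feature
pred_b = lam                      # model B: uses the feature (the true mean, as a stand-in)

def mean_poisson_deviance(y, mu):
    """Average Poisson deviance, a scoring function that is strictly consistent
    for the (conditional) mean of count data; smaller is better."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    # unit deviance 2*(y*log(y/mu) - (y - mu)), with y*log(y/mu) := 0 when y = 0
    log_term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return float(np.mean(2.0 * (log_term - (y - mu))))

print("model A (intercept only):", mean_poisson_deviance(y, pred_a))
print("model B (uses feature)  :", mean_poisson_deviance(y, pred_b))
```

Under these assumptions, model B attains the lower average score because it exploits the feature information, which is exactly the kind of ranking a comparison based on a consistent scoring function is designed to reveal.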

Cited by 1 publication (1 citation statement)
References 59 publications (156 reference statements)
“…As a baseline, we also show results for a dummy classifier, which always predicts the most frequent class. For a discussion and explanation of different scoring metrics see Fissler et al (2022).…”
Section: Case Study 1: Use English Accident Reports To Predict the Nu... (citation type: mentioning)
confidence: 99%
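To make the quoted baseline concrete, the sketch below (not taken from the citing paper) scores a majority-class dummy against a hypothetical probabilistic model with the Brier score, a proper scoring rule of the kind surveyed in Fissler et al. (2022). The labels and predicted probabilities are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
p_true = 1.0 / (1.0 + np.exp(-rng.normal(size=n)))  # hypothetical true class-1 probabilities
y = rng.binomial(1, p_true)                         # observed binary labels

majority = int(y.mean() >= 0.5)                     # most frequent class in the sample
dummy_prob = np.full(n, float(majority))            # dummy: puts all probability on the majority class
model_prob = p_true                                 # stand-in for a fitted model's probabilities

def brier(y, p):
    """Mean Brier score, a proper scoring rule for probability forecasts; smaller is better."""
    return float(np.mean((p - y) ** 2))

print("accuracy, dummy :", float(np.mean((dummy_prob >= 0.5) == y)))
print("Brier,    dummy :", brier(y, dummy_prob))
print("Brier,    model :", brier(y, model_prob))
```

A dummy that always predicts the most frequent class can look reasonable on accuracy alone, while a proper score such as the Brier score penalises its uninformative probabilities; this is one reason for reporting several scoring metrics, as the quoted case study does.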