2023
DOI: 10.1088/2632-2153/acd749

Theoretical characterization of uncertainty in high-dimensional linear classification

Abstract: Being able to reliably assess not only the accuracy but also the uncertainty of models' predictions is an important endeavour in modern machine learning. Even if the model generating the data and labels is known, computing the intrinsic uncertainty after learning the model from a limited number of samples amounts to sampling the corresponding posterior probability measure. Such sampling is computationally challenging in high-dimensional problems and theoretical results on heuristic uncertainty estimators in h…
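The abstract frames the intrinsic uncertainty as the result of sampling the posterior measure over the weights of a linear classifier. As a concrete illustration only, and not the estimator analysed in the paper, the sketch below samples the posterior of a Bayesian logistic-regression model with a Gaussian prior using random-walk Metropolis; the function names, the prior_var parameter, and the toy data are all assumptions made for this example.

```python
# Minimal sketch (not the paper's method): what "sampling the posterior
# probability measure" can look like for a linear classifier. A random-walk
# Metropolis sampler for Bayesian logistic regression with a Gaussian prior;
# in high dimension this kind of sampler mixes slowly, which is the
# computational difficulty the abstract alludes to.
import numpy as np

def log_posterior(w, X, y, prior_var=1.0):
    """Log of the unnormalized posterior: Bernoulli likelihood + Gaussian prior."""
    logits = X @ w
    loglik = np.sum(y * logits - np.logaddexp(0.0, logits))  # labels y in {0, 1}
    logprior = -0.5 * np.sum(w ** 2) / prior_var
    return loglik + logprior

def sample_posterior(X, y, n_steps=5000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    lp = log_posterior(w, X, y)
    samples = []
    for _ in range(n_steps):
        w_prop = w + step * rng.normal(size=w.shape)
        lp_prop = log_posterior(w_prop, X, y)
        if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
            w, lp = w_prop, lp_prop
        samples.append(w.copy())
    return np.array(samples)

# Toy usage: posterior-averaged class probabilities are the intrinsic
# uncertainty estimate for each point.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
ws = sample_posterior(X, y)[1000:]                      # drop burn-in
p = (1.0 / (1.0 + np.exp(-X @ ws.T))).mean(axis=1)      # predictive probabilities
```

The cost of such samplers grows quickly with the dimension, which is the computational difficulty the abstract points to before turning to heuristic uncertainty estimators.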

Cited by 3 publications (2 citation statements)
References 43 publications
“…It appears that a model becomes increasingly underconfident as the training set grows, hence the increase in miscalibration area. This is a surprising result as it has been shown that a well-specified model should be calibrated in the limit where the training set size is much larger than the number of features [42]. Nonetheless, similar behavior has been reported with GP-deep neural network hybrids [43], and overparameterized deep neural networks [44] in which calibration error increases with training set size and training time, respectively.…”
Section: Uncertainty Quantification (mentioning)
confidence: 52%
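The statement above refers to a miscalibration area that grows with the training set size. As a point of reference, and not necessarily the exact definition used in the cited works, a common way to compute it for Gaussian predictive distributions N(mu_i, sigma_i^2) is the area between the observed-coverage curve and the diagonal of a reliability diagram; the function name and toy arrays below are illustrative.

```python
# Sketch of a "miscalibration area" for Gaussian predictive distributions:
# the area between the observed-coverage curve and the ideal diagonal.
import numpy as np
from scipy.stats import norm

def miscalibration_area(y_true, mu, sigma, n_levels=100):
    """Integrate |observed coverage - nominal coverage| over confidence levels.

    For each nominal level p, count how often y_true falls inside the central
    p-interval of N(mu, sigma^2); a calibrated model gives coverage == p.
    """
    levels = np.linspace(0.01, 0.99, n_levels)
    z = norm.ppf(0.5 + levels / 2.0)                   # half-widths of central intervals
    inside = np.abs(y_true - mu)[None, :] <= z[:, None] * sigma[None, :]
    observed = inside.mean(axis=1)                     # empirical coverage per level
    return np.trapz(np.abs(observed - levels), levels)

# Toy usage: an underconfident model (sigma too large) has a nonzero area.
rng = np.random.default_rng(0)
mu = rng.normal(size=1000)
y = mu + rng.normal(scale=1.0, size=1000)
print(miscalibration_area(y, mu, np.full(1000, 2.0)))  # inflated sigma, underconfident
print(miscalibration_area(y, mu, np.full(1000, 1.0)))  # well calibrated, near zero
```

Both underconfidence (sigma systematically too large, observed coverage above the nominal level) and overconfidence (sigma too small, coverage below it) inflate this area; only the sign of the deviation distinguishes the two.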
“…In all cases in which an uncertainty-quantified model is miscalibrated (e.g. for ensembles of neural networks that tend to produce overconfident uncertainty estimates [68,69,71,72]), it is possible to apply a post-hoc calibration step on a hold-out set (in practice we use the validation set) to globally correct a model's uncertainty estimates. In the simplest case, assuming that the target properties are Gaussian distributed, one can apply a simple rescaling σ ← ασ.…”
Section: Post-hoc Calibration (mentioning)
confidence: 99%
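The rescaling σ ← ασ mentioned in the statement has a simple closed form when α is chosen to maximize the Gaussian likelihood on the hold-out set: α is the root-mean-square of the standardized residuals on that set. The sketch below shows this choice under the stated Gaussian assumption; it is one reasonable way to fit α, not necessarily the procedure used in the cited work.

```python
# Sketch of the sigma <- alpha * sigma post-hoc rescaling, assuming
# Gaussian-distributed targets. The closed-form alpha is the maximum-likelihood
# global scale on the hold-out (validation) set.
import numpy as np

def fit_variance_scale(y_val, mu_val, sigma_val):
    """ML estimate of a global scale alpha for Gaussian predictions.

    Minimizing the Gaussian negative log-likelihood over alpha gives
    alpha^2 = mean(((y - mu) / sigma)^2), i.e. the empirical variance
    of the standardized residuals (z-scores) on the validation set.
    """
    z = (y_val - mu_val) / sigma_val
    return np.sqrt(np.mean(z ** 2))

def calibrate(sigma_test, alpha):
    """Apply the global rescaling sigma <- alpha * sigma at test time."""
    return alpha * sigma_test

# Toy usage with hypothetical arrays: an overconfident model gets its
# test-time uncertainties inflated by alpha > 1.
rng = np.random.default_rng(1)
mu = rng.normal(size=500)
y = mu + rng.normal(scale=2.0, size=500)   # true noise std = 2
sigma = np.ones(500)                       # model claims std = 1 (overconfident)
alpha = fit_variance_scale(y, mu, sigma)   # roughly 2
sigma_cal = calibrate(sigma, alpha)
```

With this choice, α > 1 inflates the uncertainties of an overconfident model and α < 1 shrinks those of an underconfident one, which is exactly the global correction the statement describes.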