2020
DOI: 10.1609/aaai.v34i04.6062
Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors

Abstract: With rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. We require prediction intervals to be well-calibrated, to reflect the true uncertainties, and to be sharp. However…
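The abstract's calibration and sharpness criteria can be made concrete with a minimal sketch (not taken from the paper): empirical coverage measures how often held-out targets fall inside the predicted intervals, and average width measures sharpness. All names, values, and the 90% target level below are illustrative assumptions.

```python
# Minimal sketch (not from the paper): evaluating prediction intervals by
# empirical coverage (calibration) and average width (sharpness).
import numpy as np

def interval_metrics(y_true, lower, upper):
    """Return empirical coverage and mean width of prediction intervals."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    covered = (y_true >= lower) & (y_true <= upper)
    coverage = covered.mean()           # should match the nominal level, e.g. 0.90
    sharpness = (upper - lower).mean()  # smaller is sharper, given good coverage
    return coverage, sharpness

# Illustrative held-out data and intervals targeting 90% coverage
y = np.array([1.2, 0.7, 2.4, 1.9])
lo = np.array([0.9, 0.2, 2.0, 1.0])
hi = np.array([1.8, 1.1, 3.0, 1.7])
cov, width = interval_metrics(y, lo, hi)
print(f"empirical coverage = {cov:.2f}, mean width = {width:.2f}")
```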

Cited by 21 publications (10 citation statements)
References 10 publications
“…Similarly, Tagasovska et al. recently developed a conditional quantile-based estimator for measuring aleatoric uncertainties [45]. Due to the lack of suitable evaluation mechanisms for validating the quality of these estimates, it is common to utilize empirical calibration as a quality metric [20, 46–49]. Interestingly, it has been reported in several studies that these estimators are not inherently well-calibrated [47]…”
Section: Discussion (mentioning)
confidence: 99%
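The empirical-calibration check this excerpt refers to can be sketched as follows. The Gaussian construction of central intervals from a predicted mean and scale is an assumption for illustration, not the cited quantile estimator; a well-calibrated model gives observed coverage close to the nominal level at every setting.

```python
# Hedged sketch of an empirical-calibration check: compare nominal coverage of
# central prediction intervals against observed coverage on held-out data.
import numpy as np
from scipy.stats import norm

def calibration_curve(y_true, mu, sigma, levels=(0.5, 0.8, 0.9, 0.95)):
    """Observed coverage of central intervals at each nominal level."""
    observed = []
    for p in levels:
        z = norm.ppf(0.5 + p / 2.0)                # half-width in std units
        lower, upper = mu - z * sigma, mu + z * sigma
        observed.append(float(np.mean((y_true >= lower) & (y_true <= upper))))
    return dict(zip(levels, observed))

# A well-calibrated estimator yields observed ≈ nominal at every level.
```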
“…Following [56], for each sample, we make T forward passes with the dropout rate set to τ and obtain the final prediction as the average from the T runs. This is known to produce more robust estimates in regression problems [20]. In our experiments, we set T = 20 and the dropout rate τ = 0.3…”
Section: Methods (mentioning)
confidence: 99%
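The quoted procedure is Monte Carlo dropout averaging. Below is a minimal PyTorch sketch under assumed architecture and data: the model, input shape, and function name are placeholders, while T = 20 and the dropout rate 0.3 follow the quoted setting.

```python
# Minimal sketch of MC dropout: keep dropout active at test time, run T
# stochastic forward passes, and average them (placeholder model and data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(64, 1))

def mc_dropout_predict(model, x, T=20):
    model.train()  # keeps dropout stochastic during the forward passes
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(T)])
    return preds.mean(dim=0), preds.std(dim=0)  # average prediction and spread

x = torch.randn(4, 8)
mean, spread = mc_dropout_predict(model, x)
```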
“…To this end, uncertainty estimation methods are being adopted to determine the deficiencies of a model and/or the training data [13]. Meaningful uncertainties can play a crucial role in supporting practical objectives that range from assessing regimes of over- (or under-) confidence and active data collection, to ultimately improving the predictive models themselves [14]. However, in practice, uncertainties are known to be challenging to communicate to decision-makers [15], and the robustness of decisions with respect to uncertainties can vary considerably between use-cases [16]…”
Section: Introduction (mentioning)
confidence: 99%
“…As such, generating confidence intervals or uncertainty estimates along with the predictions is crucial for reliable and safe deployment of machine learning systems in safety-critical settings (such as healthcare) [4]–[7]. It is critical to understand what a model does not know when building and deploying machine learning systems to help mitigate possible risks and biases in decision making [8]…”
Section: Introduction (mentioning)
confidence: 99%