2018
DOI: 10.1109/access.2018.2871713

Calibrated Prediction Intervals for Neural Network Regressors

Abstract: Ongoing developments in neural network models are continually advancing the state of the art in terms of system accuracy. However, the predicted labels should not be regarded as the only core output; also important is a well-calibrated estimate of the prediction uncertainty. Such estimates and their calibration are critical in many practical applications. Despite their obvious aforementioned advantage in relation to accuracy, contemporary neural networks can, generally, be regarded as poorly calibrated and as …

Cited by 26 publications
(16 citation statements)
References 29 publications
“…Given the “black-box” nature of deep learning, there have been numerous approaches to quantifying confidence (Kendall and Cipolla, 2016 ; Keren et al, 2018 ). One popular procedure for measuring confidence is the Monte Carlo dropout .…”
Section: Discussion: Representation and Infrastructurementioning
confidence: 99%
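The Monte Carlo dropout procedure mentioned in the excerpt above can be sketched as follows. This is a minimal illustration, not code from the cited works: the tiny one-hidden-layer regressor, its random weights, and the dropout rate are all hypothetical. The key idea is simply that dropout is kept active at prediction time, and the spread of repeated stochastic forward passes serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-hidden-layer regressor; weights are random placeholders.
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ACTIVE at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2

x = np.array([[0.7]])
samples = np.stack([forward(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()   # predictive mean and spread
```

The sample standard deviation across passes is the confidence measure: inputs on which the dropout-perturbed networks disagree get a wider spread.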
“…In AI data, the terms confidence and trust are applied to ensure reliability, i.e., having confidence in the data results in deeper trust (Arnold et al, 2019 ). In this context, trust is a qualitative term, and although confidence can fall into these interpretations relating to enhanced moral understanding (Blass, 2018 ), the term confidence typically refers to a quantifiable measure to base trust on (Zhang et al, 2001 ; Keren et al, 2018 ).…”
Section: Methodology: Ethical Data Considerationsmentioning
confidence: 99%
“…For each mode we build the mean vector with elements uniformly drawn in [0, 1], and the covariance matrix is built as follows: we first sample a matrix with elements uniformly drawn in [−0.3, 0.3], then multiply it by its transpose to get the required positive definite matrix. This sample-set generation is repeated with various numbers of classes (2, 5, 7) and dimensions of the feature space (2, 5, 7), with 5 different large datasets sampled from each combination, resulting in 45 synthetic distributions.…”
Section: Methodsmentioning
confidence: 99%
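The sampling scheme described in the excerpt above can be sketched directly with numpy. The specific class count, dimension, and sample size below are one illustrative combination from the ranges the excerpt mentions; the helper name `make_mode` is ours. Multiplying a random matrix by its transpose guarantees a positive semi-definite (and, for random entries, almost surely positive definite) covariance.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mode(dim):
    """One Gaussian mode: uniform mean in [0, 1], covariance built as A @ A.T."""
    mean = rng.uniform(0.0, 1.0, size=dim)
    A = rng.uniform(-0.3, 0.3, size=(dim, dim))
    cov = A @ A.T                      # symmetric positive semi-definite
    return mean, cov

# One of the 45 combinations: 5 classes, 7-dimensional features.
n_classes, dim, n_samples = 5, 7, 1000
X_parts, y_parts = [], []
for c in range(n_classes):
    mean, cov = make_mode(dim)
    X_parts.append(rng.multivariate_normal(mean, cov, size=n_samples))
    y_parts.append(np.full(n_samples, c))
X, y = np.vstack(X_parts), np.concatenate(y_parts)
```

Looping this over all (classes, dimension) pairs and five seeds per pair would reproduce the 45 synthetic distributions the excerpt describes.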
“…More recently, [7] made clearer the notion of calibration for multiclass classifiers, and new estimators of the ECE with adaptive binning have been proposed in [12], alongside uncertainty-aware reliability diagrams [1]. Although the notion of calibration was originally defined for classifiers, this notion is currently being generalized to regression [5,15].…”
Section: Context and Related Workmentioning
confidence: 99%
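For context on the excerpt above, the standard ECE estimator can be sketched in a few lines. This is the basic equal-width-bin version; the adaptive-binning estimators the excerpt cites instead choose bin edges from the data. The function name and bin count here are our own illustrative choices.

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected Calibration Error with equal-width bins: the bin-weighted
    average gap between mean confidence and empirical accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            err += in_bin.mean() * gap     # weight by fraction of samples in bin
    return err
```

A perfectly calibrated model (confidence 0.9, accuracy 9/10) scores near zero, while a model that is always fully confident but always wrong scores 1.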
“…The literature on this topic is not extensive and the question is still open. Some scholars (Khosravi et al (2011), Keren et al (2019)) have faced this problem, but none of them has so far treated the question concerning NNs (and, more generally, deep learning) applied to time series. However, the LSTM network proves to be a good candidate to meet the need of predicting the mortality trend over time more accurately.…”
Section: Mean Absolute Error (Mae)mentioning
confidence: 99%