A framework for benchmarking uncertainty in deep regression
Preprint (2021)
DOI: 10.48550/arxiv.2109.09048

Abstract: We propose a framework for the assessment of uncertainty quantification in deep regression. The framework is based on regression problems where the regression function is a linear combination of nonlinear functions. Basically, any level of complexity can be realized through the choice of the nonlinear functions and the dimensionality of their domain. Results of an uncertainty quantification for deep regression are compared against those obtained by a statistical reference method. The reference method utilizes …
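The benchmark construction described in the abstract can be sketched in a few lines: draw data from a regression function that is a linear combination of known nonlinear basis functions, then compute the closed-form coefficient uncertainty of a least-squares fit in those basis functions as a statistical reference. The basis functions, coefficients, and noise level below are hypothetical illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear basis functions phi_i; the framework allows any
# such functions and any dimensionality of their domain.
def phi(x):
    return np.column_stack([np.sin(x), np.cos(2 * x), x**2])

# Ground truth: a linear combination of the basis functions plus noise.
w_true = np.array([1.5, -0.7, 0.3])
sigma = 0.1  # noise standard deviation (assumed known here)
x = rng.uniform(-1.0, 1.0, size=200)
Phi = phi(x)
y = Phi @ w_true + rng.normal(0.0, sigma, size=x.shape)

# Statistical reference: least squares is linear in the weights, so the
# coefficient covariance has the closed form sigma^2 (Phi^T Phi)^{-1}.
w_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
cov = sigma**2 * np.linalg.inv(Phi.T @ Phi)
u_w = np.sqrt(np.diag(cov))  # reference uncertainty of each coefficient
```

Because the ground truth is fully known, the uncertainties produced by a deep regression model on such data can be judged against this reference rather than against unobservable quantities.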

Cited by 1 publication (2 citation statements); references 24 publications.
“…Finally, while we studied in this work both real and simulated data, only the simulated cases allow us to evaluate the quality of the uncertainty u in a conclusive manner. The difficulty of assessing uncertainties without access to a ground truth is a well-known problem [14,36], whose solution is well beyond the scope of this article. A future work that would deepen the understanding of the performance for data without a ground truth could involve a study, again on simulated data, of the sensitivity of the EiV uncertainty quantification to deviations between the model assumptions in (3) and the generation of the data, e.g.…”
Section: Discussion and Outlook (mentioning; confidence: 99%)
“…In many applications, especially those in which reliability and safety are crucial [25,30,39], it is valuable, if not indispensable, to know the uncertainty behind a prediction of a neural network. This work focuses on the uncertainty evaluation for neural networks that are trained for regression tasks [6,16,21,26,28,36]. Regression problems arise in a variety of areas [12,20,23,29] and are typically given by a model f_θ, parameterized by θ, that links input data x to outputs y:…”
Section: Introduction (mentioning; confidence: 99%)
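The formulation quoted above, a parametric model f_θ that links inputs x to outputs y, can be made concrete with a minimal stand-in (a hypothetical two-parameter model in place of a trained network; the function and parameter values are illustrative only):

```python
import numpy as np

# Hypothetical parametric regression model f_theta with theta = (a, b),
# standing in for a trained neural network mapping inputs x to outputs y.
def f_theta(x, theta):
    a, b = theta
    return a * np.tanh(x) + b

theta = (2.0, 0.5)
x = np.linspace(-3.0, 3.0, 5)
y = f_theta(x, theta)  # predicted outputs for the five inputs
```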