2023
DOI: 10.1007/s10462-023-10562-9

A survey of uncertainty in deep neural networks

Abstract: Over the last decade, neural networks have reached almost every field of science and become a crucial part of various real-world applications. Due to this increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates and suffer from over- or under-confidence, i.e. they are badly calibrated. To overcome this, many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction…

Cited by 367 publications (76 citation statements: 2 supporting, 73 mentioning, 1 contrasting)
References 233 publications

“…But approaches with statistically correct Bayesian or frequentist epistemic uncertainty provide better uncertainty estimates. We expect that in both cases well-calibrated uncertainties can be provided by computationally expensive ensemble techniques [37][38][39].…”
Section: Discussion (mentioning)
confidence: 98%
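The "computationally expensive ensemble techniques" referred to here are typically deep ensembles: several identically structured networks trained from independent random initializations, whose disagreement serves as an epistemic uncertainty estimate. A minimal PyTorch sketch under that reading (the architecture, data, and hyperparameters are illustrative assumptions, not taken from the quoted references):

```python
import torch
import torch.nn as nn

# Deep-ensemble sketch: train M independently initialized networks; the spread
# of their predictions is used as an (epistemic) uncertainty estimate.
# All sizes and hyperparameters below are illustrative assumptions.

def make_net(in_dim=8, hidden=64, out_dim=1):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

def train_member(net, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    return net

def ensemble_predict(members, x):
    # Mean over members ~ point prediction; variance across members
    # ~ epistemic uncertainty (member disagreement).
    with torch.no_grad():
        preds = torch.stack([net(x) for net in members])  # (M, N, out_dim)
    return preds.mean(dim=0), preds.var(dim=0)

# Toy usage; the "computationally expensive" part is training M networks.
x = torch.randn(256, 8)
y = x[:, :1].sin() + 0.1 * torch.randn(256, 1)
members = [train_member(make_net(), x, y) for _ in range(5)]
mean, epistemic_var = ensemble_predict(members, x[:4])
```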
“…Therefore, in the following, we only consider noninformative prior distributions, such as uniform distributions, which are typically used when there is no prior information about the parameters. By applying Bayes' theorem and given training data D = (x, y), the prior distribution can be updated to a posterior probability distribution p(θ|D) = p(θ|x, y) (Gawlikowski et al., 2021). Then, given new observations of the input data, x', a probability distribution over a new realization, y', of the model output is given by:…”
Section: Bayesian Approach (mentioning)
confidence: 99%
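The quote breaks off before the formula. The expression it leads into is, in standard notation, the posterior predictive distribution obtained by marginalizing the likelihood over the posterior; this is a reconstruction of the textbook result, not the citing paper's verbatim equation:

```latex
% Posterior predictive for a new input x' (standard form, reconstructed):
p(y' \mid x', D) = \int p(y' \mid x', \theta)\, p(\theta \mid D)\, \mathrm{d}\theta
```

For deep networks this integral is intractable and is approximated in practice, e.g., by Monte Carlo sampling from an approximate posterior.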
“…Brenowitz and Bretherton (2019) first noted that the training bias fluctuates significantly from one training epoch to another, and thus determining when to stop the training can lead to considerable uncertainties. Furthermore, individual predictions from a deep learning model can contain sizable uncertainties even though the model performs well on average (Gawlikowski et al., 2021; Pearce et al., 2018). The prediction uncertainty from NN‐based parameterizations can come from two major sources: aleatoric and epistemic uncertainties.…”
Section: Introduction (mentioning)
confidence: 99%
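A common recipe for separating the two sources combines a heteroscedastic (mean-plus-variance) output head, which captures aleatoric noise, with sampling over models (an ensemble, as above, or MC dropout), whose disagreement captures epistemic uncertainty, via the law of total variance. A minimal sketch, assuming a Gaussian output head; the class and function names are hypothetical:

```python
import torch
import torch.nn as nn

# Heteroscedastic head: the network predicts a mean and a log-variance, so the
# predicted variance models aleatoric (irreducible data) noise.
class GaussianNet(nn.Module):
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Negative log-likelihood of y under N(mean, exp(logvar)), up to an
    # additive constant; training with it makes the variance head learn
    # the noise level.
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

def decompose_uncertainty(members, x):
    # Law of total variance across M trained members:
    #   aleatoric ~ E_m[sigma_m^2(x)]  (average predicted noise)
    #   epistemic ~ Var_m[mu_m(x)]     (disagreement between members)
    with torch.no_grad():
        outs = [m(x) for m in members]
    means = torch.stack([mu for mu, _ in outs])
    noise_vars = torch.stack([lv.exp() for _, lv in outs])
    return noise_vars.mean(dim=0), means.var(dim=0)

# Usage (each member would be trained on the same data with gaussian_nll):
members = [GaussianNet() for _ in range(5)]
aleatoric, epistemic = decompose_uncertainty(members, torch.randn(4, 8))
```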