2021
DOI: 10.1109/tnnls.2020.2980004

Correlated Parameters to Accurately Measure Uncertainty in Deep Neural Networks

Abstract: In this article, a novel approach for training deep neural networks using Bayesian techniques is presented. The Bayesian methodology allows for an easy evaluation of model uncertainty and is additionally robust to overfitting; these are commonly the two main problems that classical, i.e. non-Bayesian, architectures struggle with. The proposed approach applies variational inference to approximate the intractable posterior distribution. In particular, the variational distribution is defined as a product…

Cited by 32 publications
(15 citation statements)
References 33 publications
“…Thus, the network weights and biases are also normally distributed. The diagonal covariance matrix of the Gaussian variational distributions assumes the independence of all the network parameters and biases. The calculation of the respective gradients and an extension to a tridiagonal covariance matrix can be found in [44] and [52]. We implement the Bayesian segmentation network using the open-source Python library PyTorch [53].…”
Section: Methods
mentioning confidence: 99%
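The excerpt above describes a diagonal-covariance Gaussian variational posterior, under which every weight is sampled independently of the others. A minimal stdlib-only sketch of that sampling step is given below; the reparameterization w = mu + sigma * eps with a softplus-parameterized standard deviation is an assumption borrowed from common variational-inference practice, not a detail taken from the cited paper, and the function names are illustrative only.

```python
import math
import random

def sample_weight(mu, rho):
    """Draw one weight from its Gaussian variational posterior.

    sigma = softplus(rho) = log(1 + exp(rho)) keeps the standard
    deviation positive; eps ~ N(0, 1) is the reparameterization noise.
    """
    sigma = math.log1p(math.exp(rho))
    eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

def sample_layer(mus, rhos):
    # Diagonal covariance = independence: one separate draw per
    # (mu, rho) pair, with no cross-parameter correlation.
    return [sample_weight(m, r) for m, r in zip(mus, rhos)]
```

A tridiagonal covariance, as mentioned in the excerpt, would instead correlate each parameter with its neighbors, so the per-parameter draws above would no longer be independent.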
“…In most of the previous studies, researchers in UQ split the dataset randomly 10,11. The performance of the trained NN varies based on the data splitting 12,13. Researchers often randomly split the data several times, train an NN for each split, and report the average performance.…”
Section: Background and Summary
mentioning confidence: 99%
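The split-and-average procedure described in that excerpt can be sketched in a few lines. This is a generic illustration, not the cited authors' code; the function names and the 80/20 split fraction are assumptions.

```python
import random

def random_split(data, train_frac, rng):
    """Shuffle indices and cut the dataset into train/test parts."""
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(train_frac * len(data))
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return train, test

def mean_score(data, n_splits, train_and_eval, seed=0):
    """Average a model's score over several random splits.

    train_and_eval(train, test) stands in for training an NN on one
    split and returning its evaluation score on the held-out part.
    """
    rng = random.Random(seed)
    scores = [train_and_eval(*random_split(data, 0.8, rng))
              for _ in range(n_splits)]
    return sum(scores) / n_splits
```

Averaging over splits smooths out exactly the split-to-split performance variation the excerpt points to, at the cost of training one network per split.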
“…The following subsections present in more detail the most common and recent calibration techniques (temperature scaling [27], Monte Carlo Dropout [29], [42]) and regularization techniques (penalization of overconfident output distributions [24], [26], [28], [39], [41], label smoothing [43]). Additionally, we discuss the disadvantages of these techniques when predicting objects belonging to out-of-training-distribution data (which may be critical in autonomous driving and robotics).…”
Section: Related Work On Overconfidence Prediction
mentioning confidence: 99%
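Of the calibration techniques that excerpt lists, temperature scaling is the simplest to illustrate: logits are divided by a temperature T before the softmax, and T > 1 softens overconfident predictions without changing the predicted class. The sketch below is a generic illustration of the technique, not code from the cited works.

```python
import math

def softmax_with_temperature(logits, T=1.0):
    """Temperature-scaled softmax.

    Dividing logits by T > 1 flattens the output distribution
    (reduces overconfidence); T = 1 recovers the plain softmax.
    The max is subtracted before exponentiation for numerical stability.
    """
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

In practice T is fit on a held-out validation set after training, which is why temperature scaling counts as a post-hoc calibration method rather than a regularizer.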