2020
DOI: 10.48550/arxiv.2006.08437
Preprint

Depth Uncertainty in Neural Networks

Abstract: Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our …
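
The abstract describes treating network depth as a random variable: each prefix of a feed-forward network is a weight-sharing subnetwork, and their predictions are combined by marginalising over depth in a single forward pass. The sketch below is only a rough illustration of that idea, not the paper's implementation; it assumes PyTorch, and the `DepthMarginalisedMLP` class, the uniform distribution over depths, and all layer sizes are placeholder choices.

```python
# Illustrative sketch of marginalising predictions over network depth
# (assumes PyTorch; class name, depth distribution, and sizes are placeholders).
import torch
import torch.nn as nn

class DepthMarginalisedMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=128, n_classes=10, n_blocks=5):
        super().__init__()
        self.input = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()) for _ in range(n_blocks)]
        )
        self.head = nn.Linear(hidden, n_classes)  # output head shared across depths
        # Probabilities over depth; a uniform distribution is assumed here.
        self.register_buffer("depth_probs", torch.full((n_blocks,), 1.0 / n_blocks))

    def forward(self, x):
        h = torch.relu(self.input(x))
        per_depth = []
        for block in self.blocks:  # a single pass visits every depth
            h = block(h)
            per_depth.append(torch.softmax(self.head(h), dim=-1))
        per_depth = torch.stack(per_depth, dim=0)  # (n_blocks, batch, classes)
        # Marginalise: weight each depth's prediction by its probability.
        return (self.depth_probs[:, None, None] * per_depth).sum(dim=0)
```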

Cited by 7 publications (13 citation statements)
References 11 publications
“…Later, Gal and Ghahramani [12] showed that dropout can be applied at test-time, called Monte Carlo (MC) dropout, which can be viewed as an approximate Bayesian technique and yields better estimates of uncertainty. Several improvements have also been proposed to improve MC dropout [1,4,13,26,40,51]. Unlike MC dropout which generates random masks on the fly at every iteration, we demonstrate that pruning-derived dropout masks can lead to significantly better performance.…”
Section: Introduction
confidence: 86%
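
The quoted passage refers to Monte Carlo dropout: keeping dropout active at test time and averaging several stochastic forward passes to obtain a predictive distribution and an uncertainty estimate. A minimal sketch follows, assuming PyTorch; the architecture, dropout rate, and number of samples are arbitrary illustration choices.

```python
# Minimal sketch of Monte Carlo dropout at test time (assumes PyTorch;
# model, dropout rate, and sample count are illustrative).
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    model.train()  # keep dropout layers active at test time
    with torch.no_grad():
        samples = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = samples.mean(dim=0)  # predictive mean over stochastic passes
    var = samples.var(dim=0)    # disagreement across passes as an uncertainty signal
    return mean, var

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Dropout(0.5), nn.Linear(256, 10))
probs, uncertainty = mc_dropout_predict(model, torch.randn(32, 784))
```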
“…The resulting algorithm enables us to obtain a diverse ensemble of non-overlapping subnetworks within one deep neural network. That is, we are able to create many models out of one. We experimentally demonstrate that subnetwork… [Section 2, Related Works: Ensemble Learning] An ensemble of models has long been known to be an effective method to boost the performance of machine learning models [7,52].…”
Section: Introduction
confidence: 99%
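
The notion of carving one network into non-overlapping subnetworks can be pictured as assigning each hidden unit to exactly one of several disjoint masks and averaging the masked models' predictions. The sketch below only illustrates that idea under assumed PyTorch code; it is not the citing paper's actual pruning-based algorithm.

```python
# Illustration only: fixed, disjoint masks carve one network into
# non-overlapping subnetworks whose predictions are averaged.
import torch
import torch.nn as nn

hidden, n_subnets = 256, 4
torch.manual_seed(0)
# Assign each hidden unit to exactly one subnetwork -> disjoint binary masks.
assignment = torch.randint(0, n_subnets, (hidden,))
masks = torch.stack([(assignment == k).float() for k in range(n_subnets)])

backbone = nn.Linear(784, hidden)
head = nn.Linear(hidden, 10)

def ensemble_predict(x):
    h = torch.relu(backbone(x))
    preds = [torch.softmax(head(h * m), dim=-1) for m in masks]  # one subnetwork per mask
    return torch.stack(preds).mean(dim=0)
```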
“…A more tractable model class is that of deep ensemble methods [35,9,13,12], although they are still computationally expensive. There are, however, some ideas to make them less expensive by distilling their uncertainties into simpler models [39,55,26,3].…”
Section: Related Work
confidence: 99%
“…Recently, calibration has been rediscovered with the deep learning boom. Indeed, deep learning models are no exception to the rule and, owing to their complex architectures, are even more subject to model uncertainty than traditional methods [5]. That is why measuring calibration error and developing calibration techniques for deep neural networks have become important research topics in recent years [44,26,47].…”
Section: Literature Review
confidence: 99%
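
One common way to quantify the calibration error mentioned in this statement is the expected calibration error (ECE): bin predictions by confidence and average the gap between accuracy and confidence within each bin. The sketch below assumes PyTorch; the bin count and equal-width binning are conventional choices, not anything prescribed by the quoted text.

```python
# Minimal sketch of expected calibration error (ECE); equal-width bins assumed.
import torch

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 15):
    conf, preds = probs.max(dim=-1)          # confidence = top predicted probability
    correct = preds.eq(labels).float()
    bins = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.tensor(0.0)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # |accuracy - confidence| within the bin, weighted by the bin's mass
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap
    return ece
```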