ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8682930

Understanding Deep Neural Networks through Input Uncertainties

Abstract: Techniques for understanding the functioning of complex machine learning models are becoming increasingly popular, not only to improve the validation process, but also to extract new insights about the data via exploratory analysis. Though a large class of such tools currently exists, most assume that predictions are point estimates and use a sensitivity analysis of these estimates to interpret the model. Using lightweight probabilistic networks we show how including prediction uncertainties in the sensitivity…
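Only the truncated abstract is available here, so the snippet below is a minimal sketch of the general idea rather than the authors' method: a hypothetical lightweight probabilistic network outputs a predictive mean and a log-variance, and input sensitivities are taken as gradients of both quantities, so the analysis accounts for the prediction's uncertainty and not only its point estimate. All names and architecture choices are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical lightweight probabilistic network: instead of a point
# estimate, it predicts a mean and a log-variance for each input.
class ProbabilisticNet(nn.Module):
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def uncertainty_aware_sensitivity(model, x):
    """Gradients of the predictive mean AND variance w.r.t. the input.

    A point-estimate analysis would use only d(mean)/dx; d(var)/dx
    additionally shows how input perturbations change the model's confidence.
    """
    x = x.clone().requires_grad_(True)
    mean, logvar = model(x)
    grad_mean = torch.autograd.grad(mean.sum(), x, retain_graph=True)[0]
    grad_var = torch.autograd.grad(logvar.exp().sum(), x)[0]
    return grad_mean, grad_var

model = ProbabilisticNet(in_dim=5)
s_mean, s_var = uncertainty_aware_sensitivity(model, torch.randn(3, 5))
print(s_mean.shape, s_var.shape)  # torch.Size([3, 5]) torch.Size([3, 5])
```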

Cited by 12 publications (4 citation statements, published 2020–2022) | References 10 publications

Citation statements, ordered by relevance:
“…Approaches include ‘Bayes by backpropagation’ (Blundell et al.), normalising flows for variational approximations (Kingma et al.), adversarial training (Lakshminarayanan et al.), methods that use dropout or its continuous relaxation (Gal et al.), and ensemble approaches (McDermott & Wikle). These methods can also help explain neural networks, for example by probabilistically estimating the sensitivity of model outputs to random masking of the inputs (Chang et al.), or by decomposing predictive uncertainty into component parts (Thiagarajan et al.).…”
Section: Discussion (mentioning, confidence: 99%)
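Two of the ideas in this statement (dropout-based uncertainty estimation and decomposing predictive uncertainty into component parts) can be sketched together. The snippet below is an illustrative example under our own assumptions, not code from the cited works: Monte-Carlo dropout with a heteroscedastic head, where the mean of the sampled variances approximates the aleatoric component and the variance of the sampled means the epistemic component.

```python
import torch
import torch.nn as nn

# Illustrative sketch: MC dropout kept active at inference time, with a
# heteroscedastic head so total predictive uncertainty can be split into
# an aleatoric part (data noise) and an epistemic part (model uncertainty).
class MCDropoutNet(nn.Module):
    def __init__(self, in_dim, hidden=64, p=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p))
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def decompose_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout stochastic during prediction
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)                   # (n_samples, batch, 1)
    aleatoric = torch.stack(variances).mean(0)   # avg. predicted data noise
    epistemic = means.var(0)                     # spread of sampled means
    return means.mean(0), aleatoric, epistemic

model = MCDropoutNet(in_dim=5)
mu, alea, epis = decompose_uncertainty(model, torch.randn(8, 5))
```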
“…In conventional statistics, uncertainty quantification (UQ) provides this characterization by measuring how accurately a model reflects the physical reality, and by studying the impact of different error sources on the prediction [35,38,39]. Consequently, several recent efforts have proposed to utilize prediction uncertainties in deep models to shed light on when and how much to trust the predictions [35,40–43]. These uncertainty estimates can also be used for enabling safe ML practice, e.g., identifying out-of-distribution samples, detecting anomalies/outliers, delegating high-risk predictions to experts, and defending against adversarial attacks, etc.…”
Section: Discussion (mentioning, confidence: 99%)
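One of the safe-ML uses named above, delegating high-risk predictions to experts, amounts to thresholding an uncertainty estimate. The toy snippet below is our own illustration (the threshold value is an arbitrary, application-specific assumption), with the epistemic estimate coming from any estimator such as the MC-dropout sketch above.

```python
import torch

# Toy illustration: act on confident predictions, defer uncertain ones.
# The 0.5 threshold is an arbitrary, application-specific assumption.
def predict_or_defer(mean, epistemic, threshold=0.5):
    defer = (epistemic > threshold).squeeze(-1)  # True -> send to an expert
    return mean, defer

mean = torch.tensor([[0.1], [2.3], [0.7]])
epistemic = torch.tensor([[0.05], [0.9], [0.2]])
preds, defer_mask = predict_or_defer(mean, epistemic)
print(defer_mask)  # tensor([False,  True, False])
```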
“…In conventional statistics, uncertainty quantification (UQ) provides this characterization by studying the impact of different error sources on the prediction [32–34]. Consequently, several recent efforts have proposed to utilize prediction uncertainties in deep models to shed light on when and how much to trust the predictions [35–37]. Some of the most popular uncertainty estimation methods today include: (i) Bayesian neural networks [34,38]; (ii) methods that use the discrepancy between different models as a proxy for uncertainty, such as deep ensembles [39] and Monte-Carlo dropout, which approximates Bayesian posteriors on the weight space of a model [35]; and (iii) approaches that use a single model to estimate uncertainties, such as orthonormal certificates [40], deterministic uncertainty quantification [41], distance awareness [42], depth uncertainty [43], direct epistemic uncertainty prediction [44], and accuracy versus uncertainty calibration [45].…”
Section: Background and Related Work (mentioning, confidence: 99%)
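Of the methods enumerated in this statement, deep ensembles are the simplest to sketch: several independently initialized networks are trained on the same data, and the spread of their predictions serves as the uncertainty proxy. The snippet below is a minimal illustration under our own assumptions (training loops omitted, members assumed fitted).

```python
import torch
import torch.nn as nn

# Minimal deep-ensemble sketch: disagreement among independently
# initialized members is used as a proxy for epistemic uncertainty.
def ensemble_predict(members, x):
    preds = torch.stack([m(x) for m in members])  # (n_members, batch, 1)
    return preds.mean(0), preds.var(0)            # predictive mean/variance

members = [
    nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))
    for _ in range(5)
]
with torch.no_grad():
    mean, var = ensemble_predict(members, torch.randn(8, 5))
print(mean.shape, var.shape)  # torch.Size([8, 1]) torch.Size([8, 1])
```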