2021
DOI: 10.48550/arxiv.2106.10185
Preprint

NoiseGrad: Enhancing Explanations by Introducing Stochasticity to Model Weights

Abstract: Attribution methods remain a practical instrument that is used in real-world applications to explain the decision-making process of complex learning machines. It has been shown that a simple method called SmoothGrad can effectively reduce the visual diffusion of gradient-based attribution methods and has established itself among both researchers and practitioners. What remains unexplored in research, however, is how explanations can be improved by introducing stochasticity to the model weights. In the light of…
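
As a point of reference for the truncated abstract, the sketch below illustrates the SmoothGrad baseline the paper builds on: gradients are averaged over several noisy copies of the input. This is a minimal PyTorch sketch under assumptions; `model`, `x`, `target` and the other names are illustrative and not taken from the paper's code.

```python
# Minimal SmoothGrad sketch (assumptions: a differentiable PyTorch classifier
# `model`, an input `x` of shape [1, ...], and an integer class index `target`).
import torch

def smoothgrad(model, x, target, n_samples=25, noise_std=0.15):
    """Average the input gradient over several noisy copies of x."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        # Noise is added to the input; the model weights stay fixed.
        x_noisy = (x.detach() + noise_std * torch.randn_like(x)).requires_grad_(True)
        score = model(x_noisy)[0, target]                 # scalar class score
        grads += torch.autograd.grad(score, x_noisy)[0]
    return grads / n_samples                              # smoothed saliency map
```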

Cited by 2 publications (5 citation statements)
References 33 publications

“…Most of the work on interpreting BNNs concentrates on uncertainty quantification and visualization: [66] proposes a method to decompose moment-based predictive uncertainty into two parts, aleatoric and epistemic; in [67] the authors propose a model-agnostic method to visualize the contribution of individual features to predictive, epistemic, and aleatoric uncertainty [66]. Recently, it has been shown that explanations of DNNs can be enhanced by introducing stochasticity to the model weights [68], which, to some extent, leads to explanations similar to those obtained with the Diagonal or KFAC Laplace approximation. The so-called NoiseGrad method [68] adds multiplicative Gaussian noise to the model weights, which significantly reduces the gradient-shattering effect [69], similar to the SmoothGrad method [70].…”
Section: XAI for BNNs (mentioning)
confidence: 99%

Explaining Bayesian Neural Networks

Bykov, Höhne, Creosteanu et al., 2021 (Preprint, Self Cite)
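
To make the mechanism described in the citation statement above concrete, the sketch below averages gradient explanations over copies of the model whose weights are perturbed with multiplicative Gaussian noise, which is the core NoiseGrad idea. It is an illustrative PyTorch sketch under the same assumptions as above (classifier `model`, single input `x` with a batch dimension), not the authors' reference implementation.

```python
# Illustrative NoiseGrad-style sketch (assumptions: PyTorch classifier `model`,
# input `x` of shape [1, ...], integer class index `target`).
import copy
import torch

def noisegrad(model, x, target, n_models=10, noise_std=0.2):
    """Average input gradients over models with multiplicatively perturbed weights."""
    grads = torch.zeros_like(x)
    for _ in range(n_models):
        noisy_model = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy_model.parameters():
                # Multiplicative noise: each weight is scaled by roughly N(1, noise_std^2).
                p.mul_(1.0 + noise_std * torch.randn_like(p))
        x_in = x.detach().clone().requires_grad_(True)
        score = noisy_model(x_in)[0, target]
        grads += torch.autograd.grad(score, x_in)[0]
    return grads / n_models
```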