2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00510
Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps

Abstract: The Saliency Map, the gradient of the score function with respect to the input, is the most basic technique for interpreting deep neural network decisions. However, saliency maps are often visually noisy. Although several hypotheses have been proposed to account for this phenomenon, few works provide rigorous analyses of noisy saliency maps. In this paper, we first propose a new hypothesis that noise may occur in saliency maps when irrelevant features pass through ReLU activation functions. Then, we pr…

Cited by 47 publications (19 citation statements)
References 8 publications
“…In Guided Backpropagation [43], negative gradients are set to 0, effectively discarding suppression of neuron activation. Rectified Gradient [103] generalizes this by allowing layerwise thresholding with an extra hyperparameter. Grad-CAM [104] calculates the gradient of the class score with respect to channels, i.e.…”
Section: Importance Estimators and Relevance Scoresmentioning
confidence: 99%
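The Guided Backpropagation rule quoted above can be sketched as a modified ReLU backward pass. This is a minimal NumPy illustration (the function names are ours, not from either paper): the plain rule passes the gradient wherever the forward input was positive, while the guided rule additionally zeroes out negative incoming gradients.

```python
import numpy as np

def relu_backward_plain(x, grad_out):
    """Standard ReLU backward: pass the gradient where the forward input was positive."""
    return grad_out * (x > 0)

def relu_backward_guided(x, grad_out):
    """Guided Backpropagation: additionally discard negative incoming gradients,
    i.e. signals that would suppress the neuron's activation."""
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0, -0.5])   # forward inputs to the ReLU
g = np.array([0.5, -0.7, 0.2, 0.9])    # gradients arriving from the layer above
# plain keeps [0, -0.7, 0.2, 0]; guided keeps only [0, 0, 0.2, 0]
```

The guided map tends to look sharper precisely because every negative contribution is dropped, which is also why it can be less faithful to the underlying score function.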
“…Recently, deep convolutional neural nets have been proposed to predict salience maps as their output [54]. Alternatively, “salience maps” of the deepest layers in neural networks are explored not for attention modeling, but mainly for visualization and explanatory purposes [55, 56]. We tested two of such deep learning models: the multiduration model [26], which predicts how the duration of each observation affects salience, and the temporally-aggregating spatial encoder-decoder network (TASED) [32], which was proposed as a video-specific salience model.…”
Section: Experiments With Computational Salience Modelsmentioning
confidence: 99%
“…Despite early successes, direct calculation of the gradients often leads to noisy saliency maps without clearly focused regions. Several propositions have been made to improve them, such as Guided Backpropagation (GuidedBP) [14], Rectified Gradient (RectGrad) [23] and SmoothGrad [15]. The first two methods modify the back-propagation of the gradient through the Rectified Linear Activation Unit (ReLU) f(x) = max(x, 0), for a detailed description see sec.…”
Section: B Saliency Mapsmentioning
confidence: 99%
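Unlike the two ReLU-modifying methods, SmoothGrad denoises the map by averaging gradients over several Gaussian-perturbed copies of the input. A minimal sketch, assuming only a generic `grad_fn` that returns the gradient of the class score with respect to its input (the toy score function below is ours, for illustration):

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.1, seed=0):
    """SmoothGrad: average the saliency map over n_samples noisy copies of x."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(n_samples):
        acc += grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
    return acc / n_samples

# Toy score f(x) = sum(x**2), whose gradient is 2x.
toy_grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0])
sal = smoothgrad(toy_grad, x)
# For this toy's linear gradient, the smoothed map stays close to 2x.
```

Because the averaging happens in gradient space, SmoothGrad composes freely with either GuidedBP or RectGrad, at the cost of `n_samples` extra backward passes.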
“…and this additional guiding of the gradient leads to sharper activation maps. Recently, Rectified Gradient [23] has been proposed as yet another method to calculate gradients. It introduces an external parameter τ which acts as a threshold for gradients to be backpropagated:…”
Section: B Calculating the Gradientsmentioning
confidence: 99%
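The RectGrad thresholding rule described above can be sketched as another modified ReLU backward pass. In this NumPy illustration (the quantile-based choice of τ follows the layerwise-percentile scheme; the function name and exact parameterization are ours), a unit's gradient propagates only where the product of its activation and gradient exceeds the threshold:

```python
import numpy as np

def relu_backward_rectgrad(activation, grad_out, q=0.2):
    """Rectified Gradient (sketch): propagate a unit's gradient only where
    activation * gradient exceeds a layerwise threshold tau, chosen here as
    the q-th quantile of that product (q is the extra hyperparameter)."""
    importance = activation * grad_out
    tau = np.quantile(importance, q)
    return grad_out * (importance > tau)

a = np.array([0.0, 1.0, 2.0, 3.0])  # ReLU activations in this layer
g = np.ones(4)                      # incoming gradients
# With q=0.5, tau = 1.5, so only the two most important units survive.
```

Setting q = 0 (τ below every importance score) recovers behavior close to the plain gradient, which makes the hyperparameter a dial between faithfulness and visual sharpness.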