2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7532763

Controlling explanatory heatmap resolution and semantics via decomposition depth

Abstract: We present an application of the Layer-wise Relevance Propagation (LRP) algorithm to state-of-the-art deep convolutional neural networks and Fisher Vector classifiers, comparing the image perception and prediction strategies of both classifiers by means of visualized heatmaps. LRP is a method for computing scores for the individual components of an input image, denoting their contribution to the classifier's prediction for one particular test point. We demonstrate the impac…
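The per-component scores described in the abstract are obtained by propagating the classifier's output backwards through the network, one layer at a time. A minimal sketch of one such backward step, using the common LRP ε-rule on a single dense layer (names, shapes, and the toy data are illustrative, not taken from the paper):

```python
import numpy as np

def lrp_epsilon(W, b, a, R_out, eps=1e-6):
    """One LRP backward step through a dense layer y = a @ W + b:
    redistribute the output relevance R_out onto the inputs a in
    proportion to each input's contribution (epsilon-rule)."""
    z = a @ W + b                 # forward pre-activations
    z = z + eps * np.sign(z)      # stabilizer keeps the divisions finite
    s = R_out / z                 # relevance per unit of activation
    return a * (W @ s)            # relevance assigned to each input

# toy layer: 4 inputs, 3 outputs
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
a = rng.standard_normal(4)
R_out = np.array([0.5, 0.3, 0.2])
R_in = lrp_epsilon(W, np.zeros(3), a, R_out)
```

With zero bias and a small ε, total relevance is (approximately) conserved across the layer, i.e. `R_in.sum()` is close to `R_out.sum()`; chaining such steps down to the input yields the heatmap.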

Cited by 17 publications (18 citation statements)
References 15 publications
“…Rule-based learner: [82,83,147,148,251,252,253,254,255,256]; Decision Tree: [21,56,79,81,97,135,257,258,259]; Others: [80]. Feature relevance explanation: Importance/Contribution [60,61,110,260,261]; Sensitivity/Saliency [260], [262]. Local explanation: Decision Tree/Sensitivity [233], [263]. Explanation by Example: Activation clusters [264,144]. Text explanation: Caption generation [111], [150]. Visual explanation: Saliency/Weights [265]. Architecture modification: Others [264], [266], [267]. Convolutional Neural Networks. Explanation by simplification: Decision Tree [78]. Feature relevance explanation: Activations [72,268], [46]; Feature Extraction [72,268]. Visual explanation: Filter/Activation [63,136,137,...…”
Section: Explanation By Simplification
confidence: 99%
“…The relevance decompositions for the SVM predictor layer are computed using the "simple" (ε = 0) rule (cf. [34] and [140] for further technical details). Since the fine-tuned DNN model structure is composed of a repeating sequence of convolutional or fully connected layers interleaved with pooling and ReLU activation layers, we uniformly apply the αβ-rule with α = 2, β = −1 throughout the network, which suits the ReLU-activated inputs fed into the hidden layers especially well for explanation.…”
Section: E3 Comparing Fisher Vector And Deep Network Prediction Strategies
confidence: 99%
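The αβ-rule quoted above splits each layer's contributions into positive and negative parts and weights them separately; since α + β = 1, total relevance is conserved. A hedged sketch of one backward step for a dense layer with ReLU (non-negative) input activations, assuming the standard formulation rather than the authors' exact implementation:

```python
import numpy as np

def lrp_alphabeta(W, a, R_out, alpha=2.0, beta=-1.0):
    """LRP alpha-beta backward step (alpha + beta = 1).
    Positive contributions z_ij are scaled by alpha, negative
    ones by beta; biases are omitted for brevity."""
    z = a[:, None] * W                     # contribution of input i to output j
    zp = np.clip(z, 0.0, None)             # positive parts
    zn = np.clip(z, None, 0.0)             # negative parts
    sp = R_out / (zp.sum(axis=0) + 1e-12)  # normalize within each part
    sn = R_out / (zn.sum(axis=0) - 1e-12)
    return alpha * (zp * sp).sum(axis=1) + beta * (zn * sn).sum(axis=1)

# toy layer; the activations a are non-negative, as after a ReLU
W = np.array([[ 1.0, -0.5,  0.3],
              [-0.4,  0.8, -0.6],
              [ 0.7,  0.2,  0.9],
              [-0.1, -0.3,  0.5]])
a = np.array([0.5, 1.0, 0.2, 0.8])
R_out = np.array([0.4, 0.4, 0.2])
R_in = lrp_alphabeta(W, a, R_out)          # alpha=2, beta=-1 as in the quote
```

Setting α = 2, β = −1 amplifies excitatory evidence while explicitly subtracting inhibitory evidence, which is why this choice pairs well with ReLU networks.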
“…For happiness and disgust, images from the Actorstudy dataset were used. The classification results for subimages (1) to (4) in Figure 5, showing emotion, and for subimage (5), showing pain, are correct. A misclassification can be seen in subfigure (6).…”
Section: Results
confidence: 90%
“…Using the preset approach, the relevance scores of all neurons in the lowest (first) layer are distributed uniformly to the input neurons instead of using the α and β values [13]. To control the resolution of the heatmaps generated by LRP, Bach et al. [5] describe an approach based on a 'mapping influence cut-off point'. This point marks the depth from which the forward mapping function of the classifier no longer influences relevance propagation, since only the receptive field of the classifier remains relevant.…”
Section: Related Work
confidence: 99%
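Both the uniform redistribution and the mapping influence cut-off point quoted above amount to the same low-level step: below the cut-off, a neuron's relevance is simply spread evenly over the pixels in its receptive field, ignoring the weights. An illustrative sketch of that step (the receptive fields and all names here are hypothetical):

```python
import numpy as np

def flat_redistribute(receptive_fields, R, n_pixels):
    """Spread each neuron's relevance uniformly over the pixels in
    its receptive field, ignoring the layer's weights entirely."""
    R_px = np.zeros(n_pixels)
    for field, r in zip(receptive_fields, R):
        R_px[field] += r / len(field)   # equal share per covered pixel
    return R_px

# two neurons over a 4-pixel input; receptive fields may overlap
fields = [np.array([0, 1]), np.array([1, 2, 3])]
R_px = flat_redistribute(fields, np.array([0.6, 0.9]), 4)
```

Because each neuron's relevance is split evenly among the pixels it covers, the heatmap's resolution is bounded by the receptive-field size at the chosen decomposition depth, which is exactly the resolution/semantics trade-off the paper's title refers to.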