2019
DOI: 10.1007/978-3-030-28954-6_10
Layer-Wise Relevance Propagation: An Overview

Cited by 523 publications (472 citation statements)
References 38 publications
“…Such explanations help to verify the predictions and establish trust in the correct functioning of the system. Layer-wise Relevance Propagation (LRP) [9,58] provides a general framework for explaining individual predictions, i.e., it is applicable to various ML models, including neural networks [9], LSTMs [7], Fisher Vector classifiers [44] and Support Vector Machines [35]. Section 4 gives an overview of recently proposed methods for computing individual explanations.…”
Section: Explaining Individual Predictions
confidence: 99%
“…(a) Due to its attention-based design, a trained model can be used to compute the attention weights of a sequence, which directly indicate its importance. (b) DeepRC furthermore allows for the use of contribution analysis methods, such as Integrated Gradients (IG) (Sundararajan et al, 2017) or Layer-Wise Relevance Propagation (Montavon et al, 2018; Arras et al, 2019; Montavon et al, 2019; Preuer et al, 2019). We apply IG to identify the input patterns that are relevant for the classification.…”
Section: A8 Interpreting DeepRC
confidence: 99%
“…LRP has been most commonly applied to deep rectifier networks. In these networks, the activations at the current layer can be computed from activations in the previous layer as a_k = max(0, ∑_{0,j} a_j w_{jk}), where the sum runs over the lower-layer neurons j together with a bias term (a_0 = 1). A general family of propagation rules for such types of layer is given by [51]:…”
Section: LRP In Deep Neural
confidence: 99%
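The excerpt above ends before stating the rule itself. One widely used member of this family is the ε-stabilized generic rule, R_j = ∑_k (a_j ρ(w_jk)) / (ε + ∑_{0,j} a_j ρ(w_jk)) · R_k, where ρ is a mapping applied to the weights (the identity for LRP-0). Below is a minimal NumPy sketch for a single dense ReLU layer; the function name and toy arrays are illustrative, not taken from the cited paper:

```python
import numpy as np

def lrp_generic(a, w, b, R_upper, rho=lambda W: W, eps=1e-9):
    """Generic LRP backward pass through one dense ReLU layer.

    a:       (J,) lower-layer activations
    w:       (J, K) weights, b: (K,) biases
    R_upper: (K,) relevance scores at the layer output
    rho:     weight mapping (identity -> LRP-0)
    """
    z = eps + a @ rho(w) + rho(b)   # z_k = eps + sum_{0,j} a_j rho(w_jk)
    s = R_upper / z                 # normalize upper relevance per neuron k
    return a * (rho(w) @ s)         # R_j = a_j * sum_k rho(w_jk) s_k

# Toy example: two input neurons, two output neurons, zero biases.
a = np.array([1.0, 2.0])
w = np.array([[1.0, -1.0],
              [0.5,  1.0]])
b = np.zeros(2)
R = np.array([1.0, 1.0])
Rj = lrp_generic(a, w, b, R)        # -> array([-0.5, 2.5])
```

With ε → 0 and zero biases the rule is conservative: the redistributed relevance sums to the relevance that entered the layer (here both sums equal 2.0).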
“…Specific propagation rules such as LRP-ε, LRP-α₁β₀ and LRP-γ fall under this umbrella. They are easy to implement [42,51] and can be interpreted as the result of a deep Taylor decomposition of the neural network function [52]. On convolutional neural networks for computer vision, composite strategies making use of different rules at different layers have been shown to work well in practice [43,51].…”
Section: LRP In Deep Neural
confidence: 99%
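As a concrete instance of the named rules, LRP-γ corresponds to choosing the weight mapping ρ(w) = w + γ·max(0, w) inside the generic propagation rule, which boosts positive contributions. A minimal self-contained sketch (function name and toy data are illustrative):

```python
import numpy as np

def lrp_gamma(a, w, b, R_upper, gamma=0.25, eps=1e-9):
    """LRP-gamma for one dense ReLU layer: favor positive contributions
    by amplifying positive weights with factor (1 + gamma)."""
    rho = lambda W: W + gamma * np.clip(W, 0.0, None)
    z = eps + a @ rho(w) + rho(b)   # stabilized denominator per output neuron
    s = R_upper / z
    return a * (rho(w) @ s)

# Same toy layer as before: two inputs, two outputs, zero biases.
a = np.array([1.0, 2.0])
w = np.array([[1.0, -1.0],
              [0.5,  1.0]])
b = np.zeros(2)
R = np.array([1.0, 1.0])
Rj = lrp_gamma(a, w, b, R, gamma=0.25)
```

Larger γ shifts relevance further toward positively contributing inputs while the total redistributed relevance still matches the input relevance (conservation, up to ε and bias terms).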