2015
DOI: 10.1371/journal.pone.0130140

On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation

Abstract: Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solut…

Cited by 3,622 publications (3,382 citation statements). References 35 publications.
“…Our work is closely related to previous visualization approaches that compute the contribution of a unit at the input layer to the final decision at the output layer (Simonyan et al., 2014; Mahendran and Vedaldi, 2015; Nguyen et al., 2015; Girshick et al., 2014; Bach et al., 2015). Among them, our approach bears most resemblance to (Bach et al., 2015) since we adapt layer-wise relevance propagation to neural machine translation.…”
Section: Related Work
confidence: 88%
“…The contextual word set of a hidden state v ∈ R^{M×1} is denoted as C(v), which is a set of source and target contextual word vectors u ∈ R^{N×1} that influence the generation of v (Bach et al., 2015).…”
Section: Definition
confidence: 99%
“…This redistribution rule has been shown to fulfill the layer-wise conservation property [10] and to be closely related to a deep variant of Taylor decomposition [11].…”
Section: B. Interpretability
confidence: 94%
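The layer-wise conservation property referenced in the last statement can be illustrated in code: when relevance at a layer's output is redistributed to its inputs in proportion to each input's contribution, the total relevance stays (approximately) constant from layer to layer, all the way down to the pixel-wise scores. The following is a minimal sketch using an epsilon-stabilized redistribution rule on a tiny two-layer ReLU network with zero biases; the network shapes, random weights, and `eps` value are illustrative assumptions, not taken from Bach et al. (2015).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: dense -> ReLU -> dense (scalar output to be explained).
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(6, 1))

x = rng.normal(size=4)
a1 = np.maximum(0.0, x @ W1)   # hidden ReLU activations
y = a1 @ W2                    # output score to be explained

def lrp_dense(a, W, R_out, eps=1e-9):
    """Redistribute relevance R_out from a dense layer's outputs to its
    inputs in proportion to each contribution z_ij = a_i * w_ij
    (epsilon-stabilized to avoid division by near-zero sums)."""
    z = a[:, None] * W              # individual contributions z_ij
    s = z.sum(axis=0)               # total contribution per output unit
    s = s + eps * np.sign(s)        # stabilizer
    return (z / s) @ R_out          # R_i = sum_j (z_ij / z_j) * R_j

R2 = y.copy()                # start from the output score
R1 = lrp_dense(a1, W2, R2)   # relevance at the hidden layer
R0 = lrp_dense(x, W1, R1)    # "pixel-wise" relevance at the input

# Conservation: relevance totals agree across layers (up to eps).
print(R0.sum(), R1.sum(), y.sum())
```

With zero biases and a tiny `eps`, the sums `R0.sum()`, `R1.sum()`, and `y.sum()` coincide to numerical precision, which is exactly the conservation behavior the citing work verifies for this rule.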