2016
DOI: 10.1007/978-981-10-0557-2_87
Layer-Wise Relevance Propagation for Deep Neural Network Architectures

Cited by 134 publications (88 citation statements: 1 supporting, 87 mentioning, 0 contrasting)
References 12 publications
“…An important improvement in relation to clinical acceptance would be to implement supporting explanation methods in the predictions, such as layerwise relevance propagation, deep Taylor decomposition, pattern attribution, or other DL explanation approaches [34,35]. It is easy to imagine that a model that is more interpretable and supported by explanations would be more easily accepted in the clinic.…”
Section: Black Box (mentioning)
confidence: 99%
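The layer-wise relevance propagation named in this excerpt redistributes a network's output score backwards through the layers so that every input feature receives a relevance value. As a rough illustration (not the cited paper's implementation), here is a minimal NumPy sketch of the epsilon rule for one dense layer, with an illustrative function name and bias terms omitted:

    import numpy as np

    def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
        # One epsilon-rule LRP step for a dense layer.
        # weights: (in_dim, out_dim); activations: (in_dim,);
        # relevance_out: (out_dim,) relevance already assigned to the outputs.
        z = activations[:, None] * weights                      # z_ij = a_i * w_ij
        denom = z.sum(axis=0)                                   # z_j = sum_i z_ij
        denom = np.where(denom >= 0, denom + eps, denom - eps)  # epsilon stabiliser
        return (z / denom) @ relevance_out                      # R_i = sum_j (z_ij / z_j) R_j

Applied from the output layer back to the input, each step approximately conserves the total relevance, and the values arriving at the input form a per-feature heatmap.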
“…Attribution, a term introduced by Ancona et al. (2018), also referred to as relevance (Bach et al., 2015; Binder et al., 2016; Zintgraf et al., 2017; Robnik-Šikonja and Kononenko, 2008), contribution (Shrikumar et al., 2017), class saliency (Simonyan et al., 2013) or influence (Kindermans et al., 2016; Adler et al., 2016; Koh and Liang, 2017), aims to reveal components of high importance in the input to the DNN and their effect as the input is propagated through the network. Because of this property we can place the following methods in the attribution category: occlusion (Güçlütürk et al., 2017), erasure (Li et al., 2016), perturbation (Fong and Vedaldi, 2017), adversarial examples (Papernot et al., 2017) and prediction difference analysis (Zintgraf et al., 2017).…”
Section: Attribution Methods (mentioning)
confidence: 99%
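Among the perturbation-style methods this excerpt groups under attribution, occlusion is the simplest to sketch: mask one image region at a time and record how much the target-class score drops. A minimal sketch, assuming a hypothetical model_fn callable that maps an (H, W, C) array to a vector of class scores (the interface is an assumption, not taken from the cited works):

    import numpy as np

    def occlusion_map(model_fn, image, target_class, patch=8, stride=8, baseline=0.0):
        # Slide a patch x patch mask over the image; the score drop under each
        # mask position is taken as that region's importance.
        h, w = image.shape[:2]
        base_score = model_fn(image)[target_class]
        heatmap = np.zeros((h, w))
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = baseline  # mask the region
                drop = base_score - model_fn(occluded)[target_class]
                heatmap[y:y + patch, x:x + patch] = drop       # larger drop = more important
        return heatmap

With stride equal to the patch size the mask positions tile the image; a smaller stride gives a smoother map at the cost of more forward passes.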
“…An analysis of DNN-based background subtraction is needed to discuss its characteristics and issues. Visualization methods for analyzing DNNs have been proposed [28][29][30]; their authors visualize the features that contribute to a DNN's classification.…”
Section: Related Work (mentioning)
confidence: 99%