2019
DOI: 10.1016/j.imavis.2019.02.005

Beyond saliency: Understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation

Abstract: Despite the tremendous achievements of deep convolutional neural networks (CNNs) in many computer vision tasks, understanding how they actually work remains a significant challenge. In this paper, we propose a novel two-step understanding method, namely Salient Relevance (SR) map, which aims to shed light on how deep CNNs recognize images and learn features from areas, referred to as attention areas, therein. Our proposed method starts out with a layer-wise relevance propagation (LRP) step which estimates a…
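The abstract is truncated above, but the LRP step it names is well documented in the literature. Below is a minimal NumPy sketch of ε-rule relevance propagation for a plain ReLU multilayer perceptron; the function names, the network shape, and the choice to seed relevance with the output activations are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Epsilon-rule LRP for a plain ReLU MLP (illustrative sketch only).

    weights: list of (d_in, d_out) arrays; biases: matching vectors.
    Returns a relevance score per input feature, same shape as x.
    """
    # Forward pass, keeping the input to every layer.
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(relu(activations[-1] @ W + b))

    # Seed the backward pass with the output activations (an assumption;
    # one can instead seed with a one-hot mask of the target class).
    R = activations[-1]

    # Redistribute relevance layer by layer back to the input.
    for l in range(len(weights) - 1, -1, -1):
        a, W = activations[l], weights[l]
        z = a @ W + biases[l]                      # pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser
        s = R / z                                  # per-neuron ratio
        R = a * (s @ W.T)                          # relevance of layer inputs
    return R
```

For an MNIST-sized model, `lrp_epsilon(weights, biases, image.ravel())` would return a 784-vector that can be reshaped into a 28×28 relevance heatmap; the paper's second step then presumably refines such a map into the salient attention areas, which this sketch does not cover.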

Cited by 32 publications (14 citation statements) | References 40 publications

Citation statements (ordered by relevance):
“…In [203], reinforcement learning (RL) is proposed to build interpretable decision support systems for heart patients; the system learns what is interpretable to each user from their interactions. Another common method for interpreting/explaining deep models, CNNs in particular, is the use of saliency maps [204], [205]. These methods are aimed at general applications, however; more research focused specifically on interpreting the ML/DL systems used in healthcare is required.…”
Section: A. Interpretable ML (mentioning, confidence: 99%)
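Since the excerpt above names saliency maps [204], [205] as the common interpretation route, here is a minimal PyTorch sketch of the simplest variant, vanilla gradient saliency; the pretrained ResNet-18 and the random stand-in image are assumptions made only so the example runs end to end.

```python
import torch
import torchvision.models as models

# Vanilla gradient saliency: the gradient of the top-class score with
# respect to the input pixels marks the regions the prediction depends on.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
score = model(image)[0].max()   # top-class logit
score.backward()                # d(score)/d(pixels)

# Saliency map: maximum absolute gradient across the colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
```

The resulting 224×224 map can be rendered as the kind of translucent heatmap overlay described in the next excerpt.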
“…One research area generates translucent heatmaps that overlay images to highlight the important regions that contribute to a classification, and their sensitivity [4], [69], [92], [93], [94]. One technique, called visual backpropagation, visualizes which parts of an image contributed to the classification, and can do so in real time within a model-debugging tool for self-driving vehicles [30].…”
Section: How to Visualize Deep Learning (mentioning, confidence: 99%)
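The visual backpropagation technique this excerpt mentions [30] averages each convolutional layer's feature maps and propagates the deepest average back toward the input, re-weighting it with every shallower layer's average. The sketch below approximates that idea with bilinear upsampling in place of the original deconvolutions; the function name and the list-of-feature-maps interface are assumptions, not the tool's actual API.

```python
import torch
import torch.nn.functional as F

def visual_backprop(feature_maps):
    """VisualBackProp-style relevance mask (sketch of the idea in [30]).

    feature_maps: conv activations ordered shallow -> deep,
    each shaped (1, C_l, H_l, W_l).
    """
    # Channel-average each layer's activations.
    means = [fm.mean(dim=1, keepdim=True) for fm in feature_maps]

    mask = means[-1]
    for m in reversed(means[:-1]):
        # Upsample the running mask to the shallower layer's resolution
        # and modulate it with that layer's averaged activations.
        mask = F.interpolate(mask, size=m.shape[-2:],
                             mode="bilinear", align_corners=False)
        mask = mask * m

    # Normalise to [0, 1] so it can be overlaid as a translucent heatmap.
    mask = mask - mask.min()
    return mask / (mask.max() + 1e-8)
```

Because it only needs averages and upsampling, this scheme is cheap enough for the real-time use the excerpt describes.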
“…A variant of ε-LRP is SpRAy [80], which builds a spectral clustering on top of the local instance-based ε-LRP explanations. Similar work is done in [82]: it starts with the ε-LRP of the input instance and finds the LRP attribution relevance for a single input of interest x.…”
Section: Saliency Maps (mentioning, confidence: 99%)
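SpRAy, as described in the excerpt, clusters many per-instance LRP explanations to expose recurring prediction strategies. A minimal scikit-learn sketch of that clustering stage follows; the placeholder relevance maps, the cluster count, and the neighbourhood size are assumptions, and in practice the maps would come from an LRP implementation such as the ε-rule sketch earlier on this page.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# SpRAy-style analysis (sketch of the idea in [80]): spectral clustering
# over per-instance LRP relevance maps.  Placeholder data stands in for
# maps precomputed with an LRP method.
relevance_maps = np.random.rand(200, 28, 28)   # 200 MNIST-sized maps

X = relevance_maps.reshape(len(relevance_maps), -1)  # one row per explanation

clusters = SpectralClustering(
    n_clusters=4,                  # assumed number of explanation strategies
    affinity="nearest_neighbors",  # graph built from similar explanations
    n_neighbors=10,
    assign_labels="kmeans",
    random_state=0,
).fit_predict(X)

# Inspecting a few maps per cluster reveals what each strategy attends to.
for k in range(4):
    print(f"cluster {k}: {np.sum(clusters == k)} explanations")
```

Clusters whose relevance concentrates on spurious regions (e.g. watermarks or borders) flag strategies the model has learned that a per-image saliency map alone would not surface.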