2022
DOI: 10.1016/j.displa.2022.102239
A review of visualisation-as-explanation techniques for convolutional neural networks and their evaluation


Cited by 34 publications (12 citation statements)
References 43 publications
Citation types: 0 supporting, 12 mentioning, 0 contrasting
Years of citing publications: 2022, 2024
“…To combine LRP and DTD, LRP can be thought of as providing the framework for propagating relevance through a network, whereas DTD provides the means for approximating the complex non-linear functions used by the network. LRP and DTD may lead to overcoming the limitations of saliency maps and provide more accurate explanations [75].…”
Section: Introduction of the Explainable AI Method: A Brief Overview
confidence: 99%
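The excerpt above describes LRP as a framework for redistributing relevance backwards through a network. A minimal numpy sketch of the widely used LRP-ε rule for a single dense layer may make the mechanics concrete; the function name, toy dimensions, and ε value here are illustrative and not taken from the cited works:

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """LRP-epsilon rule for one dense layer.

    a     : (n_in,)        input activations to the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       biases
    R_out : (n_out,)       relevance arriving from the layer above
    Returns R_in : (n_in,) relevance redistributed onto the inputs.
    """
    z = a @ W + b                 # forward pre-activations
    z = z + eps * np.sign(z)      # epsilon stabiliser avoids division by ~0
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # each input takes its weighted share

# toy example: 3 inputs, 2 outputs, all relevance assigned to class 0
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.random((3, 2))
b = np.zeros(2)
R_in = lrp_epsilon_dense(a, W, b, np.array([1.0, 0.0]))
print(R_in.sum())  # with zero biases, total relevance is (approximately) conserved
```

Applied layer by layer from the output back to the input, this rule yields a per-pixel relevance map; DTD can then be seen as justifying the choice of propagation rule at each non-linearity.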
“…A different approach is necessary in the case of CNNs. For example, for a CNN that classifies images into different categories, a common approach is to use saliency maps, which measure the support that different groups of pixels in an image provide for a particular class (Mohamed et al 2022). This is implemented by feeding the CNN an image of a particular class and using visualisation techniques to generate heatmaps overlaid on the original image; the image elements that the CNN uses to identify the class are highlighted in red.…”
Section: Interpretable Machine Learning
confidence: 99%
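One simple way to "measure the support that different groups of pixels provide for a class", as the excerpt puts it, is occlusion sensitivity: hide each image region in turn and record how much the class score drops. The sketch below uses a hypothetical `toy_score` function standing in for a trained CNN's class score; names and patch size are assumptions for illustration, not the cited paper's method:

```python
import numpy as np

def occlusion_saliency(score_fn, image, patch=4, baseline=0.0):
    """Occlusion-based saliency map.

    Slides a baseline-valued patch over the image and records, per region,
    the drop in the class score when that region is hidden.
    score_fn : callable mapping an (H, W) image to a scalar class score
    Returns a heatmap of the same shape: larger value = more support.
    """
    H, W = image.shape
    base_score = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            heat[y:y + patch, x:x + patch] = base_score - score_fn(occluded)
    return heat

# toy "classifier": the score is the mean intensity of the centre 8x8 region
def toy_score(img):
    return img[4:12, 4:12].mean()

img = np.ones((16, 16))
heat = occlusion_saliency(toy_score, img)
# centre patches carry the support for the score; corner patches contribute nothing
print(heat[6, 6] > heat[0, 0])
```

The resulting heatmap is typically normalised and rendered as a colour overlay on the original image, which is exactly the red-highlighted visualisation the excerpt describes.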
“…To validate the reliability of the best single-head YOLO V3 detector, novel techniques for visualising the network decisions are applied. The proposed techniques can be applied to classification tasks like other visualisation techniques [ 54 ]. However, applying them to different tasks, such as object detection, is a novel contribution.…”
Section: Visualisation of Detector Predictions
confidence: 99%