2021
DOI: 10.1101/2021.05.05.21256683
Preprint

Clinical Validation of Saliency Maps for Understanding Deep Neural Networks in Ophthalmology

Abstract: Deep neural networks (DNNs) have achieved physician-level accuracy on many imaging-based medical diagnostic tasks, for example classification of retinal images in ophthalmology. However, their decision mechanisms are often considered impenetrable, leading to a lack of trust by clinicians and patients. To alleviate this issue, a range of explanation methods have been proposed to expose the inner workings of DNNs leading to their decisions. For imaging-based tasks, this is often achieved via saliency maps. The qu…

Cited by 10 publications (16 citation statements)
References 64 publications
“…Saliency maps highlight critical regions for the model’s decision and thus allow a quick visual control of its reasoning. However, it needs to be kept in mind that, first, various methods of saliency map generation exist with different degrees of agreement with clinical validation [6, 48, 56] and, second, saliency maps can lead to overdiagnosis [45], while some methods have also been shown to generate maps independent of the final decision of the algorithm [36]. Therefore, we only displayed saliency maps when the algorithm's confidence was > 0.5.…”
Section: Discussion
confidence: 99%
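The confidence gating described in the statement above (suppressing saliency maps when the model's confidence is at or below 0.5) can be sketched as a small helper. This is an illustrative assumption, not code from the cited paper; the function name and the normalization step are invented for demonstration:

```python
import numpy as np

def gated_saliency(saliency, confidence, threshold=0.5):
    """Return a display-ready saliency map only when the model's
    confidence exceeds the threshold; otherwise suppress it."""
    if confidence <= threshold:
        return None
    s = np.abs(saliency)
    return s / (s.max() + 1e-12)  # scale to [0, 1] for overlay display
```

A caller would then simply skip rendering the overlay whenever the helper returns `None`, so low-confidence predictions are never accompanied by a potentially misleading heat map.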
“…We used Layer-wise Relevance Propagation (LRP) [7] to compute saliency maps, to highlight the regions in the OCT images which contributed to the DNN decisions. We have recently shown that a propagation rule known as LRP-PresetBFlat performs best in obtaining clinically relevant saliency maps from InceptionV3 networks trained to detect active nAMD from OCT B-scans [6]. Using this rule, we created three saliency maps for each OCT slice, namely, one for each task: subretinal (cyan), intraretinal (magenta) and disease activity in nAMD (yellow) (Fig.…”
Section: Methods
confidence: 99%
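LRP-PresetBFlat is a composite propagation rule applied to a full InceptionV3; as a minimal, hypothetical illustration of the underlying idea only, here is the basic LRP-epsilon rule on a toy two-layer ReLU network. The network, function name, and rule choice are assumptions for demonstration, not the cited implementation:

```python
import numpy as np

def lrp_epsilon(x, W1, b1, W2, b2, eps=1e-6):
    # Forward pass through a toy two-layer ReLU network.
    z1 = x @ W1 + b1
    a1 = np.maximum(z1, 0.0)
    z2 = a1 @ W2 + b2                       # output logits
    c = int(np.argmax(z2))                  # explain the winning class
    R2 = np.zeros_like(z2)
    R2[c] = z2[c]
    # LRP-epsilon backward through the second layer:
    # relevance is split in proportion to each unit's contribution.
    contrib = a1[:, None] * W2              # a_j * w_jk
    denom = contrib.sum(axis=0) + b2 + eps * np.where(z2 >= 0, 1.0, -1.0)
    R1 = (contrib / denom) @ R2
    # ...and through the first layer, down to the input features.
    contrib = x[:, None] * W1
    denom = contrib.sum(axis=0) + b1 + eps * np.where(z1 >= 0, 1.0, -1.0)
    R0 = (contrib / denom) @ R1
    return R0, z2[c]
```

With zero biases and a small epsilon, the rule approximately conserves relevance: the input relevances `R0` sum to the explained logit, which is what makes the resulting map interpretable as a decomposition of the model's output.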
“…However, it is difficult to understand the characteristic areas at more detailed points, such as the defect of the cortical bone on the upper wall of the inferior alveolar canal. Thus, further research is required to examine the approach of explainable AI with guided backprop [33].…”
Section: Discussion
confidence: 99%
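Guided backpropagation, mentioned in the statement above, modifies the ReLU backward pass so that only positive gradients flowing through active units are propagated to the input. A minimal numpy sketch on a toy two-layer network (an illustrative assumption, not the cited implementation):

```python
import numpy as np

def guided_backprop(x, W1, W2):
    # Forward: two dense layers with a ReLU in between.
    z1 = x @ W1
    a1 = np.maximum(z1, 0.0)
    z2 = a1 @ W2
    c = int(np.argmax(z2))                  # explain the winning logit
    # Backward from that logit.
    g2 = np.zeros_like(z2)
    g2[c] = 1.0
    g1 = g2 @ W2.T
    # Guided ReLU: pass gradient only where the unit was active
    # (z1 > 0) AND the incoming gradient is positive (g1 > 0).
    g1 = g1 * (z1 > 0) * (g1 > 0)
    g0 = g1 @ W1.T                          # input-space saliency
    return g0
```

The extra `g1 > 0` mask is what distinguishes guided backprop from plain gradient saliency: it discards signal that would argue against the explained class, which tends to produce sharper but less decision-faithful maps.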