2022
DOI: 10.1007/s11517-021-02487-8
The adoption of deep learning interpretability techniques on diabetic retinopathy analysis: a review

Abstract: Diabetic retinopathy (DR) is a chronic eye condition that is rapidly growing due to the prevalence of diabetes. Challenges such as a dearth of ophthalmologists, healthcare resources, and facilities mean that many patients cannot be provided with appropriate eye screening services. As a result, deep learning (DL) has the potential to play a critical role as a powerful automated diagnostic tool in the field of ophthalmology, particularly in the early detection of DR when compared to traditional detection te…

Cited by 25 publications (13 citation statements)
References 35 publications
“…To the best of our knowledge, no studies have ever predicted common outcomes while maximizing the value of these models of patients in general ICU. Furthermore, because of the complexity of these deep learning models, they are not easy to interpret, which restricts their practical application to clinical decisions ( 21 , 22 ). Therefore, transparency and explainability must be considered when constructing prediction models.…”
Section: Introduction (mentioning)
confidence: 99%
“…Among XAI methods applied to these AI systems, most employ attribution-based methods to generate post hoc local heatmaps that represent the regions of the input image contributing most to the output decision [33,34,35]. By visualizing that the attention areas of DL models correlate with clinically relevant features, these heatmaps can potentially boost clinicians’ confidence in model output decisions [17,36].…”
Section: Clinical Applications of Explainable Artificial Intelligence... (mentioning)
confidence: 99%
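As a rough illustration of the attribution-based heatmaps described in the statement above, the following is a minimal vanilla-gradient saliency sketch in PyTorch; the ResNet-18 stand-in, the random input tensor, and the target class index are illustrative assumptions, not code from the cited works.

```python
# Minimal sketch of a post hoc attribution heatmap (vanilla gradient saliency).
# The model, input, and class index below are placeholders, not the cited systems.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a DR grading network (assumption)
model.eval()

def saliency_map(model, image, target_class):
    """Return |d(score_target)/d(input)| per pixel as a coarse attribution heatmap."""
    image = image.clone().requires_grad_(True)    # track gradients w.r.t. the input pixels
    scores = model(image.unsqueeze(0))            # forward pass -> (1, num_classes)
    scores[0, target_class].backward()            # gradient of the target-class logit
    return image.grad.abs().max(dim=0).values     # collapse colour channels to one heatmap

fundus = torch.rand(3, 224, 224)                       # placeholder preprocessed fundus image
heatmap = saliency_map(model, fundus, target_class=1)  # e.g. class 1 = referable DR (assumption)
```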
“…The model demonstrates the significance of data augmentation for better model performance. Lim et al [ 55 ] have performed a literature review on gradient-based interpretability methods in DL models, such as saliency maps, integrated gradients, layer-wise relevance propagation, occlusion testing, sensitivity analysis, class activation maps, and gradient-weighted class activation maps, in the detection of DR. It identifies the drawbacks of these interpretability methods, such as failure to detect the correct lesions for a given class and the lack of reliable ground truth.…”
Section: Literature Review (mentioning)
confidence: 99%
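Among the methods listed in the statement above, the gradient-weighted class activation map (Grad-CAM) can be sketched as follows; this is a minimal PyTorch illustration under assumed names (a ResNet-18 stand-in and its layer4 block), not the implementation reviewed by Lim et al.

```python
# Minimal Grad-CAM sketch: weight the last conv block's feature maps by their
# pooled gradients to obtain a class-discriminative heatmap. Model and input are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)   # stand-in for a DR classifier (assumption)
model.eval()

store = {}
model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

def grad_cam(image, target_class):
    scores = model(image.unsqueeze(0))            # forward pass, hook captures activations
    model.zero_grad()
    scores[0, target_class].backward()            # backward pass, hook captures gradients
    act, grad = store["act"], store["grad"]       # both (1, C, H, W)
    weights = grad.mean(dim=(2, 3), keepdim=True) # global-average-pool the gradients
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))  # weighted sum of feature maps
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()   # normalise to [0, 1], shape (H, W)

heatmap = grad_cam(torch.rand(3, 224, 224), target_class=1)  # placeholder fundus image
```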