2023
DOI: 10.1097/apo.0000000000000619
Review of Visualization Approaches in Deep Learning Models of Glaucoma

Abstract: Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity f…

Cited by 7 publications (2 citation statements) | References 72 publications
“…The Shapley analysis that was used to interrogate model decision-making may help explain the model performance at different time horizons. The incorporation of this analysis is also helpful for enhancing the explainability of the models, which is important considering that ML models have frequently been criticized for their "black box" nature [36]. Across all time horizons, several features had a moderate-to-large impact on the model predictions, including age, gender, self-reported race, axial length, CCT, IOP, MD, VFI, and PSD.…”
Section: Feature Importance
Citation type: mentioning
confidence: 99%
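The excerpt above describes a Shapley analysis of a tabular model's predictions. A minimal sketch of what such an analysis can look like in Python follows; the SHAP library usage is standard, but the XGBoost classifier, the file name glaucoma_cohort.csv, the outcome column, and the feature columns are illustrative assumptions, not the cited authors' pipeline.

```python
# Minimal sketch (assumed, not the cited authors' code): Shapley values for a
# tabular glaucoma-progression classifier. Feature names follow the excerpt
# above; the dataset and model choice are hypothetical.
import pandas as pd
import shap
import xgboost as xgb

features = ["age", "gender", "race", "axial_length", "cct",
            "iop", "md", "vfi", "psd"]

# Hypothetical tabular dataset, one row per eye/visit, with a binary
# "progressed" outcome column.
df = pd.read_csv("glaucoma_cohort.csv")
X, y = df[features], df["progressed"]

model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by mean |SHAP|, i.e., their average impact
# on the prediction, mirroring the feature-importance reading in the quote.
shap.summary_plot(shap_values, X)
```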
“…Gu et al. also discussed the strategies to improve the interpretability of clinical features from tabular data used to train explainable AI models [6]. Among these, Local Interpretable Model-Agnostic Explanations (LIME) [11] provide key feature visualizations for a model's glaucoma classification, increasing medical professionals' trust, and the Submodular Pick Local Interpretable Model-Agnostic Explanation (SP-LIME) [12] explicates predictive results and glaucoma risk factors, facilitating clearer decision-making. The Shapley value, from cooperative game theory and used to quantify individual contributions, is another form of explainable AI.…”
Citation type: mentioning
confidence: 99%
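To complement the Shapley sketch above, here is a minimal, self-contained LIME example on tabular data. The synthetic random data, the RandomForestClassifier, and the feature and class names are hypothetical stand-ins for the clinical features discussed in the excerpt, not the cited work's implementation.

```python
# Minimal sketch (assumed, not from the cited work): explaining a single
# tabular glaucoma classification with LIME.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["age", "iop", "cct", "md", "psd", "vfi"]
rng = np.random.default_rng(0)

# Stand-in training data; a real study would use clinical measurements.
X_train = rng.normal(size=(500, len(feature_names)))
y_train = rng.integers(0, 2, size=500)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["healthy", "glaucoma"],
    mode="classification",
)

# LIME fits a sparse linear surrogate around one instance and reports the
# locally most influential features, the per-case visualization the quote
# credits with increasing clinicians' trust.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```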