2022
DOI: 10.1109/tim.2022.3171613
Explainable AI for Glaucoma Prediction Analysis to Understand Risk Factors in Treatment Planning

Cited by 49 publications (14 citation statements)
References 48 publications
“…Chayan et al. [37] proposed that providing explainability through LIME would give medical professionals comprehensive information for decision-making and thus build their trust in the DL model. Kamal et al. [36] proposed another model, submodular pick LIME (SP-LIME), that explained the predictive results and the associated risk factors for determining the glaucoma class. They claimed their model allowed clinicians to better understand the decision-making process and to obtain convincing and consistent decisions [36].…”
Section: Results
confidence: 99%
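The LIME idea referenced in the quotation above — explaining one prediction by fitting an interpretable local surrogate — can be sketched in a few lines. The "glaucoma risk" model, the feature meanings, and all parameters below are illustrative assumptions for this sketch, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box "glaucoma risk" classifier over 3 features
# (think: intraocular pressure, cup-to-disc ratio, age) -- purely illustrative.
def black_box(X):
    z = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2]
    return 1.0 / (1.0 + np.exp(-z))  # probability of the "glaucoma" class

def lime_explain(instance, predict, n_samples=5000, width=1.0):
    """Fit a locally weighted linear surrogate around `instance` (LIME's core step)."""
    # 1) Perturb the instance with Gaussian noise to sample its neighbourhood.
    X = instance + rng.normal(scale=1.0, size=(n_samples, instance.size))
    # 2) Weight perturbed samples by proximity to the instance (RBF kernel).
    d2 = ((X - instance) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    # 3) Weighted least squares: the surrogate's coefficients are the
    #    per-feature local importances reported to the clinician.
    A = np.hstack([X, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * predict(X), rcond=None)
    return coef[:-1]  # drop the intercept; one weight per feature

x0 = np.array([0.5, 0.3, -0.1])
importances = lime_explain(x0, black_box)
print(importances)
```

For this toy model, the recovered local weights preserve the ordering of the true coefficients (feature 0 most influential, feature 2 least), which is exactly the kind of risk-factor ranking the quoted studies present to clinicians. SP-LIME extends this by picking a small, diverse set of such per-instance explanations to summarize the whole model.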
“…To gain further insight into the decision-making process of the ML algorithms, we employed XAI techniques, specifically the SHapley Additive exPlanations (SHAP) method [125,126]. By leveraging SHAP, we examined the impact of different features on the classification outcomes, improving our understanding of species-specific information and the distinct effect of individual features on each species.…”
Section: Explainable Artificial Intelligence
confidence: 99%
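The SHAP attributions mentioned in the quotation above are Shapley values from cooperative game theory: each feature's contribution is its average marginal effect over all coalitions of the other features. For a small feature count this can be computed exactly by enumeration. The toy additive model and baseline below are illustrative assumptions, not the cited authors' setup:

```python
import itertools
import math

import numpy as np

# Toy model over 3 features; its additive form makes the exact
# Shapley values easy to verify by hand -- illustrative only.
def model(x):
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[2]

background = np.zeros(3)  # baseline ("absent") feature values

def shapley_values(x, f, baseline):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = x.size
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                z = baseline.copy()
                z[list(S)] = x[list(S)]          # coalition S present
                without_i = f(z)
                z[i] = x[i]                       # add feature i
                with_i = f(z)
                phi[i] += w * (with_i - without_i)
    return phi

x = np.array([1.0, 1.0, 1.0])
phi = shapley_values(x, model, background)
print(phi)  # per-feature attributions
```

The attributions satisfy the efficiency property — they sum to `model(x) - model(background)` — which is what makes SHAP summaries of feature impact internally consistent. For an additive model like this one, each attribution equals the corresponding coefficient; the SHAP library's explainers approximate the same quantity efficiently for real models.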
“…The robust XAI, or explainable machine learning (XML), methods LIME [47] and SHAP [48] have been applied in various fields and shown to be effective, especially in medical and clinical settings. Several studies confirmed that LIME [49-52] and SHAP [51-55] can explain models and justify model decisions. Given the robustness of these XML methods, we used them to analyse our data and obtained the expected results.…”
Section: Introduction
confidence: 99%