2021
DOI: 10.1097/icu.0000000000000780
Applications of interpretability in deep learning models for ophthalmology

Abstract: Purpose of review: In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare. Recent findings: The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines their users' ability to understand, debug and ultimately trust them in clinical practice. Novel methods are …


Cited by 18 publications (15 citation statements)
References 67 publications
“…In addition, Weinreb RN contributed significantly to the research of glaucoma disease and made a breakthrough in the use of AI in the classification and progress prediction of glaucoma in multiple dimensions and methods ( 23 , 24 ). Keane PA focused on the diversity and interpretability of AI models, as well as the transformation and clinical application of AI ( 25 , 26 ). Sun Yat-sen University’s Lin HT has made significant contributions to the advancement of AI ophthalmology in China.…”
Section: Discussion
confidence: 99%
“… 19–21 However, as models increase in complexity, such models are not only inherently challenging to audit for quality but also limit the user’s ability to “debug” or make informed adjustments to the underlying algorithm. 22 Nomogram-based clinical modelling is a reliable statistical and straightforward visual tool. A comprehensive analysis of all related risk factors can accurately calculate and predict disease occurrence, development, prognosis, and survival by nomogram.…”
Section: Discussion
confidence: 99%
“…Interpretability and transparency are crucial to achieving augmented intelligence. There have been increasing innovative efforts in exploring approaches to improve AI's interpretability [50] and explaining how the prediction is derived. Interpretability should be inherently considered in model construction for bedside clinical use, guiding the choice of algorithm to train a model.…”
Section: Interpretability
confidence: 99%