2020
DOI: 10.1148/ryai.2020190043
On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities

Abstract: As artificial intelligence (AI) systems begin to make their way into clinical radiology practice, it is crucial to assure that they function correctly and that they gain the trust of experts. Toward this goal, approaches to make AI "interpretable" have gained attention to enhance the understanding of a machine learning algorithm, despite its complexity. This article aims to provide insights into the current state of the art of interpretability methods for radiology AI. This review discusses radiologists' opini…

Cited by 309 publications (244 citation statements)
References 33 publications
“…Although that term is not completely accurate as the model parameters are available and can be inspected, the reality is that those parameters are hard to explain, and it is difficult to translate their meaning into general principles and rules that can be understood by humans. The ability to draw a line between the inference that the network derives from the data and understandable governing principles is an area of active research in the AI community that needs to mature 79 .…”
Section: AI Limitations and Challenges
Citation type: mentioning (confidence: 99%)
“…• The use of patient data to develop and commercialize these models [14]; • The (lack of) interpretability and transparency on how an algorithm arrived at its output [15]; • The potential sources of bias that may cloud a model's predictions and reinforce social inequality [16].…”
Section: Noteworthy Topic 1: Ethics in SSII
Citation type: mentioning (confidence: 99%)
“…(ii) Explainable AI, where neural networks can, for example, produce the regions of the image that provide the most decisive information supporting the predicted image level label, are covered in more detail in recent reviews [16], [17]. Recently, these approaches are accompanied by domain fusion, for example augmenting MRI of Alzheimer [18] patients with meta-data to learn the MRI signature of Alzheimer disease, or fusing diagnostic reports with image data [19] to offer interpretable improved diagnosis.…”
Section: Challenges in Current Approaches
Citation type: mentioning (confidence: 99%)
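
The excerpt above describes explainable-AI methods that highlight the image regions most responsible for a network's predicted label. As a rough illustration only, and not the method of any paper cited here, the sketch below computes a simple gradient-based saliency map in PyTorch; the model, weights, and input size are placeholder assumptions.

```python
# Minimal sketch of gradient-based saliency, assuming a PyTorch CNN classifier.
# The model, weights, and input below are placeholders, not taken from the cited papers.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # any image classifier would do here
model.eval()

# Stand-in for a preprocessed radiograph: batch of 1, 3 channels, 224x224 pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input pixels.
logits[0, predicted_class].backward()

# The per-pixel gradient magnitude acts as a crude saliency map: large values
# mark regions whose change would most affect the predicted score.
saliency = image.grad.detach().abs().max(dim=1)[0]  # shape: (1, 224, 224)
```

In practice such maps are overlaid on the input image so a reader can judge whether the highlighted anatomy matches the clinical reasoning behind the label, which is the kind of interpretability check the reviewed article discusses.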