2021
DOI: 10.2196/28776

Key Technology Considerations in Developing and Deploying Machine Learning Models in Clinical Radiology Practice

Abstract: The use of machine learning to develop intelligent software tools for the interpretation of radiology images has gained widespread attention in recent years. The development, deployment, and eventual adoption of these models in clinical practice, however, remains fraught with challenges. In this paper, we propose a list of key considerations that machine learning researchers must recognize and address to make their models accurate, robust, and usable in practice. We discuss insufficient training data, decentra…

Cited by 17 publications (22 citation statements)
References 99 publications
“…The performance of a machine learning system in practice depends on a large number of considerations [47]. State-of-the-art neural network architectures for image classification contain millions of trainable parameters, and they are trained on hundreds of thousands of scans [37].…”
Section: Discussion
confidence: 99%
“…Specialized explainer systems of explainable AI (widely acknowledged as an important feature of practical deployment of AI models) aim at explaining AI inferences to human users [19]. Explainability in radiology can be improved by using localization models, which can highlight the region of suspected abnormality (region of interest) in the scan, instead of using classification models, which only indicate the presence or absence of an abnormality [20]. Although an explainable system does not refer to an explicit human model and only indicates or highlights the decision-relevant parts of the AI model (ie, parts that contributed to a specific prediction), causability refers primarily to a human-understandable model.…”
Section: Level 3: Conditional Automation
confidence: 99%
“…This may lead to uncertainty in the ground truth labels. The problem of ambiguous ground truth can be mitigated by using expert adjudication [21] or multiphasic review [22] to create high-quality labels, which may help yield better models than other approaches in improving model performance on original labels [20]. Additionally, imaging protocols, manufacturers of imaging modalities, and the process of storing and processing medical data differ between organizations, which impedes the use of data from different sources for AI applications [23].…”
Section: Level 3: Conditional Automation
confidence: 99%