2021
DOI: 10.1038/s42256-021-00399-8
Designing clinically translatable artificial intelligence systems for high-dimensional medical imaging

Cited by 46 publications (28 citation statements)
References 52 publications
“… 3,4 Certain methods, such as saliency maps or feature attribution, attempt to deduce how these learning algorithms detect complex features. 99 However, just 2.1% (n = 5) of studies reported such methods, hindering model interpretation. This highlights the importance of future work reporting DL interpretation to improve comprehension and transparency of algorithmic predictions.…”
Section: Discussion
Confidence: 99%
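
The excerpt above names saliency maps and feature attribution as ways to inspect what a trained model responds to. Purely as an illustration of that idea, the sketch below computes a vanilla-gradient saliency map in PyTorch; the toy CNN, synthetic image, and tensor shapes are assumptions for the example, not the cited study's model or data.

```python
# A minimal sketch of a gradient-based ("vanilla gradients") saliency map,
# one flavour of the feature-attribution methods named in the excerpt.
# The toy CNN, synthetic image, and shapes are illustrative assumptions only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier standing in for a trained medical-imaging model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # synthetic input image

logits = model(image)
target = logits[0].argmax()

# Gradient of the predicted-class score with respect to the input pixels.
logits[0, target].backward()

# Saliency: per-pixel maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (64, 64)
print(saliency.shape)
```

Larger values in `saliency` mark pixels whose perturbation most changes the predicted-class score, which is the basic signal that more elaborate attribution methods build on.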
“…In the vein of personalized medicine, this process is solely dependent on data from the patient in question, which avoids the necessity for consideration of confounders introduced by different patients 33,50 . This constitutes an extremely relevant feature in the face of increasing evidence that many research studies overestimate the performance of DL algorithms due to poor selection of test data relative to the training data 51,52 . Furthermore, algorithms trained on data from a specific camera may not necessarily generalize to slightly adapted conditions 33,53 .…”
Section: Discussion
Confidence: 99%
“…In prospective applications, even a robust deep learning model could still be vulnerable to new or unknown confounding factors or new data that fall outside the purview of the original training data (e.g., images acquired with a new MRI scanner). Uncertainty estimation to detect out-of-distribution samples is a recent area of interest in deep learning [5, 15], which has seen the development of sophisticated means of measuring uncertainty [16–18], usually formulated as methods for quantifying and detecting the “distance” of an input datapoint from the training set. A simpler approach is to train an ensemble of independent base learners and average their output, quantifying uncertainty by measuring how consistently they agree with one another [19, 20].…”
Section: Introduction
Confidence: 99%
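
The final sentence of this excerpt describes the simple ensemble recipe: train several independent base learners, average their outputs, and treat disagreement between members as the uncertainty signal. The sketch below illustrates that recipe in PyTorch with assumed toy components (untrained two-layer MLPs, synthetic inputs, five ensemble members); it is a minimal illustration, not the implementation from references [19, 20].

```python
# A minimal sketch of ensemble-based uncertainty: average the members'
# predictions and use their disagreement (variance across members) as an
# uncertainty score. The tiny MLP, synthetic data, and ensemble size are
# illustrative assumptions, not the cited papers' setup.
import torch
import torch.nn as nn

def make_base_learner() -> nn.Module:
    # Each member gets its own random initialisation (and, in practice, its
    # own training run); that independence is what drives disagreement.
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

torch.manual_seed(0)
ensemble = [make_base_learner() for _ in range(5)]

x = torch.randn(8, 16)  # a batch of 8 synthetic inputs

with torch.no_grad():
    # Per-member class probabilities: shape (members, batch, classes).
    probs = torch.stack([m(x).softmax(dim=-1) for m in ensemble])

mean_prediction = probs.mean(dim=0)         # averaged ensemble output
uncertainty = probs.var(dim=0).sum(dim=-1)  # disagreement across members

print(mean_prediction.shape, uncertainty.shape)  # (8, 2) and (8,)
```

In practice each member would first be trained on the task data (often with different initialisations or bootstrapped samples); a high disagreement score on a new input then flags it as potentially out-of-distribution and its prediction as less trustworthy.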