2023
DOI: 10.1038/s41746-023-00868-x

Challenges of implementing computer-aided diagnostic models for neuroimages in a clinical setting

Abstract: Advances in artificial intelligence have cultivated a strong interest in developing and validating the clinical utilities of computer-aided diagnostic models. Machine learning for diagnostic neuroimaging has often been applied to detect psychological and neurological disorders, typically on small-scale datasets or data collected in a research setting. With the collection and collation of an ever-growing number of public datasets that researchers can freely access, much work has been done in adapting machine le…

Cited by 19 publications (8 citation statements)
References 127 publications
“…CE-marked and FDA-approved commercial tools for clinical decision support by brain morphometry have meanwhile become available for application in patients with multiple sclerosis and various forms of dementia. Despite formal approval for diagnostic purposes, a deficiency of these tools is that validation, especially in clinical terms, in many cases still is an open topic of research (Pemberton et al, 2021; Mendelson et al, 2023) due to a multitude of factors (Haller et al, 2022; Leming et al, 2023; Hedderich et al, 2023). This is remarkable, since an international survey among practitioners investigating their application of (commercial or scientific) brain morphometry tools has clearly shown that user acceptance is associated with the availability of technical and clinical validation studies (Vernooij et al, 2019).…”
Section: Summary and Discussion (mentioning)
confidence: 99%
“…With industry salaries increasing relative to what healthcare systems can pay, finding staff to fill such roles is becoming more and more of a challenge. [2] Several no-code platforms for training ML models exist from companies like Amazon, Apple, Clarifai, Google, and Microsoft. [9] Some of these platforms have been tested on publicly available medical imaging datasets.…”
Section: Introduction (mentioning)
confidence: 99%
“…[1] However, this pace of AI adoption within healthcare has been slow. [2] One cause of this is that hospitals are cost-strapped and still reeling from the pandemic, and a tiny fraction of FDA-approved AI devices are covered by insurance. [3] A perhaps even bigger and more serious issue is that external validations of AI algorithms often show substantial drops in performance compared to what was originally reported in the FDA submission.…”
Section: Introduction (mentioning)
confidence: 99%
“…They struggle to generalize: they cannot perform as well on new data as on the data used for development. This may be due to differences in demographics—such as age, sex, acuity of presentation, and disease prevalence—as well as technical differences in hardware and protocols from the carefully curated datasets normally used to train the AI [10, 13]. The performance gap may be immediately evident, or drifting conditions may cause it to appear over time [12].…”
Section: Introduction (mentioning)
confidence: 99%
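The generalization gap described in the last citation statement can be illustrated with a minimal synthetic sketch (not taken from the cited paper): a classifier trained and internally validated at one "site" is then evaluated on an external cohort whose features are acquired differently, and its discrimination drops. Everything below — cohort sizes, the label rule, and the site transform mimicking scanner/protocol differences — is an illustrative assumption; only numpy and scikit-learn are assumed available.

```python
# Synthetic sketch of an internal-vs-external validation gap (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, site_transform=None):
    """Toy 2-feature cohort. `site_transform` mimics hardware/protocol differences
    by remapping the underlying measurements before the model sees them."""
    latent = rng.normal(size=(n, 2))                   # underlying biology
    logits = 1.5 * latent[:, 0] - 1.0 * latent[:, 1]   # true label rule
    y = (logits + 0.5 * rng.normal(size=n) > 0).astype(int)
    X = latent if site_transform is None else latent @ site_transform
    return X, y

# Development site: train and internally validate on identically acquired data.
X_train, y_train = make_cohort(2000)
X_internal, y_internal = make_cohort(1000)

# External site: same disease biology, but features measured and scaled differently.
theta = np.deg2rad(60)
site_B = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]]) * 1.3
X_external, y_external = make_cohort(1000, site_transform=site_B)

model = LogisticRegression().fit(X_train, y_train)
auc_internal = roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1])
auc_external = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
print(f"internal AUC ~ {auc_internal:.2f}, external AUC ~ {auc_external:.2f}")
```

In this toy setup the internal AUC stays high while the external AUC falls, which is the kind of gap that prospective, multi-site external validation is meant to surface before clinical deployment.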