Background Chest radiography may play an important role in triage for coronavirus disease 2019 (COVID-19), particularly in low-resource settings. Purpose To evaluate the performance of an artificial intelligence (AI) system for detection of COVID-19 pneumonia on chest radiographs. Materials and Methods An AI system (CAD4COVID-XRay) was trained on 24 678 chest radiographs, including 1540 used only for validation while training. The test set consisted of a set of continuously acquired chest radiographs ( n = 454) obtained in patients suspected of having COVID-19 pneumonia between March 4 and April 6, 2020, at one center (223 patients with positive reverse transcription polymerase chain reaction [RT-PCR] results, 231 with negative RT-PCR results). Radiographs were independently analyzed by six readers and by the AI system. Diagnostic performance was analyzed with the receiver operating characteristic curve. Results For the test set, the mean age of patients was 67 years ± 14.4 (standard deviation) (56% male). With RT-PCR test results as the reference standard, the AI system correctly classified chest radiographs as COVID-19 pneumonia with an area under the receiver operating characteristic curve of 0.81. The system significantly outperformed each reader ( P < .001 using the McNemar test) at their highest possible sensitivities. At their lowest sensitivities, only one reader significantly outperformed the AI system ( P = .04). Conclusion The performance of an artificial intelligence system in the detection of coronavirus disease 2019 on chest radiographs was comparable with that of six independent readers. © RSNA, 2020
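The headline statistics in this abstract (an area under the ROC curve of 0.81 and paired reader-versus-AI comparisons via the McNemar test) correspond to standard analyses. The sketch below shows the form of such an analysis on synthetic data; it is not the study's actual code, and the array names, synthetic score generation, and 0.5 operating threshold are illustrative assumptions. It assumes scikit-learn and statsmodels are installed.

```python
# Illustrative sketch on synthetic data (not the study's analysis code):
# ROC AUC for a continuous AI score, and a McNemar test comparing the
# AI system's thresholded calls with a reader's binary calls.
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Hypothetical reference standard: 1 = RT-PCR positive, 0 = negative.
y_true = rng.integers(0, 2, size=454)
# Hypothetical AI output scores in [0, 1].
ai_scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=454), 0, 1)

# Area under the ROC curve for the AI system's continuous scores.
auc = roc_auc_score(y_true, ai_scores)

# McNemar test on the paired correct/incorrect calls of AI vs. one reader.
ai_calls = (ai_scores >= 0.5).astype(int)        # assumed operating point
reader_calls = rng.integers(0, 2, size=454)      # placeholder reader output
ai_correct = ai_calls == y_true
reader_correct = reader_calls == y_true
table = [
    [np.sum(ai_correct & reader_correct), np.sum(ai_correct & ~reader_correct)],
    [np.sum(~ai_correct & reader_correct), np.sum(~ai_correct & ~reader_correct)],
]
result = mcnemar(table, exact=True)
print(f"AUC = {auc:.2f}, McNemar p = {result.pvalue:.3f}")
```

On real data, `y_true` would come from the RT-PCR reference standard and `ai_scores` from the AI system's per-radiograph output, with one McNemar comparison per reader at each operating point.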
Objectives Map the current landscape of commercially available artificial intelligence (AI) software for radiology and review the availability of their scientific evidence. Methods We created an online overview of CE-marked AI software products for clinical radiology based on vendor-supplied product specifications (www.aiforradiology.com). Characteristics such as modality, subspeciality, main task, regulatory information, deployment, and pricing model were retrieved. We conducted an extensive literature search on the available scientific evidence for these products. Articles were classified according to a hierarchical model of efficacy. Results The overview included 100 CE-marked AI products from 54 different vendors. For 64/100 products, there was no peer-reviewed evidence of their efficacy. We observed a large heterogeneity in deployment methods, pricing models, and regulatory classes. The evidence for the remaining 36/100 products comprised 237 papers that predominantly (65%) focused on diagnostic accuracy (efficacy level 2). Of the 100 products, 18 had evidence at level 3 or higher, validating the (potential) impact on diagnostic thinking, patient outcome, or costs. Half of the available evidence (116/237 papers) was independent, that is, not (co-)funded or (co-)authored by the vendor. Conclusions Even though the commercial supply of AI software in radiology already comprises 100 CE-marked products, we conclude that the sector is still in its infancy. For 64/100 products, peer-reviewed evidence of their efficacy is lacking. Only 18/100 AI products have demonstrated (potential) clinical impact. Key Points • Artificial intelligence in radiology is still in its infancy, even though 100 CE-marked AI products are already commercially available. • Only 36 of 100 products have peer-reviewed evidence, and most of these studies demonstrate lower levels of efficacy. • There is a wide variety in deployment strategies, pricing models, and CE marking classes of AI products for radiology.
To assess the variability in accuracy of contrast medium introduction, leakage, required time, and patient discomfort in four different centres, each using a different image-guided glenohumeral injection technique. Each centre included 25 consecutive patients. The ultrasound-guided anterior (USa) and posterior (USp) approaches and the fluoroscopy-guided anterior (FLa) and posterior (FLp) approaches were used. Number of injection attempts, effect of contrast leakage on diagnostic quality, and total room, radiologist, and procedure times were measured. Pain was documented with a visual analogue scale (VAS) pain score. Access to the joint was achieved in all patients. A successful first attempt occurred significantly more often with US guidance (94%) than with fluoroscopic guidance (72%). Leakage of contrast medium did not cause interpretative difficulties. With US guidance, mean room, procedure, and radiologist times were significantly shorter (p < 0.001). The USa approach was rated with the lowest pre- and post-injection VAS scores. All four image-guided injection techniques successfully delivered contrast material into the glenohumeral joint. US-guided injections, especially via the anterior approach, are significantly less time consuming, more often successful on the first attempt, cause less patient discomfort, and obviate the need for radiation and iodinated contrast.
Anticipated patient management by the general practitioner (GP) changed in 64% of patients following upper abdominal US. Abdominal US substantially reduced the number of intended referrals to a medical specialist, and more patients could be reassured by their GP.