CORADS-AI is a freely accessible deep learning algorithm that automatically assigns CO-RADS and CT severity scores to non-contrast CT scans of patients with suspected COVID-19, with high diagnostic performance.
Background: Chest radiography may play an important role in triage for coronavirus disease 2019 (COVID-19), particularly in low-resource settings.
Purpose: To evaluate the performance of an artificial intelligence (AI) system for the detection of COVID-19 pneumonia on chest radiographs.
Materials and Methods: An AI system (CAD4COVID-XRay) was trained on 24,678 chest radiographs, 1,540 of which were used only for validation during training. The test set consisted of consecutively acquired chest radiographs (n = 454) obtained in patients suspected of having COVID-19 pneumonia between March 4 and April 6, 2020, at one center (223 patients with positive reverse transcription polymerase chain reaction [RT-PCR] results, 231 with negative RT-PCR results). Radiographs were independently analyzed by six readers and by the AI system. Diagnostic performance was analyzed with the receiver operating characteristic (ROC) curve.
Results: For the test set, the mean age of patients was 67 years ± 14.4 (standard deviation), and 56% were male. With RT-PCR test results as the reference standard, the AI system correctly classified chest radiographs as COVID-19 pneumonia with an area under the ROC curve of 0.81. The system significantly outperformed each reader (P < .001, McNemar test) at the readers' highest possible sensitivities. At their lowest sensitivities, only one reader significantly outperformed the AI system (P = .04).
Conclusion: The performance of an artificial intelligence system in the detection of coronavirus disease 2019 on chest radiographs was comparable with that of six independent readers. © RSNA, 2020
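The two analyses this abstract names, discrimination measured as the area under the ROC curve and a paired AI-versus-reader comparison with the McNemar test, can be sketched in a few lines. The following is a minimal illustration with synthetic placeholder data, not the study's code or data; the variable names and the sensitivity-matching step are assumptions made for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 454                              # test-set size from the abstract
y_true = rng.integers(0, 2, n)       # RT-PCR reference standard (1 = positive), placeholder
ai_score = rng.random(n)             # continuous AI output, placeholder
reader = rng.integers(0, 2, n)       # one reader's binary call, placeholder

# Discrimination: area under the ROC curve (the abstract reports 0.81).
auc = roc_auc_score(y_true, ai_score)

# Threshold the AI score so its sensitivity matches the reader's, then compare
# the paired binary decisions with McNemar's test on the discordant pairs.
reader_sens = (reader[y_true == 1] == 1).mean()
thresh = np.quantile(ai_score[y_true == 1], 1 - reader_sens)
ai_call = (ai_score >= thresh).astype(int)

ai_ok, reader_ok = ai_call == y_true, reader == y_true
table = [[np.sum(ai_ok & reader_ok),  np.sum(ai_ok & ~reader_ok)],
         [np.sum(~ai_ok & reader_ok), np.sum(~ai_ok & ~reader_ok)]]
print(f"AUC = {auc:.2f}, McNemar p = {mcnemar(table, exact=True).pvalue:.3f}")
```

With random placeholder data the AUC will hover near 0.5; the point of the sketch is the pairing of decisions at a matched operating point, which is what makes the McNemar comparison valid.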
Background: Triazole resistance is an increasing problem in invasive aspergillosis (IA). Small case series show mortality rates of 50%-100% in patients infected with triazole-resistant Aspergillus fumigatus, but a direct comparison with triazole-susceptible IA is lacking.
Methods: A 5-year retrospective cohort study (2011-2015) was conducted to compare mortality in patients with voriconazole-susceptible and voriconazole-resistant IA. Aspergillus fumigatus culture-positive patients were investigated to identify patients with proven, probable, and putative IA. Clinical characteristics, day-42 and day-90 mortality, triazole-resistance profiles, and antifungal treatments were investigated.
Results: Of 196 patients with IA, 37 (19%) harbored a voriconazole-resistant infection. Hematological malignancy was the underlying disease in 103 (53%) patients, and 154 (79%) patients were started on voriconazole. Compared with voriconazole-susceptible cases, voriconazole resistance was associated with an increase in overall mortality of 21% at day 42 (49% vs 28%; P = .017) and 25% at day 90 (62% vs 37%; P = .0038). In non-intensive care unit patients, a 19% lower survival rate was observed in voriconazole-resistant cases at day 42 (P = .045). Mortality in patients who received appropriate initial voriconazole therapy was 24%, compared with 47% in those who received inappropriate initial therapy (P = .016), despite a switch to appropriate antifungal therapy after a median of 10 days.
Conclusions: Voriconazole resistance was associated with an excess overall mortality of 21% at day 42 and 25% at day 90 in patients with IA. A delay in the initiation of appropriate antifungal therapy was associated with increased overall mortality.
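The excess-mortality figure reported here is a simple difference of two proportions on a 2x2 table. Below is a minimal sketch in which the death counts are reconstructed only approximately from the abstract's percentages (about 49% of 37 resistant and 28% of 159 susceptible cases); it is illustrative and uses Fisher's exact test, which may differ from the statistical method the authors actually used.

```python
from scipy.stats import fisher_exact

# Counts reconstructed approximately from the abstract's percentages;
# illustrative only, not the study's raw data.
deaths_r, n_r = 18, 37      # voriconazole-resistant: ~49% day-42 mortality
deaths_s, n_s = 44, 159     # voriconazole-susceptible: ~28% day-42 mortality

table = [[deaths_r, n_r - deaths_r],
         [deaths_s, n_s - deaths_s]]
odds_ratio, p_value = fisher_exact(table)

excess = deaths_r / n_r - deaths_s / n_s   # ~21%, matching the abstract
print(f"excess day-42 mortality = {excess:.0%} (OR = {odds_ratio:.2f}, p = {p_value:.3f})")
```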
Objectives: To map the current landscape of commercially available artificial intelligence (AI) software for radiology and to review the availability of its scientific evidence.
Methods: We created an online overview of CE-marked AI software products for clinical radiology based on vendor-supplied product specifications (www.aiforradiology.com). Characteristics such as modality, subspeciality, main task, regulatory information, deployment, and pricing model were retrieved. We conducted an extensive literature search on the available scientific evidence for these products. Articles were classified according to a hierarchical model of efficacy.
Results: The overview included 100 CE-marked AI products from 54 different vendors. For 64 of 100 products, there was no peer-reviewed evidence of their efficacy. We observed large heterogeneity in deployment methods, pricing models, and regulatory classes. The evidence for the remaining 36 products comprised 237 papers that predominantly (65%) focused on diagnostic accuracy (efficacy level 2). Of the 100 products, 18 had evidence at level 3 or higher, validating the (potential) impact on diagnostic thinking, patient outcome, or costs. About half of the available evidence (116 of 237 papers) was independent, that is, not (co-)funded or (co-)authored by the vendor.
Conclusions: Even though the commercial supply of AI software in radiology already comprises 100 CE-marked products, we conclude that the sector is still in its infancy. For 64 of 100 products, peer-reviewed evidence of efficacy is lacking. Only 18 of 100 AI products have demonstrated (potential) clinical impact. A small sketch of the classification step appears after the key points below.
Key Points:
• Artificial intelligence in radiology is still in its infancy, even though 100 CE-marked AI products are already commercially available.
• Only 36 of 100 products have peer-reviewed evidence, and most studies demonstrate lower levels of efficacy.
• There is wide variety in the deployment strategies, pricing models, and CE marking classes of AI products for radiology.
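As a small illustration of the classification step described above, the sketch below tallies products by the highest efficacy level of their evidence. Product names and counts are hypothetical; only the level semantics (0 = no peer-reviewed evidence, 2 = diagnostic accuracy, 3 or higher = impact on diagnostic thinking, patient outcome, or costs) follow the abstract.

```python
from collections import Counter

# Hypothetical mapping from product to the highest efficacy level of its
# peer-reviewed evidence (0 = none; 2 = diagnostic accuracy; >= 3 = impact
# on diagnostic thinking, patient outcome, or costs).
highest_level = {"ProductA": 0, "ProductB": 2, "ProductC": 3, "ProductD": 0}

per_level = Counter(highest_level.values())
no_evidence = per_level[0]
clinical_impact = sum(n for level, n in per_level.items() if level >= 3)
print(f"no peer-reviewed evidence: {no_evidence}; level >= 3: {clinical_impact}")
```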
There is growing interest in the automated analysis of chest X-ray (CXR) images as a sensitive and inexpensive means of screening susceptible populations for pulmonary tuberculosis. In this work we evaluate the latest version of CAD4TB, a software platform designed for this purpose. Version 6 of CAD4TB was released in 2018 and is tested here on an independent dataset of 5565 CXR images with GeneXpert (Xpert) sputum test results available (854 Xpert-positive subjects). A subset of 500 subjects (50% Xpert positive) was reviewed and annotated independently by 5 expert observers to obtain a radiological reference standard. The latest version of CAD4TB is found to outperform all previous versions in terms of area under the receiver operating characteristic (ROC) curve with respect to both the Xpert and radiological reference standards. Improvements with respect to Xpert are most apparent at high sensitivity levels, with a specificity of 76% obtained at 90% sensitivity. When compared with the radiological reference standard, CAD4TB v6 also outperformed previous versions by a considerable margin, achieving 98% specificity at 90% sensitivity. No substantial difference was found between the performance of CAD4TB v6 and that of any of the expert observers against the Xpert reference standard. A cost and efficiency analysis on this dataset demonstrates that, in a standard clinical situation operating at 90% sensitivity, users of CAD4TB v6 can process 132 subjects per day at an average cost of $5.95 per subject, whereas users of version 3 can process only 85 subjects per day at $8.41 per subject. At all tested operating points, version 6 is shown to be more efficient and cost-effective than any other version.
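Reading a specificity off the ROC curve at a fixed 90% sensitivity, as reported for CAD4TB v6 above, can be sketched as follows. The scores are synthetic stand-ins (only the cohort sizes, 854 Xpert-positive of 5565, come from the abstract), so the printed specificity will not reproduce the published 76% or 98% figures.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic stand-in scores; only the cohort sizes come from the abstract.
rng = np.random.default_rng(1)
n_pos, n_neg = 854, 5565 - 854
y = np.r_[np.ones(n_pos), np.zeros(n_neg)]
scores = np.r_[rng.normal(1.0, 1.0, n_pos),    # hypothetical CAD scores, positives
               rng.normal(0.0, 1.0, n_neg)]    # hypothetical CAD scores, negatives

fpr, tpr, thresholds = roc_curve(y, scores)
idx = np.searchsorted(tpr, 0.90)   # first operating point with sensitivity >= 90%
print(f"threshold = {thresholds[idx]:.2f}, "
      f"specificity at 90% sensitivity = {1 - fpr[idx]:.0%}")
```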