Background: Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging.

Methods: In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176.

Findings: Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. Comparison of the performance between deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0-90·2) for deep learning models and 86·4% (79·9-91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1-96·4) for deep learning models and 90·5% (80·6-95·7) for health-care professionals.

Interpretation: Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address the specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology.
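The sensitivity and specificity figures above are derived from per-study 2×2 contingency tables. A minimal sketch of that derivation is below; the counts are invented for illustration and are not taken from any included study.

```python
# Sketch: deriving sensitivity and specificity from a 2x2 contingency
# table, as done for each included study. Counts are illustrative only.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from contingency-table counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

sens, spec = diagnostic_accuracy(tp=87, fp=8, fn=13, tn=92)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# -> sensitivity = 87.0%, specificity = 92.0%
```

Note that the pooled estimates in the review come from a hierarchical meta-analysis model, not from simply averaging these per-study values.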
Purpose: New instrument-based techniques for anterior chamber (AC) cell counting can offer automation and objectivity beyond clinician assessment. This review aims to identify such instruments and their correlation with clinician estimates.

Methods: Using standard systematic review methodology, we identified and tabulated the outcomes of studies reporting reliability and correlation between instrument-based measurements and clinician AC cell grading.

Results: From 3470 studies, six reported correlation between an instrument-based AC cell count and clinician grading. The two instruments were optical coherence tomography (OCT) and laser flare-cell photometry (LFCP). Correlation with clinician grading was 0.66-0.87 for LFCP and 0.06-0.97 for OCT. OCT volume scans demonstrated correlation between 0.75 and 0.78. Line scans in the middle AC demonstrated higher correlation (0.73-0.97) than in the inferior AC (0.06-0.56).

Conclusion: AC cell count by OCT and LFCP can achieve high levels of correlation with clinician grading, whilst offering additional advantages of speed, automation, and objectivity.
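Because clinician AC cell grades are ordinal, the correlations tabulated in reviews like this one are typically rank based. A minimal sketch of a Spearman rank correlation between instrument counts and clinician grades is below; the data are invented for illustration, and tie correction is omitted for brevity.

```python
# Sketch: Spearman rank correlation between instrument-based AC cell
# counts and ordinal clinician grades. Data below are hypothetical.

def spearman_rho(xs, ys):
    """Spearman rank correlation (no tie correction, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # equal for rx and ry when untied
    return cov / var

counts = [2, 5, 11, 30, 58]   # hypothetical OCT cell counts
grades = [0, 1, 2, 3, 4]      # corresponding clinician grades
print(round(spearman_rho(counts, grades), 2))  # -> 1.0
```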
Objective: This study aims to evaluate the feasibility of retinal imaging in critical care using a novel mobile optical coherence tomography (OCT) device. The Heidelberg SPECTRALIS FLEX module (Heidelberg Engineering, Heidelberg, Germany) is an OCT unit with a boom arm, enabling ocular OCT assessment in less mobile patients.

Design: We undertook an evaluation of the feasibility of using the SPECTRALIS FLEX to obtain ocular OCT images in unconscious and critically ill patients.

Setting: This study was conducted in the critical care unit of a large tertiary referral unit in the United Kingdom.

Participants: 13 systemically unwell patients admitted to the critical care unit were purposively sampled to enable evaluation in patients with a range of clinical states.

Outcome measures: The primary outcome was the feasibility of acquiring clinically interpretable OCT scans in a consecutive series of patients. The standardised scanning protocol included macula-focused OCT, OCT of the optic nerve head (ONH), OCT angiography (OCTA) of the macula, and ONH OCTA.

Results: OCT imaging was attempted in 13 patients. The success rates for each scan type were 84% for OCT macula, 76% for OCT ONH, 56% for OCTA macula, and 36% for OCTA ONH. The overall mean success rate of scans per patient was 64% (95% CI 46% to 81%). Clinicians reported clinical value in 100% of the scans that were successfully obtained, including both ruling in and ruling out relevant ocular complications such as corneal thinning, macular oedema, and optic disc swelling. The most common causes of failure to achieve clinically interpretable scans were inadequately sustained OCT alignment in delirious patients and a compromised ocular surface due to corneal exposure.

Conclusions: This prospective evaluation indicates the feasibility and potential clinical value of the SPECTRALIS FLEX OCT system in the critical care unit. Portable OCT systems have the potential to bring instrument-based ophthalmic assessment to critically ill patients, enabling detection and micron-level monitoring of ocular complications.
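The "mean success rate of scans per patient (95% CI)" outcome above can be computed from per-patient success proportions; a minimal sketch using a normal-approximation confidence interval is below. The per-patient proportions are invented for illustration, not the study's data.

```python
# Sketch: mean per-patient scan success rate with a 95% CI using the
# normal approximation. The proportions below are hypothetical.
import math

def mean_ci95(values):
    """Mean and normal-approximation 95% CI for a list of proportions."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean, mean - half, mean + half

# Hypothetical: fraction of the 4 protocol scans succeeding per patient.
rates = [1.0, 0.75, 0.5, 0.75, 0.25, 1.0, 0.5]
m, lo, hi = mean_ci95(rates)
print(f"mean {m:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```

With only 13 patients, the wide interval reported in the study (46% to 81%) is expected; an exact or bootstrap interval would be a reasonable alternative at this sample size.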