Purpose: To establish and validate a universal artificial intelligence (AI) platform for collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern to improve collaborative efficiency and resource coverage.

Methods: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel healthcare facilities and capture modes. The datasets were labelled using a three-step strategy: (1) capture mode recognition; (2) cataract diagnosis as a normal lens, cataract or a postoperative eye; and (3) detection of referable cataracts with respect to aetiology and severity. Moreover, we integrated the cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary healthcare and specialised hospital services.

Results: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: (1) capture mode recognition (area under the curve (AUC) 99.28%–99.71%); (2) cataract diagnosis (normal lens, cataract or postoperative eye, with AUCs of 99.82%, 99.96% and 99.93% for mydriatic-slit lamp mode and AUCs >99% for other capture modes); and (3) detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be ‘referred’, substantially increasing the ophthalmologist-to-population service ratio by 10.2-fold compared with the traditional pattern.

Conclusions: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. The context of our AI-based medical referral pattern will be extended to other common disease conditions and resource-intensive situations.
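A minimal sketch of how the three sequential steps described above could be chained into a referral decision, assuming three independently trained image classifiers. The stub model interfaces, class names and the 0.5 referral threshold below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a three-step cataract triage pipeline (capture mode ->
# diagnosis -> referable-cataract grading). The model stubs and the threshold
# are placeholders; the published platform's models and cut-offs are not shown here.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class TriageResult:
    capture_mode: str      # step 1: recognised capture mode (e.g. mydriatic-slit lamp)
    diagnosis: str         # step 2: "normal lens", "cataract" or "postoperative eye"
    referable: bool        # step 3: referable with respect to aetiology/severity
    recommendation: str    # advice mapped onto the multilevel referral pattern

def triage(image,
           recognize_mode: Callable[[object], str],
           diagnose: Callable[[object, str], Dict[str, float]],
           grade_referable: Callable[[object, str], float],
           referral_threshold: float = 0.5) -> TriageResult:
    """Run the three models in sequence and map the outputs to a referral decision."""
    mode = recognize_mode(image)                  # step 1: capture-mode recognition
    probs = diagnose(image, mode)                 # step 2: per-class probabilities
    label = max(probs, key=probs.get)
    if label != "cataract":
        return TriageResult(mode, label, False, "routine self-monitoring at home")
    p_refer = grade_referable(image, mode)        # step 3: referable-cataract probability
    if p_refer >= referral_threshold:
        return TriageResult(mode, label, True, "refer to specialised hospital")
    return TriageResult(mode, label, False, "manage at primary healthcare facility")
```

In this sketch only cases graded as referable cataract are escalated to a specialised hospital, mirroring the reported pattern in which roughly 30% of screened people receive a referral.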
Background: Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted.

Methods: In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed.

Findings: The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0·955 (SD 0·046). AUC values in the external test set were 0·965 (0·035) in tertiary hospitals, 0·983 (0·031) in community hospitals, and 0·953 (0·042) in physical examination centres. The performance of CARE was similar to that of ophthalmologists. Large variations in sensitivity were observed among the ophthalmologists in different regions and with varying experience. The system retained strong identification performance when tested using the non-Chinese dataset (AUC 0·960, 95% CI 0·957–0·964, in referable diabetic retinopathy).

Interpretation: Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and so could allow the system to be implemented and adopted for clinical care.
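As a rough illustration of the headline metric, per-abnormality AUCs for a multi-label classifier such as CARE can be computed with scikit-learn's roc_auc_score. The synthetic labels and scores below are placeholders for illustration only, not study data or the authors' evaluation code.

```python
# Illustrative per-class AUC computation for a 14-label retinal classifier.
# Random labels and mock scores stand in for real annotations and model outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_images, n_classes = 1000, 14
y_true = rng.integers(0, 2, size=(n_images, n_classes))                  # ground-truth labels per abnormality
y_score = np.clip(y_true * 0.7 + rng.random((n_images, n_classes)) * 0.6, 0, 1)  # mock predicted probabilities

per_class_auc = [roc_auc_score(y_true[:, k], y_score[:, k]) for k in range(n_classes)]
print(f"mean AUC: {np.mean(per_class_auc):.3f}, SD: {np.std(per_class_auc):.3f}")
```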
Summary

Background: Pioneering efforts have been made to facilitate the recognition of pathology in malignancies based on whole-slide images (WSIs) through deep learning approaches. It remains unclear whether we can accurately detect and locate basal cell carcinoma (BCC) using smartphone-captured images.

Objectives: To develop deep neural network frameworks for accurate BCC recognition and segmentation based on smartphone-captured microscopic ocular images (MOIs).

Methods: We collected a total of 8046 MOIs, 6610 of which had binary classification labels and the other 1436 of which had pixelwise annotations. Meanwhile, 128 WSIs were collected for comparison. Two deep learning frameworks were created. The ‘cascade’ framework had a classification model for identifying hard cases (images with low prediction confidence) and a segmentation model for further in-depth analysis of the hard cases. The ‘segmentation’ framework directly segmented and classified all images. Sensitivity, specificity and area under the curve (AUC) were used to evaluate the overall performance of BCC recognition.

Results: The MOI- and WSI-based models achieved comparable AUCs around 0·95. The ‘cascade’ framework achieved 0·93 sensitivity and 0·91 specificity. The ‘segmentation’ framework was more accurate but required more computational resources, achieving 0·97 sensitivity, 0·94 specificity and 0·987 AUC. The runtime of the ‘segmentation’ framework was 15·3 ± 3·9 s per image, whereas the ‘cascade’ framework took 4·1 ± 1·4 s. Additionally, the ‘segmentation’ framework achieved 0·863 mean intersection over union.

Conclusions: Based on the accessible MOIs via smartphone photography, we developed two deep learning frameworks for recognizing BCC pathology with high sensitivity and specificity. This work opens a new avenue for automatic BCC diagnosis in different clinical scenarios.

What's already known about this topic? The diagnosis of basal cell carcinoma (BCC) is labour intensive owing to the large number of images to be examined, especially when consecutive slide reading is needed in Mohs surgery. Deep learning approaches have demonstrated promising results on pathological image-related diagnostic tasks. Previous studies have focused on whole-slide images (WSIs) and leveraged classification on image patches for detecting and localizing breast cancer metastases.

What does this study add? Instead of WSIs, microscopic ocular images (MOIs) photographed from microscope eyepieces using smartphone cameras were used to develop neural network models for recognizing BCC automatically. The MOI- and WSI-based models achieved comparable areas under the curve around 0·95. Two deep learning frameworks for recognizing BCC pathology were developed with high sensitivity and specificity. Recognizing BCC through a smartphone could be considered a future clinical choice.
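A hedged sketch of the ‘cascade’ routing logic described above, assuming a classifier that exposes a per-image confidence score and a slower segmentation model reserved for low-confidence (hard) cases. The 0.9 confidence threshold and the stub model interfaces are illustrative assumptions, not the published configuration.

```python
# Illustrative 'cascade' routing: fast classification first, segmentation only for hard cases.
# The confidence threshold and stub models are assumptions, not the published values.
from typing import Callable, Optional, Tuple

def cascade_predict(image,
                    classify: Callable[[object], Tuple[str, float]],    # returns (label, confidence)
                    segment: Callable[[object], Tuple[str, object]],    # returns (label, pixelwise mask)
                    confidence_threshold: float = 0.9) -> Tuple[str, Optional[object]]:
    """Return (label, mask_or_None); segmentation runs only when the classifier is unsure."""
    label, confidence = classify(image)
    if confidence >= confidence_threshold:
        return label, None          # easy case: classification alone, fast path
    return segment(image)           # hard case: in-depth pixelwise analysis, slow path
```

Running the expensive segmentation model only on low-confidence images is what allows the cascade to trade some accuracy for the shorter per-image runtime reported above.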
This diagnostic study develops and prospectively validates a deep learning algorithm that uses ocular fundus images to recognize numerous retinal diseases in a clinical setting at 65 screening centers in 19 Chinese provinces.
This study aimed to develop an automated computer-based algorithm to estimate axial length and subfoveal choroidal thickness (SFCT) based on color fundus photographs. In the population-based Beijing Eye Study 2011, we took fundus photographs and measured SFCT by optical coherence tomography (OCT) and axial length by optical low-coherence reflectometry. Using 6394 color fundus images taken from 3468 participants, we trained and evaluated a deep-learning-based algorithm for estimation of axial length and SFCT. The algorithm had a mean absolute error (MAE) for estimating axial length and SFCT of 0.56 mm [95% confidence interval (CI): 0.53, 0.61] and 49.20 μm (95% CI: 45.83, 52.54), respectively. Estimated values and measured data showed coefficients of determination of r2 = 0.59 (95% CI: 0.50, 0.65) for axial length and r2 = 0.62 (95% CI: 0.57, 0.67) for SFCT. Bland–Altman plots revealed a mean difference in axial length and SFCT of −0.16 mm (95% CI: −1.60, 1.27 mm) and of −4.40 μm (95% CI: −131.8, 122.9 μm), respectively. For the estimation of axial length, heat map analysis showed that signals predominantly from the overall macular region, the foveal region, and the extrafoveal region were used in eyes with an axial length of <22 mm, 22–26 mm, and >26 mm, respectively. For the estimation of SFCT, the convolutional neural network (CNN) used mostly the central part of the macular region, the fovea or perifovea, independently of the SFCT. Our study shows that deep-learning-based algorithms may be helpful in estimating axial length and SFCT based on conventional color fundus images. They may be a further step in the semiautomatic assessment of the eye.
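For readers who want to compute the same agreement statistics (MAE, r2, and Bland–Altman bias with limits of agreement) on their own predicted-versus-measured pairs, a minimal sketch using NumPy and SciPy follows; the arrays are synthetic placeholders, not study data.

```python
# Illustrative agreement metrics between predicted and measured values
# (e.g. axial length in mm); the arrays below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
measured = rng.normal(23.5, 1.2, size=500)               # mock measured axial lengths (mm)
predicted = measured + rng.normal(-0.16, 0.7, size=500)  # mock model estimates

mae = np.mean(np.abs(predicted - measured))              # mean absolute error
r2 = pearsonr(predicted, measured)[0] ** 2               # coefficient of determination

diff = predicted - measured                               # Bland-Altman statistics
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))  # 95% limits of agreement
print(f"MAE={mae:.2f} mm, r2={r2:.2f}, bias={bias:.2f} mm, LoA={loa[0]:.2f} to {loa[1]:.2f} mm")
```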