Purpose: To establish and validate a universal artificial intelligence (AI) platform for collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern to improve collaborative efficiency and resource coverage.

Methods: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel healthcare facilities and capture modes. The datasets were labelled using a three-step strategy: (1) capture mode recognition; (2) cataract diagnosis as a normal lens, cataract or postoperative eye; and (3) detection of referable cataracts with respect to aetiology and severity. We then integrated the cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary healthcare and specialised hospital services.

Results: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: (1) capture mode recognition (area under the curve (AUC) 99.28%–99.71%); (2) cataract diagnosis (AUCs of 99.82%, 99.96% and 99.93% for normal lens, cataract and postoperative eye, respectively, in mydriatic-slit lamp mode, and AUCs >99% for the other capture modes); and (3) detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be 'referred', increasing the ophthalmologist-to-population service ratio 10.2-fold compared with the traditional pattern.

Conclusions: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. This AI-based medical referral pattern can be extended to other common diseases and resource-intensive situations.
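The three-step strategy above is a triage cascade: later models only run when the earlier step routes an image to them. A minimal sketch of that control flow is below; the model functions are hypothetical stand-ins of our own naming, not the authors' trained networks, and the toy `opacity` feature exists only for illustration.

```python
# Hypothetical sketch of the three-step cascade: capture-mode recognition,
# then diagnosis, then referable-cataract detection for cataract eyes only.

def triage(image, mode_model, diagnosis_model, referral_model):
    """Run one image through the three-step collaborative pipeline."""
    # Step 1: recognise the capture mode so the matching downstream model is used.
    mode = mode_model(image)
    # Step 2: diagnose as normal lens, cataract, or postoperative eye.
    diagnosis = diagnosis_model(mode, image)
    if diagnosis != "cataract":
        # Non-cataract eyes are not assessed for referral in this sketch.
        return {"mode": mode, "diagnosis": diagnosis, "refer": False}
    # Step 3: only cataracts are checked for referable cause/severity.
    return {"mode": mode, "diagnosis": diagnosis,
            "refer": referral_model(mode, image)}

# Toy stand-in models, for illustration only.
mode_model = lambda img: "mydriatic-slit lamp"
diagnosis_model = lambda mode, img: ("cataract" if img["opacity"] > 0.3
                                     else "normal lens")
referral_model = lambda mode, img: img["opacity"] > 0.6

result = triage({"opacity": 0.8}, mode_model, diagnosis_model, referral_model)
```

The design point is that the referral model never sees images the diagnosis step filtered out, which is what lets the platform reserve specialist resources for the referable minority.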
Background: Medical artificial intelligence (AI) has entered the clinical implementation phase, although the real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and to externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted.

Methods: In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed.

Findings: The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0.955 (SD 0.046). AUC values in the external test set were 0.965 (0.035) in tertiary hospitals, 0.983 (0.031) in community hospitals, and 0.953 (0.042) in physical examination centres. The performance of CARE was similar to that of the ophthalmologists, although large variations in sensitivity were observed among ophthalmologists from different regions and with varying experience. The system retained strong identification performance when tested on the non-Chinese dataset (AUC 0.960, 95% CI 0.957–0.964 for referable diabetic retinopathy).

Interpretation: Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, supporting its implementation and adoption for clinical care.
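Screening for 14 abnormalities is typically evaluated with one ROC AUC per label. As a reminder of what those AUC figures mean, here is a minimal rank-based (Mann-Whitney) AUC and a per-class wrapper; the function names are our own sketch, not part of the CARE system.

```python
# Rank-based ROC AUC: the probability that a randomly chosen positive example
# scores above a randomly chosen negative one (ties count half).

def roc_auc(y_true, y_score):
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Multi-abnormality screening is scored as one AUC per label.
def per_class_auc(labels_by_class, scores_by_class):
    return {name: roc_auc(labels_by_class[name], scores_by_class[name])
            for name in labels_by_class}
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, because one of the four positive-negative pairs is ranked the wrong way.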
Background: The aim of this study was to develop an intelligent system based on a deep learning algorithm for automatically diagnosing fungal keratitis (FK) in in vivo confocal microscopy (IVCM) images.

Methods: A total of 2,088 IVCM images were included in the training dataset. The positive group consisted of 688 images with fungal hyphae, and the negative group included 1,400 images without fungal hyphae. A total of 535 images in the testing dataset were not included in the training dataset. Deep Residual Learning for Image Recognition (ResNet) was used to build the intelligent system for diagnosing FK automatically. The system was verified by external validation on the testing dataset using the area under the receiver operating characteristic curve (AUC), accuracy, specificity and sensitivity.

Results: In the testing dataset, 515 images were diagnosed correctly and 20 were misdiagnosed (including 6 with fungal hyphae and 14 without). The system achieved an AUC of 0.9875 with an accuracy of 0.9626 in detecting fungal hyphae. The sensitivity of the system was 0.9186, with a specificity of 0.9834. When 349 diabetic patients were included in the training dataset, 501 images were diagnosed correctly and 34 were misdiagnosed (including 4 with fungal hyphae and 30 without). The AUC of the system was 0.9769, and the accuracy, specificity and sensitivity were 0.9364, 0.9889 and 0.8256, respectively.

Conclusions: The intelligent system based on a deep learning algorithm exhibited satisfactory diagnostic performance and effectively classified FK in various IVCM images. This deep learning automated diagnostic system can be extended to other types of keratitis.
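The accuracy, sensitivity, and specificity above all come from one confusion matrix. As a sketch, the snippet below computes them from counts; the split into 172 hyphae-positive and 363 hyphae-negative test images is not stated in the abstract but is our inference from the reported totals (535 images, 20 misdiagnosed), chosen because it reproduces the reported 0.9626 / 0.9186 / 0.9834 figures.

```python
# Standard binary-classification metrics from confusion-matrix counts.

def metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate on hyphae images
        "specificity": tn / (tn + fp),  # true-negative rate on clean images
    }

# Inferred counts (our assumption, not from the abstract): 172 positive and
# 363 negative test images, with 14 false negatives and 6 false positives.
m = metrics(tp=158, fp=6, tn=357, fn=14)
```

With these counts, accuracy is 515/535 ≈ 0.9626, matching the stated result; the point of the sketch is simply that all three headline metrics are determined once the four counts are fixed.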
Background: Common diseases are not satisfactorily managed under the current health-care system because of inadequate medical resources and limited accessibility. We aimed to establish and validate a universal artificial intelligence (AI) platform for collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern to improve collaborative efficiency and resource coverage.

Methods: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel health-care facilities and capture modes. The datasets were labelled using a three-step strategy: capture mode recognition (modes: mydriatic-diffuse, mydriatic-slit lamp, non-mydriatic-diffuse, and non-mydriatic-slit lamp); cataract diagnosis as a normal lens, cataract, or a postoperative eye; and detection of referable cataracts with respect to cause and severity. The area under the curve (AUC) was measured at each stage. We also integrated this cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary health care, and specialised hospital services. Diagnostic accuracy, treatment referral, and the ophthalmologist-to-population service ratio were used to evaluate the performance and efficacy of the system.

Findings: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: capture mode recognition (AUC 99.28–99.71% across the four capture modes); cataract diagnosis (AUCs in mydriatic-slit lamp mode of 99.82% [95% CI 98.93–100] for normal lens, 99.96% [99.90–100] for cataract, and 99.93% [99.78–100] for postoperative eye, with AUCs >99% for the other capture modes); and detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be referred for treatment, increasing the ophthalmologist-to-population service ratio 10.2-fold compared with the traditional pattern.

Interpretation: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. This AI-based medical referral pattern can be extended to other common diseases and resource-intensive situations.
Background: Artificial intelligence (AI) has great potential to detect fungal keratitis using in vivo confocal microscopy (IVCM) images, but its clinical value remains unclear. A major limitation of its clinical utility is the lack of explainability and interpretability.

Methods: An explainable AI (XAI) system based on Gradient-weighted Class Activation Mapping (Grad-CAM) and Guided Grad-CAM was established. In this randomized controlled trial, nine ophthalmologists (three expert, three competent, and three novice) read images under each of three conditions: unassisted, AI-assisted, or XAI-assisted. In the unassisted condition, only the original IVCM images were shown to the readers. AI assistance comprised a histogram of model prediction probabilities; for XAI assistance, explanatory maps were additionally shown. Accuracy, sensitivity, and specificity were calculated against an adjudicated reference standard, and the time spent reading was measured.

Results: Both forms of algorithmic assistance significantly increased the accuracy and sensitivity of competent and novice ophthalmologists without reducing specificity. The improvement was more pronounced in the XAI-assisted condition than in the AI-assisted condition. Time spent with XAI assistance was not significantly different from that without assistance.

Conclusion: AI has shown great promise in improving the accuracy of ophthalmologists, and inexperienced readers are more likely to benefit from the XAI system. With better interpretability and explainability, XAI assistance can boost ophthalmologist performance beyond what is achievable by the reader alone or with black-box AI assistance.
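The Grad-CAM maps shown to readers are built from two ingredients: the activations of a convolutional layer and the gradients of the class score with respect to them. Below is a minimal, dependency-free sketch of that computation on nested lists; it is our illustration of the published Grad-CAM recipe, not the trial's implementation, and real systems would work on framework tensors.

```python
# Grad-CAM sketch: channel weights are the global-average-pooled gradients;
# the map is the ReLU of the weighted sum of activation channels, normalised.

def grad_cam(activations, gradients):
    """activations/gradients: K feature maps, each H x W, as nested lists."""
    k_count = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # One weight per channel: global average pool of that channel's gradients.
    weights = [sum(map(sum, g)) / (h * w) for g in gradients]
    # Weighted channel sum, clamped at zero (ReLU keeps positive evidence only).
    cam = [[max(0.0, sum(weights[k] * activations[k][i][j]
                         for k in range(k_count)))
            for j in range(w)] for i in range(h)]
    peak = max(max(row) for row in cam)
    # Normalise to [0, 1] so the map can be overlaid on the IVCM image.
    return [[v / peak for v in row] for row in cam] if peak > 0 else cam
```

The explanatory maps in the XAI condition are exactly this kind of heatmap upsampled onto the input image, which is why they can point a reader at the regions (e.g. hyphae-like structures) driving the model's prediction.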