Aim: To develop a deep learning (DL) model that predicts age from fundus images (retinal age) and to investigate the association between the retinal age gap (retinal age predicted by the DL model minus chronological age) and mortality risk.

Methods: A total of 80 169 fundus images of reasonable quality, taken from 46 969 participants in the UK Biobank, were included in this study. Of these, 19 200 fundus images from 11 052 participants without prior medical history at the baseline examination were used to train and validate the DL model for age prediction using fivefold cross-validation. Of the remaining 35 917 participants, 35 913 had available mortality data and were used to investigate the association between retinal age gap and mortality.

Results: The DL model achieved a strong correlation of 0.81 (p<0.001) between retinal age and chronological age, and an overall mean absolute error of 3.55 years. Cox regression models showed that each 1-year increase in the retinal age gap was associated with a 2% increase in the risk of all-cause mortality (hazard ratio (HR)=1.02, 95% CI 1.00 to 1.03, p=0.020) and a 3% increase in the risk of cause-specific mortality attributable to non-cardiovascular and non-cancer disease (HR=1.03, 95% CI 1.00 to 1.05, p=0.041) after multivariable adjustment. No significant association was identified between retinal age gap and cardiovascular- or cancer-related mortality.

Conclusions: Our findings indicate that the retinal age gap might be a potential biomarker of ageing that is closely related to the risk of mortality, implying the potential of retinal images as a screening tool for risk stratification and delivery of tailored interventions.
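The survival analysis described in this abstract follows a standard pattern: compute the retinal age gap per participant, then fit a multivariable-adjusted Cox proportional hazards model for all-cause mortality. Below is a minimal sketch of that pattern using the lifelines library; the file name, column names (retinal_age, chronological_age, follow_up_years, died) and covariates are hypothetical placeholders, not the study's actual variables or full adjustment set.

```python
# Hedged sketch: retinal age gap + Cox proportional hazards model (lifelines).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("participants.csv")  # hypothetical file, one row per participant

# Retinal age gap: DL-predicted retinal age minus chronological age.
df["retinal_age_gap"] = df["retinal_age"] - df["chronological_age"]

# Multivariable-adjusted Cox model for all-cause mortality (covariates illustrative).
cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "died", "retinal_age_gap", "chronological_age", "sex", "smoking"]],
    duration_col="follow_up_years",
    event_col="died",
)
cph.print_summary()  # exp(coef) for retinal_age_gap is the per-year hazard ratio
```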
Purpose: To provide a self-adaptive deep learning (DL) method to automatically detect eye laterality from fundus images.

Methods: A total of 18394 fundus images with real-world eye laterality labels were used for model development and internal validation. A separate dataset of 2000 fundus images with manually labelled eye laterality was used for external validation. A DL model was developed based on a fine-tuned Inception-V3 network with a self-adaptive strategy. The area under the receiver operating characteristic curve (AUC), together with sensitivity, specificity and the confusion matrix, was used to assess model performance. Class activation maps (CAMs) were used for model visualization.

Results: In the external validation (N = 2000, 50% labelled as left eye), the AUC of the DL model for overall eye laterality detection was 0.995 (95% CI, 0.993–0.997) with an accuracy of 99.13%. Specifically, for left eye detection the sensitivity was 99.00% (95% CI, 98.11%–99.49%) and the specificity was 99.10% (95% CI, 98.23%–99.56%). Nineteen images were classified differently from the human labels: 12 were due to incorrect human labelling, while 7 were due to poor image quality. The CAMs showed that the region of interest for eye laterality detection was mainly the optic disc and surrounding areas.

Conclusion: We proposed a self-adaptive DL method with high performance in detecting eye laterality from fundus images. Our results were based on real-world labels and thus have practical significance in clinical settings.
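For orientation, the core of the method above is a fine-tuned Inception-V3 classifier with two output classes (left/right eye). The following is a minimal PyTorch sketch of that fine-tuning setup; it does not reproduce the paper's self-adaptive strategy, preprocessing, or training schedule, and the auxiliary-loss weight of 0.4 is a common convention rather than a value reported in the abstract.

```python
# Hedged sketch: fine-tuning Inception-V3 for binary eye-laterality classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
# Replace the main and auxiliary classifier heads with 2-class outputs (left vs right).
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # Inception-V3 expects 299x299 inputs and returns main + auxiliary logits in train mode.
    model.train()
    optimizer.zero_grad()
    out = model(images)
    loss = criterion(out.logits, labels) + 0.4 * criterion(out.aux_logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```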
Purpose: To assess the accuracy and efficacy of a semi-automated deep learning algorithm (DLA)-assisted approach to detect vision-threatening diabetic retinopathy (DR).

Methods: We developed a two-step semi-automated DLA-assisted approach to grade fundus photographs for vision-threatening referable DR. Study images were obtained from the Lingtou Cohort Study and captured at participant enrollment in 2009–2010 ("baseline images") and at annual follow-up between 2011 and 2017. First, a validated DLA automatically graded baseline images for referable DR and classified them as positive, negative, or ungradable. Then, each positive image, all other available images from patients who had a positive image, and a 5% random sample of all negative images were selected and regraded by trained human graders. A reference standard diagnosis was assigned once all graders reached consistent grading outcomes or by a senior ophthalmologist's final diagnosis. The semi-automated DLA-assisted approach thus combined initial DLA screening with subsequent human grading of images identified as high risk. This approach was further validated on the follow-up image datasets, and its time and economic costs were evaluated against fully human grading.

Results: For evaluation of the baseline images, a total of 33,115 images were included and automatically graded by the DLA. 2,604 images (480 positive results, 624 other available images from participants with a positive result, and 1,500 random negative samples) were selected and regraded by human graders. The DLA achieved an area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy of 0.953, 0.970, 0.879, and 88.6%, respectively. In further validation on the follow-up image datasets, a total of 88,363 images were graded using this semi-automated approach, and human grading was performed on 8,975 selected images. The DLA achieved an AUC, sensitivity, and specificity of 0.914, 0.852, and 0.853, respectively. Compared with fully human grading, the semi-automated DLA-assisted approach achieved an estimated 75.6% time and 90.1% economic cost saving.

Conclusions: The DLA described in this study achieved high accuracy, sensitivity, and specificity in grading fundus images for referable DR. Validated against long-term follow-up datasets, the semi-automated DLA-assisted approach was able to accurately identify suspect cases and minimize misdiagnosis while balancing safety, time, and economic cost.
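The image-selection rule in the second step (all DLA positives, every other image from a participant with at least one positive image, and a 5% random sample of negatives) is easy to express directly in code. The sketch below illustrates that rule with pandas; the file and column names (image_id, participant_id, dla_grade) are hypothetical.

```python
# Hedged sketch: selecting images for human regrading in the semi-automated workflow.
import pandas as pd

images = pd.read_csv("dla_graded_images.csv")  # hypothetical DLA output table

positives = images[images["dla_grade"] == "positive"]
positive_participants = set(positives["participant_id"])

# Other available images from participants who had at least one positive image.
same_participant = images[
    images["participant_id"].isin(positive_participants)
    & (images["dla_grade"] != "positive")
]

# 5% random sample of negative images as a quality-control check.
negatives = images[images["dla_grade"] == "negative"].sample(frac=0.05, random_state=0)

to_regrade = pd.concat([positives, same_participant, negatives]).drop_duplicates("image_id")
print(f"{len(to_regrade)} images sent for human grading")
```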
Summary

Background: Ageing varies substantially between individuals, so an accurate quantification of ageing is important. We developed a deep learning (DL) model that predicts age from fundus images (retinal age) and investigated the association between the retinal age gap (retinal age minus chronological age) and mortality risk in a population-based sample of middle-aged and elderly adults.

Methods: The DL model was trained, validated and tested on 46,834, 15,612 and 8,212 fundus images, respectively, from participants of the UK Biobank study who were alive on 28 February 2018. The retinal age gap was calculated for participants in the test (n=8,212) and death (n=1,117) datasets. Cox regression models were used to assess the association between retinal age gap and mortality risk. A restricted cubic spline analysis was conducted to investigate a possible non-linear association between retinal age gap and mortality risk.

Findings: The DL model achieved a strong correlation of 0·83 (P<0·001) between retinal age and chronological age, and an overall mean absolute error of 3·50 years. Cox regression models showed that each one-year increase in the retinal age gap was associated with a 2% increase in mortality risk (hazard ratio=1·02, 95% confidence interval 1·00–1·04, P=0·021). Restricted cubic spline analyses showed a non-linear relationship between retinal age gap and mortality (P for non-linearity=0·001). Higher retinal age gaps were associated with substantially increased risks of mortality, but only if the gap exceeded 3·71 years.

Interpretation: Our findings indicate that the retinal age gap is a robust biomarker of ageing that is closely related to the risk of mortality.

Funding: National Health and Medical Research Council Investigator Grant; Science and Technology Program of Guangzhou.

Research in context

Evidence before this study: Ageing at an individual level is heterogeneous. An accurate quantification of the biological ageing process is important for risk stratification and delivery of tailored interventions. To date, cell-, molecular- and imaging-based biomarkers have been developed, such as the epigenetic clock, brain age and facial age. However, the invasiveness of cellular and molecular ageing biomarkers, the high cost and time-consuming nature of neuroimaging and facial ageing assessment, and ethical and privacy concerns around facial imaging have limited their utility. The retina is considered a window to the whole body, implying that it could provide clues to ageing.

Added value of this study: We developed a deep learning (DL) model that can detect footprints of ageing in fundus images and predict age with high accuracy for the UK population between 40 and 69 years old. Further, we are the first to demonstrate that each one-year increase in the retinal age gap (retinal age minus chronological age) was significantly associated with a 2% increase in mortality risk. Evidence of a non-linear association between retinal age gap and mortality risk was observed: higher retinal age gaps were associated with substantially increased risks of mortality, but only if the retinal age gap exceeded 3·71 years.

Implications of all the available evidence: This is the first study to link the retinal age gap with mortality risk, implying that retinal age is a clinically significant biomarker of ageing. Our findings show the potential of retinal images as a screening tool for risk stratification and delivery of tailored interventions.
Further, the ability of fundus imaging to predict ageing may extend the potential health benefits of eye disease screening beyond the detection of sight-threatening eye diseases.
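The headline metrics reported for this model (Pearson correlation between retinal and chronological age, and mean absolute error on a held-out test set) can be computed in a few lines. The sketch below shows that computation under the assumption of a hypothetical two-column CSV of test-set chronological ages and DL-predicted retinal ages; it is illustrative only and does not reproduce the study's pipeline.

```python
# Hedged sketch: evaluating an age-prediction model (correlation, MAE, retinal age gap).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error

# Hypothetical CSV: column 0 = chronological age, column 1 = predicted retinal age.
predictions = np.loadtxt("test_predictions.csv", delimiter=",", skiprows=1)
chronological_age, retinal_age = predictions[:, 0], predictions[:, 1]

r, p_value = pearsonr(retinal_age, chronological_age)
mae = mean_absolute_error(chronological_age, retinal_age)
retinal_age_gap = retinal_age - chronological_age  # per-participant ageing biomarker

print(f"correlation = {r:.2f} (P = {p_value:.3g}), MAE = {mae:.2f} years")
```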
Purpose: To develop and validate a fully automated program for choroidal structure analysis within a 1500-µm-wide region of interest centered on the fovea (deep learning–based choroidal structure assessment program [DCAP]).

Methods: A total of 2162 fovea-centered radial swept-source optical coherence tomography (SS-OCT) B-scans from 162 myopic children with cycloplegic spherical equivalent refraction ranging from −1.00 to −5.00 diopters were collected to develop the DCAP. A Medical Transformer network and a Small Attention U-Net were used to automatically segment the choroid boundaries and the nulla (the deepest point within the fovea). Automatic denoising based on choroidal vessel luminance and binarization were applied to isolate the choroidal luminal/stromal areas. To further compare the DCAP with the traditional handcrafted method, the luminal/stromal areas and choroidal vascularity index (CVI) values for 20 OCT images were measured by three graders and by the DCAP separately. Intraclass correlation coefficients (ICCs) and limits of agreement were used for agreement analysis.

Results: The mean ± SD pixel-wise distances from the predicted choroidal inner boundary, outer boundary, and nulla to the ground truth were 1.40 ± 1.23, 5.40 ± 2.24, and 1.92 ± 1.13 pixels, respectively. The mean times required for choroidal structure analysis were 1.00, 438.00 ± 75.88, 393.25 ± 78.77, and 410.10 ± 56.03 seconds per image for the DCAP and the three graders, respectively. Agreement between the automatic and manual area measurements was excellent (ICCs > 0.900) but poor for the CVI (ICC 0.627; 95% confidence interval, 0.279–0.832). Additionally, the DCAP demonstrated better intersession repeatability.

Conclusions: The DCAP is faster than manual methods. It was also able to reduce the intra-/intergrader and intersession variations to a small extent.

Translational Relevance: The DCAP could aid in choroidal structure assessment.
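The binarization step above separates dark (luminal) from bright (stromal) pixels inside the segmented choroid, and the CVI is the luminal area divided by the total choroidal area. The sketch below shows that computation using Niblack local thresholding, a common choice in the handcrafted CVI literature; it is an assumption-laden illustration, not the DCAP's actual denoising or binarization procedure, and the file names and window size are hypothetical.

```python
# Hedged sketch: choroidal vascularity index (CVI) from a binarized OCT B-scan.
import numpy as np
from skimage import io
from skimage.filters import threshold_niblack

bscan = io.imread("oct_bscan.png", as_gray=True)                 # hypothetical OCT B-scan
choroid_mask = io.imread("choroid_mask.png", as_gray=True) > 0   # segmented choroid region

# Local (Niblack) thresholding; dark pixels inside the choroid are treated as vessel lumina.
local_thresh = threshold_niblack(bscan, window_size=51)
luminal = (bscan < local_thresh) & choroid_mask
stromal = (bscan >= local_thresh) & choroid_mask

total_area = np.count_nonzero(choroid_mask)
cvi = np.count_nonzero(luminal) / total_area                     # luminal area / total choroidal area
print(f"CVI = {cvi:.3f} (luminal px = {np.count_nonzero(luminal)}, stromal px = {np.count_nonzero(stromal)})")
```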