Background: Cerebrovascular disease (CeVD), including stroke, is a leading cause of death globally. The retina is an extension of the cerebrum, sharing embryological and vascular pathways. The association between different retinal signs and CeVD has been extensively evaluated. In this review, we summarize recent studies that have examined this association. Evidence Acquisition: We searched 6 databases through July 2019 for studies evaluating the link between retinal vascular signs or diseases and CeVD. CeVD was classified into 2 groups: clinical CeVD (including clinical stroke, silent cerebral infarction, cerebral hemorrhage, and stroke mortality) and subclinical CeVD (including MRI-defined lacunar infarct and white matter lesions [WMLs]). Retinal vascular signs were classified into 3 groups: classic hypertensive retinopathy (including retinal microaneurysms, retinal microhemorrhage, focal/generalized arteriolar narrowing, cotton-wool spots, and arteriovenous nicking), clinical retinal diseases (including diabetic retinopathy [DR], age-related macular degeneration [AMD], retinal vein occlusion, retinal artery occlusion [RAO], and retinal emboli), and retinal vascular imaging measures (including retinal vessel diameter and geometry). We also examined emerging retinal vascular imaging measures and the use of artificial intelligence (AI) deep learning (DL) techniques. Results: Hypertensive retinopathy signs were consistently associated with clinical CeVD and subclinical CeVD subtypes, including subclinical cerebral large artery infarction, lacunar infarction, and WMLs. Some clinical retinal diseases, such as DR, retinal arterial and venous occlusion, and transient monocular vision loss, are consistently associated with clinical CeVD. There is an increased risk of recurrent stroke immediately after RAO. Associations with AMD are less consistent. Retinal vascular imaging using computer-assisted, semi-automated software to measure retinal vascular caliber and other parameters (tortuosity, fractal dimension, and branching angle) has shown strong associations with clinical and subclinical CeVD. Other new retinal vascular imaging techniques (dynamic retinal vessel analysis, adaptive optics, and optical coherence tomography angiography) are emerging technologies in this field. Application of AI-DL is expected to detect subclinical retinal changes and discrete retinal features for the prediction of systemic conditions, including CeVD. Conclusions: There is extensive and increasing evidence that a range of retinal vascular signs and diseases are closely linked to CeVD, including both subclinical and clinical CeVD. New technologies, including AI-DL, will allow further translation to clinical use.
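As an illustration of one of the retinal vascular imaging measures named above, the sketch below estimates the fractal dimension of a binarized vessel segmentation by box counting. This is a minimal sketch under assumptions: it is not the semi-automated software used in the reviewed studies, it presumes a pre-computed binary vessel mask, and the power-of-two padding and halving of box sizes are illustrative choices.

```python
import numpy as np


def box_counting_fractal_dimension(vessel_mask: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary retinal vessel map by box counting.

    vessel_mask: 2D boolean array, True where a vessel pixel was segmented
    (an assumed input; vessel segmentation itself is not shown here).
    """
    # Pad to a square whose side is a power of two so boxes tile the image evenly.
    side = int(2 ** np.ceil(np.log2(max(vessel_mask.shape))))
    padded = np.zeros((side, side), dtype=bool)
    padded[: vessel_mask.shape[0], : vessel_mask.shape[1]] = vessel_mask

    box_sizes, counts = [], []
    size = side
    while size >= 2:
        # Count boxes of the current size that contain at least one vessel pixel.
        blocks = padded.reshape(side // size, size, side // size, size)
        occupied = int(blocks.any(axis=(1, 3)).sum())
        if occupied > 0:
            box_sizes.append(size)
            counts.append(occupied)
        size //= 2

    # The fractal dimension is the slope of log(count) versus log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return float(slope)
```

A roughly one-dimensional structure (a single straight vessel) yields a value near 1, while a densely branching vascular tree yields a value closer to 2, which is why the measure is used to summarize vascular complexity.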
Background Deep learning algorithms have been built for the detection of systemic and eye diseases based on fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which these features are captured via photography differs depending on the retinal image field. Objective We aimed to compare deep learning algorithms’ performance in predicting gender based on different fields of fundus photographs (optic disc–centered, macula-centered, and peripheral fields). Methods This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc–centered, macula-centered, and peripheral field fundus images were included in this study as input data for a deep learning model for gender prediction. Performance was estimated at the individual level and image level. Receiver operating characteristic curves for binary classification were calculated. Results The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and an AUC of 0.87 at the image level. Across the three image field types, the best performance was seen when using optic disc–centered field images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86), and algorithms that used peripheral field images had the lowest performance (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, algorithm performance was lowest in the Indian subgroup (AUC=0.88) compared to that in the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups when the algorithms were tested on optic disc–centered images. Algorithms’ performance in gender prediction at the image level was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82). Conclusions We confirmed that gender among the Asian population can be predicted with fundus photographs by using deep learning, and our algorithms’ performance in terms of gender prediction differed according to the field of fundus photographs, age subgroups, and ethnic groups. Our work provides a further understanding of using deep learning models for the prediction of gender-related diseases. Further validation of our findings is still needed.
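The image-level versus individual-level evaluation described above can be illustrated with a short sketch. This is a hedged example, not the study's code: the column names are hypothetical, and averaging each subject's per-image probabilities is one plausible aggregation rule, since the abstract does not state the exact method used.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score


def image_and_individual_auc(df: pd.DataFrame) -> tuple[float, float]:
    """Compute AUC for gender prediction at the image level and the individual level.

    Assumed columns: 'subject_id', 'prob_male' (model output per fundus photograph),
    and 'is_male' (ground-truth label, identical across all images of a subject).
    """
    # Image-level AUC: every fundus photograph is scored as an independent sample.
    image_auc = roc_auc_score(df["is_male"], df["prob_male"])

    # Individual-level AUC: average each subject's per-image probabilities
    # (one possible aggregation rule), then score one prediction per person.
    per_subject = df.groupby("subject_id").agg(
        prob_male=("prob_male", "mean"),
        is_male=("is_male", "first"),
    )
    individual_auc = roc_auc_score(per_subject["is_male"], per_subject["prob_male"])

    return image_auc, individual_auc
```

Aggregating over a subject's multiple photographs (and fields) typically smooths out per-image noise, which is consistent with the higher individual-level AUC (0.94) than image-level AUC (0.87) reported above.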