Diabetic retinopathy (DR) is a leading cause of blindness in which damage occurs to the retina due to elevated blood sugar levels. Early detection, classification, and diagnosis of DR can therefore prevent vision loss in diabetic patients. We propose a novel hybrid approach for early DR detection and classification, combining distinct models to make the detection process more robust and less error-prone and determining the final classification by majority voting. The proposed work follows preprocessing, feature extraction, and classification steps: the preprocessing step enhances the visibility of abnormalities and performs segmentation; the extraction step acquires only the relevant features; and the classification step uses classifiers such as support vector machine (SVM), K-nearest neighbor (KNN), and binary trees (BT). Evaluated on multiple disease-severity grading databases, the method achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100%.
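The majority-voting step described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the per-classifier predictions are hypothetical stand-ins for the outputs of the trained SVM, KNN, and binary-tree models.

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-classifier label predictions (shape: n_classifiers x n_samples)
    into one prediction per sample by majority vote."""
    predictions = np.asarray(predictions)
    # For each sample (column), pick the most frequent label across classifiers.
    return np.array([np.bincount(col).argmax() for col in predictions.T])

# Hypothetical outputs from three classifiers (e.g. SVM, KNN, binary tree)
svm_pred = np.array([0, 1, 1, 0])
knn_pred = np.array([0, 1, 0, 0])
bt_pred  = np.array([1, 1, 1, 0])

print(majority_vote([svm_pred, knn_pred, bt_pred]))  # [0 1 1 0]
```

With three voters, ties cannot occur for a binary label, which is one practical reason an odd number of classifiers is a common design choice for hard-voting ensembles.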
Objective. As an extension of optical coherence tomography (OCT), optical coherence tomographic angiography (OCTA) provides information on blood flow status at the microlevel and is sensitive to changes in the fundus vessels. However, because of the distinct imaging mechanism of OCTA, existing models, which are primarily designed for analyzing fundus images, do not work well on OCTA images. Effectively extracting and analyzing the information in OCTA images remains challenging. To this end, this study proposes a deep learning framework that fuses multilevel information in OCTA images. The effectiveness of the proposed model was demonstrated on the task of diabetic retinopathy (DR) classification. Method. First, a U-Net-based segmentation model was proposed to label the boundaries of the large retinal vessels and the foveal avascular zone (FAZ) in OCTA images. Then, an isolated concatenated block (ICB) structure was designed to extract and fuse information from the original OCTA images and the segmentation results at different fusion levels. Results. The experiments were conducted on 301 OCTA images, of which 244 were labeled by ophthalmologists as normal and 57 as DR. The proposed large-vessel and FAZ segmentation model achieved an accuracy of 93.1% and a mean intersection over union (mIOU) of 77.1%. In an ablation experiment with 6-fold validation, the proposed framework, which combines the isolated and concatenated convolution processes, significantly improved DR diagnosis accuracy. Moreover, feeding the model merged inputs of the original OCTA images and the segmentation results further improved performance. Finally, the proposed classification model achieved a DR diagnosis accuracy of 88.1% (95% CI ±3.6%) and an area under the curve (AUC) of 0.92, significantly outperforming state-of-the-art classification models; for comparison, EfficientNet obtained an accuracy of 83.7% (95% CI ±1.5%) and an AUC of 0.76. Significance. The visualization results show that the FAZ and the vascular region close to it provide more information to the model than the farther surrounding area. Furthermore, this study demonstrates that a clinically informed deep learning design can not only effectively assist diagnosis but also help locate new indicators for certain illnesses.
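The merged-input idea, feeding the classifier the original OCTA image together with the segmentation results, can be sketched as a simple channel-stacking operation. This is a hedged sketch under the assumption that the image and masks share the same spatial resolution; the array names, sizes, and random masks are hypothetical, not the authors' data pipeline.

```python
import numpy as np

# Hypothetical single-channel OCTA image and binary segmentation maps
# (large-vessel map and FAZ map), all of size H x W.
rng = np.random.default_rng(0)
octa = rng.random((224, 224)).astype(np.float32)
vessel_mask = (rng.random((224, 224)) > 0.9).astype(np.float32)
faz_mask = (rng.random((224, 224)) > 0.99).astype(np.float32)

# Stack image and segmentation priors into a multi-channel input so a
# downstream CNN sees the raw OCTA signal alongside the vessel/FAZ structure.
merged = np.stack([octa, vessel_mask, faz_mask], axis=0)
print(merged.shape)  # (3, 224, 224)
```

Stacking segmentation outputs as extra input channels is a common way to inject anatomical priors into a classifier without changing its architecture, which is consistent with the fusion-at-the-input level the abstract describes.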