Grading diabetic retinopathy (DR) into stages of severity remains a challenging problem because of the complexity of the disease. DR grading classifies retinal images into five levels of severity, from 0 to 4, representing no DR, mild non-proliferative diabetic retinopathy (NPDR), moderate NPDR, severe NPDR, and proliferative diabetic retinopathy. With the advancement of deep learning, studies applying convolutional neural networks (CNNs) to DR grading have been on the rise, with high accuracy and sensitivity as the desired outcomes. This paper reviews recently published studies that employed CNNs to grade DR into the five levels of severity. Two main approaches are applied in classifying retinal images: (i) training CNN models to learn the features of each grade, and (ii) detecting and segmenting lesions, such as microaneurysms, exudates, and haemorrhages, using information about their location. Researchers have used both public and private datasets for classifying retinal images for DR. The performance of the CNN models was measured by accuracy, specificity, sensitivity, and area under the curve. The CNN models and their performance vary from study to study. Further research into CNN models is needed to improve performance in DR grading; the Inception model can serve as a starting point for subsequent work. It will also be necessary to investigate which attributes the models use for grading.
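As an illustration of the first approach (training a CNN to learn the features of each grade), the sketch below fine-tunes an ImageNet-pretrained Inception-v3 backbone for five-class DR grading with tf.keras. This is a minimal example, not taken from any of the reviewed studies; the dataset directory, image size, and training hyperparameters are illustrative assumptions.

```python
# Minimal sketch: transfer learning with Inception-v3 for 5-class DR grading.
# Dataset path, image size, and hyperparameters are assumed for illustration.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 5  # No DR, Mild NPDR, Moderate NPDR, Severe NPDR, Proliferative DR

# ImageNet-pretrained backbone without its original classification head.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the backbone for the initial transfer-learning phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Hypothetical directory of fundus images sorted into one subfolder per grade (0-4).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_images/train", image_size=(299, 299), batch_size=32
)
model.fit(train_ds, epochs=5)
```

In practice, studies of the second approach would add lesion detection or segmentation outputs on top of such a backbone, and would report specificity, sensitivity, and area under the curve in addition to accuracy.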
Background: Smartphone-based retinal imaging technology has been improving consistently over the years. Smartphone-based retinal image acquisition devices are designed to be portable, easy to use, and low-cost, which makes eye care more widely accessible, especially in geographically remote areas. This enables early disease detection for low- and middle-income populations and, more generally, for anyone with very limited access to eye care. This study investigates the limited smartphone compatibility of existing smartphone-based retinal image acquisition devices. It also proposes a universal adapter that can be used with an existing device, the PanOptic ophthalmoscope, and aims to simulate the reliability, validity, and overall performance of the developed prototype. Existing studies have shown that smartphone-based retinal imaging is still limited to screening purposes, and that existing devices have limited smartphone compatibility, being usable only with specific smartphone models.

Methods: A literature review was conducted to identify the limitations in smartphone compatibility among existing smartphone-based retinal image acquisition devices. Design and modelling of the proposed adapter were performed in AutoCAD 3D. For the proposed performance evaluation, finite element analysis (FEA) in Autodesk Inventor and a 5-point scale method were applied.

Results: A universal adapter was shown to be beneficial in broadening the usability of existing smartphone-based retinal image acquisition devices, as most devices available on the market have limited smartphone compatibility. A functional universal adapter was developed and found to be suitable for two smartphones with different camera placements and dimensions. The proposed performance evaluation method generated efficient stress analyses of the proposed adapter design.

Conclusion: This paper presents the concept of a universal adapter for retinal imaging with the PanOptic ophthalmoscope. The proposed performance evaluation methods were found to be sufficient to analyze the behavior of the adapter under an external load and to determine its suitability for the PanOptic ophthalmoscope.
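The study's stress analysis is performed with FEA in Autodesk Inventor. As a rough illustration of the kind of check such an analysis provides, the sketch below estimates the peak bending stress in a cantilevered clamp arm of an adapter under an external load and compares it with a yield strength typical of printed plastic. The load, geometry, and material values are assumptions chosen for illustration only, not figures from the paper.

```python
# Illustrative hand calculation (not the paper's FEA): peak bending stress in a
# cantilevered rectangular clamp arm loaded at its tip, compared with an assumed
# yield strength for printed PLA.
def max_bending_stress(force_n: float, length_m: float, width_m: float, thickness_m: float) -> float:
    """Peak bending stress (Pa) at the fixed end of a rectangular cantilever."""
    moment = force_n * length_m                        # bending moment, N*m
    section_modulus = width_m * thickness_m ** 2 / 6   # rectangular cross-section, m^3
    return moment / section_modulus

APPLIED_LOAD_N = 5.0      # assumed handling/clamping force
ARM_LENGTH_M = 0.04       # assumed arm length (40 mm)
ARM_WIDTH_M = 0.015       # assumed arm width (15 mm)
ARM_THICKNESS_M = 0.004   # assumed arm thickness (4 mm)
PLA_YIELD_PA = 50e6       # assumed yield strength of printed PLA

stress = max_bending_stress(APPLIED_LOAD_N, ARM_LENGTH_M, ARM_WIDTH_M, ARM_THICKNESS_M)
safety_factor = PLA_YIELD_PA / stress
print(f"Max bending stress: {stress / 1e6:.1f} MPa, safety factor: {safety_factor:.1f}")
```

A full FEA, as used in the study, resolves the stress distribution over the whole adapter geometry rather than a single cross-section, but the pass/fail logic (computed stress against material strength) is the same.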