Diabetes is one of the leading fatal diseases globally and places a huge burden on healthcare systems. Early diagnosis of diabetes is therefore of utmost importance and could save many lives. However, current techniques to determine whether a person has diabetes, or is at risk of developing it, rely primarily on clinical biomarkers. In this article, we propose a novel deep learning architecture to predict whether a person has diabetes from a photograph of their retina. Using a relatively small dataset, we develop a multi-stage convolutional neural network (CNN)-based model, DiaNet, that reaches an accuracy of over 84% on this task and, in doing so, successfully identifies the regions of the retinal images that contribute to its decision-making process, as corroborated by medical experts in the field. To the best of our knowledge, this is the first study to highlight the distinguishing capability of retinal images for diabetes patients in the Qatari population. Comparing the performance of DiaNet against existing machine learning models based on clinical data, we conclude that the retinal images contain sufficient information to distinguish the Qatari diabetes cohort from the control group. In addition, our study reveals that retinal images may contain prognosis markers for diabetes and other comorbidities such as hypertension and ischemic heart disease. These results lead us to believe that the inclusion of retinal images in the clinical setup for the diagnosis of diabetes is warranted in the near future.
INDEX TERMS: Convolutional neural network, deep learning, diabetes, machine learning, Qatar, Qatar Biobank (QBB), retina.
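A minimal sketch of the general approach described above, assuming a transfer-learning setup: a pretrained CNN backbone is fine-tuned to output a single diabetes-vs-control probability from a fundus photograph. The ResNet-50 backbone, input size, and preprocessing below are illustrative assumptions and are not taken from the DiaNet paper, which describes its model only as a multi-stage CNN.

```python
# Hypothetical sketch: a binary retina-image classifier built on a pretrained
# backbone. This is NOT the published DiaNet architecture; it only illustrates
# fine-tuning a CNN to predict diabetes vs. control from fundus photographs.
import torch
import torch.nn as nn
from torchvision import models, transforms

class RetinaClassifier(nn.Module):
    def __init__(self, pretrained: bool = True):
        super().__init__()
        # Backbone choice (ResNet-50) is an assumption, not taken from the paper.
        weights = models.ResNet50_Weights.DEFAULT if pretrained else None
        self.backbone = models.resnet50(weights=weights)
        in_features = self.backbone.fc.in_features
        # Replace the ImageNet head with a single logit for diabetes vs. control.
        self.backbone.fc = nn.Linear(in_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logit; apply sigmoid for a probability

# Typical fundus-image preprocessing (image size and normalisation are assumptions).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

if __name__ == "__main__":
    model = RetinaClassifier()
    dummy = torch.randn(4, 3, 224, 224)            # batch of 4 RGB retina images
    prob = torch.sigmoid(model(dummy)).squeeze(1)  # per-image probability of diabetes
    print(prob.shape)                              # torch.Size([4])
```

Saliency methods such as Grad-CAM can then be applied to a trained model of this kind to highlight the retinal regions driving each prediction, in the spirit of the region-level explanations reported above.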
Cardiovascular diseases (CVD) are the leading cause of death worldwide. People affected by CVDs may go undiagnosed until the occurrence of a serious cardiac event such as stroke, heart attack, or myocardial infarction. In Qatar, there is a lack of studies focusing on CVD diagnosis based on non-invasive methods such as retinal imaging or dual-energy X-ray absorptiometry (DXA). In this study, we aimed to diagnose CVD using a novel approach integrating information from retinal images and DXA data. We considered an adult Qatari cohort of 500 participants from Qatar Biobank (QBB), with an equal number of participants in the CVD and control groups. We designed a case-control study with a novel multi-modal setup (combining data from two modalities, DXA and retinal images) and propose a deep learning (DL)-based technique to distinguish the CVD group from the control group. Uni-modal models based on retinal images and DXA data achieved 75.6% and 77.4% accuracy, respectively. The multi-modal model showed an improved accuracy of 78.3% in classifying the CVD group and the control group. We used gradient-weighted class activation mapping (Grad-CAM) to highlight the areas of the retinal images that most influenced the decisions of the proposed DL model. The model focused mostly on the centre of the retinal images, where signs of CVD such as hemorrhages were present, indicating that it can identify and make use of prognosis markers for hypertension and ischemic heart disease. From the DXA data, we found higher values for bone mineral density, fat content, muscle mass, and bone area across the majority of body regions in the CVD group compared to the control group, indicating better bone health in the Qatari CVD cohort. This method, based on DXA scans and retinal images, demonstrates major potential for the early detection of CVD in a fast and relatively non-invasive manner.
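A minimal sketch of a late-fusion multi-modal classifier in the spirit of the approach above: one branch encodes the retinal image with a CNN, another encodes DXA-derived tabular measurements with a small MLP, and the concatenated embeddings feed a shared head that outputs a CVD-vs-control logit. The ResNet-18 backbone, layer sizes, and number of DXA features are assumptions for illustration; the abstract does not specify the published architecture.

```python
# Hypothetical sketch of a late-fusion multi-modal CVD classifier.
import torch
import torch.nn as nn
from torchvision import models

class MultiModalCVDNet(nn.Module):
    def __init__(self, num_dxa_features: int = 32):
        super().__init__()
        # Image branch: pretrained ResNet-18 with its classification head removed.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        img_dim = backbone.fc.in_features          # 512 for ResNet-18
        backbone.fc = nn.Identity()
        self.image_branch = backbone
        # Tabular branch for DXA measurements (bone density, fat, muscle mass, ...).
        self.dxa_branch = nn.Sequential(
            nn.Linear(num_dxa_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings, output one CVD-vs-control logit.
        self.head = nn.Sequential(
            nn.Linear(img_dim + 64, 128), nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 1),
        )

    def forward(self, image: torch.Tensor, dxa: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image)   # (B, 512) image embedding
        dxa_feat = self.dxa_branch(dxa)       # (B, 64) DXA embedding
        fused = torch.cat([img_feat, dxa_feat], dim=1)
        return self.head(fused)               # raw logit

if __name__ == "__main__":
    model = MultiModalCVDNet(num_dxa_features=32)
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 32))
    print(torch.sigmoid(logits).shape)  # torch.Size([2, 1])
```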
Lung cancer is one of the leading causes of death worldwide. Early detection of this disease increases the chances of survival. Computer-Aided Detection (CAD) has been used to process CT images of the lung to determine whether an image shows traces of cancer. This paper presents an image classification method based on a hybrid of a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM). The algorithm automatically classifies and analyzes each lung image to check for the presence of cancer cells. A CNN is easier to train and has fewer parameters than a fully connected network with the same number of hidden units, and in recent years CNNs have achieved excellent performance on many computer vision tasks. The SVM is used to eliminate irrelevant information that would otherwise degrade accuracy. We evaluate the performance of this algorithm, and the results indicate that the proposed CNN-SVM algorithm classifies lung images with 97.91% accuracy. This demonstrates the method's merit and its ability to classify lung cancer in CT images accurately.
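A minimal sketch of a hybrid CNN-SVM pipeline of the kind described above, under the assumption that the CNN acts as a fixed feature extractor and the SVM performs the final classification. The ResNet-18 backbone, RBF kernel, and hyperparameters are illustrative choices, not the paper's settings.

```python
# Hypothetical sketch of a hybrid CNN + SVM classifier for lung CT slices:
# a pretrained CNN maps each image to a feature vector, and an SVM is trained
# on those vectors to decide cancerous vs. non-cancerous.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# CNN feature extractor: ResNet-18 with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    """Map a batch of (B, 3, 224, 224) CT-slice tensors to (B, 512) feature vectors."""
    return backbone(images).cpu().numpy()

# SVM classifier on top of the CNN features (RBF kernel chosen for illustration).
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

if __name__ == "__main__":
    # Dummy data standing in for preprocessed CT slices and labels (0 = benign, 1 = cancer).
    train_imgs = torch.randn(16, 3, 224, 224)
    train_labels = np.random.randint(0, 2, size=16)
    test_imgs = torch.randn(4, 3, 224, 224)

    svm.fit(extract_features(train_imgs), train_labels)
    print(svm.predict(extract_features(test_imgs)))
```

Separating feature extraction from classification in this way keeps the trainable part small, which is one common motivation for pairing a CNN with an SVM on modest-sized medical imaging datasets.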