This study provides estimates, trends, and projections of the vision loss burden in Pakistan from 1990 to 2025. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD 2017) was used to quantify the vision loss burden in terms of prevalence and Years Lived with Disability (YLDs). As of 2017, out of 207.7 million people in Pakistan, an estimated 1.12 million (95% Uncertainty Interval [UI] 1.07–1.19) were blind (Visual Acuity [VA] <3/60), 1.09 million [0.93–1.24] had severe vision loss (3/60≤VA<6/60), and 6.79 million [6.00–7.74] had moderate vision loss (6/60≤VA<6/18). Presbyopia was the most common ocular condition, affecting an estimated 12.64 million [11.94–13.41] people (crude prevalence 6.08% [5.75–6.45]; 61% female). In terms of age-standardized YLD rate, Pakistan ranks fourth among South Asian countries and twenty-first among the 42 lower-middle-income countries (as classified by the World Bank), with 552.98 YLDs [392.98–752.95] per 100,000. Compared with 1990, the all-age YLD count of blindness and vision impairment increased by 55% in 2017, the tenth-highest increase among major causes of health loss (such as dietary iron deficiency, headache disorders, and low back pain) in Pakistan. Moreover, our projections indicate a further increase in the vision loss burden by 2025, so Pakistan needs to intensify efforts to counter the growing burden of eye diseases.
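The age-standardized rate quoted above is a weighted average of age-specific rates using a standard population's age structure, which removes the effect of differing age distributions when ranking countries. A minimal sketch of direct age-standardization follows; the rates and weights are hypothetical placeholders, not GBD 2017 values.

```python
# Direct age-standardization: weight each age-specific YLD rate by the
# share of that age group in a standard population (GBD uses its own
# world standard population). All numbers below are illustrative only.
age_specific_rates = [120.0, 450.0, 1300.0, 2800.0]  # YLDs per 100,000
std_weights = [0.40, 0.30, 0.20, 0.10]               # must sum to 1

age_standardized_rate = sum(r * w for r, w in zip(age_specific_rates, std_weights))
print(f"Age-standardized YLD rate: {age_standardized_rate:.1f} per 100,000")
```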
The emerging concept of electronic health (eHealth) has made distance irrelevant in healthcare. eHealth, commonly understood as the electronic delivery of healthcare, refers broadly to the use of advanced information and communication technology within healthcare environments. However, its practical implementation poses many challenges. One of the major challenges is the authentication and integrity verification of vital medical data: proper association of patients with their medical images or data is crucial for accurate medical diagnostics. This paper presents an imperceptible watermarking-based security framework to address the authentication and integrity verification of medical images for eHealth applications. The electronic patient record (EPR), as a watermark image, is embedded in optical coherence tomography (OCT)/fundus scans (comprising healthy and diseased scans) using a hybrid watermarking technique based on the fast curvelet transform (FCT) and singular value decomposition (SVD). The proposed framework shows a high level of robustness, imperceptibility, and security for medical images compared with existing state-of-the-art watermarking techniques. In addition, a comparative analysis of watermarked versus non-watermarked OCT/fundus scans validates that inserting the watermark into the retinal images does not affect the automated diagnosis of various retinal pathologies. Moreover, the correct recovery of the EPR from the watermarked scans makes the proposed framework applicable to the authentication of medical images in a computer-based automated diagnostic system within an eHealth arrangement.
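To make the hybrid FCT+SVD embedding concrete, below is a minimal sketch of the classic SVD-domain watermarking step (in the style of Liu and Tan). For brevity it operates on the raw image matrix rather than curvelet coefficients; in the paper's actual pipeline the same step would be applied to FCT subbands. The embedding strength `alpha`, the image sizes, and the key handling are illustrative assumptions.

```python
import numpy as np

def embed(cover, watermark, alpha=0.05):
    # SVD of the cover scan
    U, S, Vt = np.linalg.svd(cover, full_matrices=False)
    # Perturb the singular-value matrix with the EPR watermark image
    D = np.diag(S) + alpha * watermark
    Uw, Sw, Vwt = np.linalg.svd(D, full_matrices=False)
    watermarked = U @ np.diag(Sw) @ Vt
    return watermarked, (S, Uw, Vwt)  # side information (keys) for extraction

def extract(watermarked, keys, alpha=0.05):
    S, Uw, Vwt = keys
    # Singular values of the (possibly attacked) watermarked scan
    _, Sw_star, _ = np.linalg.svd(watermarked, full_matrices=False)
    D_star = Uw @ np.diag(Sw_star) @ Vwt
    return (D_star - np.diag(S)) / alpha  # recovered EPR watermark

# Toy usage: a 64x64 "scan" and a 64x64 "EPR" watermark
rng = np.random.default_rng(0)
cover, wm = rng.random((64, 64)), rng.random((64, 64))
wm_img, keys = embed(cover, wm)
print(np.allclose(extract(wm_img, keys), wm, atol=1e-6))  # True when unattacked
```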
Forest fires pose a serious threat to ecological and environmental systems and natural resources, impacting human lives. An automated surveillance system for early forest fire detection can mitigate such calamities and protect the environment. We therefore propose a UAV-based forest fire fighting system with integrated artificial intelligence (AI) capabilities for continuous forest surveillance and fire detection. The major contributions of this research are fourfold. First, we explain the detailed working mechanism and the key steps involved in operating the UAV-based forest fire fighting system. Second, because a robust fire detection system requires precise and efficient classification of fire versus no-fire imagery, we have curated a novel dataset (DeepFire) containing diversified real-world forest imagery with and without fire to assist future research in this domain; it consists of 1900 colored images, 950 each for the fire and no-fire classes. Third, we investigate the performance of various supervised machine learning classifiers on this binary classification problem. Fourth, we propose a VGG19-based transfer learning solution to achieve improved prediction accuracy. We assess and compare several machine learning approaches, including k-nearest neighbors, random forest, naive Bayes, support vector machine, and logistic regression, against the proposed approach for identifying fire and no-fire images in the DeepFire dataset. The simulation results demonstrate the efficacy of the proposed approach, which achieves a mean accuracy of 95% with 95.7% precision and 94.2% recall.
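A minimal sketch of the VGG19 transfer learning setup follows, using Keras. The frozen ImageNet backbone, head layer sizes, and training hyperparameters are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained VGG19 backbone with its classification head removed
base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # fire vs. no-fire
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall")],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # DeepFire loaders assumed
```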
Macular edema (ME) is a retinal condition in which a patient's central vision is affected. ME leads to the accumulation of fluid in the macular region, resulting in a swollen macula. Optical coherence tomography (OCT) and fundus photography are the two widely used retinal examination techniques that can effectively detect ME. Many researchers have utilized retinal fundus and OCT imaging for detecting ME. However, to the best of our knowledge, no work in the literature fuses the findings from both retinal imaging modalities for a more reliable diagnosis of ME. In this paper, we propose an automated framework for the classification of ME and healthy eyes using retinal fundus and OCT scans. The framework is based on deep ensemble learning: the input fundus and OCT scans are first recognized by a deep convolutional neural network (CNN) and processed accordingly. The processed scans are then passed to the second stage of the deep CNN model, which extracts the required feature descriptors from both images. The extracted descriptors are concatenated and passed to a supervised hybrid classifier built from an ensemble of artificial neural networks, support vector machines, and naïve Bayes. The framework was trained on 73,791 retinal scans and validated on 5100 scans from the publicly available Zhang and Rabbani datasets. It achieved an accuracy of 94.33% for distinguishing ME from healthy subjects, and mean Dice coefficients of 0.9019 ± 0.04 for extracting retinal fluids, 0.7069 ± 0.11 for hard exudates, and 0.8203 ± 0.03 for retinal blood vessels against the clinical markings.
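A minimal sketch of the fusion and hybrid classification step follows: per-modality feature descriptors are concatenated and fed to a soft-voting ensemble of an ANN, an SVM, and naïve Bayes. The random arrays stand in for real CNN descriptors; all dimensions and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fundus_feats = rng.random((200, 128))  # hypothetical fundus CNN descriptors
oct_feats = rng.random((200, 128))     # hypothetical OCT CNN descriptors
y = rng.integers(0, 2, 200)            # 0 = healthy, 1 = macular edema

# Concatenate the two modalities' descriptors per subject
X = np.concatenate([fundus_feats, oct_feats], axis=1)

# Hybrid classifier: ensemble of ANN, SVM, and naive Bayes
hybrid = VotingClassifier(
    estimators=[("ann", MLPClassifier(max_iter=500)),
                ("svm", SVC(probability=True)),
                ("nb", GaussianNB())],
    voting="soft",  # average predicted class probabilities
)
hybrid.fit(X, y)
print(hybrid.predict(X[:5]))
```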