Early detection of vessels in fundus images can effectively prevent the permanent retinal damage caused by retinopathies such as glaucoma, hypertension, and diabetes. Because retinal vessels and the background share a similar red hue and vessels vary widely in morphology, current vessel-detection methods fail to segment thin vessels and to discriminate them in the regions where permanent retinopathies mainly occur. This research proposes a novel approach that combines the benefits of traditional template-matching methods with recent deep learning (DL) solutions: the response of a Cauchy matched filter is used to replace the noisy red channel of the fundus images. Subsequently, a U-shaped fully convolutional neural network (U-Net) is trained for end-to-end segmentation of pixels into vessel and background classes. Each preprocessed image is divided into several patches to provide enough training samples and to speed up training per instance. The proposed method is tested on the public DRIVE database, and metrics such as accuracy, precision, sensitivity, and specificity are measured for evaluation. The evaluation indicates that the average extraction accuracy of the proposed model is 0.9640 on this dataset.
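As an illustrative sketch only (not the paper's implementation), the preprocessing idea can be approximated by a 1-D Cauchy-profile matched filter swept along image rows, plus non-overlapping patch extraction; the kernel length, the scale parameter `gamma`, and the patch size below are assumed values, and a full implementation would also rotate the kernel over several orientations and keep the maximum response:

```python
import numpy as np

def cauchy_kernel(length=15, gamma=2.0):
    """1-D Cauchy profile, made zero-mean so a flat background yields
    zero response.  gamma is a hypothetical scale, not from the paper."""
    x = np.arange(length) - length // 2
    k = 1.0 / (np.pi * gamma * (1.0 + (x / gamma) ** 2))
    return k - k.mean()

def matched_filter_response(img, kernel):
    """Convolve each image row with the 1-D kernel (reflect padding),
    giving a strong response where the row profile matches a vessel."""
    pad = len(kernel) // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        out[i] = np.convolve(padded[i], kernel, mode="valid")
    return out

def extract_patches(img, size=48):
    """Split an image into non-overlapping size x size patches,
    as done before feeding training samples to the U-Net."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```

On a synthetic image containing a single bright vertical line, the filter response peaks on the line's column, which is the behaviour a matched filter exploits to highlight vessels against the background.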
Natural Language Processing (NLP) is a theoretically motivated range of computational techniques for analyzing and representing naturally occurring text at one or more levels of linguistic analysis, with the goal of achieving human-like language processing for a variety of tasks and applications. People with combined hearing and visual impairment are unable to see entirely or have very low vision, and are also unable to hear completely or have great difficulty hearing. Because both hearing and vision, the crucial organs for receiving information, are impaired, obtaining information is difficult. Such people are considered to face a substantially greater information deficit than people with a single disability such as blindness or deafness. Visually and hearing-impaired people who are unable to communicate with the outside world may experience emotional loneliness, which can lead to stress and, in extreme cases, serious mental illness. As a result, overcoming this information handicap is a critical issue for visually and hearing-impaired people who want to live active, independent lives in society. The major objective of this study is to recognize Arabic speech in real time and convert it to Arabic text using Convolutional Neural Network (CNN)-based algorithms before saving it to an SD card. The Arabic text is then translated into Arabic Braille characters, which drive the Braille pattern on a solenoid-actuated Braille display. The Braille lettering raised under the finger was deciphered by visually and hearing-impaired participants proficient in Braille reading. The CNN learning parameters, in combination with the ReLU activation function, are fine-tuned for optimization, resulting in a model training accuracy of 90%. Testing of the tuned model shows that adding the ReLU activation function to the CNN yields a recognition accuracy of 84% on spoken Arabic digits.
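The text-to-Braille stage for recognized digits can be sketched as follows. The dot patterns use standard Braille numeric cells (the number sign, dots 3-4-5-6, followed by the letter patterns a-j); the 6-bit solenoid mask convention is a hypothetical encoding for the display driver, not taken from the paper:

```python
# Standard Braille digit cells: dots are numbered 1-6
# (left column top-to-bottom 1-3, right column 4-6).
NUMBER_SIGN = frozenset({3, 4, 5, 6})
DIGIT_DOTS = {
    "1": {1}, "2": {1, 2}, "3": {1, 4}, "4": {1, 4, 5}, "5": {1, 5},
    "6": {1, 2, 4}, "7": {1, 2, 4, 5}, "8": {1, 2, 5}, "9": {2, 4},
    "0": {2, 4, 5},
}

def digits_to_cells(text):
    """Translate a recognized digit string into Braille cells
    (sets of raised dots), prefixed by one number sign."""
    cells = [NUMBER_SIGN]
    for ch in text:
        cells.append(frozenset(DIGIT_DOTS[ch]))
    return cells

def cell_to_solenoid_mask(cell):
    """Pack a cell into a 6-bit mask; bit (i-1) is assumed to drive
    the solenoid that raises dot i on the display."""
    mask = 0
    for dot in cell:
        mask |= 1 << (dot - 1)
    return mask
```

For example, the recognized utterance "10" becomes three cells (number sign, dot 1, dots 2-4-5), each packed into one byte for the solenoid driver.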
Mental health risks pose a high threat to individuals, especially overseas demographics such as expatriates, in comparison with the general Arab population. Since Arab countries are renowned for their multicultural environment, with half of the student and faculty population being international, this paper presents a comprehensive analysis of mental health problems such as depression, stress, anxiety, isolation, and other adverse conditions. The dataset is developed from a web-based survey. A detailed exploratory data analysis is conducted on the data collected from Arab countries to study individuals' mental health and indicative help-seeking pointers, based on their responses to specific pre-defined questions in a multicultural society. The proposed model validates the claims mathematically and uses different machine learning classifiers to identify individuals who are currently or were previously diagnosed with depression, or who exhibit unintentional "save our souls" (SOS) behaviors, enabling early prediction to prevent life-threatening risks. Accuracy is measured by comparing the classifiers using several visualization tools. This analysis provides claims and authentic sources for further research in the multicultural public medical sector and for government decision-making.
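The classifier stage can be illustrated with a minimal stand-in: a logistic-regression model trained by batch gradient descent on numeric survey features to predict a binary depression label. This is a generic sketch under assumed toy data, not the paper's classifiers or dataset:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression trainer (batch gradient descent).
    X: (n_samples, n_features) numeric survey responses (assumed),
    y: binary labels (e.g., 1 = diagnosed with depression)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    """Threshold the predicted probability at 0.5."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```

In practice one would compare several classifiers (as the paper does) and report accuracy on held-out survey responses; the single toy feature here only demonstrates the training loop.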