Diagnosing heart disease is a difficult task, and researchers have designed various intelligent diagnostic systems to improve it. However, low prediction accuracy remains a problem in these systems. To improve heart-risk prediction accuracy, we propose a feature selection method that uses a floating window with adaptive size for feature elimination (FWAFE). After feature elimination, two classification frameworks are employed: an artificial neural network (ANN) and a deep neural network (DNN). Thus, two hybrid diagnostic systems are proposed in this paper, i.e., FWAFE-ANN and FWAFE-DNN. Experiments assess the effectiveness of the proposed methods on a dataset drawn from the Cleveland online heart disease database. The strength of the proposed methods is appraised in terms of accuracy, sensitivity, specificity, Matthews correlation coefficient (MCC), and the receiver operating characteristic (ROC) curve. Experimental outcomes confirm that the proposed models outperform eighteen previously proposed methods, whose accuracies range from 50.00% to 91.83%. Moreover, the proposed models compare favorably with other state-of-the-art machine learning techniques for heart disease diagnosis. The proposed systems can thus help physicians make accurate decisions when diagnosing heart disease.
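The evaluation metrics named in the abstract (accuracy, sensitivity, specificity, MCC) are all standard functions of the binary confusion matrix. As a minimal illustration of those definitions (not the authors' code, and with made-up counts), they can be computed as:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Illustrative counts only, not results from the paper:
acc, sens, spec, mcc = binary_metrics(tp=40, fp=5, tn=45, fn=10)
```

MCC is often preferred alongside accuracy because it stays informative when the positive and negative classes are imbalanced, which is common in clinical datasets.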
Smart devices are effective in helping people with impairments overcome their disabilities and improve their living standards. Braille is a popular communication method among visually impaired people. Touchscreen smart devices can take Braille input and instantly convert it into natural language. Most existing schemes require location-specific input, which is difficult for visually impaired users. In this study, a position-free, accessible touchscreen-based Braille input algorithm is designed and implemented for visually impaired people. It aims to place the least burden on the user, who is required only to tap the dots needed for a specific character. Users entered English Braille Grade 1 characters (a–z) through a newly designed application, yielding a dataset of 1258 images, of which 70% was used for training and 30% for validation. Classification was performed and thoroughly evaluated on this dataset using deep learning (DL) techniques, and the results were compared with classical machine learning techniques: Naïve Bayes (NB), Decision Trees (DT), Support Vector Machine (SVM), and k-Nearest Neighbors (KNN). We divided the multi-class problem into two categories, i.e., Category-A (a–m) and Category-B (n–z). Performance was evaluated using sensitivity, specificity, Positive Predictive Value (PPV), Negative Predictive Value (NPV), False Positive Rate (FPR), Total Accuracy (TA), and Area Under the Curve (AUC). The GoogLeNet model achieved the highest performance, followed by the Sequential model, SVM, DT, KNN, and NB. The results show that the proposed Braille input method for touchscreen devices is effective and that the deep learning method can predict the user's input with high accuracy.
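The 70%/30% training–validation split described above is typically done per class so that every Braille character is proportionally represented in both sets. A minimal stratified-split sketch, with hypothetical class counts rather than the paper's actual data pipeline, might look like:

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, train_frac=0.70, seed=42):
    """Split (sample, label) pairs into train/validation sets,
    preserving each class's proportion (70%/30% as in the paper)."""
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    rng = random.Random(seed)
    train, val = [], []
    for label, items in by_class.items():
        rng.shuffle(items)
        cut = int(round(len(items) * train_frac))
        train += [(s, label) for s in items[:cut]]
        val += [(s, label) for s in items[cut:]]
    return train, val

# Hypothetical example: 3 classes, 10 images each (not the real dataset)
samples = [f"img_{c}_{i}" for c in "abc" for i in range(10)]
labels = [c for c in "abc" for _ in range(10)]
train, val = stratified_split(samples, labels)
```

Stratification matters here because with 26 character classes in 1258 images, a naive random split could leave some rare characters underrepresented in the validation set.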