Chest diseases can be dangerous and deadly. They include many chest conditions such as pneumonia, asthma, edema, and, most recently, COVID-19. COVID-19 shares many symptoms with pneumonia, such as shortness of breath and chest tightness, which makes differentiating COVID-19 from other chest diseases a challenging task. Several related studies have proposed computer-aided systems for single-class COVID-19 detection, which may be misleading given the similar symptoms of other chest diseases. This paper proposes a framework for the detection of 15 types of chest disease, including COVID-19, from the chest X-ray modality. Classification is performed in two ways in the proposed framework. First, a deep learning-based convolutional neural network (CNN) architecture with a softmax classifier is proposed. Second, transfer learning is applied: deep features are extracted from the fully connected layer of the proposed CNN and fed to classical machine learning (ML) classification methods. The proposed framework improves the accuracy of COVID-19 detection and increases the prediction rates for other chest diseases. The experimental results show that, compared to other state-of-the-art models for diagnosing COVID-19 and other chest diseases, the proposed framework is more robust, and the results are promising.
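The second stage described above (deep features handed off to a classical classifier) follows a standard transfer-learning pattern, sketched below. The linear feature extractor and the nearest-centroid classifier are illustrative stand-ins for the paper's CNN and ML methods, which are not specified here; all names and data are hypothetical.

```python
import numpy as np

def extract_deep_features(images, weights):
    """Stand-in for the CNN's fully connected layer: a linear map
    followed by ReLU. In a real pipeline `weights` would come from
    the trained network; here they are fixed for illustration."""
    return np.maximum(images @ weights, 0.0)

def fit_nearest_centroid(features, labels):
    """A minimal classical classifier: one centroid per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_nearest_centroid(model, feature):
    """Assign the class whose centroid is closest in feature space."""
    return min(model, key=lambda c: np.linalg.norm(feature - model[c]))

# Toy data: 2-pixel "images", two classes.
images = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
weights = np.eye(2)  # identity mapping, for illustration only

feats = extract_deep_features(images, weights)
model = fit_nearest_centroid(feats, labels)
query = extract_deep_features(np.array([[1.0, 0.2]]), weights)[0]
print(predict_nearest_centroid(model, query))  # prints 0 (closer to class-0 centroid)
```

The point of the pattern is that the CNN is trained once as a feature extractor, while the cheap classical classifier on top can be retrained or swapped freely.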
In the recent era, various diseases have severely affected the lifestyle of individuals, especially adults. Among these, bone diseases, including knee osteoarthritis (KOA), have a great impact on quality of life. KOA is a knee joint disorder caused mainly by the degradation of the articular cartilage between the femur and tibia, producing severe joint pain, effusion, restricted joint movement, and gait anomalies. To address these issues, this study presents a novel method for the early detection of KOA using deep learning-based feature extraction and classification. Firstly, the input X-ray images are preprocessed, and the Region of Interest (ROI) is extracted through segmentation. Secondly, features are extracted from the preprocessed X-ray images containing the knee joint space width using hybrid feature descriptors, namely a Convolutional Neural Network (CNN) combined with Local Binary Patterns (LBP) and a CNN combined with the Histogram of Oriented Gradients (HOG). Low-level features are computed by HOG, while texture features are computed by the LBP descriptor. Lastly, multi-class classifiers, that is, Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbour (KNN), are used to classify KOA according to the Kellgren–Lawrence (KL) system, which consists of Grade I, Grade II, Grade III, and Grade IV. Experimental evaluation is performed on various combinations of the proposed framework. The experimental results show that the HOG feature descriptor provides approximately 97% accuracy for the early detection and classification of KOA across all four KL grades.
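As an illustration of the texture features involved, a basic radius-1 LBP code compares each of the eight neighbours of a pixel with the centre value and packs the comparison bits into a byte. The sketch below assumes a clockwise bit order starting at the top-left neighbour, which is one common convention (the study does not specify its LBP variant):

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code for the centre of a 3x3 patch: each neighbour
    contributes a 1-bit if it is >= the centre pixel, clockwise from
    the top-left corner (the bit order is a convention, not canonical)."""
    centre = patch[1, 1]
    neighbours = [(0, 0), (0, 1), (0, 2), (1, 2),
                  (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(neighbours):
        if patch[r, c] >= centre:
            code |= 1 << bit
    return code

patch = np.array([[5, 5, 5],
                  [5, 4, 3],
                  [3, 3, 3]])
print(lbp_code(patch))  # prints 135: bits 0-2 (top row) and bit 7 (left) set
```

A texture descriptor for a whole ROI is then typically the histogram of these codes over all interior pixels, which is what gets fed to the classifier.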
Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted inputs cause DL models to misclassify instances that humans would consider benign. Such adversarial threats have also been demonstrated in practical, physical-world scenarios. Adversarial attacks and defenses, and the reliability of machine learning more broadly, have therefore drawn growing interest and have been a hot topic of research in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, combining adversarial training with a feature fusion strategy that preserves correct classification. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, a challenging state-of-the-art task. Results obtained on retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
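Speckle noise is multiplicative (x' = x + x·n), so an attack in this family scales each pixel by a noise factor. The sketch below shows such a perturbation together with the simplest form of adversarial training: mixing perturbed copies into the training set. The function names and the noise strength are illustrative assumptions, not the paper's exact attack or defense:

```python
import numpy as np

def speckle_perturb(image, noise, strength=0.1):
    """Multiplicative speckle perturbation: x' = x * (1 + strength * n),
    clipped back to the valid [0, 1] intensity range."""
    return np.clip(image * (1.0 + strength * noise), 0.0, 1.0)

def augment_for_adversarial_training(images, noise, strength=0.1):
    """Adversarial training in its simplest form: append perturbed
    copies of the clean images to the training set."""
    perturbed = speckle_perturb(images, noise, strength)
    return np.concatenate([images, perturbed], axis=0)

clean = np.array([[0.5, 0.9],
                  [0.2, 0.4]])
noise = np.ones_like(clean)  # fixed noise for reproducibility; attacks optimise it
train = augment_for_adversarial_training(clean, noise, strength=0.2)
print(train.shape)  # prints (4, 2): clean rows followed by perturbed rows
```

In a real attack the noise field would be optimised against the model rather than fixed; the defense then retrains on the resulting perturbed images so the decision boundary becomes insensitive to them.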
Glaucoma is an eye disease caused by increased fluid pressure in the eye, which damages the optic nerve and causes partial or complete vision loss. Because glaucoma progresses slowly and its symptoms appear only at later stages, detailed screening and analysis of retinal images is required to avoid vision loss. This study aims to detect glaucoma at early stages with the help of deep learning-based feature extraction. Retinal fundus images are utilized for the training and testing of the proposed model. In the first step, the images are preprocessed, and then the region of interest (ROI) is extracted through segmentation. Next, features of the optic disc (OD) are extracted from the images containing the optic cup (OC) utilizing hybrid feature descriptors, i.e., a convolutional neural network (CNN), local binary patterns (LBP), the histogram of oriented gradients (HOG), and speeded up robust features (SURF). Low-level features are extracted using HOG, texture features using the LBP and SURF descriptors, and high-level features using the CNN. Additionally, a feature selection and ranking technique, the minimum-redundancy maximum-relevance (mRMR) method, is employed to select the most representative features. Finally, multi-class classifiers, i.e., support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN), are employed to classify fundus images as healthy or diseased. To assess the performance of the proposed system, various experiments were performed using combinations of the aforementioned algorithms. The results show that the proposed model based on the RF algorithm with HOG, CNN, LBP, and SURF feature descriptors achieves up to 99% accuracy on benchmark datasets and 98.8% under k-fold cross-validation for the early detection of glaucoma.
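The hybrid-descriptor step amounts to early feature fusion: each descriptor's vector is normalised and the vectors are concatenated into one representation before selection and classification. A minimal sketch, assuming L2 normalisation per descriptor (the study does not specify its fusion details):

```python
import numpy as np

def fuse_features(*descriptors):
    """Early fusion: L2-normalise each descriptor vector so no single
    descriptor dominates by scale, then concatenate them into one
    feature vector for the downstream classifier."""
    normed = [d / (np.linalg.norm(d) + 1e-12) for d in descriptors]
    return np.concatenate(normed)

hog = np.array([3.0, 4.0])  # toy HOG vector (real ones have hundreds of bins)
lbp = np.array([1.0, 0.0])  # toy LBP histogram
fused = fuse_features(hog, lbp)
print(fused)  # prints [0.6 0.8 1.  0. ]
```

A selector such as mRMR would then rank the fused dimensions by relevance to the class label minus redundancy with already-selected dimensions, keeping only the top-ranked subset.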