The global epidemic caused by COVID-19 has had a severe impact on human health. The virus has wreaked havoc since its declaration as a worldwide pandemic and has affected a growing number of countries around the world. Recently, a substantial amount of work has been done by doctors, scientists, and many others working on the front lines to battle the effects of the spreading virus. The integration of artificial intelligence, specifically deep- and machine-learning applications, in the health sector has contributed substantially to the fight against COVID-19 by providing a modern, innovative approach to detecting, diagnosing, treating, and preventing the virus. In this proposed work, we focus mainly on the role of speech-signal and/or image processing in detecting the presence of COVID-19. Three types of experiments have been conducted, utilizing speech-based, image-based, and combined speech- and image-based models. Long short-term memory (LSTM) has been utilized for speech classification of the patient’s cough, voice, and breathing, obtaining an accuracy that exceeds 98%. Moreover, the CNN models VGG16, VGG19, DenseNet201, ResNet50, InceptionV3, InceptionResNetV2, and Xception have been benchmarked for the classification of chest X-ray images. The VGG16 model outperforms all other CNN models, achieving an accuracy of 85.25% without fine-tuning and 89.64% after fine-tuning. Furthermore, the combined speech–image-based model has been evaluated using the same seven models, attaining an accuracy of 82.22% with InceptionResNetV2. Accordingly, the combined speech–image-based model is unnecessary for diagnosis purposes, since the speech-based and image-based models each achieve higher accuracy than the combined model.
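The abstract does not give the network configuration used for the speech classifier; purely as an illustrative sketch, the LSTM recurrence at the core of such a model can be written out in NumPy. The dimensions, random weights, and MFCC-style feature frames below are hypothetical placeholders, not the paper’s actual settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias;
    the four gate pre-activations are stacked along the first axis."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2*H])       # forget gate
    o = sigmoid(z[2*H:3*H])     # output gate
    g = np.tanh(z[3*H:])        # candidate cell state
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H, T = 13, 8, 20             # e.g. 13 MFCC coefficients per frame, 20 frames
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    frame = rng.normal(size=D)  # stand-in for one acoustic feature frame
    h, c = lstm_step(frame, h, c, W, U, b)
```

In a classifier of this kind, the final hidden state `h` would then feed a dense softmax layer producing the COVID/non-COVID decision.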
Classification is one of the most popular tasks in machine learning and is involved in broad practical applications such as decision making, sentiment analysis, and pattern recognition. It involves the assignment of a class/label to an instance and is based on the assumption that each instance can belong to only one class. This assumption does not hold, especially for indexing problems (when an item, such as a movie, can belong to more than one category) or for complex items that reflect more than one aspect; e.g., a product review outlining advantages and disadvantages may be positive and negative at the same time. To address this problem, multi-label classification has been increasingly used in recent years, transforming the data to allow an instance to have more than one label; the nature of learning, however, is the same as in traditional learning, i.e. learning to discriminate one class from the others, and a classifier still produces a single output (although that output may contain a set of labels). In this paper we propose a fundamentally different type of classification in which the membership of an instance in all classes (labels) is judged by a multiple-input multiple-output classifier through generative multi-task learning. An experimental study is conducted on five UCI data sets to show empirically that an instance can belong to more than one class, by using the theory of fuzzy logic and checking the extent to which an instance belongs to each single class, i.e. the fuzzy membership degree. The paper positions new research directions on multi-task classification in the context of both supervised learning and semi-supervised learning.
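The abstract does not define the membership function used; as a minimal sketch of the fuzzy-membership idea, one simple choice is a distance-based degree that is 1 at a class centre and decays with distance, with every class whose degree clears a threshold assigned to the instance. The centres, the Cauchy-type function, and the threshold below are illustrative assumptions, not the paper’s method:

```python
import math

def membership(x, centre, scale=1.0):
    """Cauchy-type fuzzy membership: 1 at the class centre, decaying with distance."""
    d = math.dist(x, centre)
    return 1.0 / (1.0 + (d / scale) ** 2)

# hypothetical class centres in a 2-D feature space
centres = {"A": (1.0, 0.0), "B": (-1.0, 0.0), "C": (5.0, 5.0)}
x = (0.0, 0.0)  # an instance lying between classes A and B

degrees = {c: membership(x, ctr) for c, ctr in centres.items()}
labels = {c for c, d in degrees.items() if d >= 0.3}  # keep every sufficiently fitting class
# labels == {"A", "B"}: the instance belongs to two classes at once
```

Unlike softmax probabilities, these degrees need not sum to 1, which is what lets an instance hold full or partial membership in several classes simultaneously.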
Cracks on surface walls may imply that a building has problems with its structural integrity. The evaluation of such defects needs to be accurate to determine the condition of the building. Currently, the evaluation of surface cracks is conducted through visual inspection, which can result in subjective judgements on the classification and severity of a surface crack; this poses a danger to customers and the environment, as the crack is not analysed objectively. Previous researchers have applied numerous classification methods, but their work typically stops at classifying cracks, which is of limited use to professionals such as surveyors. We propose building a hybrid web application that can classify the condition of a surface from images using a trained Hierarchical Convolutional Neural Network (H-CNN), which can also determine whether the image under inspection shows a surface at all. For continuous improvement of the H-CNN's accuracy, the application will have a feedback mechanism that lets users send an email query on incorrectly classified images, which will be used to retrain the H-CNN.
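The two-stage decision the abstract describes, first rejecting non-surface images and only then grading the surface, can be sketched as a simple control flow. The stand-in predicates below are hypothetical placeholders; in the proposed system both stages would be supplied by the trained H-CNN:

```python
def hierarchical_classify(image, is_surface, crack_condition):
    """Two-stage hierarchical classification: stage one filters out
    non-surface images; stage two grades the condition of confirmed surfaces."""
    if not is_surface(image):
        return "not a surface"
    return crack_condition(image)

# illustrative stand-ins for the two model stages
def demo_is_surface(img):
    return img.get("kind") == "surface"

def demo_condition(img):
    return "severe crack" if img.get("crack_width_mm", 0) > 2 else "minor crack"

result = hierarchical_classify(
    {"kind": "surface", "crack_width_mm": 3}, demo_is_surface, demo_condition
)
# result == "severe crack"; a non-surface image short-circuits at stage one
```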