Background and Objective: The novel coronavirus disease (COVID-19) originated in Wuhan, China, in December 2019 and has since spread across the world. It has so far infected around 1.8 million people and claimed approximately 114,698 lives. As the number of cases rises rapidly, most countries are facing shortages of testing kits and resources. The limited supply of testing kits and the increasing number of daily cases motivated us to develop a deep learning model that can aid radiologists and clinicians in detecting COVID-19 cases from chest X-rays.

Methods: In this study, we propose CoroNet, a deep convolutional neural network model that automatically detects COVID-19 infection from chest X-ray images. The proposed model is based on the Xception architecture, pre-trained on the ImageNet dataset and trained end-to-end on a dataset prepared by collecting COVID-19 and other chest pneumonia X-ray images from two publicly available databases.

Results: CoroNet was trained and tested on the prepared dataset. The experimental results show that the proposed model achieved an overall accuracy of 89.6%; more importantly, the precision and recall for COVID-19 cases are 93% and 98.2% in the 4-class setting (COVID-19 vs. bacterial pneumonia vs. viral pneumonia vs. normal). For 3-class classification (COVID-19 vs. pneumonia vs. normal), the proposed model produced a classification accuracy of 95%. These preliminary results look promising and can be further improved as more training data becomes available.

Conclusion: CoroNet achieved promising results on a small prepared dataset, which indicates that, given more data, the proposed model can achieve better results with minimal pre-processing. Overall, the proposed model substantially advances current radiology-based methodology, and during the COVID-19 pandemic it can be a very helpful tool for clinical practitioners and radiologists, aiding them in the diagnosis, quantification and follow-up of COVID-19 cases.
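The abstract describes the model only at a high level (an Xception backbone pre-trained on ImageNet and fine-tuned end-to-end for a 4-class chest X-ray task). The sketch below illustrates that setup, assuming TensorFlow/Keras; the classification head, dropout rate, input size and optimizer settings are illustrative assumptions, not the published CoroNet configuration.

```python
# Minimal sketch of an Xception-based transfer-learning classifier.
# Assumes TensorFlow/Keras; hyperparameters are illustrative, not CoroNet's.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # COVID-19, bacterial pneumonia, viral pneumonia, normal

# Xception backbone pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.Xception(
    weights="imagenet",
    include_top=False,
    input_shape=(299, 299, 3),
)
base.trainable = True  # fine-tune end-to-end, as described in the abstract

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                      # illustrative regularisation
    layers.Dense(256, activation="relu"),     # assumed head size
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # data pipeline not shown
```

For the 3-class experiment reported in the abstract, only NUM_CLASSES and the label set would change; the backbone and training procedure stay the same.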
In this paper, we propose a new software tool called DALES that extracts semantic information from multi-view videos based on the analysis of their visual content. Our system is fully automatic and well suited to multi-camera environments. Once the multi-view video sequences are loaded into DALES, our software performs detection, counting, and segmentation of the visual objects evolving in the provided video streams. These objects of interest are then labelled, and the related frames are annotated with the corresponding semantic content. Moreover, a textual script is automatically generated from the video annotations. DALES shows excellent performance in terms of accuracy and computational speed and is robustly designed to ensure view synchronization.
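The abstract outlines the DALES workflow (per-view detection and segmentation, labelling, frame annotation, and generation of a textual script) without implementation detail. The following is a minimal sketch of how such a pipeline could be wired together, assuming OpenCV for frame decoding and a hypothetical detect_objects callback; it is not the authors' implementation.

```python
# Illustrative multi-view annotation pipeline in the spirit of DALES.
# The detect_objects interface and the annotation format are assumptions.
import cv2  # OpenCV for video decoding


def annotate_views(video_paths, detect_objects):
    """detect_objects(frame) -> list of (label, bbox) is an assumed interface."""
    captures = [cv2.VideoCapture(p) for p in video_paths]
    script_lines = []
    frame_idx = 0
    while True:
        reads = [cap.read() for cap in captures]
        if not all(ok for ok, _ in reads):
            break  # stop when any view runs out of frames (views are synchronized)
        for view_id, (_, frame) in enumerate(reads):
            for label, bbox in detect_objects(frame):
                script_lines.append(
                    f"frame {frame_idx}, view {view_id}: {label} at {bbox}"
                )
        frame_idx += 1
    for cap in captures:
        cap.release()
    return "\n".join(script_lines)  # the textual annotation script
```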