The COVID-19 outbreak began in December 2019 and has profoundly affected our lives since then. More than three million lives have been claimed by this newest member of the coronavirus family. With the emergence of continuously mutating variants of the virus, early and reliable diagnosis remains indispensable. Although the primary diagnostic technique is the PCR test, non-contact methods based on chest radiographs and CT scans are often preferred. Artificial intelligence, in this regard, plays an essential role in the early and accurate detection of COVID-19 from pulmonary images. In this research, transfer learning with fine-tuning was used for the detection and classification of COVID-19. Four pre-trained models, i.e., VGG16, DenseNet-121, ResNet-50, and MobileNet, were employed. These deep neural networks were trained on a dataset (available on Kaggle) of 7232 chest X-ray images (COVID-19 and normal). An indigenous dataset of 450 chest X-ray images of Pakistani patients was collected and used for testing and prediction. Standard evaluation metrics, e.g., recall, specificity, F1-score, precision, loss curves, and confusion matrices, were computed to validate the accuracy of the models. The achieved accuracies of VGG16, ResNet-50, DenseNet-121, and MobileNet were 83.27%, 92.48%, 96.49%, and 96.48%, respectively. Intermediate activations were visualized to display the feature maps that show how an input image is decomposed by the network's filters. Finally, the Grad-CAM technique was applied to create class-specific heatmaps that highlight the features extracted from the X-ray images. Various optimizers were used for error minimization. DenseNet-121 outperformed the other three models in terms of both accuracy and prediction.
COVID-19 is an unpredictable, evolving disease that requires continuous advances in its detection and classification, which can benefit the biomedical field. This research covers two dimensions, detection and classification, using a self-proposed two-stage learning detector. Detection of different COVID-19 variants is performed on CT-scan and X-ray images of affected lungs, and classification of the variants is then carried out. A dataset of 27000 indigenous images was used for detection and classification. Moreover, an in-depth survey and comparison is carried out against the state-of-the-art YOLOv5 single-stage detector and the Faster R-CNN two-stage detector. The self-proposed two-stage detector achieved accuracies of 91.66% and 87.9% for detection and classification, respectively, compared with 92.8% and 87.175% for YOLOv5 and 94.8% and 87% for Faster R-CNN. Training was performed on an Nvidia T4 (16 GB GDDR6). The self-proposed MNN-2 surpassed YOLOv5 and Faster R-CNN in real-time video analysis, achieving the best real-time rate at 30 FPS over a 72-minute video.