2021
DOI: 10.32604/cmc.2021.018671

Lightweight Transfer Learning Models for Ultrasound-Guided Classification of COVID-19 Patients

Abstract: Lightweight deep convolutional neural networks (CNNs) offer a fast and accurate solution for image-guided diagnosis of COVID-19 patients. Recently, the advantages of portable ultrasound (US) imaging, such as its simplicity and safe procedures, have attracted many radiologists to scanning suspected COVID-19 cases. In this paper, a new framework of lightweight deep learning classifiers, namely COVID-LWNet, is proposed to identify COVID-19 and pneumonia abnormalities in US images. Compared to trad…

Cited by 9 publications (6 citation statements)
References 47 publications
“…Therefore, AI studies often resize input images to a widely used common dimension across datasets. Most of the reviewed articles in this paper, for example, (Ebadi et al, 2021; Rojas-Azabache et al, 2021; Nabalamba, 2022; Quentin Muller et al, 2020; Karar et al, 2021a; Karnes et al, 2021; Perera et al, 2021), also used the common image dimension of 224×224 pixels, as well-known computer vision deep learning models are typically designed to take 224×224-pixel images as input. However, other image dimensions are also found for ultrasound COVID-19 studies.…”
Section: Image Resizing (mentioning)
confidence: 98%
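The 224×224 resizing step described above can be sketched with a minimal nearest-neighbour implementation in NumPy. This is illustrative only; the cited studies typically rely on library resizing with bilinear or bicubic interpolation.

```python
import numpy as np

def resize_nearest(img: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D grayscale image to `size`."""
    h, w = img.shape
    th, tw = size
    rows = np.arange(th) * h // th   # source row for each target row
    cols = np.arange(tw) * w // tw   # source column for each target column
    return img[rows[:, None], cols]

# A simulated 600x800 ultrasound frame, resized for a 224x224-input CNN
frame = np.random.rand(600, 800).astype(np.float32)
resized = resize_nearest(frame)
print(resized.shape)  # (224, 224)
```

Nearest-neighbour indexing keeps the sketch dependency-free; in practice an anti-aliasing interpolation filter preserves the fine speckle texture of ultrasound frames better.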
“…Therefore, AI studies often resize input images to a widely used common dimension across datasets. Most of the reviewed articles in this paper, for example, [37,49,50,52–55], also used the common image dimension of 224×224 pixels, as well-known computer vision deep learning models are typically designed to take 224×224-pixel images as input. However, other image dimensions are also found for ultrasound COVID-19 studies.…”
Section: Image Resizing (mentioning)
confidence: 99%
“…To address the need for a less complex, power-efficient, and less expensive solution for screening lung ultrasound images and monitoring lung status, Hou et al [75] introduced a Saab transform-based subspace learning model to find the A-line, B-line, and consolidation in lung ultrasound data. Karar et al [53] introduced a lightweight deep model, COVID-LWNet, to build an efficient CNN-based system for classifying lung ultrasound images into COVID-19, bacterial pneumonia, and healthy classes. In addition, Karar et al [56] proposed a generative adversarial network (GAN) to perform the same task on ultrasound images.…”
Section: Studies (mentioning)
confidence: 99%
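Lightweight backbones like those used in COVID-LWNet (MobileNet, ShuffleNet, MnasNet) owe much of their small footprint to depthwise-separable convolutions. A back-of-the-envelope parameter comparison (a sketch for intuition, not figures taken from the cited papers):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard k x k convolution layer (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Example layer: 128 input channels, 256 output channels, 3x3 kernel
std = conv_params(128, 256, 3)           # 294912
sep = dw_separable_params(128, 256, 3)   # 33920
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this example layer the separable variant uses roughly 8.7× fewer parameters, which is why such blocks suit the fast, portable US screening scenario the citing text describes.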
“…We also describe different types of AI models used by state-of-the-art US COVID-19 studies in the following sections.

[59]: SqueezeNet, MobileNetV2 ✗
Al-Jumaili et al [68]: ResNet-18, ResNet-50, NASNetMobile, GoogleNet, SVM
Al-Zogbi et al [70]: DenseNet ✗
Almeida et al [71]: MobileNet ✗
Arntfield et al [38]: Xception ✗
Awasthi et al [72]: MiniCOVIDNet ✗
Azimi et al [73]: InceptionV3, RNN ✗
Barros et al [69]: Xception-LSTM ✗
Born et al [12]: VGG-16 ✗
Born et al [74]: VGG-16 ✗
Born et al [13]: VGG-16 ✗
Carrer et al [16]: Hidden Markov Model, Viterbi Algorithm, SVM ✗
Che et al [17]: Multi-scale Residual CNN ✗
Chen et al [40]: 2-layer NN, SVM, Decision tree
Diaz-Escobar et al [67]: InceptionV3, VGG-19, ResNet-50, Xception ✗
Dastider et al [18]: Autoencoder-based Hybrid CNN-LSTM ✗
Durrani et al [35]: Reg-STN ✗
Ebadi et al [52]: Kinetics-I3D ✗
Frank et al [19]: ResNet-18, MobileNetV2, DeepLabV3++ ✗
Gare et al [15]: Reverse Transfer Learning on UNet ✗
Hou et al [75]: Saab transform-based SSL, CNN ✗
Huang et al [41]: Non-local channel attention ResNet ✗
Karar et al [53]: MobileNet, ShuffleNet, MENet, MnasNet ✗
Karar et al [56]: A semi-supervised GAN, a modified AC-GAN ✗
Karnes et al [54]: Few-shot learning using MobileNet ✗
Khan et al [76]: CNN ✗
La Salvia et al [42]: ResNet-18, ResNet-50 ✗
Liu et al [48]: Multi-symptom multi-label (MSML) network ✗
MacLean et al [77]: COVID-Net US ✗
MacLean et al [78]: ResNet ✗
Mento et al [44]: STN, U-Net, DeepLabV3+ ✗
Muhammad and Hossain [58]: CNN ✗
Nabalamba [49]: VGG-16, VGG-19, ResNet ✗
Panicker et al [36]: LUSNet (a U-Net like network for ultrasound...…”
Section: AI Models (mentioning)
confidence: 99%