2022
DOI: 10.21203/rs.3.rs-1396136/v1
Preprint

Exploration Of Interpretability Techniques For Deep COVID-19 Classification Using Chest X-Ray Images

Abstract: The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging such as X-ray and Computed Tomography (CT) combined with the potential of Artificial Intelligence (AI) plays an essential role in supporting the medical staff in the diagnosis process. Thereby, five different deep learning models (ResNet18, ResNet34, InceptionV3, Incep…

Cited by 6 publications (5 citation statements)
References 60 publications
“…The images were then filled with a background composed of 10 translational and rotational lungs [24]. NONE [26]. The conventional data augmentation method included ±15° rotation, ±15% x-axis shift, ±15% y-axis shift, horizontal flipping, and 85%–115% scaling and shear transformation; the mixup parameter was set to 0.1 [9]. The original image is divided into 16 × 16 and 32 × 32 blocks to build two data sets [27]. All of the images were initially preprocessed to have the same size. To make the image size uniform throughout the dataset, each image was interpolated using bicubic interpolation.…”
Section: Paper
confidence: 99%
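The mixup setting quoted above (parameter 0.1) is easy to sketch in plain NumPy. The snippet below is a generic illustration of mixup with a Beta(0.1, 0.1) mixing coefficient, not the cited papers' exact pipeline; the batch shapes, seed, and function name are invented for the example.

```python
import numpy as np

def mixup_batch(images, labels, alpha=0.1, rng=None):
    """Mixup: blend each example with a randomly paired one in the
    batch, using a single Beta(alpha, alpha) mixing coefficient."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(images))   # random pairing within the batch
    mixed_x = lam * images + (1 - lam) * images[perm]
    mixed_y = lam * labels + (1 - lam) * labels[perm]
    return mixed_x, mixed_y, lam

# Toy batch: 4 "images" of 8x8 pixels, one-hot labels for 2 classes.
x = np.random.default_rng(1).random((4, 8, 8))
y = np.eye(2)[[0, 1, 0, 1]]
mx, my, lam = mixup_batch(x, y, alpha=0.1)
```

With alpha as small as 0.1 the Beta distribution is strongly U-shaped, so most mixed samples stay close to one of the two originals, which is why such small values act as a mild regularizer.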
“…Performance criteria by reference:
[8] AUC, Recall, Precision, F1-score, Accuracy
[24] Accuracy, Sensitivity, FPR, F1-score
[26] Accuracy, Sensitivity
[9] TP, TN, FP, FN, Accuracy, Sensitivity, Specificity, Precision, F1-score, Matthews Correlation Coefficient (MCC)
[27] F1-score, Recall, Precision, Specificity
[28] AUC, Recall, Precision, F1-score, Accuracy
[29] AUC, Recall, Precision, F1-score, Accuracy
[11] AUC, Sensitivity, Specificity
[12] Recall, Precision, F1-score
[13] AUC, Sensitivity, Specificity
[16] AUC, Recall, Precision, Accuracy
[21] AUC, Specificity, Precision, F1-score, Accuracy
…value, the optimal historical value is updated. At the same time, the optimal weight file for this generation of training is saved.…”
Section: Paper
confidence: 99%
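All of the criteria listed in this table derive from the four confusion-matrix counts (TP, TN, FP, FN). As a reference sketch (the function name and the example counts are illustrative, not taken from the cited papers):

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Common binary-classification criteria from raw confusion counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)             # a.k.a. recall / TPR
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    fpr         = fp / (fp + tn)             # false positive rate
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    # Matthews Correlation Coefficient: balanced even on skewed classes.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                fpr=fpr, f1=f1, mcc=mcc)

m = binary_metrics(tp=90, tn=80, fp=20, fn=10)
```

MCC is worth singling out: unlike accuracy or F1 it uses all four counts, which matters for COVID-19 datasets where positives are often heavily outnumbered by negatives.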
“…Interpretability methods by reference:
Grad-CAM [31,34,37,38,44,45,53,55,56,63,68,73,76,77,79,84,97]
Grad-CAM++ [31,34,40]
CAM [30,37,54,55,60,87,88,94]
LIME [14,41,68]
LRP [31]
…smallest. Next, for feature extraction (in Tables 4 and 5), most of the previous research has focused on the use of deep features, and the most widely used CNN architecture for feature extraction is ResNet.…”
Section: Interpretability Methods Papers
confidence: 99%
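Grad-CAM, the most common method in this list, weights each feature map by the gradient of the class score with respect to it. For the special case of a global-average-pooling head followed by a linear classifier, that gradient has a closed form, which the toy NumPy sketch below exploits; the feature maps, weights, and function name are invented for illustration and stand in for a real backbone's activations.

```python
import numpy as np

def grad_cam_gap(feature_maps, class_weights):
    """Grad-CAM for a GAP + linear head.

    The class score is s_c = sum_k w_ck * mean(A_k), so the gradient
    d s_c / d A_k is the constant w_ck / (H * W): the Grad-CAM channel
    weights alpha_k are just the classifier weights up to a constant.
    The heatmap is ReLU(sum_k alpha_k * A_k), rescaled for display."""
    k, h, w = feature_maps.shape
    alphas = class_weights / (h * w)                   # gradient-derived weights
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1]
    return cam

rng = np.random.default_rng(0)
A = rng.random((3, 4, 4))        # 3 channels of 4x4 feature maps (toy)
w = np.array([1.0, -0.5, 0.2])   # linear-head weights for the target class
heatmap = grad_cam_gap(A, w)
```

In a real network the alphas are obtained by backpropagating the class score to the last convolutional layer and average-pooling the gradients; the closed form above is only valid because the toy head is GAP plus a linear layer.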
“…[4,15–17,19,20,22,24–28,34,35,37–39,41,43–46,48,49,52,55,58,61,63,69,74,78,80,81,83,85,86,89,90,92,93,95,99–101,105,107,111,113–115]. Multi-class [14,18,21,…”
confidence: 99%
“…Example heatmaps obtained from a variety of techniques (top) [70] and from Grad-CAM (bottom) [67]. Each technique may provide different evaluations of influential regions, both in terms of relative importance and key locations.…”
Section: Influential Region Identification
confidence: 99%