2021
DOI: 10.3390/s21165657
Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers

Abstract: Problem: An application of Explainable Artificial Intelligence methods to COVID CT-Scan classifiers is presented. Motivation: Classifiers may be using spurious artifacts in dataset images to achieve high performance, and such explainability techniques can help identify this issue. Aim: For this purpose, several approaches were used in tandem in order to create a complete overview of the classifications. Methodology: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct Gradie…
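The abstract lists GradCAM, LIME, RISE, Squaregrid, and direct gradient analysis as the explanation techniques applied in tandem. As a rough illustration of that workflow, the sketch below runs LIME over a single CT slice with the reference `lime` package; the model file, DenseNet-style preprocessing, and 224x224 input size are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical binary COVID / non-COVID CT-scan classifier (assumed Keras
# model file; not the authors' actual model).
model = tf.keras.models.load_model("covid_ct_classifier.h5")

def predict_fn(images):
    """LIME passes batches of perturbed RGB images; return class probabilities."""
    batch = tf.keras.applications.densenet.preprocess_input(
        np.asarray(images, dtype=np.float32)
    )
    return model.predict(batch, verbose=0)

ct_slice = np.load("ct_slice.npy")  # assumed shape (224, 224, 3), uint8

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    ct_slice, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)

# Keep only the superpixels that push the prediction towards the top class.
# If they sit on image borders, burned-in text, or other artifacts rather
# than on lung tissue, the classifier is likely exploiting dataset bias.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(image / 255.0, mask)
```

In practice the same slice would also be passed through the other listed methods so that their saliency regions can be compared against each other and against the lung anatomy.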

Cited by 27 publications (10 citation statements)
References 20 publications
“…The Gradient-weighted Class Activation Mapping (Grad-CAM) method was used to improve the interpretability of our trained DenseNet models, and visually contextualize important features in the image data that were used for model predictions [20]. Grad-CAM is a widely used technique for the visual explanation of deep-learning algorithms [21, 22]. With this approach, heat maps were generated from 8×8 feature maps to highlight regions of each image that were the most important for model prediction (Fig 4).…”
Section: Methods (mentioning)
confidence: 99%
“…Rahman et al showed that many COVID-19 diagnostic models are vulnerable to attacks by adversarial examples [20]. Palatnik de Sousa et al [21] also demonstrated the utility of adding random colored artifacts to CT images to identify which model architectures are most robust to such perturbations. This illustrates the importance of robust validation of models prior to their integration within clinical settings.…”
Section: Identification (mentioning)
confidence: 99%
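The robustness probe mentioned above (adding random colored artifacts to CT images and checking whether predictions change) can be prototyped in a few lines of NumPy. The sketch below is a generic, hypothetical version of that idea; the patch size, placement, and flip-rate metric are illustrative choices, not the protocol of the cited studies.

```python
import numpy as np

def add_colored_artifact(image, patch_size=16, rng=None):
    """Stamp one random solid-colour square onto a copy of an RGB uint8 image."""
    rng = rng if rng is not None else np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    y = int(rng.integers(0, h - patch_size))
    x = int(rng.integers(0, w - patch_size))
    out[y:y + patch_size, x:x + patch_size] = rng.integers(0, 256, size=3)
    return out

def artifact_sensitivity(predict_fn, images, trials=20, rng=None):
    """Fraction of (image, artifact) pairs whose predicted label flips.

    `predict_fn` maps a batch of images to class probabilities; a high
    flip rate suggests the classifier reacts to clinically irrelevant marks.
    """
    rng = rng if rng is not None else np.random.default_rng()
    flips, total = 0, 0
    for image in images:
        clean_label = int(np.argmax(predict_fn(image[np.newaxis])[0]))
        for _ in range(trials):
            perturbed = add_colored_artifact(image, rng=rng)
            noisy_label = int(np.argmax(predict_fn(perturbed[np.newaxis])[0]))
            flips += int(noisy_label != clean_label)
            total += 1
    return flips / total
```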
“…Data Augmentation; Latent Space Interpretation: intrinsic latent space guidance [15]-[18], [20], [21], post-hoc PCA-based [19]; Outcome Prediction…” (fragment of a strategy/category/technique table)
Section: Strategy Category Technique (mentioning)
confidence: 99%
“…Their dataset included 166 individuals' CT scans; 72 of them were COVID positive, and 35 were interstitial but COVID negative. Similarly, Palatnik et al [37] identified the issue of classifiers using false or doubtful artifacts in dataset images; they suggested a new AI-based technique for COVID-19 classification on the COVID-CT dataset [38].…”
Section: Viral Pneumonia / Normal (mentioning)
confidence: 99%