2020
DOI: 10.48550/arxiv.2010.00672
Preprint
Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

Abstract: As an emerging field in Machine Learning, Explainable AI (XAI) has been offering remarkable performance in interpreting the decisions made by Convolutional Neural Networks (CNNs). To achieve visual explanations for CNNs, methods based on class activation mapping and randomized input sampling have gained great popularity. However, the attribution methods based on these techniques provide lower-resolution and blurry explanation maps that limit their explanation power. To circumvent this issue, visualization based…
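The "randomized input sampling" family of methods the abstract refers to (e.g. RISE) can be sketched roughly as follows. This is a minimal illustration under assumed names (`rise_saliency`, a callable `model` returning a class score), not the paper's implementation:

```python
import numpy as np

def rise_saliency(model, image, n_masks=500, grid=7, p=0.5, seed=0):
    """Sketch of RISE-style attribution: probe the model with randomly
    masked copies of the image and average the masks, weighted by the
    class score each masked input receives."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    total = 0.0
    for _ in range(n_masks):
        # Coarse binary grid, upsampled to image size -- the source of the
        # blurry, low-resolution maps the abstract criticizes.
        coarse = (rng.random((grid, grid)) < p).astype(float)
        cell = (h // grid + 1, w // grid + 1)
        mask = np.kron(coarse, np.ones(cell))[:h, :w]
        score = model(image * mask)
        saliency += score * mask
        total += score
    return saliency / max(total, 1e-8)
```

Because each mask is weighted by the score it produces, pixels that consistently co-occur with high-confidence predictions accumulate the most attribution.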

Cited by 4 publications (10 citation statements)
References 28 publications
“…Two pre-trained models, a shallow VGG16 (with a test accuracy of 87.18%) and a residual ResNet-50 network (with 87.96% test accuracy), are directly loaded from the TorchRay library [10] to replicate the original experimentation setup. As it was reported in [5] that SISE meets or outperforms most of the state-of-the-art XAI methods like Grad-CAM [14], RISE [10] and Score-CAM [16], we restrict our comparisons only with Extremal Perturbation [11] (as it is one of the sophisticated perturbation-based methods) and SISE. 1.…”
Section: Results (confidence: 99%)
“…Table 1 shows the benchmark evaluation of Ada-SISE concerning various metrics and their execution time. As the depicted results are achieved through the same experimental setup as SISE paper, the readers can refer to [5] to infer further head-to-head comparison of Ada-SISE with other state-ofthe-art methods. Energy-Based Pointing Game (EBPG) [16] and Bbox [19] use the ground-truth annotations available to determine the precision of an XAI algorithm.…”
Section: Benchmark Analysis (confidence: 99%)
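EBPG, as cited above, scores an explanation by how much of its energy falls inside the ground-truth annotation. A minimal sketch under that assumed definition (the function name and the bounding-box convention are illustrative, not taken from the metric's paper):

```python
import numpy as np

def ebpg(saliency, bbox):
    """Energy-Based Pointing Game sketch: the fraction of total positive
    saliency energy inside the ground-truth box (row0, row1, col0, col1),
    half-open intervals. Higher is better."""
    s = np.clip(saliency, 0.0, None)
    r0, r1, c0, c1 = bbox
    total = s.sum()
    return float(s[r0:r1, c0:c1].sum() / total) if total > 0 else 0.0
```

A perfectly focused map scores 1.0, while a uniform map scores only the box's share of the image area.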
“…Ablation-CAM (Ramaswamy et al 2020) and Score-CAM (Wang et al 2020) have been developed to overcome these drawbacks. Despite the strength of the CAM-based methods in capturing the features extracted in CNNs, the lack of localization information in the coarse high-level feature maps limits such methods' performance by producing blurry explanations (Sattarzadeh et al 2020).…”
Section: Attribution Methods (confidence: 99%)
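The low-resolution limitation described in that quote follows from the generic CAM recipe: a class-weighted sum of the final coarse feature maps, upsampled to image size. A sketch of that generic form (not any cited paper's code; nearest-neighbour upsampling with assumed divisible sizes):

```python
import numpy as np

def cam(feature_maps, weights, out_size):
    """Generic class-activation-map sketch: weight the (k, h, w) stack of
    final conv feature maps by per-class weights, rectify, then upsample.
    The coarse (h, w) grid is why CAM explanations look blocky/blurry."""
    coarse = np.clip(np.tensordot(weights, feature_maps, axes=1), 0.0, None)
    H, W = out_size
    h, w = coarse.shape
    # Nearest-neighbour upsample (assumes H % h == 0 and W % w == 0).
    rows = np.repeat(np.arange(h), H // h)
    cols = np.repeat(np.arange(w), W // w)
    return coarse[rows][:, cols]
```

Every pixel inside one coarse cell receives the same value, which is exactly the missing localization the citation points out.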
“…Those superpixels come from a previous perturbation step. With visualization algorithms like CAM (Class Activation Mapping) [6], SISE (Semantic Input Sampling for Explanations) [7], Saliency Map [8], and so on, the output will show the user a heatmap on the original image.…”
Section: Introduction (confidence: 99%)
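The "heatmap on the original image" step that quote describes is typically an alpha blend of a min-max normalized saliency map over the input. A minimal sketch (hypothetical helper, not any visualization library's API):

```python
import numpy as np

def overlay_heatmap(image, saliency, alpha=0.5):
    """Blend a min-max normalized saliency map onto a grayscale image,
    putting the attribution into the red channel of an RGB output."""
    s = saliency - saliency.min()
    if s.max() > 0:
        s = s / s.max()
    rgb = np.repeat(image[..., None].astype(float), 3, axis=-1)
    heat = np.zeros_like(rgb)
    heat[..., 0] = s  # red channel carries the normalized attribution
    return (1.0 - alpha) * rgb + alpha * heat
```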