ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9414942

Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks

Abstract: Explainable AI (XAI) is an active research area that seeks to interpret a neural network's decisions while ensuring transparency and trust in task-specific learned models. Recently, perturbation-based model analysis has been shown to yield better interpretations, but backpropagation techniques still prevail because of their computational efficiency. In this work, we combine both approaches as a hybrid visual explanation algorithm and propose an efficient interpretation method for convolutional neural networks. Our method adaptively…
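The abstract combines backpropagation-based and perturbation-based analysis into a hybrid explanation method. As an illustration only (this is not the Ada-SISE algorithm itself, and the function and parameter names are assumptions), the sketch below shows one way such a hybrid can be organized in PyTorch: gradients cheaply rank the feature maps of a chosen layer, and only the top-ranked maps are then scored as perturbation masks with forward passes.

```python
import torch
import torch.nn.functional as F

def hybrid_saliency(model, x, target_class, layer, top_k=32):
    """Illustrative hybrid explanation (not the Ada-SISE algorithm):
    gradient-based pre-selection of feature maps, followed by
    perturbation-based scoring of the surviving maps.
    `model` is assumed to be a CNN classifier in eval mode."""
    feats = {}
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.update(maps=out))

    x = x.requires_grad_(True)
    logits = model(x)
    handle.remove()

    # Backpropagation step: gradient of the class score w.r.t. the feature maps.
    maps = feats["maps"]                                  # (1, C, h, w)
    grads = torch.autograd.grad(logits[0, target_class], maps)[0]

    # Cheap gradient-based ranking of channels; keep only the top-k.
    channel_scores = grads.mean(dim=(2, 3)).squeeze(0)    # (C,)
    keep = channel_scores.topk(min(top_k, channel_scores.numel())).indices

    # Perturbation step: each surviving map becomes an input mask,
    # weighted by the class confidence the masked input preserves.
    saliency = torch.zeros(x.shape[-2:])
    with torch.no_grad():
        for c in keep.tolist():
            mask = F.interpolate(maps[:, c:c + 1], size=x.shape[-2:],
                                 mode="bilinear", align_corners=False)
            mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
            score = F.softmax(model(x * mask), dim=1)[0, target_class]
            saliency += score * mask[0, 0]
    return saliency / len(keep)
```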

Cited by 12 publications (10 citation statements)
References 10 publications
“…The first studies showed that the final activation layers often carry complete feature information, which is the main basis for final predictions [31,43], so several Class Activation Mapping (CAM)-based methods [5,21,31,43] are proposed to calculate the importance of each feature map in the final activation layer of classification models. Instead of using only one final convolutional layer, Semantic Input Sampling for Explanation (SISE) [34] uses multiple intermediate convolutional layers to provide better spatial resolution and completeness of the explanation.…”
Section: Region-based Saliency Methods (mentioning)
confidence: 99%
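The quoted passage contrasts single-layer CAM variants with SISE's use of multiple intermediate layers. Below is a minimal sketch of collecting activations from several convolutional stages with PyTorch forward hooks; the choice of a torchvision ResNet-50 and of its four residual stages is an assumption for illustration, not a detail taken from the cited papers.

```python
import torch
from torchvision.models import resnet50

# Assumed setup for illustration: a ResNet-50 and its four residual stages
# (random weights are enough for a shape-only sketch).
model = resnet50(weights=None).eval()
stages = {"stage1": model.layer1, "stage2": model.layer2,
          "stage3": model.layer3, "stage4": model.layer4}

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # (1, C, h, w) per stage
    return hook

handles = [layer.register_forward_hook(make_hook(name))
           for name, layer in stages.items()]

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))    # dummy input

for h in handles:
    h.remove()

# Each stage yields feature maps at a different spatial resolution; a
# SISE-style method can upsample and fuse them into one explanation map.
for name, act in activations.items():
    print(name, tuple(act.shape))
```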
“…Intuitively, small features are the most important to identify the object's class, while large features contain the generic and relevant context in which the object is found. Our method, inspired by SISE [34], aggregates feature maps at the semantic level by prioritizing more detailed features and descending to more general features. However, SISE forcibly removes noise using threshold parameters with Otsu's algorithm [23], making the explanations sometimes confusing to end-users.…”
Section: Fusion Feature Map (mentioning)
confidence: 99%
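The passage mentions that SISE suppresses noise in its intermediate maps with Otsu's thresholding. A minimal sketch of that single step, using `skimage.filters.threshold_otsu` on a synthetic attribution map (the map and sizes are placeholders, not data from the paper):

```python
import numpy as np
from skimage.filters import threshold_otsu

# Synthetic attribution map standing in for an upsampled feature map.
rng = np.random.default_rng(0)
attribution = rng.random((224, 224)).astype(np.float32)
attribution[80:140, 80:140] += 1.0            # pretend salient region

# Otsu's method picks the threshold that best separates the two intensity
# populations; values below it are treated as noise and zeroed out.
t = threshold_otsu(attribution)
denoised = np.where(attribution >= t, attribution, 0.0)

print(f"Otsu threshold: {t:.3f}, kept {np.count_nonzero(denoised)} pixels")
```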
“…There have been only limited works studying the explainability of GNNs. In contrast to CNN-based approaches where explanations are usually provided at pixel-level [85], for graph data the focus is on the structural information, i.e., the identification of the salient nodes and/or edges contributing the most to the GNN classification decision [86]. In the following, we briefly survey techniques most relevant to ours, i.e., targeting graph classification tasks and providing node-level (rather than edge-level) explanations.…”
Section: B. GNN Decision Explanation (mentioning)
confidence: 99%
“…Finally, response-based methods [8], [11], [21], [27] use feature maps or activations of layers in the inference stage to interpret the decision-making process of a neural network. One of the earliest methods in this category, CAM [29], uses the output of the global average pooling layer as weights, and computes the weighted average of the feature maps at the final convolutional layer.…”
Section: Related Work (mentioning)
confidence: 99%
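For reference, the standard CAM formulation weights the final-layer feature maps A_k by the class-specific weights w_k^c of the fully-connected layer that follows global average pooling, CAM_c = Σ_k w_k^c A_k. A minimal sketch assuming a ResNet-style torchvision model whose classifier is a single `fc` layer after `avgpool` (an assumption for illustration; pretrained weights would be used in practice):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()          # assumed backbone (random weights)

feats = {}
handle = model.layer4.register_forward_hook(
    lambda module, inp, out: feats.update(maps=out))

x = torch.randn(1, 3, 224, 224)                # dummy input for the sketch
with torch.no_grad():
    logits = model(x)
handle.remove()

target = logits.argmax(dim=1).item()
maps = feats["maps"][0]                        # (C, h, w) final conv feature maps
w = model.fc.weight[target]                    # (C,) class-specific FC weights

# CAM: class-weighted sum of the final-layer feature maps,
# upsampled to the input resolution and normalized for display.
cam = torch.einsum("c,chw->hw", w, maps)
cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```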
“…Perturbation-based methods [19], [28] perturb the input and observe changes in the output, and thus do not suffer from the gradient-based problems above. Similarly, response-based methods [8], [21], [27] combine a model's intermediate representations, or features, to generate explanations. However, most methods of the two latter categories described above are computationally expensive because each input requires many forward passes for an accurate explanation map to be produced.…”
Section: Introduction (mentioning)
confidence: 99%
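The quoted passage explains why perturbation-based methods are costly: every explanation needs many forward passes. A minimal RISE-style sketch of that loop (mask count, grid size, and keep probability are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def perturbation_saliency(model, x, target_class, n_masks=500, grid=7, p=0.5):
    """RISE-style sketch: occlude the input with random low-resolution masks
    and average the masks, weighted by the class score each masked input
    preserves. One forward pass per mask is what makes perturbation-based
    methods expensive. `model` is assumed to be a CNN classifier in eval mode."""
    _, _, H, W = x.shape
    saliency = torch.zeros(H, W)
    with torch.no_grad():
        for _ in range(n_masks):
            # Random coarse binary mask, upsampled to a soft input-sized mask.
            mask = (torch.rand(1, 1, grid, grid) < p).float()
            mask = F.interpolate(mask, size=(H, W), mode="bilinear",
                                 align_corners=False)
            score = F.softmax(model(x * mask), dim=1)[0, target_class]
            saliency += score * mask[0, 0]
    return saliency / n_masks
```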