2020 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip40778.2020.9190952

SIDU: Similarity Difference And Uniqueness Method for Explainable AI

Abstract: A new brand of technical artificial intelligence (Explainable AI) research has focused on trying to open up the 'black box' and provide some explainability. This paper presents a novel visual explanation method for deep learning networks in the form of a saliency map that can effectively localize entire object regions. In contrast to the current state-of-the-art methods, the proposed method shows quite promising visual explanations that can gain greater trust of human experts. Both quantitative and qualitative…

Cited by 14 publications (14 citation statements); References 18 publications.
“…Note that Grad-CAM is a generalization of CAM [870]. There are also improved versions of Grad-CAM: Chattopadhay et al. [103] propose a method that gives better object localization and improved heatmaps when multiple object instances occur, and [501] describes a method that produces a more precise heatmap by using a similarity difference mask, which is combined with a heatmap to generate the final heatmap. In addition, Wang et al. [755] introduced a class activation mapping method that removes the dependency on gradients by using a linear combination of weights and activation maps.…”
Section: Model-specific Methods (mentioning)
confidence: 99%
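The gradient-free weighting that this statement attributes to Wang et al. [755] can be illustrated with a minimal sketch: mask the input with each normalized activation map, read off the model's confidence for the target class, and use those confidences as the linear-combination weights. This assumes a Keras-style `model.predict` and activation maps already upsampled to the input resolution; all helper names are illustrative, not from the cited work.

```python
import numpy as np

def gradient_free_cam(model, image, activation_maps, class_idx):
    """Weight each activation map by the model's confidence for the target
    class when the input is masked by that map, then combine linearly."""
    weights = []
    for amap in activation_maps:                      # amap: (H, W)
        m = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)
        masked = image * m[..., None]                 # broadcast over channels
        weights.append(model.predict(masked[None])[0, class_idx])
    cam = np.tensordot(np.array(weights), np.stack(activation_maps), axes=1)
    return np.maximum(cam, 0)                         # keep positive evidence only
```

Because the weights come from forward passes only, this avoids backpropagating through the network, which is the dependency on gradients the statement refers to.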
“…A new visual explanation method known as SIDU, proposed recently in [17], estimates pixel saliency by extracting the last convolutional layer of the deep CNN model and creating similarity difference masks that are eventually combined into a final map for generating the visual explanation of the prediction. This method generates a heatmap in two steps: similarity difference and uniqueness.…”
Section: SIDU (mentioning)
confidence: 99%
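The two steps named in the statement can be written down compactly. Below is a minimal sketch of the similarity-difference and uniqueness weighting, assuming the N feature masks have already been generated and the masked images passed through the model; the kernel width `sigma` and all names are assumptions, not taken from [17].

```python
import numpy as np

def sidu_weights(p_orig, p_masked, sigma=0.25):
    """p_orig: (C,) prediction for the original image;
    p_masked: (N, C) predictions for the N masked copies of the image."""
    # Step 1, similarity difference: how close each masked prediction
    # stays to the original prediction (Gaussian kernel on the L2 distance).
    sd = np.exp(-np.linalg.norm(p_masked - p_orig, axis=1) / (2 * sigma**2))
    # Step 2, uniqueness: how different each masked prediction is from
    # all the other masked predictions.
    pairwise = np.linalg.norm(p_masked[:, None] - p_masked[None, :], axis=2)
    return sd * pairwise.sum(axis=1)                  # combined per-mask weight

def sidu_map(masks, weights):
    """Final explanation map: weighted sum of the (N, H, W) feature masks."""
    return np.tensordot(weights, masks, axes=1)       # -> (H, W)
```

Masks that preserve the original prediction (high similarity difference) while carrying information the other masks do not (high uniqueness) receive the largest weights in the final map.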
“…Finally, the heatmaps obtained via the eye-tracker can be compared directly with those of XAI methods. In this work we use heatmaps generated by two state-of-the-art XAI methods, namely SIDU [17] and Grad-CAM [23], using two different evaluation metrics.…”
Section: Introduction (mentioning)
confidence: 99%
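Comparing an XAI heatmap against an eye-tracker fixation map reduces to comparing two nonnegative 2-D arrays. The excerpt does not say which two metrics the cited work uses; as an illustration only, here is a sketch of two metrics commonly used for saliency-map comparison, Pearson correlation and KL divergence.

```python
import numpy as np

def pearson_cc(h1, h2):
    """Correlation coefficient between two heatmaps (higher = more similar)."""
    return np.corrcoef(h1.ravel(), h2.ravel())[0, 1]

def kl_divergence(h1, h2, eps=1e-8):
    """KL divergence after normalizing both maps into distributions
    (lower = more similar); h1 plays the role of the reference map."""
    p = h1.ravel() / (h1.sum() + eps)
    q = h2.ravel() / (h2.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```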
“…This obviates the need to train interpretable classifiers for explaining each input-output relation (as in LIME) for every test point. Our proposed SIDU [13] method falls under perturbation-based methods but can effectively localize the entire salient region of the object of interest compared to state-of-the-art XAI methods such as Grad-CAM and RISE. Furthermore, it is less computationally complex.…”
Section: Visual Explanation (mentioning)
confidence: 99%
“…While each of these methods can be justified in one way or another, beyond challenges such as computing gradients through the DNN architecture (e.g., Grad-CAM) or visualizing all the perturbation modes (e.g., RISE), the generated visual explanations often fail to localize the entire salient region of an object, which is often required for higher classification scores. To address this high-impact problem, we proposed a new visual explanation approach known as SIDU [13] that estimates pixel saliency by extracting the last convolutional layer of the deep CNN model and creating similarity difference and uniqueness masks, which are eventually combined to form a final map generating the visual explanation for the prediction.…”
Section: Introduction (mentioning)
confidence: 99%
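The mask-generation stage described above, the part that precedes the similarity-difference and uniqueness weighting, can be sketched as follows. This assumes a tf.keras model; the layer lookup by name, the 0.5 threshold, and all identifiers are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
import tensorflow as tf

def feature_masks(model, image, last_conv_name, threshold=0.5):
    """Binarize each channel of the last conv layer's activations and
    upsample the resulting masks to the input resolution."""
    extractor = tf.keras.Model(model.inputs,
                               model.get_layer(last_conv_name).output)
    fmaps = extractor(image[None])[0].numpy()         # (h, w, K) feature maps
    H, W = image.shape[:2]
    masks = []
    for k in range(fmaps.shape[-1]):
        binary = (fmaps[..., k] > threshold * fmaps[..., k].max()).astype("float32")
        up = tf.image.resize(binary[..., None], (H, W)).numpy()[..., 0]
        masks.append(up)
    return np.stack(masks)                            # (K, H, W) upsampled masks
```

Each masked copy of the input is then classified, and the resulting prediction vectors feed the similarity-difference and uniqueness steps that produce the final weighted map.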