2017
DOI: 10.48550/arxiv.1706.03825
Preprint

SmoothGrad: removing noise by adding noise

Abstract: Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SMOOTHGRAD, a simple method that can help v…
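The sensitivity-map idea in the abstract lends itself to a short sketch: SmoothGrad averages the class-score gradient over several noise-perturbed copies of the input. Below is a minimal illustration, assuming a PyTorch image classifier; the function name, the noise_level parameter, and the 50-sample default are illustrative choices, not the paper's reference implementation.

```python
import torch

def smoothgrad_map(model, image, target_class, n_samples=50, noise_level=0.15):
    """Average input gradients over noisy copies of the image (SmoothGrad sketch)."""
    # Noise standard deviation chosen as a fraction of the image's value range
    # (an illustrative heuristic, not a prescribed setting).
    sigma = noise_level * (image.max() - image.min())
    grad_sum = torch.zeros_like(image)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track gradients w.r.t. it.
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]  # raw class score, pre-softmax
        score.backward()
        grad_sum += noisy.grad
    # Smoothed sensitivity map, same shape as the input image.
    return grad_sum / n_samples
```

For display, a common post-processing step is to take the absolute value (or the maximum across color channels) of the returned map.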

Cited by 502 publications (781 citation statements)
References 16 publications
“…Deterministic visualization methods: Many early works (Erhan et al, 2009; Zeiler & Fergus, 2014; Simonyan et al, 2013; Selvaraju et al, 2016; Smilkov et al, 2017) … This led to successful interpretability of DNs' internal features, especially when applied on a unit belonging to the first few layers of a DN (Cadena et al, 2018).…”
Section: Related Work (mentioning)
confidence: 99%
“…(3) Gaussian baseline [28,31], (4) uniform baseline [31], and (5) a trained baseline [15], resulting in 15 tables of explanations for each experiment.…”
Section: Comparing Local Explanation Methods (mentioning)
confidence: 99%
“…4. It can be seen that h(x) > ĥ(x) and h(G(z)) < ĥ(G(z)), indicating that the teacher discriminator D is more confident than the student discriminator D̂. Thus, the teacher discriminator D may … More specifically, to illuminate the impact of D and D̂ on G, we visualize the gradients ∇_{G(z)} h(G(z)) and ∇_{G(z)} ĥ(G(z)) via SmoothGrad [58] in the objective functions Adv(G, D) and Adv(G, D̂) during the optimization process of Adv(G, D), respectively, shown in Fig. 5.…”
Section: Motivation (mentioning)
confidence: 99%