Preprint, 2022
DOI: 10.48550/arxiv.2202.07728

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis

Cited by 1 publication (1 citation statement, published 2024)
References: 0 publications
“…Different from gradient- and activation-based methods, perturbation-based methods mask or alter an input image feature and calculate the difference to the output of the original input. Existing work proposed to occlude (Zeiler and Fergus 2014), marginalize with a sliding window (Zintgraf et al. 2017), randomly perturb (Petsiuk, Das, and Saenko 2018), or occlude parts of an input image with perturbation space-exploration (Fel et al. 2022). In Dabkowski and Gal (2017), Vedaldi (2017), and Ribeiro, Singh, and Guestrin (2016), the black-box predictor is approximated locally by an interpretable model, and super-pixel explanations are summarized to the global input.…”
Section: Introduction (mentioning)
Confidence: 99%
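
The quoted statement summarizes the core loop of perturbation-based attribution: mask or alter part of the input and measure the change in the model's output. The sketch below illustrates occlusion-style attribution in the spirit of Zeiler and Fergus (2014); it is not the verified perturbation analysis of the cited paper, and `model`, `image`, `target_class`, `patch`, `stride`, and `fill` are assumed placeholders for a classifier callable and its inputs.

```python
# Minimal occlusion-based attribution sketch (Zeiler & Fergus 2014 style).
# Assumes `model` maps a batch of images (N, H, W, C) to class scores (N, K).
import numpy as np

def occlusion_map(model, image, target_class, patch=8, stride=8, fill=0.0):
    """Saliency map where high values mark regions whose occlusion
    lowers the target-class score the most."""
    h, w = image.shape[:2]
    base = model(image[None])[0, target_class]          # score on the clean input
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill   # mask one region
            drop = base - model(occluded[None])[0, target_class]
            saliency[y:y + patch, x:x + patch] += drop  # attribute the score drop
            counts[y:y + patch, x:x + patch] += 1.0
    return saliency / np.maximum(counts, 1.0)           # average overlapping patches
```

Choosing a stride smaller than the patch size trades extra forward passes for a smoother map, since each pixel's score is averaged over several overlapping occlusions.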