2022
DOI: 10.48550/arxiv.2211.04780
Preprint

On the Robustness of Explanations of Deep Neural Network Models: A Survey

Abstract: Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainabilit…

Cited by 0 publications
References 78 publications