2021
DOI: 10.48550/arxiv.2106.08492
Preprint

Developing a Fidelity Evaluation Approach for Interpretable Machine Learning

Abstract: Although modern machine learning and deep learning methods allow for complex and in-depth data analytics, the predictive models generated by these methods are often highly complex, and lack transparency. Explainable AI (XAI) methods are used to improve the interpretability of these complex models, and in doing so improve transparency. However, the inherent fitness of these explainable methods can be hard to evaluate. In particular, methods to evaluate the fidelity of the explanation to the underlying black box…

Cited by 3 publications (3 citation statements)
References 22 publications
“…Fidelity of interpretability method (AI experts)—uses two metrics (Velmurugan et al., 2021), recall and precision, where the term True Features (TF) represents the relevant features as extracted directly from the model and Explanation Features (EF) represents the features characterised as most relevant…”
Section: Explainability and Interpretability
confidence: 99%
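The precision and recall referred to in this statement can be made concrete with a small sketch. The snippet below assumes TF and EF are plain sets of feature names; the function name, example features, and set-based formulation are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of fidelity precision/recall over feature sets, assuming
# TF (features the model actually relies on) and EF (features the
# explanation highlights) are given as sets of feature names.

def fidelity_precision_recall(true_features, explanation_features):
    """Compare Explanation Features (EF) against True Features (TF)."""
    tf, ef = set(true_features), set(explanation_features)
    overlap = tf & ef  # features both the model and the explanation deem relevant
    precision = len(overlap) / len(ef) if ef else 0.0  # share of EF that is truly relevant
    recall = len(overlap) / len(tf) if tf else 0.0     # share of TF recovered by the explanation
    return precision, recall


# Hypothetical example: model-derived features vs. features surfaced by an XAI method
tf = {"age", "income", "loan_amount"}
ef = {"age", "loan_amount", "zip_code"}
print(fidelity_precision_recall(tf, ef))  # (0.666..., 0.666...)
```

High precision with low recall would indicate an explanation that highlights only correct but incomplete features; the reverse indicates an explanation that recovers the model's features but dilutes them with irrelevant ones.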
“…In this paper, we measure faithfulness to the model. Earlier work has looked at global measures of this type [48] and measures that are specialized to neural networks [32], feature importance [4,9,43,45], rule-based explanations [24], surrogate explanation [35], or highlighted text [10,46,49].…”
Section: Related Work
confidence: 99%
“…Surveys about the evaluation of XAI methods were published by [13,53]. Defining appropriate and generally applicable metrics to evaluate the quality of explanations [3,31,46-48], the field is actively researched and hasn't converged towards a set of standard metrics.…”
Section: Explainable AI Evaluation Framework
confidence: 99%