2022
DOI: 10.36227/techrxiv.21067438.v1
Preprint

A Trustworthy View on XAI Method Evaluation

Abstract: As the demand grows to develop end-user trust in AI models, practitioners have started to build and configure customized XAI (Explainable Artificial Intelligence) methods. The challenge is the lack of systematic evaluation of newly proposed XAI methods, which limits the confidence of XAI explanations in practice. In this paper, we follow a process of XAI method development and define two metrics, consistency and efficiency, to guide the evaluation of trustworthy explanations. We demon…

Cited by 5 publications (4 citation statements)
References 11 publications

“…However, other methods can also be used to compute metrics related to Faithfulness [16,31]. Robustness measures the stability and consistency of a given XAI method [15,28], while Complexity [15] ensures that explanations are easily understandable by users. Finally, it can be interesting to measure how well explanations coincide with the ground truth [28,32].…”
Section: Related Work
confidence: 99%
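The Robustness notion quoted above can be made concrete: an explanation is robust if small input perturbations leave the attributions roughly unchanged. Below is a minimal sketch of such a stability check; the `explainer` callable, its interface, and the cosine-similarity aggregation are assumptions introduced for illustration, not the cited papers' definitions.

```python
import numpy as np

def robustness_score(explainer, x, n_perturb=20, eps=0.01, rng=None):
    """Hypothetical stability check in the spirit of Robustness metrics.

    explainer: assumed callable mapping an input vector (d,) to an
               attribution vector (d,); name and interface are
               illustrative, not from the paper.
    Returns the mean cosine similarity between the attribution of x and
    the attributions of slightly perturbed copies (higher = more robust).
    """
    rng = rng or np.random.default_rng(0)
    base = explainer(x)
    sims = []
    for _ in range(n_perturb):
        x_p = x + eps * rng.normal(size=x.shape)  # small Gaussian perturbation
        a = explainer(x_p)
        sims.append(np.dot(base, a) /
                    (np.linalg.norm(base) * np.linalg.norm(a) + 1e-12))
    return float(np.mean(sims))
```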
“…Meanwhile, features in text content can be evaluated by masking. We select Shapley Value [17], SHAP [19], Preddiff [32], and Mean-Centroid Preddiff [33] to perform the feature masking-based explanation.…”
Section: Explanation Consistency Analysis
confidence: 99%
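To make the masking idea concrete, here is a minimal sketch of prediction-difference attribution by feature masking: each feature's contribution is read off from how the model's output changes when that feature is replaced with a neutral value. This is a generic illustration, not the cited methods' exact implementations; `model_predict`, `mask_value`, and the toy linear scorer are assumptions introduced for the example.

```python
import numpy as np

def masking_attributions(model_predict, x, mask_value=0.0):
    """Attribute each feature by the prediction difference when that
    feature is masked (replaced with a neutral value).

    model_predict: assumed callable mapping an (n, d) array to (n,) scores.
    x: a single input vector of shape (d,).
    mask_value: hypothetical neutral replacement (e.g. 0, a PAD token id,
    or a feature mean), chosen per application.
    """
    base = model_predict(x[None, :])[0]      # unmasked prediction
    d = x.shape[0]
    masked = np.tile(x, (d, 1))              # one copy of x per feature
    masked[np.arange(d), np.arange(d)] = mask_value  # mask feature j in row j
    deltas = base - model_predict(masked)    # delta_j = f(x) - f(x with j masked)
    return deltas

# Toy usage with a hypothetical linear scorer.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=5)
    predict = lambda X: X @ w
    x = rng.normal(size=5)
    print(masking_attributions(predict, x))  # recovers w * x for a linear model
```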
“…In this case, the way of deriving the explanation summary for the Shapley Value, SHAP, and Preddiff methods is the same as in the first case study. For the Mean-Centroid Preddiff method, the raw explanation for each data instance is the prediction difference from paper [33], denoted $\delta x_j^{[i]}$. This method computes the feature contribution value $\phi_j(\delta X_j)$ as the tangent value of the centroid point of the clusters formed by the data instances of $\delta x_j^{[i]}$.…”
Section: Explanation Consistency Analysis
confidence: 99%
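The Mean-Centroid step described above can be sketched as follows, under a loud assumption: the quoted "tangent value of the centroid point" is read here as the slope $\tan\theta = \bar{y}/\bar{x}$ of each cluster centroid in the (feature value, prediction difference) plane, averaged over clusters. Paper [33] may define it differently; the function name and clustering choice are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def mean_centroid_contribution(x_j, delta_j, n_clusters=3):
    """Hypothetical sketch of a Mean-Centroid-style feature contribution.

    x_j:     feature j's values across instances, shape (n,).
    delta_j: prediction differences delta x_j^[i] when feature j is
             masked, shape (n,).
    ASSUMPTION: 'tangent value of the centroid point' is interpreted as
    centroid_y / centroid_x per cluster (tan of the centroid's angle with
    the x-axis), averaged over clusters; the paper's exact rule may differ.
    """
    points = np.column_stack([x_j, delta_j])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
    cx = km.cluster_centers_[:, 0]
    cy = km.cluster_centers_[:, 1]
    # Guard against centroids sitting on the y-axis before dividing.
    tangents = cy / np.where(np.abs(cx) < 1e-12, 1e-12, cx)
    return float(tangents.mean())
```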