2021
DOI: 10.1007/978-3-030-91431-8_4

Evaluating Stability of Post-hoc Explanations for Business Process Predictions

Cited by 8 publications (8 citation statements)
References 9 publications
“…Also [10] reports on the use of post-hoc, SHAP explanations for both the LSTM and the CatBoost method. [28] compares and evaluates explanations of process predictions yielded by different post-hoc frameworks (e.g., LIME and SHAP). [29] leverages post-hoc explainers to understand why a PPM model provides wrong predictions, eventually improving its accuracy.…”
Section: Explainability in PPM
Citation type: mentioning
confidence: 99%
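As an illustration of the kind of post-hoc explanation described in this excerpt, the sketch below computes SHAP attributions for a CatBoost classifier trained on synthetic prefix features; the data, feature layout, and model settings are placeholder assumptions, not taken from the cited studies.

```python
# Minimal sketch (not the cited papers' code): post-hoc SHAP explanations
# for a CatBoost classifier, assuming tabular encoded event-log prefixes.
import numpy as np
import shap
from catboost import CatBoostClassifier

# Hypothetical encoded prefixes: rows = running cases, columns = features.
rng = np.random.default_rng(0)
X_train = rng.random((200, 6))
y_train = rng.integers(0, 2, size=200)

model = CatBoostClassifier(iterations=50, depth=4, verbose=False)
model.fit(X_train, y_train)

# TreeExplainer yields per-feature attributions for each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train[:5])
print(np.shape(shap_values))  # one attribution per feature for each of the 5 cases
```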
“…Particularly, it is necessary to evaluate the ability of an XAI method to reflect the knowledge learned by an ML model. With the increasing interest in applying XAI methods in PPM, a few proposals [3]-[5] were made to evaluate the explanations created for PPM results. This article proposes an approach for evaluating XAI methods with respect to their ability to transfer data facts learned by an ML model about PPM data.…”
Section: A. Problem Statement
Citation type: mentioning
confidence: 99%
“…An evaluation method has a target characteristic against which the performance of an XAI method is evaluated. For example, there exist methods for evaluating an explanation for its stability [5], [20], understandability [3], robustness [21], [22] and fidelity [4]. Our proposed approach evaluates an XAI method with respect to the degree of its consistency with the underlying data.…”
Section: B. Explainable Artificial Intelligence Application
Citation type: mentioning
confidence: 99%
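One simple way to operationalise the kind of stability check this excerpt refers to is sketched below; the perturbation scheme and cosine-similarity score are illustrative assumptions, not the metric defined in [5] or [20].

```python
# Illustrative sketch: probe explanation stability by perturbing an instance
# slightly and comparing the resulting attribution vectors; similar inputs
# should receive similar explanations if the explainer is stable.
import numpy as np

def explanation_stability(explain_fn, x, n_perturbations=20, noise=0.01, seed=0):
    """explain_fn maps an instance to an attribution vector (e.g., LIME/SHAP)."""
    rng = np.random.default_rng(seed)
    base = explain_fn(x)
    sims = []
    for _ in range(n_perturbations):
        x_pert = x + rng.normal(0.0, noise, size=x.shape)
        e = explain_fn(x_pert)
        # Cosine similarity between the base and perturbed attributions.
        sims.append(np.dot(base, e) /
                    (np.linalg.norm(base) * np.linalg.norm(e) + 1e-12))
    return float(np.mean(sims))  # close to 1.0 = stable, lower = unstable
```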
“…So there is no consistent way of finding this minimum counterfactual. Although there are some metrics that are used for feature attribution XAI algorithms, such as fidelity [16] and stability [17], there is no standardised way of evaluating XAI algorithms in general, which increases the complexity and difficulty of developing a benchmark evaluation for counterfactuals.…”
Section: The Problem of Validation of Counterfactual Explanations
Citation type: mentioning
confidence: 99%
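To make the notion of a "minimum counterfactual" concrete, the toy sketch below searches for the smallest single-feature change that flips a binary prediction; the search strategy and distance measure are illustrative choices only, since, as the excerpt notes, no standardised procedure exists.

```python
# Toy sketch of a "minimum counterfactual": the smallest single-feature change
# that flips a binary classifier's prediction for one instance. Illustrative
# only; not a standard or cited search procedure.
import numpy as np

def smallest_single_feature_counterfactual(predict_fn, x, step=0.05, max_steps=40):
    """predict_fn returns a class label; x is a 1-D numpy feature vector."""
    original = predict_fn(x)
    best = None  # (distance, counterfactual instance)
    for i in range(len(x)):
        for direction in (+1.0, -1.0):
            for k in range(1, max_steps + 1):
                x_cf = x.copy()
                x_cf[i] += direction * step * k
                if predict_fn(x_cf) != original:
                    dist = abs(x_cf[i] - x[i])
                    if best is None or dist < best[0]:
                        best = (dist, x_cf)
                    break  # smallest flip for this feature/direction found
    return best  # None if no single-feature change flips the prediction
```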