2021
DOI: 10.1007/978-3-030-79108-7_8
Evaluating Fidelity of Explainable Methods for Predictive Process Analytics

Cited by 21 publications (11 citation statements). References 8 publications.
“…Predictive process monitoring is concerned with the early prediction of the future state of an ongoing case. Predictive efforts are primarily driven by using machine learning models such as XGBoost [3,21,22], random forest [8,23,3], support vector machines [8,3] or logistic regression (LR) [3], with recent works showing interest in applying deep learning models [24,25]. A wide range of works have already evaluated these different models with performance-based metrics [3,8,26].…”
Section: Related Work and Motivation
confidence: 99%
“…It has already been pointed out that the post-hoc explainability techniques should uncover, apart from an interpretable explanation, the true reasons for model predictions [14,15,36]. Many papers have already investigated the unfaithfulness of post-hoc explainability models [37,21,38,36]. A recent study by Ma [37] has suggested that there was a non-monotonic relationship between the SHAP values and the predictive performance.…”
Section: Related Work and Motivationmentioning
confidence: 99%
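The unfaithfulness this excerpt describes can be made concrete with a minimal sketch. Everything here is invented for illustration (the toy model, its weights, and the `claimed` attribution vector are assumptions, not material from the cited papers): an explanation that credits a feature the model provably ignores fails a simple sensitivity check, so its attributions cannot reflect the true reasons for the prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model that provably ignores feature 2 (its weight is zero).
w = np.array([1.5, -0.7, 0.0])
predict = lambda X: X @ w

x = rng.normal(size=3)

# A hypothetical, deliberately unfaithful explanation that credits feature 2.
claimed = np.array([0.8, -0.3, 0.6])

def sensitivity(predict, x, i, eps=1.0):
    """Finite-difference check: does the model react to feature i at all?"""
    x2 = x.copy()
    x2[i] += eps
    return abs(predict(x2[None, :])[0] - predict(x[None, :])[0])

# A nonzero attribution on a feature with zero model sensitivity
# flags the explanation as unfaithful to the model.
unfaithful = [i for i in range(3)
              if sensitivity(predict, x, i) == 0 and claimed[i] != 0]
```

Real post-hoc explainers operate on black-box models where no such ground truth is available, which is exactly why the faithfulness question the excerpt raises is hard.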
“…So there is no consistent way of finding this minimum counterfactual. Although there are some metrics that are used for feature attribution XAI algorithms, such as fidelity [16] and stability [17], there is no standardised way of evaluating XAI algorithms in general, which increases the complexity and difficulty of developing a benchmark evaluation for counterfactuals.…”
Section: The Problem Of Validation Of Counterfactual Explanations
confidence: 99%
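The fidelity metric [16] mentioned in this excerpt is often operationalized as a deletion test: ablate features in decreasing order of attributed importance and record how far the prediction moves. A minimal sketch follows; the linear scorer, its weights, and the attribution vector are all assumptions made here for illustration, not the metric definition from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box": a fixed linear scorer standing in for a trained model.
w = np.array([3.0, -2.0, 0.5, 0.0])
predict = lambda X: X @ w

# One instance to explain; for a linear model, w * x is the exact
# per-feature contribution, so it serves as a ground-truth attribution.
x = rng.normal(size=4)
attributions = w * x

def deletion_fidelity(predict, x, attributions, baseline=0.0):
    """Ablate features in order of |attribution| and record how much
    the prediction shifts after each ablation."""
    order = np.argsort(-np.abs(attributions))
    f0 = predict(x[None, :])[0]
    x_abl = x.copy()
    drops = []
    for i in order:
        x_abl[i] = baseline
        drops.append(abs(f0 - predict(x_abl[None, :])[0]))
    return np.array(drops)

drops = deletion_fidelity(predict, x, attributions)
# A faithful explanation yields large early drops: the curve is
# steep at first, then flattens as low-importance features are removed.
```

A stability check would instead perturb `x` slightly and compare the resulting attribution vectors; the lack of a standardised protocol for either test is the gap this excerpt points to for counterfactual explanations.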
“…emerging [4] and several works have been focusing on comparing and evaluating explanations produced by different frameworks [28,29,30,31,32]. The explainability survey of Stierle et al. [5] reports on the repertoire of techniques that were developed to address this problem.…”
Section: Explanations in the BPM Field
confidence: 99%