2021
DOI: 10.1007/978-3-030-64949-4_1
Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring

Abstract: Contribution to the edited volume "Interpretable Artificial Intelligence: A Perspective of Granular Computing" (published by Springer).

Cited by 40 publications (20 citation statements)
References 86 publications
“…A wide range of works have already evaluated these different models with performance-based metrics [3,8,26]. Over the last two years, there has also been a lot of movement around explainability in predictive process monitoring [23,27,22,28,29,30,11,19]. Related to the XAI literature [13], these different works can be divided into two different trends based on how they deal with the explainability-accuracy trade-off.…”
Section: Related Work and Motivation (mentioning confidence: 99%)
“…The first trend presumes the complex model as the task model [14] and looks for explanations using post-hoc techniques. In predictive process monitoring, several papers have already suggested model-agnostic explainability techniques on top of machine learning models [23,22] such as SHapley Additive exPlanations (SHAP) [31] or Local Interpretable Model-Agnostic Explanations (LIME) [32], with similar developments in a deep learning context [27,30,33,28,29]. In [27],…”
Section: Related Work and Motivation (mentioning confidence: 99%)
“…Most of the existing work in explainable deep learning-based predictive process analytics uses post-hoc methods such as LIME and SHAP to explain the model's prediction [33,22], while some of the recent approaches focus on intrinsically interpretable deep learning architectures. These include approaches that use model attention [33], explicit process models and gated graph neural networks [34], partial dependence plots [35], and layer-wise relevance propagation [15].…”
Section: Deep Learning-based Predictive Process Analytics (mentioning confidence: 99%)