2021
DOI: 10.1016/j.artint.2021.103550

A framework for step-wise explaining how to solve constraint satisfaction problems

Cited by 22 publications (52 citation statements).
References 27 publications.
Citation types: 1 supporting, 51 mentioning, 0 contrasting.

“…Publication history. This paper is an extension of previous papers presented at workshops and conferences [13,14,15]. The current paper extends the previous papers with more detailed examples, additional experiments, as well as the formalization of what we call nested explanation sequences.…”
Section: Introduction (mentioning; confidence: 55%)

“…Although recent years have witnessed a growing interest in finding explanations of machine learning (ML) models (Lipton 2018; Guidotti et al. 2019; Weld and Bansal 2019; Monroe 2021), explanations have been studied from different perspectives and in different branches of AI at least since the 80s (Shanahan 1989; Falappa, Kern-Isberner, and Simari 2002; Pérez and Uzcátegui 2003), including more recently in constraint programming (Amilhastre, Fargier, and Marquis 2002; Bogaerts et al. 2020; Gamba, Bogaerts, and Guns 2021). In the case of ML models, non-heuristic explanations have been studied in recent years (Shih, Choi, and Darwiche 2018; Ignatiev, Narodytska, and Marques-Silva 2019a; Shih, Choi, and Darwiche 2019; Narodytska et al. 2019; Ignatiev, Narodytska, and Marques-Silva 2019b,c; Darwiche and Hirth 2020; Ignatiev et al. 2020a; Ignatiev 2020; Audemard, Koriche, and Marquis 2020; Marques-Silva et al. 2020; Barceló et al. 2020; Ignatiev et al. 2020b; Izza, Ignatiev, and Marques-Silva 2020; Wäldchen et al. 2021; Izza and Marques-Silva 2021; Malfa et al. 2021; Ignatiev and Marques-Silva 2021; Cooper and Marques-Silva 2021; Huang et al. 2021; Audemard et al. 2021; Marques-Silva and Ignatiev 2022; Ignatiev et al. 2022; Shrotri et al. 2022).…”
Section: Related Work (mentioning; confidence: 99%)

“…Although at present the explainability of ML models is the most studied theme in the general field of explainability, it is also the case that explainability has been studied in AI for decades [11-13, 96, 103, 104, 113, 250, 276, 278, 292, 293], with a renewed interest in recent years. For example, explanations have recently been studied in AI planning [65,94,95,111,142,194,288,289,291,302], constraint satisfaction and problem solving [54,99,115,135,289], among other examples [290]. Furthermore, there is some agreement that regulations like the EU's General Data Protection Regulation (GDPR) [100] effectively impose the obligation of explanations for any sort of algorithmic decision making [129,188].…”
Section: Additional Topics and Extensions (mentioning; confidence: 99%)