2021
DOI: 10.1613/jair.1.12813

Contrastive Explanations of Plans through Model Restrictions

Abstract: In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user’s expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, t…
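As a concrete illustration of the framing in the abstract, the sketch below shows one simple way a contrastive question can become a restriction of the original problem. It is a minimal, hypothetical PDDL example: the domain delivery-sketch, its actions, and the objects are invented for illustration, and the PDDL3 sometime constraint stands in only schematically for the paper's compilation of user questions into model restrictions.

;; Hypothetical domain: each action's preconditions and effects encode
;; how it changes the world (a simple causal model).
(define (domain delivery-sketch)
  (:requirements :strips :typing :constraints)
  (:types truck package location)
  (:predicates
    (truck-at ?t - truck ?l - location)
    (package-at ?p - package ?l - location)
    (loaded ?p - package ?t - truck)
    (road ?from - location ?to - location))
  (:action load
    :parameters (?p - package ?t - truck ?l - location)
    :precondition (and (truck-at ?t ?l) (package-at ?p ?l))
    :effect (and (loaded ?p ?t) (not (package-at ?p ?l))))
  (:action drive
    :parameters (?t - truck ?from - location ?to - location)
    :precondition (and (truck-at ?t ?from) (road ?from ?to))
    :effect (and (truck-at ?t ?to) (not (truck-at ?t ?from))))
  (:action unload
    :parameters (?p - package ?t - truck ?l - location)
    :precondition (and (loaded ?p ?t) (truck-at ?t ?l))
    :effect (and (package-at ?p ?l) (not (loaded ?p ?t)))))

;; Hypothetical restricted problem: the user's contrastive question
;; "why not deliver via l3?" is compiled into a PDDL3 trajectory
;; constraint, ruling out the original direct plan.
(define (problem deliver-p1-restricted)
  (:domain delivery-sketch)
  (:objects t1 - truck p1 - package l1 l2 l3 - location)
  (:init (truck-at t1 l1) (package-at p1 l1)
         (road l1 l2) (road l1 l3) (road l3 l2))
  (:goal (package-at p1 l2))
  (:constraints (sometime (truck-at t1 l3))))

A planner that honours PDDL3 constraints must now route the truck through l3 (load at l1, drive l1 to l3, drive l3 to l2, unload at l2); comparing this hypothetical plan with the original direct plan is what lets the user probe the constraints governing the original plan.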

Cited by 19 publications (19 citation statements)
References 33 publications
“…Moreover, the latter allows each action to be available for execution as soon as it is generated, which in turn allows concurrent planning and execution. PDDL is used, for example, in the proposals of Krarup et al [46] and Lindner and Olz [16], which are cited in Table 1, but also in those of Sreedharan et al [55] and Stulp et al [56]. In general, the encoding of preconditions and postconditions used in PDDL constitutes a causal model that can be quite complex and that can support work on reconciliation between the robot's and the person's models [55], or on post hoc explanation [46] (counterfactuals, i.e., explanations stating that the output of the model would be y′ instead of y if the behaviour or inputs x were changed to x′, without this change occurring in the real world) [57].…”
Section: Approach Knowledge Representation
mentioning
confidence: 99%
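To tie the counterfactual vocabulary in this passage to the sketch after the abstract: the unrestricted problem below plays the role of the inputs x and yields the original plan y (load at l1, drive to l2, unload), while the restricted variant given earlier plays the role of x′ and yields the contrastive plan y′, without any change occurring in the real world. This is again a hypothetical sketch, not the formulation used in the cited works.

;; Hypothetical unrestricted counterpart: identical to the restricted
;; problem sketched earlier except that the :constraints block is omitted.
(define (problem deliver-p1)
  (:domain delivery-sketch)
  (:objects t1 - truck p1 - package l1 l2 l3 - location)
  (:init (truck-at t1 l1) (package-at p1 l1)
         (road l1 l2) (road l1 l3) (road l3 l2))
  (:goal (package-at p1 l2)))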
“…The aim of providing explanations should be to improve these metrics. The topic is addressed in detail in the work of Krarup et al [46].…”
Section: Approach Knowledge Representation
mentioning
confidence: 99%
“…Although at present the explainability of ML models is the most studied theme in the general field of explainability, explainability has in fact been studied in AI for decades [11-13, 96, 103, 104, 113, 250, 276, 278, 292, 293], with renewed interest in recent years. For example, explanations have recently been studied in AI planning [65,94,95,111,142,194,288,289,291,302] and in constraint satisfaction and problem solving [54,99,115,135,289], among other areas [290]. Furthermore, there is some agreement that regulations like the EU's General Data Protection Regulation (GDPR) [100] effectively impose an obligation to provide explanations for any sort of algorithmic decision making [129,188].…”
Section: Additional Topics and Extensions
mentioning
confidence: 99%
“…Other XAIP user studies use one domain (Chakraborti et al 2019b; Chakraborti and Kambhampati 2019; Sreedharan et al 2019a; Lindsay et al 2020; Das, Banerjee, and Chernova 2021), two domains (Sreedharan et al 2019b; Sreedharan, Srivastava, and Kambhampati 2020) or four domains (Krarup et al 2021).…”
mentioning
confidence: 99%