2019
DOI: 10.1609/aaai.v33i01.33017635
Moral Permissibility of Action Plans

Abstract: Research in classical planning has so far been mainly concerned with generating a satisficing or an optimal plan. However, if such systems are used to make decisions that are relevant to humans, one should also consider the ethical consequences that generated plans can have. We address this challenge by analyzing to what extent existing approaches of machine ethics can be generalized to automatic planning systems. Traditionally, ethical principles are formulated in an action-based manner, allowing one to judge the exec…

Cited by 11 publications (7 citation statements)
References 10 publications
“…One way to generate contrastive explanations is by counterfactual analysis: the occurrence of some phenomenon X in situation S can be explained by a sufficiently altered situation S ′ where X does not occur (but Y does). Counterfactual explanations have recently been applied to generating explanations for plan failures [25], for explaining why an action plan contains a specific action [22], and to explain why an action plan does (not) adhere to moral principles [51]. These approaches only partially fulfill Miller's criteria of selectivity, though: although minimality criteria are considered, there are generally many possible explanations and it is not necessarily clear how to pick the most appropriate ones.…”
Section: Offering Explanations (mentioning)
confidence: 99%
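The counterfactual analysis described in the statement above — explaining a phenomenon X in situation S via a minimally altered situation S′ where X no longer occurs — can be sketched as a search over small alterations. This is a minimal illustration, not the method of any of the cited papers; the situation, predicate, and fact names are all hypothetical.

```python
from itertools import combinations

# Toy model: a "situation" is a set of facts, and phenomenon X holds when a
# predicate over those facts is true. A counterfactual explanation for X in S
# is a minimally altered situation S' in which X no longer holds.

def counterfactuals(situation, holds, max_changes=2):
    """Yield the minimal-size fact removals under which `holds` becomes false."""
    facts = sorted(situation)
    for k in range(1, max_changes + 1):
        found = []
        for removed in combinations(facts, k):
            altered = situation - set(removed)
            if not holds(altered):
                found.append(set(removed))
        if found:  # selectivity: report only the smallest alterations
            return found
    return []

# Hypothetical example: X = "plan is blocked" holds whenever the door is
# locked and there is no guard to open it.
S = {"door_locked", "no_guard", "lights_off"}
X = lambda s: "door_locked" in s and "no_guard" in s
print(counterfactuals(S, X))  # → [{'door_locked'}, {'no_guard'}]
```

Note that even this toy version exhibits the selectivity problem the quoted passage raises: both minimal alterations defeat X, and nothing in the formalism says which one is the better explanation.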
“…For future work, we want to evaluate how well top-k planning performs in scenarios where complex concepts for plans are required, such as preferences (Ceriani and Gerevini 2015), state-trajectory constraints (Wright, Mattmüller, and Nebel 2018), or even moral permissibility (Lindner, Mattmüller, and Nebel 2019). In addition, we plan to generalize SYM-K to search for diverse plans (Katz and Sohrabi 2019), since in some cases it may be desirable to find plans that differ according to a certain specification, in order to avoid plans that are, e.g., merely different orderings of the same actions.…”
Section: Discussion (mentioning)
confidence: 99%
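Top-k planning, as referenced in the statement above, asks for the k cheapest plans rather than a single optimal one. The following is a stand-in sketch on a toy state graph — it is not the SYM-K algorithm, and all states, actions, and costs are hypothetical — using the standard trick of allowing each state to be expanded up to k times in a best-first search.

```python
import heapq

def top_k_plans(graph, start, goal, k):
    """Enumerate the k cheapest action sequences from start to goal.

    graph: {state: [(action, next_state, cost), ...]}.
    Returns a list of (total_cost, actions), cheapest first.
    """
    plans, counts = [], {}
    heap = [(0, start, [])]  # (cost so far, current state, actions taken)
    while heap and len(plans) < k:
        cost, state, actions = heapq.heappop(heap)
        counts[state] = counts.get(state, 0) + 1
        if state == goal:
            plans.append((cost, actions))
            continue
        if counts[state] > k:  # each state may be re-expanded at most k times
            continue
        for action, nxt, c in graph.get(state, []):
            heapq.heappush(heap, (cost + c, nxt, actions + [action]))
    return plans

# Hypothetical task: two routes from s to the goal g, one via midpoint m.
G = {"s": [("a", "m", 1), ("b", "g", 5)],
     "m": [("c", "g", 1), ("d", "g", 3)]}
print(top_k_plans(G, "s", "g", 3))
# → [(2, ['a', 'c']), (4, ['a', 'd']), (5, ['b'])]
```

The "generate and test" use mentioned in the next statement fits this interface directly: enumerate plans in cost order and filter them against an external criterion such as a moral-permissibility check.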
“…A variety of good alternative plans makes it possible to take into account user preferences and environmental influences that are difficult to model or may have changed by the time the plan is executed. A top-k planner also allows one to "generate and test" high-quality plans, which is relevant for various areas, such as goal recognition (Sohrabi, Riabov, and Udrea 2016), diverse planning (Katz and Sohrabi 2019), morally permissible planning (Lindner, Mattmüller, and Nebel 2019), or explanation generation (Eifler et al. 2019). In addition, collections of plans for planning tasks can serve as practical training sets for machine learning algorithms (Toyer et al. 2018; Gnad et al. 2019) and enable empirical studies on properties of different planning tasks (Corraya et al. 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…All three main branches of normative ethics, namely consequentialism, deontological ethics, and virtue ethics, have been studied to some degree in the context of automated planning. Some works focus on particular theories, while others, more closely related to this work, try to combine the mechanisms of several of them, as in (Cointe, Bonnet, and Boissier 2016; Lindner, Bentzen, and Nebel 2017; Lindner, Mattmüller, and Nebel 2019; Bonnemains, Saurel, and Tessier 2016; Berreby, Bourgne, and Ganascia 2017).…”
Section: Capturing Ethical Theories (mentioning)
confidence: 99%
“…We place ourselves at the intersection of normative ethics and automated planning. Past research in this area has aimed to apply ideas from normative ethics, the subfield of ethics that studies the admissibility of actions, to make autonomous agents take into account the decision processes behind diverse ethical theories (Berreby, Bourgne, and Ganascia 2017; Lindner, Mattmüller, and Nebel 2019; Cointe, Bonnet, and Boissier 2016; Dennis and Fisher 2018). Still, none provides a direct way to support ethical features in PDDL (Gerevini et al. 2009), which would let them profit from its state-of-the-art planning algorithms.…”
Section: Introduction (mentioning)
confidence: 99%