2018
DOI: 10.1609/aaai.v32i1.11295
Norm Conflict Resolution in Stochastic Domains

Abstract: Artificial agents will need to be aware of human moral and social norms, and able to use them in decision-making. In particular, artificial agents will need a principled approach to managing conflicting norms, which are common in human social interactions. Existing logic-based approaches suffer from normative explosion and are typically designed for deterministic environments; reward-based approaches lack principled ways of determining which normative alternatives exist in a given environment. We propose a h…


Cited by 14 publications (8 citation statements) · References 18 publications
“…Several major ethical theories have been used to motivate autonomous systems where an agent is, morally speaking, required, permitted, or prohibited from taking specific actions in specific states depending on whether that action in that scenario violates the rules of the ethical theory. These theories include Act Utilitarianism [7,91,128], Kantianism [105,187,236], Virtue Ethics [131,171,216], Norm-based systems [82,120], The Veil of Ignorance [138,167], Divine Command Theory [37], The Golden Rule [167], and Prima Facie Duties [8,216] among others. In addition to these applied works, there have been many more theoretical pieces examining when and why particular ethical frameworks ought to be used [76,89,103,142,178,186,187,225,244,254].…”
Section: Conceptualizations
confidence: 99%
“…We know of only one other nonmyopic top-down approach to explicit ethical agents (Kasenberg and Scheutz 2018). However, the approach cannot represent different ethical theories, such as utilitarianism or Kantianism, because it is specific to norms.…”
Section: Related Work
confidence: 99%
“…Prima facie duties (PFD), a pluralistic, nonabsolutist ethical theory, holds that the morality of an action is based on whether that action fulfills fundamental moral duties that can contradict each other (Ross 1930; Morreau 1996). Related to recent work on norm conflict resolution (Kasenberg and Scheutz 2018), we consider an ethical framework that requires a policy that selects actions that do not neglect duties of different penalties within some tolerance. Definition 9.…”
Section: Prima Facie Duties
confidence: 99%
“…Early work on planning with linear temporal logic (LTL) specifications in MDPs includes that of Ding et al. (2011), who employ dynamic programming to construct a policy which almost surely satisfies an LTL specification. Subsequent work has considered how to plan with LTL specifications that are only partially satisfiable (Lacerda, Parker, and Hawes 2015; Lahijanian et al. 2015), and how to work with multiple specifications which may not all be satisfiable (Tumova et al. 2013; Kasenberg and Scheutz 2018) or represented as beliefs over formulas (Shah, Li, and Shah 2019). The present work builds on these latter approaches, describing how an agent planning with multiple specifications in LTL may answer questions about its behavior, including "why" questions.…”
Section: Related Work
confidence: 99%
“…The planning approach described in this section is similar to the approach described by Kasenberg and Scheutz (2018). The key differences are (1) the use of co-safe and safe LTL statements to avoid constructing ω-automata; (2) the use of a binary cost function instead of the timestep-based cost function Kasenberg and Scheutz describe; and (3) a preference structure that allows priorities (specifications of such different priorities that they cannot be traded off).…”
Section: Preliminaries: Planning with LTL Specifications
confidence: 99%