Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/26

No Pizza for You: Value-based Plan Selection in BDI Agents

Abstract: Autonomous agents are increasingly required to be able to make moral decisions. In these situations, the agent should be able to reason about the ethical bases of the decision and explain its decision in terms of the moral values involved. This is of special importance when the agent is interacting with a user and should understand the value priorities of the user in order to provide adequate support. This paper presents a model of agent behavior that takes into account user preferences and moral values.
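The plan-selection idea the abstract describes can be pictured as scoring an agent's applicable plans against the user's value priorities and choosing the best-aligned one. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation; the plan names, value labels, and numeric effect encoding are all assumptions.

```python
# Minimal sketch (assumed encoding, not the authors' model) of value-based
# plan selection: each applicable plan is annotated with its estimated effect
# on moral values, and plans are scored against the user's value priorities.
from dataclasses import dataclass, field

@dataclass
class Plan:
    name: str
    # Hypothetical encoding: +1 promotes a value, -1 demotes it.
    value_effects: dict = field(default_factory=dict)

def select_plan(applicable_plans, user_value_priorities):
    """Return the applicable plan whose value effects best match the user's
    value priorities (higher weight = more important value)."""
    def alignment(plan):
        return sum(user_value_priorities.get(value, 0.0) * effect
                   for value, effect in plan.value_effects.items())
    return max(applicable_plans, key=alignment, default=None)

# Example: a care robot deciding whether to serve pizza to a diabetic user.
plans = [
    Plan("serve_pizza",   {"autonomy": +1, "health": -1}),
    Plan("suggest_salad", {"autonomy": -1, "health": +1}),
]
priorities = {"health": 0.8, "autonomy": 0.2}  # user ranks health over autonomy
chosen = select_plan(plans, priorities)
print(chosen.name)  # -> "suggest_salad"
```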

Cited by 54 publications (47 citation statements)
References 14 publications
“…However, it can be seen as an abstraction for many dilemmas involving AI systems now and in the future. Besides the relation to self-driving cars, similar dilemmas will need to be solved by intelligent medicine dispensers faced with the need to choose between two patients when there is not enough of a needed medicine, by search and rescue robots faced with the need to prioritize victims, or, as we have recently shown, by health-care robots needing to choose between a user's desires and optimal care [10]. In this paper, we use the trolley scenario as an illustration of this wide applicability of moral dilemma deliberation.…”
Section: Introduction
confidence: 99%
“…A similar use of value-based reasoning for the purpose of plan selection is demonstrated in Cranefield et al. [34], where plans are filtered not just for their applicability to the situation, but also based on the effect they will have on the values held by the agent. Additionally, Petruzzi et al. [81] utilise social capital as an incentive for agents to participate in and choose actions that benefit a group rather than themselves.…”
Section: High-cognitive Ability Agents Demonstrating Norm Emergence
confidence: 97%
“…Research here includes but is not limited to: (i) frameworks to describe and model norms as deontic logic in institutions, for example InstAL [31], JaCaMo [22], OperettA [2]; (ii) the development of agents that reason about norms in their decision making, for example BDI (Belief-Desire-Intention) and BOID (Belief-Obligation-Intention-Desire) agents [19,24,35,38], Normative KGP (Knowledge-Goals-Plans) agents [84], and NBDI (Norm-Belief-Desire-Intention) agents [41]; (iii) approaches to synthesising normative systems, for example IRON and SENSE [70,71,73], AOCMAS [26], and Guard functions in [4]; and (iv), more recently, values and norms, for example [18,34,94].…”
Section: Perspectives and Representations of Norms
confidence: 99%