2019
DOI: 10.1038/s41598-019-49411-7

Human decision-making biases in the moral dilemmas of autonomous vehicles

Abstract: The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people’s moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people’s personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people’s moral d…

Cited by 65 publications (32 citation statements)
References 26 publications
“…It is thus interesting to test whether the same effects can be found when people reason about abstract scenarios in which the threat to survival is less salient. Here it is relevant that Frank et al. [44] cued participants into the perspective of passengers, pedestrians, and observers when judging abstract scenarios of moral dilemma situations with autonomous vehicles. They observed self-protective biases in the sense that participants who were cued into the perspective of the passenger were more willing to sacrifice the pedestrian than participants who were cued into the perspective of the pedestrian.…”
Section: Introduction
confidence: 99%
“…For example, we might expect that when users are endangered, the AI should evaluate the best outcome to minimize harm. Satisfactory explanations in the case of car accidents resulting in harm are complicated by the fact that people have different ideas of how AI should act in ethical quandaries depending on their culture (Awad et al., 2018), or whether they have been primed to take the perspective of passenger or pedestrian (Frank et al., 2019). In other words, trust and ethics are both flexibly interpreted, and explanations will only be satisfactory if they allow a user to judge whether the decision was appropriate in that situation.…”
Section: Ethical and Political Need for XAI
confidence: 99%
“…Pedestrian reaction to emergent traffic situations largely influences the occurrence and severity of accidents [2]. Multiple factors influencing pedestrian collision risk should be considered when identifying the distance-based safety boundary between pedestrians and vehicles, including both human factors (such as kinematics, posture, gait, and age [3–5]) and vehicle factors (such as impact velocity, front-end structural design, and relative location [6–8]). Epidemiological studies have provided most of the readily available field information, but they lack a precise and comprehensive description of human reactions immediately before the event.…”
Section: Introduction
confidence: 99%