2019
DOI: 10.1007/s11948-019-00096-1
Why Trolley Problems Matter for the Ethics of Automated Vehicles

Abstract: This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.

Cited by 61 publications (44 citation statements)
References 23 publications
“…The utility of trolley dilemmas does not lie in their use as blueprints for crash optimizations (Holstein and Dodig-Crnkovic, 2018). Rather, they are an effective means to elucidate which ethical values are potentially conflicting in accident scenarios and to allow for the design of self-driving cars informed by human values (Gerdes et al, 2019;Keeling, 2019). As argued by Bonnefon et al (2019), trolley dilemmas should not be understood primarily as simulations of real-life scenarios, but as representations of conflicts that emerge on a statistical level: the introduction of self-driving cars will likely put different people at risk compared to today.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
“…While it is fair to say that there is no consensus in the literature, I will refine some older arguments and introduce some new ones in support of the position that holds that the 'trolley methodology' is mistaken in some sense (e.g., because the applied trolley problems are irrelevant or misleading for the issue of ethical crashing). Despite broad criticism, application of the trolley methodology has been defended as recently as this year by Geoff Keeling (2020) and became broadly well-known because of the so-called Moral Machine experiment (Awad et al 2018). According to Edmond Awad et al, consumers will only switch from human-driven vehicles to autonomous vehicles if they understand the origins of the ethical principles that are programmed into these vehicles (p. 59).…”
Section: What Is Wrong With the Discussion On Ethical Crashing? | Citation type: mentioning | Confidence: 99%
“…Recently, Keeling (2020) attempted to counter this argument by showing that the difference between choices in scenarios with absolute descriptions and standard decisionmaking under risk are not sufficiently different to warrant the claim of a categorical difference (pp. 299-300).…”
Section: What Is Wrong With the Discussion On Ethical Crashing? | Citation type: mentioning | Confidence: 99%
“…Critically, each road user also holds a specific valence, which varies in strength in relation to how that individual user's identity corresponds to a number of set criteria. Beyond the technical limitations of identification, data collection and processing, there are no specific criteria that ought to inform a valence; they can cover features like different age groups, socio-economic levels or professions, following the line of the Moral Machine experiment (Bonnefon et al 2016), or cover forms of morally admirable partiality that might exist between the AV and its passenger(s) (Keeling 2019). Arguably, most of the criteria chosen to inform valences will remain polemic, and given space limitations, it would be unwise to attempt to rehearse the full extent of the debate here.…”
Section: The Concept Of 'Valence' | Citation type: mentioning | Confidence: 99%