2018
DOI: 10.1111/phc3.12507
The ethics of crashes with self‐driving cars: A roadmap, I

Abstract: Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then follows an assessment of…

Cited by 75 publications (58 citation statements) | References 29 publications
“…However, we want to stress that responses to simplified dilemma situations should not be the basis for legal or ethical regulations. Furthermore, in agreement with Keeling (2017) and Nyholm (2018a), we believe empirical research alone cannot answer the ethical question of how self-driving cars should be programmed to behave. Nevertheless, we believe the results provide insights into the public's preferences regarding the decision making of self-driving cars and potential conflicts that may arise.…”
Section: Discussion (mentioning)
confidence: 70%
“…risks of harm, on one side, versus actual harms, on the other). (Nyholm and Smids 2016: 1286) Once the distinction between risky and non-risky cases is made precise, it is clear that the categorical difference described here is insufficient to render trolley cases of little or no relevance to the moral design problem. Take the orthodox model of decision-making under risk (Luce and Raiffa 1957; Savage 1972).…”
Section: The Moral Difference Argument (mentioning)
confidence: 96%
“…But in real-world collisions one action might produce several different outcomes, and the AV at best has a probability distribution over these outcomes. In short, AV collisions involve risk (Himmelreich 2018: 676-677; Nyholm and Smids 2016: 1286). The second step holds that the non-normative difference between these cases gives rise to a normative difference.…”
Section: The Moral Difference Argument (mentioning)
confidence: 99%
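The orthodox model that these two excerpts invoke can be made concrete with a little notation. The display below is a minimal sketch, not taken from the cited papers; the symbols A (available maneuvers), O (outcomes), p (probability), u (utility), and h (harm) are illustrative assumptions.

% Orthodox expected-utility model of decision under risk (illustrative notation):
% each act a in A yields outcome o in O with probability p(o | a), and the agent
% chooses the act with the highest expected utility.
\[
EU(a) = \sum_{o \in O} p(o \mid a)\, u(o), \qquad a^{*} = \arg\max_{a \in A} EU(a).
\]
% A "minimize harm" crash policy is the special case where utility is negative harm:
\[
a^{*} = \arg\min_{a \in A} \sum_{o \in O} p(o \mid a)\, h(o).
\]

On this reading, the trolley-style case with certain outcomes is just the degenerate instance where each p(o | a) is 0 or 1, which is one way to see why the quoted authors deny that the mere presence of risk makes trolley cases irrelevant to the moral design problem.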
“…In this sense, the term opacity is understood as a lighter version of the so-called black-box problem (i.e., the lack of understanding of algorithmic functioning and interpretation of its results (Mittelstadt et al., 2016; Finn, 2019)). Sometimes opacity is mentioned alongside transparency (Zerilli et al., 2019; Shin & Park, 2019), accountability (Cath, 2018; Espeland & Young, 2019), privacy, equity and inequality (Nyholm, 2018; Shadbolt et al., 2019; Hagendorff & Wezel, 2019), interpretability (Binns, 2018), and inscrutability (Desai & Kroll, 2017; Kroll, 2018). In most cases, the mention of algorithmic opacity involves decision-making, choices, consensus and agreements in the course of action, and notions of community and involvement in its practice.…”
Section: Opacity: Its Nature and Close Neighbors (mentioning)
confidence: 99%