2019
DOI: 10.17351/ests2019.260
Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction

Abstract: As debates about the policy and ethical implications of AI systems grow, it will be increasingly important to accurately locate who is responsible when agency is distributed in a system and control over an action is mediated through time and space. Analyzing several high-profile accidents involving complex and automated socio-technical systems and the media coverage that surrounded them, I introduce the concept of a moral crumple zone to describe how responsibility for an action may be misattributed to a human…


Cited by 149 publications (108 citation statements)
References 17 publications
“…Yet teaming's feel-good ethos of 'bringing out the best in everyone,' and its promise of flexibility in designing interactional relationships, leaves room to stretch out into implications for ethical and political domains, especially since design prescriptions such as "transparency," as we will see, operate deeply on both the functional and ethical levels. This has left us room to more quietly address issues brought up in the work of anthropologists of technological labor (Gray and Suri 2019; Elish 2019), especially the ethical consequences of calibrating agency.…”
Section: Our Work at the Innovation Lab (mentioning, confidence: 99%)
“…In a recent paper in Data & Society, Elish (2019) describes that intelligent and autonomous systems in every form have the potential to generate "moral crumple zones." A "moral crumple zone" describes how responsibility for an automation error may be incorrectly displaced onto a human actor within the system who in fact had very little control over the erroneous behavior:…”
Section: Worker Culpability (mentioning, confidence: 99%)
“…We may not wish to incentivize the preservation of human control, even where less safe or efficient, merely to furnish a human "crumple zone" for liability. I am aware that the liability still winds up landing on one or more humans, perhaps the manufacturer of the driverless car instead of whoever happens to be behind the skeuomorphic wheel. Even so, the metaphors and analogies we use influence which human pays the price for a robotic harm.…”
Section: A Robotics Law: An Early Agenda (mentioning, confidence: 99%)
“…Another set of reasons for maintaining humans as the locus of decision-making power is that they are situated in social and institutional contexts which allow for liability and apportioning of responsibility (Bryson et al, 2017), or societal legitimation (Rahwan, 2018). According to this view, human involvement needs to be substantive to ensure that genuine thought has been applied, and that control can be exercised where necessary, otherwise these liability arrangements may risk humans being reduced to rubber-stamping quasi-automated decisions which they were not meaningfully involved in (Wagner, 2019), or acting as a 'moral crumple zone' (Elish, 2016). Even if algorithms could effectively learn from observing human judgements, we may still want to keep human decision makers around to generate fresh ground truth and to avoid moral atrophy (Hildebrandt, 2013).…”
Section: Introduction (mentioning, confidence: 99%)