2021
DOI: 10.1038/s41562-021-01128-2

Bad machines corrupt good morals

Abstract: Machines powered by Artificial Intelligence (AI) are now influencing the behavior of humans in ways that are both like and unlike the ways humans influence each other. In light of recent research showing that other humans can exert a strong corrupting influence on people's ethical behavior, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioral science, human-computer interaction, and AI research. We propose tha…

Cited by 94 publications (50 citation statements)
References 135 publications
“…As detecting deepfakes appears less a matter of motivation and attention, our results suggest that deepfakes warrant special attention in the research and policy field of digital misinformation. In line with the widely voiced concern about deepfakes becoming a new AI threat (Caldwell et al., 2020; Chesney and Citron, 2019; Köbis et al., 2021), our findings suggest some of the previously established strategies against manipulation do not hold for detection of deepfakes. A need exists for more research on human-centered strategies against deepfakes.…”
Section: Discussion (supporting; confidence: 79%)
“…Together, these findings suggest that people apply an overly optimistic seeing-is-believing heuristic, which might put them at a particular risk of being influenced by deepfakes. We discuss these findings below, emphasizing the need for more research on the interplay of behavior by humans and machines (Rahwan et al., 2019; Köbis et al., 2021).…”
Section: Discussion (mentioning; confidence: 99%)
“… 1 We use the term “machine” as an interchangeable term for AI systems and robots, i.e., embodied forms of AI. Recent work on the human factors of AI systems has used this term to refer to both AI and robots (e.g., Köbis et al., 2021), and some of the literature that has inspired this research uses similar terms when discussing both entities (e.g., Matthias, 2004). …”
mentioning; confidence: 99%