2021
DOI: 10.1016/j.isci.2021.102679
Algorithm exploitation: Humans are keen to exploit benevolent AI

Abstract: People predict that AI agents will be as benevolent (cooperative) as humans. People cooperate less with benevolent AI agents than with benevolent humans. Reduced cooperation occurs only if it serves people's selfish interests. People feel guilty when they exploit humans but not when they exploit AI agents.

Cited by 28 publications (38 citation statements). References 48 publications.
“…Despite their concerns about algorithms, people often choose automated decisions over decisions by human experts, or they are indifferent between the two (Araujo et al., 2020). Moreover, they appear to trust algorithms to be as cooperative as human interaction partners (Karpus et al., 2021).…”
Section: Study
Mentioning confidence: 99%
“…Together with the participant statements, this suggests that the AI is seen as a tool rather than an independent co-author (H4). In line with the tendency of humans to more readily exploit AI than a human [62], the human ghostwriter was also credited more often than the AI ghostwriter (H5). Study 2 provided participants with a multiple-choice list of possible options to choose from, rather than an open author declaration.…”
Section: Discussion
Mentioning confidence: 92%
“…org/RKV_ZXX 13 ), comparing AI-supported writing to the case of a human author supporting the writing task. Based on the finding that algorithms/AI models are more likely to be exploited than humans [62], we posit the following hypotheses:…”
Section: The AI Ghostwriter Effect
Mentioning confidence: 99%
“…High acceptance of artificial agents has been observed when the task involves analytical skills (March, 2021; Tulli et al., 2019). Conversely, people are less willing to accept artificial agents as cooperation partners in social contexts (Dietvorst et al., 2015; Ishowo‐Oloko et al., 2019; Karpus et al., 2021; Rovatsos, 2019). Neural activity in mentalizing and empathy networks is reduced when interacting with a computer or robot as compared with a true or alleged human (Chaminade et al., 2012; Rilling et al., 2004), and ERP components sensitive to affective stimulus significance are attenuated when participants receive personality feedback from a computer (Schindler & Kissler, 2016, 2018).…”
Section: Introduction
Mentioning confidence: 99%