2021
DOI: 10.1111/cogs.13032

Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents

Abstract: The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: …

Cited by 22 publications (20 citation statements)
References 87 publications
“…Second, should there be conditions under which a machine can perform speech acts constitutively governed by moral principles, we may follow Nickel (2013) in contending that the moral evaluation of such acts is to be traced back to their designers. Alternatively, and in light of the results of Kneer (2021), we may wish to include that machine among those agents whom we assess morally. These two strategies are of course compatible with one another.…”
Section: Discussion
confidence: 99%
“…Plausible as it sounds in philosophical circles, this empirical premise is under considerable pressure from a plethora of studies in human-robot interaction (see, e.g., Malle et al., 2015; Liu & Du, 2022; Kneer, 2021), studies which suggest that people are rather willing to blame robots. Scholars who deny responsibility gaps in the first place, or argue that they can be "plugged", should be concerned about these findings:…”
Section: Conclusion (From 3 and 4)
confidence: 99%
“…to artificial agents than to humans across different domains (see, e.g., Malle et al., 2015; Malle et al., 2016; Voiklis et al., 2016; Kneer, 2021; Liu & Du, 2022). Given that the evidence is mixed and seems to depend strongly on context, we ran an experiment which closely tracks Sparrow's scenario and can thus provide some insight into retribution gaps as hypothesized by Danaher.…”
Section: Moral Judgment in Human-Robot Interaction
confidence: 99%
“…We call the first the “agentive contribution” hypothesis and the second the “mere tool” hypothesis. If, on the one hand, the mere presence of an AI induces the idea that an independent agent is involved or that the AI assistant could have done something differently, we should expect that it will take a share of responsibility in actions carried out by its human user (see Figure 1, H1), though not necessarily a 50-50 split [13, 14, 15, 16, 17]. Relatedly, we can expect that the human driver would be held less responsible when using the AI, as some share of responsibility goes to the AI system for contributing to the decision [18, 19].
Figure 1. Experimental design and expectations. All y-axes correspond to 0–100 responsibility ratings.
…”
Section: Introduction
confidence: 99%