2020
DOI: 10.1002/ijop.12715

Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes

Abstract: According to moral typecasting theory, good- and evil-doers (agents) interact with the recipients of their actions (patients) in a moral dyad. When this dyad is completed, mind attribution towards intentionally harmed liminal minds is enhanced. However, from a dehumanisation view, malevolent actions may instead result in a denial of humanness. To contrast both accounts, a visual vignette experiment (N = 253) depicted either malevolent or benevolent intentions towards robotic or human avatars. Additionally, we …

Cited by 15 publications (7 citation statements). References 61 publications.

“…Some of this research provides evidence that people empathize with artificial entities and respond negatively to actions that appear to harm or insult them (Darling, 2016; Freier, 2008; Rosenthal-von der Pütten et al., 2013; Suzuki et al., 2015). Bartneck and Keijsers (2020) found no significant difference between participants’ ratings of the moral acceptability of abuse towards a human or a robot, but other researchers have found evidence that current artificial entities are granted less moral consideration than humans (Slater et al., 2006; Gray et al., 2007; Bartneck & Hu, 2008; Küster & Świderska, 2016; Akechi et al., 2018; Sommer et al., 2019; Nijssen et al., 2019; Küster & Świderska, 2020).…”
Section: Results (citation type: mentioning; confidence: 98%)

“…There is also evidence that people in individual rather than group settings (Hall, 2005), with prior experience interacting with robots (Spence et al., 2018), or presented with information promoting support for robot rights, such as “examples of non-human entities that are currently granted legal personhood” (Lima et al., 2020), are more willing to grant artificial entities moral consideration. Other studies have examined the conditions under which people are most willing to attribute high mental capacities to artificial entities (Briggs et al., 2014; Fraune et al., 2017; Gray & Wegner, 2012; Küster & Swiderska, 2020; Küster et al., 2020; McLaughlin & Rose, 2018; Swiderska & Küster, 2018, 2020; Wallkötter et al., 2020; Wang & Krumhuber, 2018; Ward et al., 2013; Wortham, 2018).…”
Section: Results (citation type: mentioning; confidence: 99%)

“…The ability to feel pain is not tied to agency but instead to experience. Although we focused primarily on agency, future studies should also explore the importance of experience, especially given work on the “harm-made mind”, which finds that the very act of inflicting harm increases perceptions of experience (Ward, Olsen, & Wegner, 2013; for harm-made mind research in robotic contexts, see Küster & Swiderska, 2021; Swiderska & Küster, 2020).…”
Section: Limitations and Future Research (citation type: mentioning; confidence: 99%)