2022
DOI: 10.1016/j.chb.2022.107372

Predicting the moral consideration of artificial intelligences


Cited by 22 publications (16 citation statements)
References 159 publications
“…However, people are much more skeptical of AIs' capacity for sentience than of their capacity for cognition. For example, Pauketat and Anthis [59] found that, on a scale from 0 (not at all) to 100 (very much), the mean response to whether future AIs can have emotions was 38.6 (standard deviation = 30.4), compared to a mean of 70.9 (standard deviation = 22.7) for cognition. [41] It may be very difficult to show with enough confidence that certain AIs are sentient, if we can even have such confidence ourselves.…”
Section: Strategic Considerations (mentioning)
confidence: 99%
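To put the gap quoted above in perspective, here is a minimal Python sketch computing a standardized effect size (Cohen's d) from the reported means and standard deviations. The pooled-SD formula with equal group sizes is an assumption made purely for illustration; the ratings are presumably within-subject, and this is not the paper's own analysis.

```python
import math

# Figures reported in the citation statement above (0-100 scale,
# from Pauketat & Anthis): mean (SD) belief that future AIs can
# have emotions vs. engage in cognition.
mean_emotion, sd_emotion = 38.6, 30.4
mean_cognition, sd_cognition = 70.9, 22.7

# Cohen's d using a pooled SD, assuming equal group sizes.
# Rough illustrative effect size only, not the paper's analysis.
pooled_sd = math.sqrt((sd_emotion ** 2 + sd_cognition ** 2) / 2)
d = (mean_cognition - mean_emotion) / pooled_sd

print(f"pooled SD ≈ {pooled_sd:.1f}")  # ≈ 26.8
print(f"Cohen's d ≈ {d:.2f}")          # ≈ 1.20, a large gap
```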
“…Since the expected value of damaging the AI is lower than that of not doing so, we should avoid doing so. [Footnote 41: See Appendix A of Pauketat and Anthis [59].] [Footnote 42: This consideration will of course be stronger if we have higher confidence in the view that non-sentient AIs can have moral standing.]…”
Section: Strategic Considerations (mentioning)
confidence: 99%
“…Based on the social cognitive chain of being (SCCB) theory, the status of all things in the world (including humans) on the chain determines how they are treated by others, and anthropomorphism is an important factor determining an agent's status in the chain (Brandt & Reyna, 2011). Past studies have confirmed this theory (Park, 2013; Pauketat & Anthis, 2022). For instance, Nijssen et al. (2019) found that the social actors that looked least human (machine-like robots) were sacrificed more often in moral dilemmas than human-like robots.…”
Section: Importance of Anthropomorphism and Animacy of Social Actors… (mentioning)
confidence: 99%
“…One study has also found that people are more inclined to save humans in moral dilemmas than inanimate robots (Nijssen et al., 2019). The stronger the belief that robots are alive, the more moral consideration is given to artificial intelligences (Pauketat & Anthis, 2022). These studies have consistently shown that when a social actor is perceived as inanimate, it is unlikely to have the same moral status as an animate social actor.…”
Section: Importance of Anthropomorphism and Animacy of Social Actors… (mentioning)
confidence: 99%