Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3278721.3278743

A Framework for Grounding the Moral Status of Intelligent Machines

Abstract: I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen.

Cited by 2 publications (6 citation statements). References 19 publications.
“…There is some brief discussion of abuse of robots. They note that verbal abuse towards machines might not matter morally, apart from insofar as it "sets the wrong normative tone in the environment" and may have negative indirect effects on observers. Schafer (2016): Schafer analyzes the European Parliament's Resolution that relates to electronic personhood and discusses it through comparison and analogy to science fiction writings. Scheessele (2018): The article argues that "some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities… the upper limit of our obligations should not exceed the upper limit of our obligations toward plants, trees, and other environmental entities." "Moral agency" is seen as a key criterion, and consciousness is not presented as required for an entity to have "a good of its own". Schmetkamp (2020): Schmetkamp argues that we can have empathy for social robots.…”
Section: Sarathy et al. (2019) (mentioning)
confidence: 99%
“…Suppose further that it is endowed with the functional morality described by Wallach and Allen (2009), but lacks consciousness. Arguably, this machine could have "a good of its own" (Scheessele 2018; Basl and Sandler 2013; Kaufmann 1994). Each human (as well as each living thing) also has a "good of its own."…”
Section: Critique of Challenges to Anthropocentrism (mentioning)
confidence: 99%
“…Each human (as well as each living thing) also has a "good of its own." Using this common denominator (i.e., having a good of its own) as a threshold for moral status, the machine could have moral status (Scheessele 2018; Basl and Sandler 2013; Kaufmann 1994). Recall, however, that "thinking otherwise" seeks to avoid reducing difference to sameness in determining the moral status of an Other.…”
Section: Critique of Challenges to Anthropocentrism (mentioning)
confidence: 99%