Prolegomena to any future artificial moral agent
Allen, Varner, and Zinser (2000)
DOI: 10.1080/09528130050111428

Cited by 249 publications (122 citation statements) · References 7 publications

Citation statements, ordered by relevance:

“…Just as there is not one universal ethical theory, there is no agreement on what it means to be a moral agent, let alone a successful artificial moral agent. A Moral Turing Test (Allen et al 2000) is one possible strategy for evaluating the adequacy of an AMA in light of differing theories of moral agency. Turing's test for machine intelligence is notoriously controversial, and we would not endorse it as a criterion for strong A.I.…”
Section: Evaluating Machine Morality
confidence: 99%
“…We pick up where Allen et al (2000) left off when they wrote, 'Essential to building a morally praiseworthy agent is the task of giving it enough intelligence to assess the effects of its actions on sentient beings, and to use those assessments to make appropriate choices'. Top-down approaches to this task involve turning explicit theories of moral behavior into algorithms.…”
Section: Introduction
confidence: 99%
“…They will continue to do so not just in one or more of these roles (e.g. Block 2007), but also as disseminators (Hohwy 2014), participants (Eliasmith 2013), and articulators of new social and moral concerns that arise as intuitions about human cognition and agency are challenged (Roskies 2010; Allen, Varner, and Zinser 2000). We think about the mind differently now than we did 100 years ago, due to both theoretical and empirical advances.…”
confidence: 99%
“…There are many studies by philosophers trying to implement moral common sense in artificial intelligence [17][18][19][20][21]. Wallach et al [21] proposed a hybrid model combining an a posteriori model and a deductive model.…”
Section: Introduction
confidence: 99%