2017
DOI: 10.1007/s10506-017-9209-6
Do androids dream of normative endorsement? On the fallibility of artificial moral agents

Abstract: The more autonomous future artificial agents become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents (AMAs). Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to do so becaus…

Cited by 5 publications
References 20 publications