2013 IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing
DOI: 10.1109/icci-cc.2013.6622232
A biologically inspired computational model of Moral Decision Making for autonomous agents

Cited by 11 publications (6 citation statements). References 18 publications.
“…Velleman argues that, because reason is accessible to everyone identically, obligations apply to all people equally [51,25]. When Kant describes the categorical imperative as the objective principle of the will, he is referring to the fact that, as opposed to a subjective principle, the categorical imperative applies to all rational agents equally [31,16]. At its core, the FUL best handles "the temptation to make oneself an exception: selfishness, meanness, advantage-taking, and disregard for the rights of others" [34,30].…”
Section: Additional Tests (mentioning, confidence: 99%)
“…While this example is (hopefully) not typical of the operation of a self-driving car, every decision that such an AI agent makes, from avoiding congested freeways to carpooling, is morally tinged. Machine ethicists recognize this need and have made theoretical ([8,19,53,26]) and practical progress in automating ethics ([6,16,30,54]). Prior work in machine ethics using deontology ([2,4]), consequentialism ([1,3,17]), and virtue ethics ([13]) rarely engages with philosophical literature, and so misses philosophers' insights.…”
Section: Introduction (mentioning, confidence: 99%)
“…Finally, there are papers in which the hierarchy across theory types remains ambiguous. Examples of ambiguous papers are implementations where authors try to mimic the human brain [37], or focus on implementing constraints such as the Pareto principle [89], which does not, strictly speaking, constitute a moral theory. Note that categorizing a paper as "ambiguous" does not imply a negative assessment of the implementation.…”
Section: Ethical Classification (mentioning, confidence: 99%)
“…Finally, there are papers in which the hierarchy across theory types remains ambiguous. Examples of ambiguous papers are implementations where authors try to mimic the human brain [34], or focus on implementing constraints such as the Pareto principle [86], which does not, strictly speaking, constitute a moral theory. Note that categorizing a paper as "ambiguous" does not imply a negative assessment of the implementation.…”
Section: Ethical Classification (mentioning, confidence: 99%)