2022
DOI: 10.31234/osf.io/tnf4e
Preprint

Artificial moral cognition: Learning from developmental psychology

Abstract: An artificial system that successfully performs cognitive tasks may pass tests of 'intelligence' but not yet operate in ways that are morally appropriate. An important step towards developing moral artificial intelligence (AI) is to build robust methods for assessing moral capacities in these systems. Here, we present a framework for analysing and evaluating moral capacities in AI systems, which decomposes moral capacities into tractable analytical targets and produces tools for measuring artificial moral cog…

Cited by 2 publications (2 citation statements) | References 104 publications
“…Indeed, artificial agents can be reset arbitrarily often and placed in simulated environments, making it easier to probe the causal mechanisms underlying their behavior with targeted interventions (Déletang et al., 2021). In a recent paper (Mao et al., 2023), we illustrate what this could look like using a simulation environment drawing inspiration from developmental moral psychology (Weidinger et al., 2022). Specifically, our environment mirrors developmental research in which experimenters observed toddlers' tendency to help an adult in light of personal costs, like setting aside a fun toy to go and retrieve an object (Warneken, Hare, Melis, Hanus, & Tomasello, 2007; Warneken & Tomasello, 2009).…”
mentioning, confidence: 99%
“…It might be difficult or even impossible to design a "like-for-like" comparison between human and artificial morality. There is no single "benchmark" that captures the complexity of human moral behavior, such that we could tell when an artificial agent "matches" (or surpasses) human performance (Weidinger, Reinecke, & Haas, 2022). Further, humans often judge others based on intangible properties like "intention," which may have no analog in artificial agents.…”
mentioning, confidence: 99%