2022
DOI: 10.48550/arxiv.2207.00868
Preprint

The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial

Abstract: The value-alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems, or, more loftily, to design robustly beneficial or ethical artificial age…

Cited by 0 publications
References 123 publications (145 reference statements)