2012
DOI: 10.1007/s11245-012-9128-9
Safety Engineering for Artificial General Intelligence

Abstract: Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In pa…

Cited by 51 publications (53 citation statements)
References 49 publications
“…Perhaps access to external information can be used to mediate the speed of the RSI process. This also has significant implications on safety mechanisms we can employ while experimenting with early RSI systems [55][56][57][58][59][60][61][62][63]. Finally, it needs to be investigated if the whole RSI process can be paused at any point and for any specific duration of time in order to limit any negative impact from a potential intelligence explosion.…”
Section: Other Properties
confidence: 99%
“…This initially appears somewhat pedestrian, but incorporating the objections of Yampolskiy (2013); Yampolskiy and Fox (2013), it becomes more logical: Yampolskiy and Fox consider that AI may become equivalent to, or exceed, human-level intelligence. Reaching human level implies that AI becomes capable of reproducing and improving its own kind.…”
Section: The Psychodynamic Structure Model
confidence: 99%
“…Whereas most literature on politicized skepticism (and similar concepts such as denial) is backward-looking, consisting of historical analysis of skepticisms that have already occurred [1,2,[4][5][6][7], this paper is largely (but not exclusively) forward-looking, consisting of prospective analysis of skepticisms that could occur at some point in the future. Meanwhile, the superintelligence governance literature has looked mainly at institutional regulations to prevent research groups from building dangerous superintelligence and support for research on safety measures [8][9][10][11]. This paper contributes to a smaller literature on the role of corporations in superintelligence development [12] and on social and psychological aspects of superintelligence governance [13].…”
Section: Introduction
confidence: 99%