Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/768
AGI Safety Literature Review

Abstract: The development of Artificial General Intelligence (AGI) promises to be a major event. Along with its many potential benefits, it also raises serious safety concerns (Bostrom, 2014). The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety. A significant number of safety problems for AGI have been identified. We list these, and survey recent research on solving them. We also cover works on how best to think of AGI from the limit…

Cited by 59 publications (40 citation statements)
References 108 publications (162 reference statements)
“…To address these concerns, and for many application areas involving anticipation of human motions these concerns play a central role, transparency, explainability, and interpretability become more and more important criteria for the certification of machine learning driven systems. For a comprehensive review of the current literature addressing these rising concerns about safety and trustworthiness in machine learning see [31]. Funding: This work has been supported in part by Deutsche Forschungsgemeinschaft under grant 313421352 (DFG-Forschergruppe 2535 "Anticipating Human Behavior", projects P3 and P4).…”
Section: Discussion
confidence: 99%
“…There has been a modest amount of work on developing policy solutions to AI risk, with a recent literature review by [4] and Everitt (2016) [5] covering most of it. Some authors have focused on the development of AGI, with proposed solutions ranging from Joy (2000) [6] who calls for a complete moratorium on AGI research, to Hibbard (2002) [7] and Hughes (2007) [8], who advocate for regulatory regimes to prevent the emergence of harmful AGI, to McGinnis (2010), who advocates for the US to steeply accelerate friendly AGI research [9].…”
Section: Introduction
confidence: 99%
“…Some authors have focused on the development of AGI, with proposed solutions ranging from Joy (2000) [6] who calls for a complete moratorium on AGI research, to Hibbard (2002) [7] and Hughes (2007) [8], who advocate for regulatory regimes to prevent the emergence of harmful AGI, to McGinnis (2010), who advocates for the US to steeply accelerate friendly AGI research [9]. Everitt et al (2017) [5] suggests that there should be an increase in AI safety funding. Scherer (2016) [10], however, at least in the context of narrow AI, argues that tort law and the existing legal structures, along with the concentration of AI R&D in large visible corporations like Google, will provide some incentives for the safe development of AI.…”
Section: Introduction
confidence: 99%