2015
DOI: 10.1007/978-3-319-21365-1_40
On the Limits of Recursively Self-Improving AGI

Abstract: Self-improving software has been a goal of computer scientists since the founding of the field of Artificial Intelligence. In this work we analyze limits on computation that might restrict recursive self-improvement. We also introduce Convergence Theory, which aims to predict the general behavior of RSI systems.

Cited by 12 publications (6 citation statements). References 35 publications.
“…Almost exactly a year before, on 23 February 2015, Roman Yampolskiy archived his paper "From Seed AI to Technological Singularity via Recursively Self-Improving Software" [2] which was subsequently published as two peer-reviewed papers at AGI15 [3,4]. In it, Yampolskiy makes arguments similar to those made by Walsh, but also considers evidence in favor of intelligence explosion.…”
Section: Introduction
confidence: 99%
“…Finally, they showed that the problem of deliberately choosing a limited number of deliberation or information gathering actions to disambiguate the state of the world is PSPACE Hard in general [68]. This paper is a part of a two paper set presented at AGI2015 with the complementary paper being: "On the Limits of Recursively Self-Improving AGI" [69].…”
Section: Discussion
confidence: 98%
“…In addition to the potential development of novel deadly pathogens [129], genetically modified humans [130] and other organisms, we are also facing a potential runaway evolutionary process. An outcome of such a process could be the appearance of dangerous and potentially superintelligent robots [131], which may cause human extinction in the same way that a large number of previously existing species went extinct because of the appearance of an intellectually superior species, Homo Sapiens.…”
Section: Discussion
confidence: 99%