2011
DOI: 10.1007/978-3-642-22887-2_47
Towards Heuristic Algorithmic Memory

Abstract: We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We introduce four synergistic update algorithms that use a Stochastic Context-Free Grammar as a guiding probability distribution over programs. The update algorithms adjust production probabilities, re-use previous solutions, learn programming idioms, and discover frequent subprograms. A controlled experiment with a long training sequence shows that …
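The abstract's core idea is to let a Stochastic Context-Free Grammar act as the guiding probability distribution over candidate programs and to sharpen that distribution from solutions. The sketch below is only an illustration of that idea, not the paper's four update algorithms: the toy grammar, the Laplace-smoothed re-estimation rule, and all function names are assumptions introduced here.

```python
"""Minimal sketch (not the paper's implementation) of guiding program
search with a Stochastic Context-Free Grammar and re-estimating
production probabilities from solved problems. The toy grammar, the
success test, and the update rule are illustrative assumptions."""
import random
from collections import defaultdict

# Toy SCFG over a tiny expression language: nonterminal -> list of
# (probability, right-hand side); uppercase strings are nonterminals.
grammar = {
    "EXPR": [(0.4, ["TERM"]),
             (0.3, ["(", "+", "EXPR", "EXPR", ")"]),
             (0.3, ["(", "*", "EXPR", "EXPR", ")"])],
    "TERM": [(0.5, ["x"]), (0.5, ["1"])],
}

def sample(symbol, counts, depth=0, max_depth=6):
    """Sample a derivation, recording which productions were used."""
    if symbol not in grammar:           # terminal symbol
        return [symbol]
    rules = grammar[symbol]
    if depth >= max_depth:              # force the first (cheap) rule
        idx = 0
    else:
        r, idx, acc = random.random(), 0, 0.0
        for i, (p, _) in enumerate(rules):
            acc += p
            if r <= acc:
                idx = i
                break
    counts[(symbol, idx)] += 1
    tokens = []
    for sym in rules[idx][1]:
        tokens.extend(sample(sym, counts, depth + 1, max_depth))
    return tokens

def update_probabilities(solution_counts, smoothing=1.0):
    """Re-estimate production probabilities from Laplace-smoothed counts
    of productions used in successful programs (just one possible update
    rule; the paper describes four synergistic updates)."""
    for nt, rules in grammar.items():
        totals = [solution_counts[(nt, i)] + smoothing for i in range(len(rules))]
        z = sum(totals)
        grammar[nt] = [(totals[i] / z, rhs) for i, (_, rhs) in enumerate(rules)]

# Usage: sample candidate programs, collect production counts for the
# ones that pass a (hypothetical) test, then sharpen the guide.
solution_counts = defaultdict(int)
for _ in range(100):
    used = defaultdict(int)
    program = sample("EXPR", used)
    if "+" in program:                  # stand-in for "program solved the task"
        for key, n in used.items():
            solution_counts[key] += n
update_probabilities(solution_counts)
```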

Cited by 6 publications (4 citation statements)
References 9 publications
“…In 1984, Solomonoff observed that KUSP machines are especially suitable for incremental learning [11]. In our work [25] we found that the incremental learning approach was indeed useful (as in the preceding OOPS algorithm [26]). Here is how we interpreted incremental learning.…”
Section: Incremental Machine Learning (supporting)
confidence: 66%
“…During the experimental tests of our Stochastic Context-Free Grammar-based search and update algorithms [25], we observed that in practice we can realize fast updates while still achieving actual code re-use and a tremendous speed-up. Using only 0.5 teraflop/sec of computing speed and a reference machine choice of R5RS Scheme [27], we solved 6 simple deterministic operator induction problems in 245.1 seconds.…”
Section: Incremental Machine Learning (mentioning)
confidence: 99%
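The "code re-use" mentioned in this excerpt can be pictured as feeding a solved program back into the guiding grammar so later searches can reach it in one expansion. The snippet below is an assumption-laden sketch of that single idea; the grammar, the weight, and the helper name are invented here and do not reflect the paper's actual update algorithms.

```python
"""Illustrative sketch only: one way to re-use a previous solution with
an SCFG guide is to add it as a new weighted production and renormalize."""

def add_solution_as_production(grammar, nonterminal, solution_tokens, weight=0.1):
    """Insert a solved program as an extra alternative for `nonterminal`,
    scaling the existing rules so the probabilities still sum to 1."""
    rules = [(p * (1.0 - weight), rhs) for p, rhs in grammar[nonterminal]]
    rules.append((weight, list(solution_tokens)))
    grammar[nonterminal] = rules
    return grammar

# Usage with a toy grammar: after "( + x 1 )" solves some task, it becomes
# a reusable one-step expansion of EXPR with a small prior weight.
toy_grammar = {
    "EXPR": [(0.5, ["x"]), (0.5, ["(", "+", "EXPR", "EXPR", ")"])],
}
add_solution_as_production(toy_grammar, "EXPR", ["(", "+", "x", "1", ")"])
```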
“…• Teramachine [32]: a universal inductive inference engine that partially solves the memory and parallelism problems.…”
Section: Artificial General Intelligence (mentioning)
confidence: 99%
“…An extremely advanced formulation of the optimizing/goal-following agents is possible due to Solomonoff's general-purpose AI system Alpha, stage 2 [39], which can solve time-limited free-form optimization problems. Alpha is of particular interest to us because it is the culmination of our research program, as teramachine was conceived as a candidate for Alpha, stage 1 [32]. In short, merely collecting as much information as possible is not enough.…”
Section: Advanced Formulation of AI Probe (mentioning)
confidence: 99%