2022
DOI: 10.48550/arxiv.2206.08896
Preprint

Evolution through Large Models

Abstract: This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of…
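To make the abstract's core idea concrete, below is a minimal sketch of a MAP-Elites-style loop in which an LLM plays the role of the mutation operator. All names here (`llm_propose_mutation`, `evaluate`, `behavior_descriptor`) are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch only: an LLM stands in for a hand-coded GP mutation
# operator inside a MAP-Elites archive loop, as described in the abstract.
import random
from typing import Callable, Dict, Tuple

def map_elites_with_llm(
    seed_program: str,
    llm_propose_mutation: Callable[[str], str],   # program -> mutated program
    evaluate: Callable[[str], float],             # program -> fitness score
    behavior_descriptor: Callable[[str], Tuple],  # program -> archive cell key
    iterations: int = 1000,
) -> Dict[Tuple, Tuple[str, float]]:
    """Toy MAP-Elites loop whose variation comes from an LLM."""
    archive: Dict[Tuple, Tuple[str, float]] = {}
    cell = behavior_descriptor(seed_program)
    archive[cell] = (seed_program, evaluate(seed_program))

    for _ in range(iterations):
        # Pick a random elite and ask the LLM for a plausible edit.
        parent, _ = random.choice(list(archive.values()))
        child = llm_propose_mutation(parent)

        fitness = evaluate(child)
        cell = behavior_descriptor(child)
        # Keep the child only if its cell is empty or it beats the incumbent.
        if cell not in archive or fitness > archive[cell][1]:
            archive[cell] = (child, fitness)
    return archive
```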

Cited by 8 publications (16 citation statements)
References 34 publications
“…This work builds on previous results that highlight the potential of LMs to generate [69] or augment [26] data, although here the focus is on enabling continual evolution. It is also closely related to work demonstrating that LMs can be trained to embody intelligent mutation operators for code [46], i.e. by training a model on changes to code files gathered from GitHub; Lehman et al. [46] also proposed a way to generate domain-specific mutations from hand-designed prompts for LMs.…”
Section: Intelligent Variation Operators
“…It is also closely related to work demonstrating that LMs can be trained to embody intelligent mutation operators for code [46], i.e. by training a model on changes to code files gathered from GitHub; Lehman et al. [46] also proposed a way to generate domain-specific mutations from hand-designed prompts for LMs. Such operators may require fine-tuning a model on mutation-like training data gathered from GitHub or hand-specifying example mutations, and have been applied only to the domain of code.…”
Section: Intelligent Variation Operators
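The statement above contrasts two ways of obtaining such an operator: fine-tuning on diff-like data from GitHub, or steering a general code model with hand-designed prompts containing example mutations. The following sketch illustrates the prompt-based variant under stated assumptions; `complete` is a stand-in for any text-completion call, and the example edit is illustrative rather than taken from Lehman et al. [46].

```python
# Hedged sketch of a prompt-based mutation operator: a hand-written few-shot
# prompt with example edits steers a general code model, instead of
# fine-tuning it on mutation-like training data.
EXAMPLE_MUTATIONS = """\
# Original:
def area(r): return 3.14 * r * r
# Mutated (use math.pi for precision):
import math
def area(r): return math.pi * r * r
"""

def prompt_mutate(program: str, complete) -> str:
    """Ask a code LM for one small, plausible edit to `program`."""
    prompt = (
        "Apply one small, useful change to the program, in the style of the "
        "examples.\n\n" + EXAMPLE_MUTATIONS + "\n# Original:\n" + program +
        "\n# Mutated:\n"
    )
    return complete(prompt)
```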
“…LLMs can be viewed as a newer metalearning case [249]. Jane X. Wang of the DeepMind team published a review on metalearning in natural learning and AI [250].…”
Section: ANNs and Natural Rule Types