The 2020 Conference on Artificial Life
DOI: 10.1162/isal_a_00331

Interaction Between Evolution and Learning in NK Fitness Landscapes

Cited by 5 publications (4 citation statements). References 0 publications.
“…The main motivation for combining evolution and deep RL is the improved performance that may result from the combination. For instance, through simple experiments with simple fitness landscapes and simplified versions of the components, combining evolution and RL can be shown to work better than using either of the two in isolation (Todd et al., 2020). Why is this so?…”
Section: Evolution of Policies for Performance
confidence: 99%
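
The excerpt's comparison is easy to reproduce in miniature. The following is a minimal, illustrative sketch (not the code of Todd et al., 2020): an NK fitness landscape plus a small population loop in which lifetime learning is a greedy bit-flip hill climber standing in for RL, and selection acts on learned fitness while the unlearned genome is inherited (a Baldwinian setup). The parameter values and helper names (fitness, learn, evolve) are assumptions made for illustration.

import random

# Illustrative NK parameters (assumed): N loci, each epistatically coupled to K others.
N, K = 15, 4
random.seed(0)

# Each locus depends on itself plus K randomly chosen other loci.
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
table = {}  # lazily filled lookup: (locus, local bit pattern) -> random contribution

def fitness(genome):
    """NK fitness: mean of per-locus contributions over the genome."""
    total = 0.0
    for i in range(N):
        key = (i, genome[i]) + tuple(genome[j] for j in neighbours[i])
        if key not in table:
            table[key] = random.random()
        total += table[key]
    return total / N

def learn(genome, steps=10):
    """Lifetime learning: greedy single-bit flips that keep improvements (stand-in for RL)."""
    best, best_f = list(genome), fitness(genome)
    for _ in range(steps):
        trial = list(best)
        trial[random.randrange(N)] ^= 1
        f = fitness(trial)
        if f > best_f:
            best, best_f = trial, f
    return best_f

def evolve(pop_size=30, generations=50, use_learning=True):
    """Truncation selection on (optionally learned) fitness; unlearned genomes are inherited."""
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [((learn(g) if use_learning else fitness(g)), g) for g in pop]
        scored.sort(key=lambda x: x[0], reverse=True)
        parents = [g for _, g in scored[: pop_size // 2]]
        # Offspring: copy a random parent and flip each bit with probability 1/N.
        pop = [[b ^ (random.random() < 1.0 / N) for b in random.choice(parents)]
               for _ in range(pop_size)]
    return max(fitness(g) for g in pop)

print("evolution + learning:", round(evolve(use_learning=True), 3))
print("evolution only      :", round(evolve(use_learning=False), 3))

On most random seeds the learning-enabled run ends at a higher fitness, although it also spends more fitness evaluations per generation; the sketch is only meant to mirror the kind of simplified comparison the excerpt describes.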
“…Increasingly, work combining evolution and lifetime learning mechanisms has been proposed and explored (Todd et al., 2020; Gupta et al., 2021), and this offers yet another opportunity to combine lifetime learning and evolution. The synergy of these two forms of learning might enable training dynamical systems previously too difficult to train using purely lifetime learning or evolutionary approaches alone.…”
Section: Combining Lifetime and Evolutionary Learning
confidence: 99%
“…Particularly, learning from other members of a group generally leads to collective success by exploiting verified information [1–3], which distinguishes social learning from asocial, individual learning, where information comes directly from exploration of the environment. While social learning is intuitively beneficial at first sight, research over the past several decades has consistently shown that naive social imitation is not inherently adaptive and often fails to achieve good group-level performance [4–10]. Instead, current theory suggests that, to properly determine how to learn from others, one should employ a selective heuristic called a social learning strategy (SLS) [5], which governs the internal rules for choosing the proper time, subject, and method for engaging in both social and individual learning.…”
Section: Introduction
confidence: 99%
“…In particular, RL has shown its strength in constructing computational models of the spatiotemporal dynamics of human cooperation and of anthropological problems [32–35]. Here, we model the problem of social learning by considering a group of individuals that iteratively search for better solutions in a rugged, high-dimensional landscape [6, 9, 10, 18, 19, 36], where our goal is to find the optimal heuristic for individuals that yields the maximum average payoff when shared with the group (Fig. 1A).…”
Section: Introduction
confidence: 99%
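
To make the quoted model setup concrete, here is a minimal, illustrative sketch (not the cited papers' implementation): a group of agents searches a rugged binary landscape, and on each step an agent either copies the current best member (social learning) or flips one bit of its own solution (individual learning), with the mix set by a single probability p_social standing in for a social learning strategy. The landscape construction, group size, step count, and the p_social heuristic are assumptions made for illustration.

import random

N = 20           # solution length (assumed)
random.seed(1)
contrib = {}     # lazily built rugged payoff table (NK-style, K = 2 circular neighbours)

def payoff(sol):
    """Rugged payoff: each bit's contribution depends on itself and its next two neighbours."""
    total = 0.0
    for i in range(N):
        key = (i, sol[i], sol[(i + 1) % N], sol[(i + 2) % N])
        if key not in contrib:
            contrib[key] = random.random()
        total += contrib[key]
    return total / N

def run_group(p_social, group_size=20, steps=200):
    """Each step, every agent imitates the best member with prob. p_social, else explores."""
    group = [[random.randint(0, 1) for _ in range(N)] for _ in range(group_size)]
    for _ in range(steps):
        best = max(group, key=payoff)
        new_group = []
        for sol in group:
            if random.random() < p_social:
                candidate = list(best)        # social learning: copy the best peer
            else:
                candidate = list(sol)         # individual learning: flip one random bit
                candidate[random.randrange(N)] ^= 1
            # keep the candidate only if it does not lower this agent's payoff
            new_group.append(candidate if payoff(candidate) >= payoff(sol) else sol)
        group = new_group
    return sum(payoff(s) for s in group) / group_size

for p in (0.0, 0.3, 0.9):
    print(f"p_social={p:.1f}  mean payoff={run_group(p):.3f}")

Sweeping p_social illustrates the trade-off the excerpt points to: heavy imitation spreads good solutions quickly but can leave the whole group stuck on the same local optimum, which is the kind of failure mode a social learning strategy is meant to manage.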