2020
DOI: 10.1007/978-3-030-39958-0_1
Characterizing the Effects of Random Subsampling on Lexicase Selection

Cited by 23 publications (47 citation statements)
References 24 publications
“…For these reasons, lexicase selection can be thought of as more faithfully modeling interactions between biological organisms and their environments. Hernandez et al (2019) recently proposed two methods for subsampling the training set each generation when using lexicase selection, which were further studied by Ferguson et al (2019). Down-sampled lexicase selection uses a different random subsample of cases for each generation.…”
Section: Introduction (mentioning)
confidence: 99%
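For readers unfamiliar with the mechanics described in the statement above, here is a minimal sketch of down-sampled lexicase selection: a fresh random subsample of the training cases is drawn each generation, and standard lexicase selection is run against only those cases. The population and case representations and the helper names (`evaluate`, `select_parent`) are illustrative assumptions, not code from the cited papers.

```python
import random

def downsampled_lexicase_generation(population, training_cases, subsample_rate,
                                    evaluate, num_parents):
    """One generation of down-sampled lexicase selection (illustrative sketch).

    evaluate(individual, case) is assumed to return an error (lower is better).
    """
    # Draw a fresh random subsample of the training cases for this generation.
    k = max(1, int(len(training_cases) * subsample_rate))
    subsample = random.sample(training_cases, k)

    # Evaluate every individual only on the subsampled cases.
    errors = {id(ind): {id(c): evaluate(ind, c) for c in subsample}
              for ind in population}

    def select_parent():
        # Standard lexicase selection, restricted to the subsampled cases:
        # shuffle the cases, then repeatedly keep only the candidates that
        # are best on the next case until one candidate remains.
        candidates = list(population)
        cases = subsample[:]
        random.shuffle(cases)
        for case in cases:
            best = min(errors[id(ind)][id(case)] for ind in candidates)
            candidates = [ind for ind in candidates
                          if errors[id(ind)][id(case)] == best]
            if len(candidates) == 1:
                break
        return random.choice(candidates)

    return [select_parent() for _ in range(num_parents)]
```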
“…These computational savings can be recouped by evaluating more individuals throughout evolution. Results from Hernandez et al (2019) and Ferguson et al (2019) indicate that both of these methods improve problem-solving performance compared to standard lexicase selection.…”
Section: Introduction (mentioning)
confidence: 99%
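The budget argument in the statement above can be made concrete with a little arithmetic; the training-set size, subsample rate, population size, and evaluation budget below are hypothetical numbers chosen for illustration, not values from the cited experiments.

```python
# With a subsample rate D, each individual is evaluated on D * |training set|
# cases per generation, so a fixed budget of test-case evaluations supports
# 1/D times as many individual evaluations (more generations and/or a larger
# population).
full_cases = 100          # hypothetical training-set size
subsample_rate = 0.10     # hypothetical down-sampling level
budget = 3_000_000        # total test-case evaluations allowed
pop_size = 1000

evals_per_ind_full = full_cases
evals_per_ind_down = int(full_cases * subsample_rate)

gens_full = budget // (pop_size * evals_per_ind_full)   # 30 generations
gens_down = budget // (pop_size * evals_per_ind_down)   # 300 generations
print(gens_full, gens_down)
```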
“…For reasons discussed in Section 3, we have created datasets consisting of large numbers of inputs and correct outputs for every problem [13]. The dataset for each problem consists of a small number of hand-chosen inputs, often addressing edge cases for the problem, and 1 million randomly-generated inputs falling within the constraints of the problem. We recommend each different program synthesis run use a different set of data, composed of every one of the hand-chosen inputs and a random sample of the randomly-generated inputs.…”
Section: Using PSB2 (mentioning)
confidence: 99%
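A small sketch of the per-run data composition described in that statement follows. The function name, argument names, and sample size are assumptions for illustration only; the actual PSB2 datasets are distributed separately [13].

```python
import random

def compose_run_training_set(hand_chosen_cases, generated_cases, num_random):
    """Build the training set for one program-synthesis run as described above:
    all hand-chosen (edge) cases plus a fresh random sample of the
    randomly-generated cases. Names and sizes here are illustrative."""
    sampled = random.sample(generated_cases, num_random)
    return list(hand_chosen_cases) + sampled

# Example (hypothetical sizes): every run gets all edge cases plus 200 of the
# ~1 million randomly-generated cases.
# train = compose_run_training_set(edge_cases, generated_cases, 200)
```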