Proceedings of the Genetic and Evolutionary Computation Conference 2021
DOI: 10.1145/3449639.3459285
PSB2: The Second Program Synthesis Benchmark Suite

Abstract: For the past six years, researchers in genetic programming and other program synthesis disciplines have used the General Program Synthesis Benchmark Suite to benchmark many aspects of automatic program synthesis systems. These problems have been used to make notable progress toward the goal of general program synthesis: automatically creating the types of software that human programmers code. Many of the systems that have attempted the problems in the original benchmark suite have used it to demonstrate perfor…

Cited by 35 publications (2 citation statements) · References 38 publications
“…We use a core set of 12 problems with a range of difficulties and requirements for many of our experiments, and expand that set to 26 problems (all of the problems from the suite that have been solved by at least one program synthesis system) for one experiment. We additionally compare down-sampled lexicase selection to standard lexicase selection on the 25 problems of PSB2, the second iteration of general program synthesis benchmark problems (Helmuth & Kelly, 2021). As in , we define each problem's specifications as a set of input/output examples, so that GP has no knowledge of the underlying problems besides these examples.…”
Section: Experimental Methods
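The selection methods compared in this citation statement can be sketched in a few lines. The following is a minimal illustrative Python sketch, not the cited authors' implementation: `lexicase_select` filters candidates case by case in a random order, and the down-sampled variant simply runs the same procedure on a random subsample of the training cases (in practice the subsample is typically drawn once per generation and shared across selection events). The `error` callable and the toy individuals in the usage example are hypothetical.

```python
import random

def lexicase_select(population, cases, error):
    """Lexicase selection: shuffle the cases, then repeatedly keep only
    the candidates with the best (lowest) error on the next case, until
    a single candidate remains or the cases are exhausted."""
    candidates = list(population)
    for case in random.sample(cases, len(cases)):
        best = min(error(ind, case) for ind in candidates)
        candidates = [ind for ind in candidates if error(ind, case) == best]
        if len(candidates) == 1:
            return candidates[0]
    # Ties on every case: break the tie at random.
    return random.choice(candidates)

def down_sampled_lexicase_select(population, cases, error, rate=0.25):
    """Down-sampled variant: select using only a random subsample of the
    training cases, reducing evaluation cost per selection event."""
    k = max(1, int(len(cases) * rate))
    return lexicase_select(population, random.sample(cases, k), error)
```

For example, with two toy "programs" where one has zero error on every case, both variants deterministically return the better one:

```python
pop = ["good", "bad"]
cases = list(range(8))
err = lambda ind, case: 0 if ind == "good" else 1
lexicase_select(pop, cases, err)                        # "good"
down_sampled_lexicase_select(pop, cases, err, rate=0.5) # "good"
```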
“…Table 5 continues the comparison from Table 4 on 25 new problems from PSB2 (Helmuth & Kelly, 2021). These problems were designed to be a step more difficult than those from , and show lower success rates for both standard lexicase selection and down-sampled lexicase selection.…”
Section: Expanding Benchmarking of Down-sampled Lexicase Selection to More Problems