2022
DOI: 10.1145/3578482.3578483

Genetic programming benchmarks

Abstract: The top image shows a set of scales, intended to bring to mind the ideas of balance and fair experimentation that are the focus of our article on genetic programming benchmarks in this issue. Image by Elena Mozhvilo, made available under the Unsplash license at https://unsplash.com/photos/j06gLuKK0GM.

Cited by 7 publications (4 citation statements)
References 23 publications (25 reference statements)
“…The recently proposed SRBench benchmark suite [3,4] presents the most comprehensive, systematic, and reproducible evaluation process for SR algorithms. While SR is the most widely studied problem in GP, assessing and comparing SR methods was mostly done in an ad hoc manner before SRBench [47,48]. Moreover, SRBench provides open-source and easy-to-use implementations of the state-of-the-art methods, covering both GP and non-GP methods.…”
Section: Related Work
confidence: 99%
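The "systematic and reproducible evaluation process" the quote attributes to SRBench rests on a simple discipline: every method is scored with the same metric on identical, seeded train/test splits of every problem, so results are comparable across methods and reruns. A minimal, self-contained Python sketch of such a loop — the function names, the toy problem, and the baseline "method" are illustrative, not SRBench's actual API:

```python
import random

def r2(y_true, y_pred):
    """Coefficient of determination on a held-out split."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def evaluate(fit, problems, seeds=(0, 1, 2)):
    """Score one method on every problem over fixed seeds, so that
    competing methods see identical train/test splits."""
    scores = {}
    for name, (xs, ys) in problems.items():
        per_seed = []
        for seed in seeds:
            idx = list(range(len(xs)))
            random.Random(seed).shuffle(idx)      # same split for every method
            cut = int(0.75 * len(idx))
            train, test = idx[:cut], idx[cut:]
            model = fit([xs[i] for i in train], [ys[i] for i in train])
            per_seed.append(r2([ys[i] for i in test],
                               [model(xs[i]) for i in test]))
        scores[name] = sum(per_seed) / len(per_seed)
    return scores

# Toy problem and a trivial baseline "method": predict the training mean.
xs = [i / 10 for i in range(40)]
ys = [x * x for x in xs]
mean_fit = lambda X, Y: (lambda x, m=sum(Y) / len(Y): m)
print(evaluate(mean_fit, {"toy": (xs, ys)}))
```

Fixing the seeds inside the harness, rather than inside each method, is what makes the comparison fair: every competitor is trained and tested on exactly the same data partitions.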
“…However, according to many authors who have used this dataset, these characteristics are interesting and should be integrated into a reasonable benchmark suite, because they allow us to test the ability of our algorithms to deal with the difficulties and ambiguities that are typical of real-world data. It is not our objective to discuss what characteristics a good benchmark suite should possess (the interested reader is referred to [45][46][47][48] for such a discussion). We simply observe that the Bioavailability dataset, as well as the PPB and LD50 datasets, have …”
Section: Test Problems
confidence: 99%
“…Moreover, the authors state that for the future of benchmarking in LS, it might be useful to further increase the diversity of benchmarks by exploring new Boolean function problems and curating these problems into a new benchmark suite. Therefore, in this work, we follow up on the suggestion of McDermott et al [19] and consider benchmarking in LS from a general perspective. We reflect on the requirements for a general benchmark suite for LS, and bundle together a set of Boolean functions from the major categories commonly used in previous work on GP.…”
Section: Introduction
confidence: 98%
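The Boolean problems being bundled into such suites are typically specified by their full truth tables, and parity and the multiplexer are among the classic GP Boolean benchmarks. A short Python sketch that enumerates the fitness cases for two of them — the function names are illustrative, not taken from the cited suite:

```python
from itertools import product

def parity_cases(n):
    """Truth table for the even-n-parity problem: output is 1 iff
    the number of set input bits is even."""
    return [(bits, int(sum(bits) % 2 == 0))
            for bits in product((0, 1), repeat=n)]

def multiplexer_cases(k):
    """Truth table for the k-address-bit multiplexer (k=2 gives the
    6-multiplexer): the first k bits select one of 2**k data bits."""
    n = k + 2 ** k
    cases = []
    for bits in product((0, 1), repeat=n):
        address = int("".join(map(str, bits[:k])), 2)
        cases.append((bits, bits[k + address]))
    return cases

# Even-3-parity has 8 fitness cases; the 6-multiplexer has 64.
print(len(parity_cases(3)), len(multiplexer_cases(2)))
```

Enumerating every input combination is feasible only for small arities, which is one reason curating a *diverse* set of Boolean functions — rather than simply scaling one family up — matters for a benchmark suite.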
“…At last year's GECCO conference, the work of McDermott et al [20] was awarded the SIGEVO Impact Award, which triggered reflection on the developments of the last decades. Very recently, a follow-up article on the state of development of benchmarking was published by McDermott et al [19]. Besides reviewing well-established GP benchmark suites proposed in recent years, the absence of a Boolean function benchmark suite for logic synthesis (LS) was identified as one of the major gaps.…”
Section: Introduction
confidence: 99%