2018
DOI: 10.1186/s13040-018-0164-x

Investigating the parameter space of evolutionary algorithms

Abstract: Evolutionary computation (EC) has been widely applied to biological and biomedical data. The practice of EC involves the tuning of many parameters, such as population size, generation count, selection size, and crossover and mutation rates. Through an extensive series of experiments over multiple evolutionary algorithm implementations and 25 problems we show that parameter space tends to be rife with viable parameters, at least for the problems studied herein. We discuss the implications of this finding in practice.
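To make the abstract's parameter list concrete, here is a minimal sketch (not the paper's experimental code) of a generational genetic algorithm whose knobs are exactly those named above: population size, generation count, tournament (selection) size, and crossover and mutation rates. The OneMax objective and all default values are illustrative assumptions.

# Minimal generational GA sketch (hypothetical, not the paper's code) exposing
# the parameters the abstract names: population size, generation count,
# selection (tournament) size, and crossover and mutation rates.
import random

def onemax(bits):
    """Toy fitness: number of 1-bits (maximised)."""
    return sum(bits)

def run_ga(pop_size=100, generations=50, tournament_size=3,
           crossover_rate=0.9, mutation_rate=0.01, genome_len=64, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: fittest of `tournament_size` random picks.
            return max(rng.sample(pop, tournament_size), key=onemax)
        offspring = []
        while len(offspring) < pop_size:
            a, b = select()[:], select()[:]
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, genome_len)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(genome_len):          # per-bit mutation
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                offspring.append(child)
        pop = offspring[:pop_size]
    return max(pop, key=onemax)

best = run_ga()
print(onemax(best), "of 64 bits set")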

Cited by 68 publications (39 citation statements: 1 supporting, 38 mentioning, 0 contrasting) · References 29 publications
“…These values have been shown to produce accurate results, and since a preliminary tuning phase (performed with a grid search) showed no (statistically significant) change in terms of performance with respect to other configurations tested, we relied on these values. This is related to the fact that it has become increasingly clear that GP is very robust to parameter values once a good configuration is determined, as suggested by a recent and in-depth study [33]. Again, as anticipated in Section 3, these results confirm that all the considered classifiers achieve a very similar performance on unseen data, even if the training performance is better for some of them.…”
Section: Results (supporting)
confidence: 76%
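The tuning phase this quote describes can be pictured as a small grid search: run the algorithm under each parameter configuration over several seeds and compare the resulting fitness distributions. The sketch below is a hypothetical illustration of that protocol, not the cited study's code; run_trial is a dummy stand-in for a full EA run.

# Hypothetical grid-search tuning sketch. `run_trial` stands in for
# "run the EA with these parameters and return best fitness"; here it is
# a dummy surrogate so the sketch stays self-contained and runnable.
import itertools, random, statistics

def run_trial(config, seed):
    # Placeholder: replace with an actual EA run (e.g. a GP classifier).
    rng = random.Random(seed)
    return rng.gauss(config["pop_size"] ** 0.5, 1.0)

grid = {
    "pop_size": [50, 100, 200],
    "mutation_rate": [0.005, 0.01, 0.05],
    "crossover_rate": [0.7, 0.9],
}

results = {}
for values in itertools.product(*grid.values()):
    config = dict(zip(grid, values))
    # Several seeded repeats per configuration, so configurations can be
    # compared statistically (e.g. a rank-sum test) rather than on one run.
    fits = [run_trial(config, seed) for seed in range(10)]
    results[values] = (statistics.mean(fits), statistics.stdev(fits))

for cfg, (mean, sd) in sorted(results.items(), key=lambda kv: -kv[1][0]):
    print(cfg, f"mean={mean:.2f} sd={sd:.2f}")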
“…The function set contained only the four basic arithmetic operators (+, −, ×, and ÷, protected against division by zero as in [60]), plus the Maximum (max) and Minimum (min) operators. Although there is a vast array of tunable parameters even in the most basic GP system, normally they do not substantially influence the outcome in terms of best fitness achieved [61].…”
Section: Genetic Programming and Supervised Learning (mentioning)
confidence: 99%
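Protected division, as mentioned in the quote, simply guards the ordinary operator against a zero denominator. The sketch below assumes one common convention, returning 1.0 when the denominator is near zero; the exact convention used in [60] may differ.

# Sketch of a protected-division primitive and the quoted function set.
def protected_div(a, b, eps=1e-9):
    # Return a safe constant instead of raising ZeroDivisionError.
    return a / b if abs(b) > eps else 1.0

# The quoted function set: +, -, ×, protected ÷, max, min.
FUNCTION_SET = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": protected_div,
    "max": max,
    "min": min,
}

print(FUNCTION_SET["/"](1.0, 0.0))   # -> 1.0, no exception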
“…Evolutionary computation generates the first set of possible solutions and then produces iterative updates by the stochastic deletion of less desirable solutions and introduction of random minor changes. In a biological sense, each population of solutions is exposed to either natural or artificial selection and mutation [15].…”
Section: Evolutionary Computation (mentioning)
confidence: 99%
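The loop this quote describes, initialize a population of solutions, stochastically delete less desirable ones, and introduce random minor changes, can be sketched in a few lines. The objective, mutation scale, and pairwise-tournament deletion below are illustrative assumptions, not drawn from the cited work.

# Minimal sketch of the quoted loop: stochastic deletion of weaker
# solutions plus small random perturbations of the survivors.
import random

rng = random.Random(42)

def fitness(x):
    # Toy objective: maximise -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

pop = [rng.uniform(-10, 10) for _ in range(20)]   # initial random solutions
for _ in range(200):
    # Stochastic deletion: a random pairwise "death tournament"
    # removes the weaker of two sampled individuals.
    i, j = rng.sample(range(len(pop)), 2)
    loser = i if fitness(pop[i]) < fitness(pop[j]) else j
    survivor = pop[j] if loser == i else pop[i]
    # Random minor change: replace the loser with a mutated survivor.
    pop[loser] = survivor + rng.gauss(0.0, 0.1)

print(max(pop, key=fitness))   # converges close to 3.0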