Proceedings of the 2020 Genetic and Evolutionary Computation Conference (GECCO 2020)
DOI: 10.1145/3377930.3389833

Self-adjusting evolutionary algorithms for multimodal optimization

Abstract: Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms on binary search spaces. However, the vast majority of these studies focus on unimodal functions, which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and offer no obvious benefit for crossing large Hamming gaps. We suggest a mechanism called stagnation detection …
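
The mechanism named in the abstract detects stagnation at a local optimum and then raises the mutation strength. The following Python sketch is an illustration only, under assumptions: the failure threshold, the cap on the strength r, and all names are mine, not taken from this page.

import math
import random

def sd_one_plus_one_ea(f, n, max_evals=100_000):
    # Illustrative stagnation-detection (1+1) EA sketch: flip each bit
    # with probability r/n and count unsuccessful steps; once the counter
    # exceeds a phase-length threshold, an improvement at strength r is
    # unlikely to exist, so r is increased.
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    r, fails = 1, 0
    for _ in range(max_evals):
        y = [bit ^ (random.random() < r / n) for bit in x]
        fy = f(y)
        if fy > fx:
            x, fx, r, fails = y, fy, 1, 0   # improvement: reset strength
        else:
            fails += 1
            # assumed threshold of order (en/r)^r * ln n: long enough that
            # a reachable r-bit improvement would very likely have been found
            if fails > (math.e * n / r) ** r * math.log(n):
                r, fails = min(r + 1, n // 2), 0
    return x, fx

# usage sketch: best, value = sd_one_plus_one_ea(sum, 50)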

Cited by 50 publications (47 citation statements: 3 supporting, 44 mentioning, 0 contrasting). References 40 publications.

“…A promising setting for this bias is learned through a self-adjusting scheme similar in style to the 1/5-rule [2] and related techniques: in an observation phase of length N, two different parameter values are each tried N/2 times, and the value that is relatively more successful is used in the next phase. This approach is thus in line with recent theoretical research on self-adjusting algorithms, in which the concrete implementation of self-adjustment is an ongoing debate [4,5,7,9–11,15,23,24]. See also the recent survey article [3] for in-depth coverage of parameter control, self-adjusting algorithms, and theoretical runtime results.…”
Section: Introduction (supporting)
confidence: 67%
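
Read literally, the excerpt's scheme compares two candidate parameter values inside each observation phase. A minimal sketch of one such phase follows, where the success measure, the tie-breaking rule, and all names are assumptions for illustration:

import random

def next_phase_value(run_step, v_a, v_b, N):
    # One observation phase of length N, as described above: each candidate
    # value is tried N/2 times, and the relatively more successful value is
    # carried into the next phase. run_step(v) is an assumed callback that
    # returns True iff one step of the algorithm using parameter value v
    # counts as a success.
    wins_a = sum(run_step(v_a) for _ in range(N // 2))
    wins_b = sum(run_step(v_b) for _ in range(N // 2))
    if wins_a == wins_b:
        return random.choice([v_a, v_b])  # tie: pick either (assumption)
    return v_a if wins_a > wins_b else v_b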
“…A value of 𝛽 = 1 is the best possible if more than 𝑛^𝜖 bits, for an arbitrarily small constant 𝜖 > 0, have to be flipped to escape from the local optimum. We point out that the best possible expected runtime achievable with SBM is 𝑂((𝑒𝑛/𝑘)^𝑘), which is matched by the stagnation detection adaptive (1+1) EA [35].…”
Section: Multimodal Functions (mentioning)
confidence: 82%
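
For intuition on the quoted 𝑂((𝑒𝑛/𝑘)^𝑘) bound, here is the standard waiting-time calculation (my reconstruction, not part of the excerpt). Standard bit mutation with rate 𝑝 flips a fixed set of 𝑘 bits, and no others, with probability 𝑝^𝑘(1−𝑝)^{𝑛−𝑘}, which is maximized at 𝑝 = 𝑘/𝑛:

\[
  p^{k}(1-p)^{n-k}
  \;\le\; \left(\frac{k}{n}\right)^{k}\left(1-\frac{k}{n}\right)^{n-k}
  \;\approx\; \left(\frac{k}{n}\right)^{k} e^{-k}
  \;=\; \left(\frac{k}{en}\right)^{k},
\]

so the expected waiting time for such a 𝑘-bit jump is about (𝑒𝑛/𝑘)^𝑘 even at the best static rate, which is the bound the excerpt says stagnation detection matches.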
“…Future work should further evaluate, experimentally and theoretically, the performance of the proposed algorithms on classical combinatorial optimization problems and real-world applications. We note that a different self-adjusting mechanism has recently been proposed in the literature with the aim of increasing the standard bit mutation rate of evolutionary algorithms upon detecting local optima [35]. While that mechanism was not designed with the aim of enhancing non-elitist search heuristics as well as the mutation operator in the presence of local optima, a comparison with our own proposed mechanism should be performed in the near future.…”
Section: Discussion (mentioning)
confidence: 99%
“…We finish with some conclusions. Due to space restrictions, several proofs had to be omitted from this paper and have been replaced by proof sketches; the full proofs can be found in the preprint [26].…”
Section: Introduction (mentioning)
confidence: 99%