Proceedings of the Genetic and Evolutionary Computation Conference 2018
DOI: 10.1145/3205455.3205519
The linear hidden subset problem for the (1 + 1) EA with scheduled and adaptive mutation rates

Cited by 6 publications (4 citation statements) | References 12 publications
“…The setting where k corresponds to an unknown initial number of bits which impact fitness has become known as the initial segment uncertainty model. The closely related hidden subset problem, which is analogous to the initial segment model except that the k meaningful bits can be anywhere in the bitstring, has also been studied for LeadingOnes_k and OneMax_k [15,16,26]. Since our algorithm always flips all bits with equal probability during mutation, our results immediately extend to this class of problems.…”
Section: Optimisation Against An Adversary
confidence: 85%
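To make the model concrete, here is a minimal Python sketch of the hidden subset problem described above; the function names and the seeded RNG are illustrative choices of ours, not taken from the cited papers.

```python
import random

def make_onemax_k(n, k, rng=random.Random(0)):
    """OneMax_k instance: fitness counts the ones on k hidden positions."""
    hidden = rng.sample(range(n), k)            # adversary's hidden subset
    return lambda x: sum(x[i] for i in hidden)

def make_leadingones_k(n, k, rng=random.Random(0)):
    """LeadingOnes_k instance: leading ones along a hidden ordered subset."""
    hidden = rng.sample(range(n), k)            # hidden positions, in order
    def f(x):
        count = 0
        for i in hidden:
            if x[i] != 1:
                break
            count += 1
        return count
    return f
```

Because a uniform mutation operator treats every position identically, an algorithm's behaviour on such an instance does not depend on where the hidden subset lies, which is exactly why results for the initial segment model can transfer to the hidden subset problem.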
“…They also show the log^{1+ε} k term can be further reduced by more carefully choosing the positional bit-flip probabilities or the distribution Q; however, a follow-up work shows that the upper bound for both of these algorithms is nearly tight, that is, the expected runtime is ω(k^2 log k) [15]. In [26], a different sort of self-adjusting (1 + 1) EA is introduced for the hidden subset problem on the class of linear functions. Rather than adjusting the mutation rate in each generation during the actual search process, the algorithm instead spends O(k) generations approximating the hidden value k, and then O(k log k) generations actually optimising f_k now that k is approximately known.…”
Section: Optimisation Against An Adversary
confidence: 99%
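The two-phase idea attributed to [26] can be sketched as follows. The estimation rule in phase 1 (probing single random bit flips and counting how often fitness changes) is a stand-in assumption of ours that suits OneMax_k-style functions, not the estimator from the paper; the point is only the structure: first approximate k, then run a (1 + 1) EA with the mutation rate scaled to the estimate.

```python
import random

def two_phase_ea(f, n, probes=1000, budget=100_000, rng=random.Random(0)):
    x = [rng.randint(0, 1) for _ in range(n)]

    # Phase 1: estimate k. A uniformly random position is relevant with
    # probability k/n, so n * (observed hit rate) estimates k.
    base, hits = f(x), 0
    for _ in range(probes):
        i = rng.randrange(n)
        x[i] ^= 1                       # probe: flip one bit...
        if f(x) != base:
            hits += 1
        x[i] ^= 1                       # ...and undo the flip
    k_hat = max(1, round(n * hits / probes))

    # Phase 2: plain (1+1) EA, but with mutation rate ~ 1/k_hat rather
    # than 1/n, so the (few) relevant bits are flipped often enough.
    fx = f(x)
    for _ in range(budget):
        y = [b ^ (rng.random() < 1 / k_hat) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx, k_hat
```

Pairing this with make_onemax_k from the earlier sketch, e.g. two_phase_ea(make_onemax_k(1000, 20), 1000), shows the intended effect: with k_hat close to k, each offspring flips about one relevant bit in expectation, matching the usual (1 + 1) EA setup on a k-bit problem.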
“…We note that such scenarios have been analyzed theoretically, and different ways to deal with this unknown solution length have been proposed. Efficient EAs can achieve almost the same performance (in asymptotic terms) as EAs that "know" the problem dimension [Einarsson et al. (2018); Doerr et al. (2017)]. Dummy variables are also among the characteristics of the benchmark functions contained in Facebook's nevergrad platform [Rapin and Teytaud (2018)], which might be seen as evidence for practical relevance.…”
Section: The Basic Transformations
confidence: 99%
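The "dummy variables" transformation mentioned here is easy to state in code: embed a d-dimensional base function into n > d dimensions so that the remaining coordinates have no effect. The helper below is a hypothetical sketch of that construction, not nevergrad's actual API.

```python
import random

def add_dummy_variables(base_f, d, n, rng=random.Random(0)):
    """Wrap base_f (defined on d variables) so that it reads only d of
    n input coordinates; the other n - d coordinates are dummies."""
    relevant = rng.sample(range(n), d)          # coordinates that matter
    return lambda x: base_f([x[i] for i in relevant])

# Example: a 5-bit OneMax hidden among 50 variables.
f = add_dummy_variables(sum, d=5, n=50)
```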