Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation (GECCO 2015)
DOI: 10.1145/2739480.2754681

Solving Problems with Unknown Solution Length at (Almost) No Extra Cost

Abstract: Most research in the theory of evolutionary computation assumes that the problem at hand has a fixed problem size. This assumption does not always apply to real-world optimization challenges, where the length of an optimal solution may be unknown a priori. Following up on previous work of Cathabard, Lehre, and Yao [FOGA 2011], we analyze variants of the (1+1) evolutionary algorithm for problems with unknown solution length. For their setting, in which the solution length is sampled from a geometric distribution…
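The setting the abstract describes can be made concrete with a small sketch. The following is a minimal illustration only, not the paper's exact algorithm: it assumes a position-dependent flip probability roughly proportional to 1/(i (log i)^(1+ε)), so that the expected number of flips per step stays bounded without knowing the true solution length, and it uses a large working array in place of an unbounded genome. All names and parameters here are hypothetical.

```python
import math
import random

def onemax(x):
    """OneMax fitness: number of one-bits (only a hidden prefix may matter)."""
    return sum(x)

def one_plus_one_ea_unknown_length(fitness, work_len=256, budget=100_000, eps=0.1):
    """(1+1) EA sketch for problems with unknown solution length.

    Instead of the usual uniform rate 1/n (which requires knowing n),
    bit i is flipped with a position-dependent probability whose sum
    over all positions converges, so early bits mutate often and late
    bits rarely. The concrete rate below is an assumption chosen for
    illustration, not the construction from the paper.
    """
    def flip_prob(i):  # i is the 1-based bit position
        return min(0.5, 1.0 / (i * math.log(i + 1) ** (1 + eps)))

    parent = [0] * work_len
    f_parent = fitness(parent)
    for _ in range(budget):
        child = [b ^ (random.random() < flip_prob(i + 1))
                 for i, b in enumerate(parent)]
        f_child = fitness(child)
        if f_child >= f_parent:  # elitist (1+1) selection
            parent, f_parent = child, f_child
    return parent, f_parent
```

With `fitness=onemax` and a hidden relevant prefix much shorter than `work_len`, the decaying rates keep flips wasted on irrelevant tail bits cheap, which is the intuition behind paying "(almost) no extra cost" for not knowing the length.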

Citations: cited by 12 publications (18 citation statements)
References: 17 publications (35 reference statements)
“…As discussed in more detail in Section 1.2, this is a significant speedup compared to an EA using a static choice of mutation rate, which can only achieve Θ(nk) on LeadingOnes_k. This is also an asymptotic speedup over the best-known runtime shown in [16], and indeed is asymptotically optimal among all unary unbiased black-box algorithms [3].…”
Section: Introduction
confidence: 64%
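For context, LeadingOnes_k denotes a LeadingOnes variant in which only k of the bits are relevant to the fitness. A hedged sketch, assuming the hidden-prefix interpretation (the relevant bits being the first k; hidden-subset variants permute them instead):

```python
def leading_ones_k(x, k):
    """LeadingOnes_k sketch: counts consecutive one-bits from the start
    of the k relevant positions. Treating the relevant bits as a prefix
    is an assumption made for illustration.
    """
    count = 0
    for bit in x[:k]:
        if bit != 1:
            break
        count += 1
    return count
```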
“…Rather than adjusting the mutation rate in each generation during the actual search process, the algorithm instead spends O(k) generations approximating the hidden value k, and then O(k log k) generations actually optimising f_k now that k is approximately known. This algorithm not only improves the bound from O(k (log k)^{2+ε}) in [16] to Θ(k log k) for OneMax_k under the hidden subset model, but also determines the implicit constants, (1 ± o(1)) e n ln n, matching the performance of a (1+1) EA which knows k in advance. However, it remained to be demonstrated whether an EA could similarly solve the LeadingOnes_k problem at no extra cost when k was unknown.…”
Section: Optimisation Against An Adversary
confidence: 69%
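The two-phase scheme this statement describes can be sketched as follows. Both routines are illustrative assumptions rather than the cited paper's procedure: phase 1 probes single-bit flips to estimate the hidden number k of relevant bits, and phase 2 runs a plain (1+1) EA with mutation rate 1/k̂ as if the estimate were the true length.

```python
import random

def estimate_relevant_length(fitness, work_len, trials=200):
    """Phase 1 sketch: estimate the hidden number k of relevant bits by
    flipping single positions and counting which flips change the
    fitness. The sampling scheme is an illustrative assumption.
    """
    x = [random.randint(0, 1) for _ in range(work_len)]
    f_x = fitness(x)
    hits = 0
    for _ in range(trials):
        i = random.randrange(work_len)
        x[i] ^= 1
        if fitness(x) != f_x:
            hits += 1
        x[i] ^= 1  # undo the probe flip
    # Fraction of fitness-relevant positions, scaled up to work_len.
    return max(1, round(work_len * hits / trials))

def optimise_with_estimate(fitness, work_len, k_hat, budget=100_000):
    """Phase 2 sketch: a plain (1+1) EA using mutation rate 1/k_hat,
    treating the estimated length as if it were known in advance.
    """
    parent = [0] * work_len
    f_parent = fitness(parent)
    for _ in range(budget):
        child = [b ^ (random.random() < 1.0 / k_hat) for b in parent]
        f_child = fitness(child)
        if f_child >= f_parent:  # elitist (1+1) selection
            parent, f_parent = child, f_child
    return parent, f_parent
```

The design point the statement emphasises is that estimation is cheap relative to optimisation: O(k) probing generations are dominated by the Θ(k log k) optimisation phase, so the total cost matches an EA that knew k from the start.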