Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation 2010
DOI: 10.1145/1830483.1830749

Quasirandom evolutionary algorithms

Abstract: Motivated by recent successful applications of the concept of quasirandomness, we investigate to what extent such ideas can be used in evolutionary computation. To this aim, we propose different variations of the classical (1+1) evolutionary algorithm, all imitating the property that the (1+1) EA over intervals of time touches all bits roughly the same number of times. We prove bounds on the optimization time of these algorithms for the simple OneMax function. Surprisingly, none of the algorithms achieves the s…
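To make the setting concrete, here is a minimal Python sketch of the classical (1+1) EA on OneMax, together with one possible "quasirandom" variant in which every bit is flipped exactly once per window of n iterations. The variant is only an illustrative interpretation of the equal-touching property described in the abstract, not necessarily one of the paper's proposed algorithms.

```python
import random

def onemax(x):
    """Fitness: the number of one-bits in the bit string."""
    return sum(x)

def one_plus_one_ea(n, max_iters=10**6, seed=None):
    """Classical (1+1) EA: flip each bit independently with probability 1/n,
    keep the offspring if it is at least as fit as the parent.
    Returns the number of iterations until OneMax is optimized."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = onemax(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t
    return None

def quasirandom_ea(n, max_iters=10**6, seed=None):
    """One illustrative 'quasirandom' variant (an assumption for this
    sketch, not necessarily one of the paper's algorithms): in every
    window of n iterations each bit is flipped exactly once, in a
    freshly shuffled order, so all bits are touched equally often."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    order = []
    for t in range(1, max_iters + 1):
        if not order:                 # start a new window of n iterations
            order = list(range(n))
            rng.shuffle(order)
        i = order.pop()               # each bit is flipped once per window
        y = list(x)
        y[i] ^= 1
        fy = onemax(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t
    return None
```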

Cited by 42 publications (36 citation statements); references 14 publications.
“…Their use led to superior results for random walks (Cooper and Spencer (2006)), communication problems (Doerr et al. (2008, 2009)) and load balancing problems (Friedrich et al. (2010)) in networks. Doerr et al. (2010a) show that, surprisingly, it is not straightforward to find a useful quasirandom variant of a simple (1+1) EA. Even on a simple function like OneMax, some of the obvious quasirandom (1+1) EAs perform unexpectedly poorly.…”
Section: QEA (mentioning)
confidence: 99%
“…Only recently, it was shown by Doerr et al. (2010a) that the lower bound is at least (1 − o(1)) · en ln n, which means that the leading constant is exactly e. Their proof implicitly shows that the lower-order terms are of order O(n ln ln n). Using different techniques, namely a variant of the fitness-based partitions for lower bounds, Sudholt (2010) has proven the expected optimization time to be at least en ln n − 2n log log n − 16n.…”
Section: Tight Upper and Lower Bounds for the Classical (1+1) EA (mentioning)
confidence: 99%
“…In other words, if a function is easier to optimize than OneMax, then this can only be due to the fact that it has more than one global optimum. The general lower bound then follows from the following theorem by Doerr, Fouz and Witt [DFW10], which provides a lower bound for OneMax.…”
Section: The (1+1) EA Optimizes OneMax Faster Than Any Function With … (mentioning)
confidence: 99%
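Written out, the argument quoted here chains two facts (a restatement, with T_f denoting the optimization time of the (1+1) EA on f): the easiness of OneMax among functions with a unique global optimum, and the OneMax lower bound of [DFW10].

```latex
% For any f : \{0,1\}^n \to \mathbb{R} with a unique global optimum,
% the quoted easiness result and the OneMax lower bound give
\[
  \mathbb{E}[T_f] \;\ge\; \mathbb{E}[T_{\mathrm{OneMax}}]
  \;\ge\; (1 - o(1))\, e\, n \ln n .
\]
```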
“…Such analyses are more exact than purely asymptotic ones and therefore typically harder to derive. For instance, while the expected runtime of the simple (1+1) EA on OneMax had been known to be Θ(n ln n) since the early days of the research area, the first tight lower bound of the kind (1 − o(1))en ln n was not proven until 2010 [DFW10, Sud13]. For the more general case of linear functions, a long series of research results was published (e.g., [Jäg11, DJW12]) until Witt [Wit13] finally proved that the expected runtime of the (1+1) EA equals (1 ± o(1))en ln n for any linear function with non-zero weights.…”
Section: Introduction (mentioning)
confidence: 99%
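As a quick empirical sanity check of the leading constant for linear functions (a sketch under illustrative assumptions: positive integer weights, standard bit mutation with rate 1/n, and a small trial count; none of this is taken from the cited papers):

```python
import math
import random

def one_plus_one_ea_linear(weights, max_iters=10**7, seed=None):
    """(1+1) EA with standard bit mutation (rate 1/n) on the linear
    function f(x) = sum_i weights[i] * x[i]; with positive weights the
    unique optimum is the all-ones string."""
    rng = random.Random(seed)
    n = len(weights)
    f = lambda x: sum(w * b for w, b in zip(weights, x))
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, opt = f(x), sum(weights)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == opt:
            return t
    return None

if __name__ == "__main__":
    n, trials = 200, 20
    weights = [random.randint(1, 100) for _ in range(n)]  # illustrative weights
    runs = [one_plus_one_ea_linear(weights, seed=s) for s in range(trials)]
    avg = sum(runs) / trials
    print(f"average runtime: {avg:.0f}   e*n*ln(n): {math.e * n * math.log(n):.0f}")
```

For n = 200 the average typically lands in the same ballpark as e · n · ln n ≈ 2880, although the lower-order terms of the kind quoted above are still clearly visible at this problem size.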