Proceedings of the Genetic and Evolutionary Computation Conference 2018
DOI: 10.1145/3205455.3205611

On the runtime analysis of selection hyper-heuristics with adaptive learning periods

Abstract: Selection hyper-heuristics are randomised optimisation techniques that select from a set of low-level heuristics which one should be applied in the next step of the optimisation process. Recently it has been proven that a Random Gradient hyper-heuristic optimises the LO benchmark function in the best runtime achievable with any combination of its low-level heuristics, up to lower order terms. To achieve this runtime, the learning period τ, used to evaluate the performance of the currently chosen heuristic, s…
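The mechanism the abstract describes can be made concrete with a short sketch. Below is a minimal Python illustration, assuming (as in the LeadingOnes analyses this line of work builds on) two low-level heuristics, a 1-bit flip and a 2-bit flip. The function names (generalised_random_gradient, flip_k_random_bits) are illustrative, not from the paper, and the acceptance of equal-fitness moves is a design choice of the sketch, not necessarily the paper's.

```python
import random

def leading_ones(x):
    """LO(x): length of the prefix of consecutive one-bits."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def flip_k_random_bits(x, k):
    """Low-level heuristic: flip k distinct, uniformly chosen positions."""
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] ^= 1
    return y

def generalised_random_gradient(n, tau, max_iters=10**6):
    """Sketch: pick a low-level heuristic uniformly at random, run it for
    a learning period of tau iterations, and grant it another period only
    if it produced an improvement within the current one."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    iters = 0
    while fx < n and iters < max_iters:
        k = random.choice([1, 2])        # 1-bit-flip or 2-bit-flip mutation
        improved = True
        while improved and fx < n:       # stay with k while it keeps succeeding
            improved = False
            for _ in range(tau):         # one learning period of tau steps
                y = flip_k_random_bits(x, k)
                fy = leading_ones(y)
                iters += 1
                if fy > fx:              # strict improvement ends the period
                    x, fx, improved = y, fy, True
                    break
                if fy == fx:             # accepting equals is a sketch choice
                    x = y
    return x, iters
```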

Cited by 54 publications (36 citation statements) · References 44 publications
“…So far, such learning-based concepts are rarely used in evolutionary computation. The only theoretical works in this direction propose a history-based choice of the mutation strength [9] and analyze hyperheuristics that stick to a chosen subheuristic until its performance over the last τ iterations, τ a parameter of the algorithms, appears insufficient (see, e.g., [15] and the references therein).…”
Section: Discussion (mentioning, confidence: 99%)
“…The generalized random gradient heuristic was further extended in [DLOW18]. There an operator was defined as successful (which leads to another phase using this operator) if it leads to σ improvements in a phase of at most τ iterations.…”
Section: Beyond Mixing: Advanced Selection Mechanisms (mentioning, confidence: 99%)
“…By using a larger value of σ, the algorithm is able to take more robust decisions on what is a success. This was used in [DLOW18] to determine the phase length τ in a self-adjusting manner. While the previous work [LOW17] does not state this explicitly, the choice of τ is crucial.…”
Section: Beyond Mixing: Advanced Selection Mechanisms (mentioning, confidence: 99%)
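The two statements above can be combined into a single phase routine. The sketch below implements the σ-success test as quoted from [DLOW18]: an operator is successful if it produces σ improvements within at most τ iterations. The multiplicative update of τ (shrink on success, grow on failure, factor F) is an assumed one-fifth-rule-style scheme added for illustration; the paper's exact self-adjusting rule may differ.

```python
def run_phase(x, fx, mutate, fitness, sigma, tau, F=1.5):
    """Sigma-success test quoted above: the current operator is deemed
    successful if it reaches sigma improvements within at most tau
    iterations.  Returns the (possibly improved) solution, its fitness,
    a success flag, and an updated tau.  The multiplicative tau update
    (shrink on success, grow on failure) is an ASSUMED illustrative
    scheme, not necessarily the exact rule of [DLOW18]."""
    successes = 0
    for _ in range(int(tau)):
        y = mutate(x)
        fy = fitness(y)
        if fy > fx:
            x, fx, successes = y, fy, successes + 1
            if successes == sigma:            # phase successful
                return x, fx, True, tau / F   # assumption: shorten tau
    return x, fx, False, tau * F              # assumption: lengthen tau
```

Plugged into the first sketch, a caller would keep the current operator after a successful phase and draw a new one otherwise, e.g. run_phase(x, fx, lambda z: flip_k_random_bits(z, 2), leading_ones, sigma=2, tau=50).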
“…All these references consider the optimization of OneMax, the problem of maximizing the counting-ones function $f\colon \{0,1\}^n \to \mathbb{R},\; x \mapsto \sum_{i=1}^{n} x_i$. Only few theoretical results analyzing algorithms with adaptive parameters consider different functions, e.g., [LOW17, DLOW18, DDK18] (see [DD18b] for a complete list of references). OneMax also plays a prominent role in empirical research on parameter control.…”
Section: Introduction (mentioning, confidence: 99%)
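For reference, the counting-ones function reconstructed in the quote above is a one-liner; contrasted with leading_ones from the first sketch, it shows why OneMax and LeadingOnes probe different algorithm behaviour.

```python
def onemax(x):
    """OneMax: f(x) = sum_{i=1}^{n} x_i, the number of one-bits in x."""
    return sum(x)

# onemax([1, 0, 1, 1]) == 3, while leading_ones([1, 0, 1, 1]) == 1:
# OneMax rewards ones anywhere, LeadingOnes only the unbroken prefix.
```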