2020
DOI: 10.1609/aaai.v34i03.5617

How the Duration of the Learning Period Affects the Performance of Random Gradient Selection Hyper-Heuristics

Abstract: Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLS_k) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics …
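To make the setting concrete, the following is a minimal sketch of a random gradient selection hyper-heuristic choosing between RLS_1 and RLS_2 on LeadingOnes. It is an illustrative reconstruction, not the paper's exact algorithm: the function names, the tie-accepting elitist rule, and the default values of τ and the evaluation budget are assumptions.

```python
import random

def leading_ones(x):
    """LeadingOnes fitness: number of consecutive 1-bits from the left."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def rls_k(x, k):
    """RLS_k low-level heuristic: flip exactly k distinct, uniformly chosen bits."""
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def random_gradient_hh(n, ks=(1, 2), tau=50, max_evals=1_000_000):
    """Run the hyper-heuristic until the optimum is found or the budget is spent.

    The currently selected operator is kept as long as it produces an
    improvement within tau consecutive steps ("random gradient"); once tau
    steps pass without a strict improvement, a new operator is drawn uniformly.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    k = random.choice(ks)   # currently selected low-level heuristic
    fails = 0               # steps since the last strict improvement
    evals = 0
    while fx < n and evals < max_evals:
        y = rls_k(x, k)
        fy = leading_ones(y)
        evals += 1
        if fy > fx:
            fails = 0       # success: stick with the current heuristic
        else:
            fails += 1
            if fails >= tau:            # learning period exhausted
                k = random.choice(ks)   # re-select uniformly at random
                fails = 0
        if fy >= fx:        # elitist acceptance; ties are accepted
            x, fx = y, fy
    return fx, evals

if __name__ == "__main__":
    best, used = random_gradient_hh(n=100, tau=100)
    print(f"best fitness {best} after {used} evaluations")
```

Longer learning periods τ give each operator more time to prove itself before being replaced, which is exactly the trade-off the paper analyses.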

Cited by 13 publications (3 citation statements) · References 16 publications
“…The length of the learning period is crucial for the success of the hyper-heuristic and can be determined in a self-adjusting manner as shown in [DLOW18]. These works were extended to other benchmark problems in [LOW20a].…”
Section: Hyper-heuristics
confidence: 99%
“…Since the optimal values for the distribution parameters γ and β are different in the exploitation and the exploration phases, future work may consider an adaptation of the parameters to automatically allow them to increase and decrease throughout the run [51]- [53]. Furthermore, the performance of the proposed operators should be evaluated experimentally for classical combinatorial optimisation problems, complementing the theoretical analyses of the worst-case performance, and for real-world applications.…”
Section: Discussion
confidence: 99%
“…The mechanism we propose to use to adapt the mutation rate in the (1+1) AIS is inspired by the 1/5 rule traditionally used in evolutionary computation for continuous optimisation [26]. The method has also been applied successfully in discrete optimisation to automatically adapt the offspring population size of crossover-based algorithms [16,17] and the duration of the learning period in online algorithm selection (i.e., hyper-heuristics) [20,30]. While commonly used in continuous optimisation, the 1/5 rule adaptation has rarely been rigorously studied to automatically adapt the mutation rate in combinatorial optimisation, with probably the only exception […]…”
Section: Adaptive Mechanism
confidence: 99%
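As a pointer for the 1/5 rule referenced in the excerpt above, here is a minimal sketch of the generic update step; the function name, update factor, and clamping bounds are illustrative assumptions rather than the cited papers' settings.

```python
def one_fifth_rule(param, success, factor=1.5, lo=1e-6, hi=0.5):
    """Generic 1/5 success rule update for a positive parameter.

    Multiply by `factor` on success and divide by factor**(1/4) on failure,
    so the parameter drifts nowhere exactly when one in five steps succeeds
    (log-change: p*log(F) - (1 - p)*log(F)/4 = 0  iff  p = 1/5).
    """
    param = param * factor if success else param / factor ** 0.25
    return min(max(param, lo), hi)
```

A (1+1)-style algorithm would call this once per generation, with success indicating whether the offspring replaced the parent; the same update can steer a mutation rate, an offspring population size, or a learning period.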