2020
DOI: 10.1007/978-3-030-58112-1_48

Learning Step-Size Adaptation in CMA-ES

Abstract: An algorithm's parameter setting often affects its ability to solve a given problem, e.g., the population size, mutation rate, or crossover rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming, and typically involves analyzing the algorithm's behavior on simple problems that may not be re…
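To make the abstract's notion of dynamic parameter adjustment concrete, here is a minimal, illustrative sketch (not from the paper) of a (1+1)-ES whose mutation strength is lowered over time by a hand-crafted decay schedule; the sphere objective, decay factor, and all other settings are assumptions chosen only for demonstration.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def one_plus_one_es(dim=10, iterations=200, sigma0=1.0, decay=0.99, seed=0):
    # Illustrative (1+1)-ES: the mutation strength sigma is lowered over
    # time by a fixed, hand-crafted decay schedule (an assumption for
    # demonstration, not the paper's method).
    rng = np.random.default_rng(seed)
    parent = rng.standard_normal(dim)
    f_parent = sphere(parent)
    sigma = sigma0
    for _ in range(iterations):
        child = parent + sigma * rng.standard_normal(dim)
        f_child = sphere(child)
        if f_child <= f_parent:
            parent, f_parent = child, f_child
        sigma *= decay  # hand-crafted dynamic adjustment of the step size
    return f_parent, sigma

f_best, final_sigma = one_plus_one_es()
print(f"best f = {f_best:.3e}, final sigma = {final_sigma:.3e}")
```

Designing such schedules by hand is exactly the tedious process the paper argues can be replaced by learned step-size adaptation.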

Cited by 18 publications (49 citation statements)
References 35 publications
“…As a next step, we plan a closer inspection of concrete AutoML scenarios and existing methods used in these scenarios. This includes algorithm selection [19,18,41], meta algorithm selection [38,39], extreme algorithm selection [40,37], hyperparameter optimization [2,8], combined algorithm selection and hyperparameter optimization [16,36,9], algorithm configuration [15,1,14], and dynamic algorithm configuration [3,6,34,30]. An interesting question is to what extent existing methods can be seen as bounded rational, how they realize metareasoning, and whether they perhaps even do so in a provably optimal manner.…”
Section: Discussion (mentioning)
confidence: 99%
“…The roots of DAC extend to a variety of methods that use RL to control parameters of optimization approaches online. For example, in genetic algorithms (Sakurai et al., 2010; Karafotias et al., 2014), planning algorithms (Pageau et al., 2019; Speck et al., 2021; Bhatia et al., 2021), hyper-heuristics (Özcan et al., 2010), physics simulations (Armstrong et al., 2006), and evolutionary strategies (Shala et al., 2020). The work of Sakurai et al. (2010), Karafotias et al. (2014) and Özcan et al. (2010) in particular can be widely applied to optimization problems, whereas other works are more application focused.…”
Section: Dynamic Methods (mentioning)
confidence: 99%
“…Empirically, we have seen dynamic hyperparameter schedules outperform static settings in fields like Evolutionary Computation [Shala et al, 2020], AI Planning [Speck et al, 2021] and Deep Learning [Daniel et al, 2016]. In addition, hyperheuristics [Ochoa et al, 2012] can also be seen as a form of DAC.…”
Section: Related Work (mentioning)
confidence: 99%
“…In some cases they become part of the algorithm. Dynamic step size adaption in ES using heuristics, for example, is very common, but can be replaced and outperformed by more specific DAC hyperparameter policies [Shala et al, 2020].…”
Section: Related Work (mentioning)
confidence: 99%
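The statement above contrasts hand-crafted step-size heuristics with learned DAC policies. Below is a hedged sketch of how such a policy could be plugged into an ES loop in place of the heuristic update; `StepSizePolicy` is a hypothetical interface and its internal rule is only a stand-in for a model learned with RL (as in Shala et al., 2020), not an actual API from any library.

```python
import numpy as np

class StepSizePolicy:
    """Placeholder for a learned DAC step-size policy (hypothetical interface).

    A real policy, e.g. one trained with RL, would map search-state
    features to the next step size; the rule below only makes the sketch run.
    """
    def __call__(self, success_rate, sigma):
        return sigma * (1.2 if success_rate > 0.2 else 0.8)

def es_with_policy(policy, dim=10, iterations=200, sigma0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    parent = rng.standard_normal(dim)
    f_parent = float(np.sum(parent ** 2))
    sigma, successes = sigma0, 0
    for t in range(1, iterations + 1):
        child = parent + sigma * rng.standard_normal(dim)
        f_child = float(np.sum(child ** 2))
        if f_child <= f_parent:
            parent, f_parent = child, f_child
            successes += 1
        # The policy, rather than a fixed hand-crafted heuristic,
        # decides the next step size from simple search-state features.
        sigma = policy(successes / t, sigma)
    return f_parent

print(f"final f = {es_with_policy(StepSizePolicy()):.3e}")
```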