2022
DOI: 10.1613/jair.1.13922

Automated Dynamic Algorithm Configuration

Abstract: The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning parameters, there is still a lot of untapped potential, as the learned configuration is static, i.e., parameter settings remain fixed throughout the run. However, it has been shown that some algorithm parameters are best adjusted dynamically during execution. Thus far, thi…
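To make the static-versus-dynamic distinction in the abstract concrete, below is a minimal, self-contained Python sketch (not taken from the paper; the toy (1+1)-ES, the sphere objective, and the 1/5th success rule are illustrative assumptions). It contrasts keeping a single parameter, the mutation step size, fixed for the whole run with adjusting it dynamically during execution via a simple hand-crafted rule standing in for a learned policy.

```python
# Illustrative sketch: static vs. dynamic configuration of one parameter
# (mutation step size sigma) in a toy (1+1)-ES minimizing the sphere function.
import random


def sphere(x):
    return sum(v * v for v in x)


def one_plus_one_es(steps=200, sigma=1.0, dynamic=False, dim=5, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(steps):
        y = [v + rng.gauss(0, sigma) for v in x]
        fy = sphere(y)
        success = fy < fx
        if success:
            x, fx = y, fy
        if dynamic:
            # Dynamic configuration: adapt sigma during the run
            # (1/5th-success-style rule as a stand-in for a learned policy).
            sigma *= 1.5 if success else 0.9
    return fx


print("static sigma :", one_plus_one_es(dynamic=False))
print("dynamic sigma:", one_plus_one_es(dynamic=True))
```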

Cited by 17 publications (3 citation statements)
References 103 publications
“…Reinforcement learning excels at decision-making tasks, achieving a series of successes (Zhou, Li, and Wang 2020; Yang et al. 2022; Haarnoja et al. 2018; Wang et al. 2022, 2023b; Kuang et al. 2022; Liu et al. 2023; Zhou et al. 2020) and finding increasing applications in specific tasks (Adriaensen et al. 2022). We consider the stopping strategy as a reinforcement learning problem as well.…”
Section: Problem Formulation (mentioning)
confidence: 99%
“…Methods in AutoRL can be placed on a spectrum of automation, where on one end would be methods to select pipelines and on the other would be methods that try to discover new algorithms from the ground up in a data-driven manner (Oh et al., 2020). Techniques from the Automated Machine Learning literature (Hutter et al., 2019) then transfer to the RL setting, including algorithm selection (Laroche & Feraud, 2022), hyperparameter optimization (Li et al., 2019; Parker-Holder et al., 2020; Wan et al., 2022), dynamic configurations (Adriaensen et al., 2022), learned optimizers, and neural architecture search (Wan et al., 2022). Similarly, techniques from the Evolutionary optimization and Meta-Learning literature naturally transfer to this setting, with methods aiming to meta-learn parts of the RL pipeline such as update rules (Oh et al., 2020), loss functions (Salimans et al., 2017; Kirsch et al., 2020), symbolic representations of algorithms (Alet et al., 2020; Co-Reyes et al., 2021; Luis et al., 2022), or concept drift (Lu et al., 2022).…”
Section: Automated Reinforcement Learning (AutoRL) (mentioning)
confidence: 99%
“…Dynamic Algorithm Configuration (DAC) offers an automated solution to the task of setting algorithm hyperparameters dynamically, by determining well-performing hyperparameter schedules or policies. One way to learn such policies is through Reinforcement Learning (RL) [1,3]. While conceptually appealing, RL algorithms have the notorious tendency to significantly overfit their training environments [14,15,19].…”
Section: Introduction (mentioning)
confidence: 99%
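The DAC-as-RL framing in the excerpt above can be pictured as a simple observe-act-reward loop: the policy observes features of the running target algorithm, picks the next hyperparameter value, and is rewarded by the per-step improvement. The Python sketch below is illustrative only and not the paper's or any library's API; `ToyTargetAlgorithm` and `random_policy` are hypothetical placeholders for a configurable target algorithm and a learned policy.

```python
# Minimal sketch of dynamic algorithm configuration as an RL-style loop.
# Names are placeholders; a real DAC setup would train the policy with RL.
import random


class ToyTargetAlgorithm:
    """Stand-in for a configurable iterative algorithm (here: noisy descent)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.value = 100.0  # current objective value

    def observe(self):
        # Features the DAC policy can condition on (the "state").
        return (self.value,)

    def step(self, step_size):
        # Apply one iteration with the chosen hyperparameter value.
        old = self.value
        self.value = max(0.0, self.value - step_size + self.rng.gauss(0, 0.1))
        return old - self.value  # reward: improvement in this step


def random_policy(state, rng=random.Random(1)):
    # Placeholder for a learned policy mapping state -> hyperparameter value.
    return rng.choice([0.1, 0.5, 1.0, 2.0])


algo, total_reward = ToyTargetAlgorithm(), 0.0
for t in range(50):
    state = algo.observe()
    action = random_policy(state)       # dynamic hyperparameter choice
    total_reward += algo.step(action)   # environment transition + reward
print("total improvement:", round(total_reward, 2))
```

In an actual DAC setting, `random_policy` would be replaced by a policy trained with an RL algorithm across many instances, which is exactly where the overfitting concern raised in the excerpt becomes relevant.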