Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation 2014
DOI: 10.1145/2576768.2598360
Generic parameter control with reinforcement learning

Abstract: Parameter control in Evolutionary Computing stands for an approach to parameter setting that changes the parameters of an Evolutionary Algorithm (EA) on-the-fly during the run. In this paper we address the issue of a generic and parameter-independent controller that can be readily plugged into an existing EA and offer performance improvements by varying the EA parameters during the problem solution process. Our approach is based on a careful study of Reinforcement Learning (RL) theory and the use of existing R…


Cited by 52 publications (30 citation statements)
References 28 publications
“…In the Q-learning algorithm a single state is used and the ranges of parameter values are discretized a priori into five equally sized intervals, as in the algorithm proposed by Karafotias et al. The considered methods were tested on several real-valued functions with different landscapes and different numbers of local optima. We implemented the EARPC algorithm ourselves and we used the implementation of the method proposed by Karafotias et al., kindly given by the authors of [7].…”
Section: Experiments and Results
confidence: 99%
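The single-state, discretized setup described in this citation can be sketched as follows. This is an illustrative reconstruction, not the cited implementation: the class and method names (`SingleStateQ`, `select_interval`, `sample_value`, `update`) and the learning-rate and exploration constants are assumptions.

```python
import random

class SingleStateQ:
    """Single-state Q-learning over one EA parameter whose continuous range
    [lo, hi] is discretized a priori into five equally sized intervals.
    Illustrative sketch only; constants are assumed, not from the paper."""

    def __init__(self, lo, hi, n_intervals=5, alpha=0.1, epsilon=0.1):
        self.lo, self.hi = lo, hi
        self.n = n_intervals
        self.alpha, self.epsilon = alpha, epsilon
        self.q = [0.0] * n_intervals  # one Q-value per interval (single state)

    def select_interval(self):
        # epsilon-greedy choice among the intervals
        if random.random() < self.epsilon:
            return random.randrange(self.n)
        best = max(self.q)
        return random.choice([i for i, v in enumerate(self.q) if v == best])

    def sample_value(self, i):
        # draw a parameter value uniformly from the chosen interval
        width = (self.hi - self.lo) / self.n
        return self.lo + i * width + random.random() * width

    def update(self, i, reward):
        # with a single state there is no successor-state term
        self.q[i] += self.alpha * (reward - self.q[i])
```

After each generation the controller would observe a reward (e.g. fitness improvement) and call `update` for the interval it used, gradually biasing selection toward the interval that pays off.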
“…In the method proposed by Karafotias et al. [7,8], a modification of the ε-greedy Q-learning algorithm is used. Let k denote the number of parameters being adjusted.…”
Section: Parameter Selection By Reinforcement Learning
confidence: 99%
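With k parameters each discretized into five intervals, a naive ε-greedy controller acts over a joint action space of 5^k interval tuples. The sketch below shows that baseline construction only; it is not the modification of [7,8], and the function names are illustrative.

```python
import itertools
import random

def joint_actions(k, n_intervals=5):
    """Enumerate all joint actions: k-tuples of interval indices.
    Grows as n_intervals ** k, which is why naive tabular Q-learning
    over many parameters becomes expensive."""
    return list(itertools.product(range(n_intervals), repeat=k))

def epsilon_greedy(q, actions, epsilon=0.1):
    """Pick a uniformly random action with probability epsilon,
    otherwise a greedy (highest-Q) action, ties broken at random."""
    if random.random() < epsilon:
        return random.choice(actions)
    best = max(q.get(a, 0.0) for a in actions)
    return random.choice([a for a in actions if q.get(a, 0.0) == best])
```

For example, `joint_actions(2)` yields 25 actions; for k = 4 parameters the table already has 625 entries, which motivates structured alternatives to flat tabular learning.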
“…Typical state features are fitness standard deviation, fitness improvement from parent to offspring, best fitness, and mean fitness [5,10]. Typical reward functions measure improvement achieved over the previous generation [10]. Other parameter control methods use an offline training phase to collect more data about the algorithm than what is available within a single run.…”
Section: Introduction
confidence: 99%
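The state features and reward listed in this citation can be computed directly from population fitness values. A minimal sketch, assuming maximization; the function names (`state_features`, `reward`) are illustrative, not from the cited methods.

```python
import statistics

def state_features(parent_fitness, offspring_fitness):
    """Compute the typical state features named in the citation from
    the parent and offspring populations' fitness values."""
    return {
        "fitness_std": statistics.pstdev(offspring_fitness),
        "improvement": max(offspring_fitness) - max(parent_fitness),
        "best_fitness": max(offspring_fitness),
        "mean_fitness": statistics.fmean(offspring_fitness),
    }

def reward(prev_best, current_best):
    """Typical reward: improvement over the previous generation's best
    (maximization assumed; clipped at zero so regressions give no reward)."""
    return max(0.0, current_best - prev_best)
```

Clipping the reward at zero is one common design choice; some controllers instead pass negative improvements through so the learner is penalized for harmful parameter settings.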