Proceedings of the Genetic and Evolutionary Computation Conference Companion 2017
DOI: 10.1145/3067695.3076060
On the exploitation of search history and accumulative sampling in robust optimisation

Abstract: Efficient robust optimisation methods exploit the search history when evaluating a new solution by using information from previously visited solutions that fall in the new solution's uncertainty neighbourhood. We propose a full exploitation of the search history by updating the robust fitness approximations across the entire search history rather than a fixed population. Our proposed method shows promising results on a range of test problems compared with other approaches from the literature.
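The abstract's core idea — estimating a solution's robust fitness from all previously evaluated points that fall inside its uncertainty neighbourhood, over the whole search history rather than a fixed population — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Archive` class, the box-shaped neighbourhood of half-width `delta`, and the mean-based robust estimate are all assumptions made for the demo.

```python
import numpy as np

class Archive:
    """Stores every evaluated solution so robust-fitness estimates can be
    refreshed across the whole search history, not just the current
    population (illustrative sketch; minimisation assumed)."""

    def __init__(self, delta):
        self.delta = delta   # half-width of the uncertainty neighbourhood
        self.points = []     # decision vectors visited so far
        self.values = []     # their (expensive) true fitness evaluations

    def add(self, x, fx):
        """Record a newly evaluated solution in the history."""
        self.points.append(np.asarray(x, dtype=float))
        self.values.append(float(fx))

    def robust_fitness(self, x):
        """Estimate the robust fitness of x as the mean fitness of all
        archived evaluations inside x's uncertainty neighbourhood."""
        x = np.asarray(x, dtype=float)
        vals = [v for p, v in zip(self.points, self.values)
                if np.all(np.abs(p - x) <= self.delta)]
        # No history near x yet -> no estimate available.
        return float(np.mean(vals)) if vals else float("inf")


# Toy usage: f(x) = x^2 in one dimension, neighbourhood half-width 0.5.
arch = Archive(delta=0.5)
for x in [0.0, 0.3, 1.0]:
    arch.add([x], x ** 2)

# The estimate at 0.1 averages the archived evaluations at 0.0 and 0.3;
# the point at 1.0 lies outside the neighbourhood and is excluded.
est = arch.robust_fitness([0.1])
```

Because every new evaluation lands in the archive, the robust estimates of all nearby historical solutions can be updated at no extra evaluation cost, which is the "full exploitation" the abstract refers to.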

Cited by 2 publications (2 citation statements). References 8 publications.
“…As part of our work in this area, we have begun developing a suite of evolutionary optimisers specifically adapted to BDE systems that can be used together with a basic Python implementation of the parallel solver (Fieldsend et al, 2021). This suite currently contains a particle swarm optimiser that can identify multiple local optima within a given cost function landscape (Fieldsend, 2014) and an algorithm that utilises elite accumulative sampling to locate optima that are robust to disturbances in the parameter space (Alyahya et al, 2017).…”
Section: Discussionmentioning
confidence: 99%
“…It was found that the implicit averaging method can be promising, but it should be accompanied by extra fitness evaluations to refine the final solutions obtained. To handle the same type of uncertainty, Fieldsend et al proposed an Elite Accumulating Sampling (EAS) in 2015 and 2017 [30,31,32]. This technique regularly re-evaluates the elites and accumulates the evaluations, which results in the need for additional function evaluations and higher computational cost.…”
Section: Related Work and Problem Backgroundmentioning
confidence: 99%
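The citation above describes elite accumulative sampling as repeatedly re-evaluating elites under disturbance and accumulating those samples into a progressively refined robust estimate. A hedged sketch of that idea follows; the toy objective `f`, the uniform `disturb` model, and the `Elite` class are assumptions for illustration only, not the cited EAS algorithm itself.

```python
import random

def f(x):
    """Toy objective to be minimised (stand-in for the real cost function)."""
    return x * x

def disturb(x, radius, rng):
    """Assumed disturbance model: uniform noise on the decision variable."""
    return x + rng.uniform(-radius, radius)

class Elite:
    """An elite solution whose disturbed evaluations are accumulated over
    generations, so its robust-fitness estimate sharpens as search proceeds."""

    def __init__(self, x):
        self.x = x
        self.total = 0.0   # running sum of disturbed evaluations
        self.n = 0         # number of accumulated samples

    def resample(self, radius, rng):
        """Spend one extra function evaluation on this elite and keep it."""
        self.total += f(disturb(self.x, radius, rng))
        self.n += 1

    @property
    def robust_estimate(self):
        return self.total / self.n

rng = random.Random(42)
elite = Elite(x=0.2)
for _ in range(1000):          # one extra elite re-evaluation per generation
    elite.resample(radius=0.5, rng=rng)
# The accumulated mean approaches the expected disturbed fitness of the elite.
```

The extra per-generation evaluations are exactly the added computational cost the citing paper notes: accumulation buys estimate accuracy at the price of additional function evaluations.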