Proceedings of the 2020 Genetic and Evolutionary Computation Conference
DOI: 10.1145/3377930.3389838

Versatile black-box optimization

Abstract: Choosing automatically the right algorithm using problem descriptors is a classical component of combinatorial optimization. It is also a good tool for making evolutionary algorithms fast, robust and versatile. We present Shiwa, an algorithm good at both discrete and continuous, noisy and noise-free, sequential and parallel, black-box optimization. Our algorithm is experimentally compared to competitors on YABBOB, a BBOB-comparable testbed, and on some variants of it, and then validated on several real world te…
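For readers who want to try the algorithm described in the abstract, Shiwa is distributed as part of the Nevergrad package. The snippet below is a minimal sketch, assuming a recent Nevergrad install in which "Shiwa" appears in the optimizer registry; the sphere objective, dimension, and budget are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch, assuming "Shiwa" is exposed in Nevergrad's optimizer registry.
# The sphere objective, dimension, and budget are illustrative placeholders.
import nevergrad as ng
import numpy as np


def sphere(x):
    """Toy continuous objective with its minimum at the origin."""
    return float(np.sum(np.asarray(x) ** 2))


# Shiwa selects an underlying optimizer from problem descriptors
# (dimension, budget, parallelism, noise), as described in the abstract.
optimizer = ng.optimizers.registry["Shiwa"](
    parametrization=ng.p.Array(shape=(5,)), budget=200
)
recommendation = optimizer.minimize(sphere)
print(recommendation.value)  # recommended point after the budget is spent
```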

Cited by 20 publications (17 citation statements). References 39 publications.
“…The current focus of Nevergrad is to be seen on the problem side, as it offers several new benchmark problems, such as the structured optimization problems which are aggregated in their own test suite. Nevergrad also provides interfaces to the following benchmark collections: LSGO [45], YABBOB [46], Pyomo [32], MLDA [24], and MuJoCo [56]. The performance evaluation, however, is much more basic than those of COCO or IOHanalyzer, in that only the quality of the finally recommended point(s) is stored, but no information about the search trajectory.…”
Section: Related Benchmarking Environments (mentioning)
confidence: 99%
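To illustrate the evaluation style this excerpt describes, the hedged sketch below records only the loss of each optimizer's final recommendation and keeps no search trajectory; the noisy objective, optimizer names, dimension, and budget are assumptions chosen for illustration, not the cited setup.

```python
# Hedged sketch of recommendation-only scoring, as the excerpt describes for Nevergrad.
# The objective, optimizer names, dimension, and budget are illustrative assumptions.
import nevergrad as ng
import numpy as np


def noisy_sphere(x):
    """Illustrative noisy continuous test function."""
    return float(np.sum(np.asarray(x) ** 2) + np.random.normal(scale=0.1))


final_losses = {}
for name in ["TBPSA", "CMA", "NGOpt"]:  # assumed to be present in the registry
    opt = ng.optimizers.registry[name](
        parametrization=ng.p.Array(shape=(10,)), budget=500
    )
    rec = opt.minimize(noisy_sphere)
    # Only the quality of the finally recommended point is kept;
    # intermediate evaluations (the search trajectory) are not stored.
    final_losses[name] = noisy_sphere(rec.value)

print(final_losses)
```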
“…TBPSA is a specific implementation of pcCMAES [36], a variant of CMAES. It evaluates points with a strong mutation rate and performs small steps in the best direction, relying on the longer-range trends of the objective landscape [37].…”
Section: Evolution Strategy and Bayesian Optimisation (mentioning)
confidence: 99%
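As a rough illustration of the population-based behaviour described above, the sketch below drives TBPSA through Nevergrad's ask/tell interface on a noisy objective; the objective, dimension, budget, and batch size are assumptions, not taken from the cited papers.

```python
# Rough ask/tell sketch with TBPSA via Nevergrad's optimizer registry.
# Objective, dimension, budget, and batch size are illustrative assumptions.
import nevergrad as ng
import numpy as np


def noisy_objective(x):
    """Noisy quadratic: TBPSA targets this kind of noisy landscape."""
    return float(np.sum(np.asarray(x) ** 2) + np.random.normal(scale=0.5))


opt = ng.optimizers.registry["TBPSA"](
    parametrization=ng.p.Array(shape=(8,)), budget=1000, num_workers=10
)
for _ in range(opt.budget // opt.num_workers):
    # Sample a batch of strongly mutated candidates ...
    batch = [opt.ask() for _ in range(opt.num_workers)]
    # ... and feed back their noisy losses, so that small steps follow the averaged trend.
    for cand in batch:
        opt.tell(cand, noisy_objective(cand.value))

print(opt.provide_recommendation().value)
```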
“…(vii) NGOpt [21] is an algorithm which automatically selects the right evolutionary algorithm to be trained out of a set of several algorithms, according to the properties of the optimization problem. NGOpt is implemented in the Nevergrad package.…”
Section: B. Experiments for One Master Face Image (mentioning)
confidence: 99%
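Since NGOpt ships with Nevergrad, a short usage sketch may help; the mixed continuous/categorical search space and toy cost function below are illustrative assumptions, not the face-image experiment from the citing paper.

```python
# Hedged sketch of NGOpt, Nevergrad's algorithm-selection wizard.
# The search space and toy cost function are illustrative assumptions.
import nevergrad as ng


def cost(learning_rate, architecture):
    """Toy objective mixing a continuous and a categorical variable."""
    penalty = {"mlp": 0.0, "cnn": 0.5, "rnn": 1.0}[architecture]
    return (learning_rate - 0.01) ** 2 + penalty


parametrization = ng.p.Instrumentation(
    learning_rate=ng.p.Log(lower=1e-5, upper=1.0),
    architecture=ng.p.Choice(["mlp", "cnn", "rnn"]),
)

# NGOpt inspects problem properties (variable types, dimension, budget, parallelism)
# and delegates to the optimizer it deems appropriate.
optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=300)
recommendation = optimizer.minimize(cost)
print(recommendation.kwargs)  # e.g. the recommended learning_rate and architecture
```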