COmparing Continuous Optimizers: numbbo/COCO on Github
2019
DOI: 10.5281/zenodo.2594848

Cited by 20 publications (10 citation statements)
References 0 publications
“…An instance is a rotated or shifted version of the original objective function. All described experiments were run with a recent GitHub version, v2.3.1 [15]. Each algorithm is tested on the 15 available standard instances of the BBOB function set.…”
Section: Test Functions
confidence: 99%
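The excerpt above refers to running a benchmarked algorithm on the 15 standard instances of each BBOB function through the COCO framework. Below is a minimal sketch of such a run using COCO's Python interface (cocoex); the option strings, the dimension choice, and the random-search budget are illustrative assumptions, not the settings of the cited experiments.

```python
# Minimal sketch: iterate over BBOB problems via cocoex and evaluate a
# placeholder optimizer. Option strings below are assumptions based on the
# cocoex documentation, not the configuration used in the cited paper.
import numpy as np
import cocoex  # COCO's experimentation module from numbbo/COCO

# "instances: 1-15" is assumed to select the 15 standard BBOB instances.
suite = cocoex.Suite("bbob", "instances: 1-15", "dimensions: 5")
observer = cocoex.Observer("bbob", "result_folder: demo")

for problem in suite:
    problem.observe_with(observer)  # log evaluations for post-processing
    # placeholder optimizer: plain random search within the box constraints
    for _ in range(100 * problem.dimension):
        x = np.random.uniform(problem.lower_bounds, problem.upper_bounds)
        problem(x)  # evaluate the objective; the observer records the call
```

The logged results can then be processed with COCO's post-processing tools to produce the usual runtime profiles.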
“…Experimental Setup. We assess the performance of PCA-BO on ten multi-modal functions taken from the BBOB problem set [11], and compare the experimental results to a standard BO. This choice of test functions aims to verify the applicability of PCA-BO to more difficult problems, which potentially resemble the features of real-world applications.…”
Section: Methods
confidence: 99%
“…Here, the weighting scheme is meant to take the objective values into account. We assessed the empirical performance of PCA-BO by testing it on the well-known BBOB problem set [11]. Among these problems, we focus on the multi-modal ones, which are most representative of real-world optimization challenges.…”
Section: Introduction
confidence: 99%
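The weighting scheme mentioned in this excerpt gives points with better objective values more influence before the principal components are extracted. The sketch below illustrates that general idea with a hypothetical rank-based weighting; the actual scheme and its integration into PCA-BO are defined in the cited paper.

```python
# Hypothetical weighted-PCA sketch: better (lower) objective values receive
# larger weights when estimating the principal directions of the evaluated
# points. The log-rank weights here are an illustrative assumption only.
import numpy as np

def weighted_pca(X, y, n_components=2):
    """Project X onto the leading components of an objective-weighted PCA."""
    n = len(y)
    ranks = np.argsort(np.argsort(y))        # rank 0 = best point (minimization)
    w = np.log(n) - np.log(ranks + 1)        # better points get larger weights
    w /= w.sum()
    mean = w @ X                             # weighted mean of the sample
    Xc = (X - mean) * np.sqrt(w)[:, None]    # weighted, centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T  # reduced-dimension representation
```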
“…The objective is generated by sampling a c uniformly at random on the hyper-sphere. Following the idea in Hansen et al. [2016, 2019], we define the constraints of such problems by setting the gradient of the first constraint to a_1 = −c to ensure the Karush-Kuhn-Tucker optimality conditions (Kuhn and Tucker [1951], Nocedal and Wright [2006]) hold at (0, …”
Section: Appendix A: Convex Optimization Numerical Illustration
confidence: 99%
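As a brief illustration of why this construction works: assuming, as the sampling of c suggests, a linear objective f(x) = c^T x, and assuming for illustration a linearized first constraint g_1(x) = a_1^T x that is active at the origin, the choice a_1 = −c lets the KKT conditions hold at x* = 0 with multiplier λ_1 = 1 (all other multipliers zero).

```latex
% Sketch of the KKT stationarity and complementary slackness checks at the
% origin under the assumptions stated above (linear objective, linearized
% active first constraint); this is an illustration, not the cited proof.
\begin{align*}
  \nabla f(x^\ast) + \lambda_1 \nabla g_1(x^\ast)
    &= c + \lambda_1 a_1 = c - \lambda_1 c = 0
    \quad \text{for } \lambda_1 = 1,\\
  \lambda_1\, g_1(x^\ast) &= 1 \cdot \bigl(a_1^\top 0\bigr) = 0
    \quad \text{(complementary slackness at } x^\ast = 0\text{)}.
\end{align*}
```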