2020
DOI: 10.1016/j.asoc.2020.106737

Benchmarking large-scale continuous optimizers: The bbob-largescale testbed, a COCO software guide and beyond

Abstract: Benchmarking of optimization solvers is an important and compulsory task for performance assessment that in turn can help in improving the design of algorithms. It is a repetitive and tedious task. Yet, this task has been greatly automatized in the past ten years with the development of the Comparing Continuous Optimizers platform (COCO). In this context, this paper presents a new testbed, called bbob-largescale, that contains functions ranging from dimension 20 to 640, compatible with and extending the well-k…

Cited by 18 publications (10 citation statements); references 17 publications.
“…We believe that the time to call a flacco function from pflacco is negligible when 𝑛 is large enough. We used the BBOB function set [11] for 𝑛 ∈ {2, 3, 5, 10} and its large-scale version [48] for 𝑛 ∈ {20, 40, 80, 160, 320, 640}. Both function sets are available in the COCO platform [10].…”
Section: Methods (mentioning)
confidence: 99%
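The function sets quoted above can be instantiated directly from the COCO platform's Python experimentation module cocoex. The following is a minimal sketch assuming the coco-experiment package is installed; it is illustrative and not code from the cited study.

    import cocoex  # COCO experimentation module (PyPI package: coco-experiment)

    # The classic bbob suite, restricted to the small dimensions quoted above ...
    small = cocoex.Suite("bbob", "", "dimensions: 2,3,5,10")
    # ... and the bbob-largescale suite for the large dimensions.
    large = cocoex.Suite("bbob-largescale", "", "dimensions: 20,40,80,160,320,640")

    for problem in large:
        x0 = problem.initial_solution   # a feasible starting point provided by COCO
        f0 = problem(x0)                # a single function evaluation
        print(problem.id, problem.dimension, f0)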
“…In this context, first, we investigate the computation time of features in the R-package flacco [20], which currently provides 17 feature classes. We use the BBOB function set and its large-scale version [48], which consists of the 24 functions with 𝑛 ∈ {20, 40, 80, 160, 320, 640}. The computation time of ELA features has received little attention in the literature.…”
Section: Ref Year (mentioning)
confidence: 99%
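A rough way to reproduce this kind of timing measurement in Python is sketched below. It assumes the pflacco package with create_initial_sample and calculate_ela_distribution as described in its documentation, plus cocoex for the test function; the chosen feature class, sample, and problem are placeholders rather than the setup of the cited study.

    import time
    import cocoex
    from pflacco.sampling import create_initial_sample
    from pflacco.classical_ela_features import calculate_ela_distribution

    suite = cocoex.Suite("bbob-largescale", "", "dimensions: 20")
    problem = suite.get_problem(0)  # first problem: f1 in dimension 20

    # Sample in the usual [-5, 5]^n BBOB domain and evaluate the function.
    X = create_initial_sample(problem.dimension, lower_bound=-5, upper_bound=5)
    y = X.apply(lambda x: problem(x), axis=1)

    start = time.perf_counter()
    features = calculate_ela_distribution(X, y)  # one ELA feature class
    print(f"n={problem.dimension}: computed in {time.perf_counter() - start:.3f} s")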
“…In evolutionary computation, the arguably best-established benchmarking environment is the already mentioned COCO platform [30]. Originally designed to compare derivative-free optimization algorithms operating on numeric optimization problems [29], the tool has seen several extensions in recent years, e.g., towards multiobjective optimization [58], mixed-integer optimization [57], and large-scale optimization [60]. COCO consists of an experimentation part that produces data files with detailed performance traces, and an automated data analysis part in which a fixed number of standardized analyses are automatically generated.…”
Section: Related Benchmarking Environments (mentioning)
confidence: 99%
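As a concrete illustration of that two-part structure, the sketch below runs a throwaway random-search loop through cocoex with an observer writing the data files and then hands the result folder to cocopp for the standardized post-processing. The suite restriction, budget, result-folder name, and the "solver" itself are placeholders, and it assumes the single-objective bbob observer also logs the large-scale problems.

    import numpy as np
    import cocoex   # experimentation part: suites, problems, observers
    import cocopp   # data analysis part: standardized post-processing

    suite = cocoex.Suite("bbob-largescale", "", "dimensions: 20")
    observer = cocoex.Observer("bbob", "result_folder: random-search-demo")

    for problem in suite:
        problem.observe_with(observer)  # performance traces go to data files
        for _ in range(10 * problem.dimension):  # tiny budget, placeholder solver
            problem(np.random.uniform(-5, 5, problem.dimension))

    cocopp.main(observer.result_folder)  # generates the standardized HTML report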
“…The most important way to measure an algorithm's performance is to compare algorithms on benchmark problems. The studies [28][29][30][31] compared optimization algorithms on various benchmark problems. In many real-world engineering problems, the main source of complexity is the dimensionality of the problem, the presence of constraints, and the interaction of the variables with one another [32][33][34][35][36].…”
Section: Introduction (unclassified)