2021
DOI: 10.1109/access.2021.3058285

On Selection of a Benchmark by Determining the Algorithms’ Qualities

Abstract: The authors were motivated to write this paper by an issue that developers of newly proposed nature-inspired algorithms commonly face today: How can a test benchmark be selected such that it highlights the quality of the developed algorithm most fairly? In line with this, the CEC Competitions on Real-Parameter Single-Objective Optimization benchmarks, issued several times over the last decade, serve as a testbed for evaluating the collection of nature-inspired algorithms sel…

Cited by 13 publications (7 citation statements)
References 28 publications
“…Various kinds of Swarm Intelligence and Evolutionary Algorithms have been compared many times in the literature, both theoretically and empirically, but we are not aware of papers that address the relative improvement of many novel algorithms over the classical ones. Numerous guides have proposed rules that, at least in the opinion of their authors, should be followed when comparing algorithms [39], [40], [41], [42]. In practice, it is often considered that the safest choice is to follow the rules defined for widely accepted benchmark sets, such as the IEEE or BBOB ones [33], [34].…”
Section: Literature Review (mentioning)
confidence: 99%
“…With the increasing number of nature-inspired algorithms, various benchmarking tests have been developed to examine their performance [46]. These include testing the algorithms on different types of functions [47], [48], and checking the number of objective function evaluations they use [49].…”
Section: B. Nature-Inspired Algorithms (mentioning)
confidence: 99%
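
The evaluation-counting criterion mentioned in the statement above is straightforward to make concrete. Below is a minimal sketch, not code from the cited paper: the CountingObjective wrapper, the sphere test function, and the 10,000 × D budget are all illustrative assumptions, chosen only to show how a fixed evaluation budget can be enforced uniformly across compared algorithms.

```python
class CountingObjective:
    """Counts evaluations of a wrapped objective and enforces a fixed budget.

    Hypothetical helper, not taken from the cited paper: CEC-style rules
    typically cap the number of objective function evaluations, so wrapping
    the function keeps the budget identical for every compared algorithm.
    """

    def __init__(self, fn, budget):
        self.fn = fn
        self.budget = budget
        self.evals = 0

    def __call__(self, x):
        if self.evals >= self.budget:
            raise RuntimeError("evaluation budget exhausted")
        self.evals += 1
        return self.fn(x)


def sphere(x):
    # Classic unimodal test function: f(x) = sum of x_i squared.
    return sum(xi * xi for xi in x)


if __name__ == "__main__":
    dim = 10
    # 10_000 * D is a budget often used in CEC competitions (an assumption;
    # check the rules of the specific benchmark edition).
    objective = CountingObjective(sphere, budget=10_000 * dim)
    print(objective([0.5] * dim))  # 2.5
    print(objective.evals)         # 1
```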
“…In this section, IKMA is tested and compared with other algorithms on 23 benchmark functions (Tables 2 and 3) from CEC2013 [34]. The average and standard deviation (STD) of the fitness function values are used as evaluation metrics to compare the algorithms' merits.…”
Section: A. CEC2013 Benchmark Functions (mentioning)
confidence: 99%
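
For context on the protocol quoted above, the following sketch shows how the average and standard deviation of final fitness values are typically computed over independent runs. It is an illustration only: plain random search stands in for IKMA (whose details are not reproduced here), and the Rastrigin function, 30 runs, and 1,000-evaluation budget are assumptions rather than the cited paper's settings.

```python
import math
import random
import statistics


def rastrigin(x):
    # Multimodal benchmark function with global minimum 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)


def random_search(fn, dim, evals, low=-5.12, high=5.12, seed=None):
    # Plain random search used only as a stand-in optimizer for this sketch.
    rng = random.Random(seed)
    best = math.inf
    for _ in range(evals):
        x = [rng.uniform(low, high) for _ in range(dim)]
        best = min(best, fn(x))
    return best


if __name__ == "__main__":
    runs = 30  # independent runs, as is conventional in CEC-style comparisons
    results = [random_search(rastrigin, dim=10, evals=1_000, seed=r) for r in range(runs)]
    print(f"mean = {statistics.mean(results):.4f}")
    print(f"STD  = {statistics.stdev(results):.4f}")
```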