2020
DOI: 10.1016/j.asoc.2020.106139
Scalable and customizable benchmark problems for many-objective optimization

Abstract: Solving many-objective problems (MaOPs) is still a significant challenge in the multi-objective optimization (MOO) field. One way to measure algorithm performance is through the use of benchmark functions (also called test functions or test suites), which are artificial problems with a well-defined mathematical formulation, known solutions and a variety of features and difficulties. In this paper we propose a parameterized generator of scalable and customizable benchmark problems for MaOPs. It is able to gener…

Cited by 27 publications (13 citation statements)
References 48 publications
“…In this paper, it is considered that a typical MCDM approach arranges a finite number of alternatives in a preferred order. These preferences can be represented by a two-dimensional matrix as in (1). A post hoc decision-making problem is also considered.…”
Section: Methods
confidence: 99%
“…The alternatives used in this work come from the GPD [1]. A many-objective problem was formulated to generate a more complex problem.…”
Section: Methods
confidence: 99%