2019
DOI: 10.3390/math7090824

Tuning Multi-Objective Evolutionary Algorithms on Different Sized Problem Sets

Abstract: Multi-Objective Evolutionary Algorithms (MOEAs) have been applied successfully to solving real-world multi-objective problems. Their success can depend heavily on the configuration of their control parameters, and different tuning methods have been proposed to address this problem. Tuning can be performed on a set of problem instances in order to obtain robust control parameters. However, for real-world problems, the set of problem instances at our disposal is usually not very plentiful. This raises the qu…
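As a rough, hypothetical sketch of the idea described in the abstract (not the paper's actual method), the snippet below scores candidate control-parameter configurations by their average performance across a small set of problem instances, so the selected configuration is robust to the instance set rather than specialized to one instance. The `run_moea` function, the candidate configurations, and the toy scoring are placeholders.

```python
# Minimal sketch (assumed, not from the paper): choose robust MOEA control
# parameters by averaging performance over a set of problem instances.
import random
import statistics

def run_moea(pop_size, mutation_rate, instance, seed):
    """Stand-in for one MOEA run; returns a fake quality score (higher is better)."""
    rng = random.Random(seed)
    # Pretend performance depends on how well the configuration suits the instance.
    base = 1.0 - abs(mutation_rate - instance["ideal_mutation"])
    return base + 0.001 * pop_size + rng.gauss(0, 0.05)

# A toy instance set; each instance "prefers" a different mutation rate.
instances = [{"name": f"inst{i}", "ideal_mutation": m}
             for i, m in enumerate([0.05, 0.1, 0.2])]

candidate_configs = [
    {"pop_size": 50, "mutation_rate": 0.05},
    {"pop_size": 100, "mutation_rate": 0.1},
    {"pop_size": 100, "mutation_rate": 0.3},
]

def average_score(cfg):
    # Average over all instances (and a few seeds), so the winning configuration
    # is the most robust one, not the best on any single instance.
    scores = [run_moea(cfg["pop_size"], cfg["mutation_rate"], inst, seed)
              for inst in instances for seed in range(5)]
    return statistics.mean(scores)

best = max(candidate_configs, key=average_score)
print("robust configuration:", best)
```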

Cited by 12 publications (6 citation statements) · References: 29 publications
“…For instance, these algorithms generate some random potential answers to a problem and then try to improve them based on some random operations [46]. Therefore, they must be executed at least 30 times, and their performance should be evaluated based on the produced data [47]. An algorithm that can yield very similar results in disparate runs is more stable than the others, and consequently its generated outcomes should be better than theirs.…”
Section: Results (mentioning, confidence: 99%)
“…Therefore, they must be executed at least 30 times, and their performances should be evaluated based on the produced data [44]. An algorithm that yields very similar results in disparate runs is more stable than the others, and consequently its generated outcomes should be better than theirs.…”
Section: Results (mentioning, confidence: 99%)
“…Therefore, they must be executed at least 30 times, and their performances are evaluated based on the produced data (47). An algorithm whose results in disparate runs are close together is more stable than others.…”
Section: Results (mentioning, confidence: 99%)
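As a rough illustration of the evaluation protocol these citation statements describe, the sketch below runs a stand-in stochastic optimizer 30 independent times and summarizes the produced data with a mean and standard deviation; a smaller spread across runs indicates higher stability. The `stochastic_optimizer` function is a hypothetical placeholder, not any algorithm from the cited works.

```python
# Minimal sketch of the "at least 30 independent runs" evaluation protocol:
# collect one score per run, then compare algorithms on the run statistics.
import random
import statistics

def stochastic_optimizer(seed):
    """Stand-in for one independent run of a stochastic algorithm; returns a score."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(100):               # trivial random search as a placeholder
        best = max(best, rng.random())
    return best

N_RUNS = 30                            # the commonly used minimum number of runs
scores = [stochastic_optimizer(seed) for seed in range(N_RUNS)]

print(f"mean over {N_RUNS} runs: {statistics.mean(scores):.4f}")
print(f"std dev (stability):    {statistics.stdev(scores):.4f}")
# Comparing two algorithms would then use these per-run scores (for example with
# a non-parametric statistical test), rather than the result of a single run.
```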