2011
DOI: 10.1109/tevc.2010.2069567

A Multicriteria Statistical Based Comparison Methodology for Evaluating Evolutionary Algorithms

Cited by 40 publications (28 citation statements). References 24 publications.
“…There can be different ways (e.g., see the recent work of Carrano et al.), and the description of a simple but practical technique, which was used, for example, by Fraser and Arcuri, is provided here.…”
Section: Multiple Tests (mentioning)
confidence: 99%
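The “simple but practical technique” for handling multiple statistical tests is not spelled out in the excerpt. As an illustration only, the sketch below assumes a Bonferroni-style correction applied to pairwise Mann–Whitney U tests over per-run results; the test choice, the helper name pairwise_comparison, and the sample scores are assumptions, not the actual procedure of Fraser and Arcuri or of Carrano et al.

```python
# Sketch: Bonferroni-corrected pairwise comparisons between stochastic algorithms.
# The concrete test and correction are illustrative assumptions.
from itertools import combinations
from scipy.stats import mannwhitneyu

def pairwise_comparison(results, alpha=0.05):
    """results: dict mapping algorithm name -> list of per-run scores."""
    pairs = list(combinations(results, 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni: split the family-wise alpha
    outcome = {}
    for a, b in pairs:
        _, p = mannwhitneyu(results[a], results[b], alternative="two-sided")
        outcome[(a, b)] = (p, p < corrected_alpha)
    return corrected_alpha, outcome

# Usage with made-up per-run scores (illustration only):
scores = {
    "EA-1": [0.91, 0.88, 0.93, 0.90, 0.87],
    "EA-2": [0.80, 0.83, 0.79, 0.82, 0.81],
    "EA-3": [0.89, 0.90, 0.86, 0.91, 0.88],
}
alpha_c, decisions = pairwise_comparison(scores)
for pair, (p, significant) in decisions.items():
    print(pair, f"p={p:.4f}", "significant" if significant else "not significant")
```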
“…Carrano [3] showed that evolutionary algorithms cannot be compared only by means of computational performance. Since they are stochastic search heuristics, each execution may yield a different result.…”
Section: B. Comparative Analysis of DE Algorithms (mentioning)
confidence: 99%
“…To differentiate solutions obtained in an MO analysis, an approach that is quite widespread in the literature is the concept of Pareto Dominance. According to [3], the concept of Pareto Dominance can be used to compare feasible solutions to a problem. Given two solutions, x and y, it is said that x dominates y (denoted x ≼ y) if the following conditions are met:…”
Section: B. Comparative Analysis of DE Algorithms (mentioning)
confidence: 99%
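The dominance conditions themselves are elided in the excerpt. For reference, a standard statement of Pareto dominance over m objectives f_1, …, f_m, assuming all objectives are to be minimized (an assumption, since the quoted paper's convention is not shown), is:

```latex
x \preceq y \;\Longleftrightarrow\;
\begin{cases}
f_i(x) \le f_i(y) & \text{for every } i \in \{1,\dots,m\},\\[2pt]
f_j(x) < f_j(y) & \text{for at least one } j \in \{1,\dots,m\}.
\end{cases}
```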
“…In [12], Carrano et al. proposed a comparison methodology for evaluating the performance of evolutionary algorithms. This comparison methodology is based on a multicriteria analysis in which each criterion is analyzed by constructing a ranking of the algorithms under analysis.…”
Section: Introduction (mentioning)
confidence: 99%
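As a rough illustration of the per-criterion ranking idea described above, the sketch below ranks a set of algorithms on each criterion and then aggregates by mean rank. The aggregation step and all names are illustrative assumptions; they do not reproduce the statistical ranking procedure actually proposed in [12].

```python
# Sketch: rank algorithms separately on each criterion, then aggregate by mean rank.
def rank_per_criterion(scores, minimize):
    """scores: dict criterion -> dict algorithm -> value.
    minimize: dict criterion -> True if a lower value is better for that criterion."""
    ranks = {}
    for crit, per_alg in scores.items():
        # Best algorithm on this criterion gets rank 1.
        ordered = sorted(per_alg, key=per_alg.get, reverse=not minimize[crit])
        ranks[crit] = {alg: i + 1 for i, alg in enumerate(ordered)}
    return ranks

def mean_rank(ranks):
    """Aggregate per-criterion ranks into a single mean rank per algorithm."""
    algs = next(iter(ranks.values())).keys()
    return {a: sum(r[a] for r in ranks.values()) / len(ranks) for a in algs}
```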
“…The criteria used to evaluate their methodology involved a trade-off between the computational cost of the compared algorithms and the quality of the solution achieved by each of them. This algorithm evaluation (and others discussed in [12]) addresses both learning tasks: fitting parameters to some training data and then selecting the best model. In our particular case, though, we pursue a different objective, since we want a method able to evaluate predictive models already deployed in a DSS, one that gives us a measure of how well these classification models are working in the system according to the data entered over time by the users for diagnosis support.…”
Section: Introduction (mentioning)
confidence: 99%