2012
DOI: 10.1007/s10472-012-9303-0
Multi-objective optimization with an adaptive resonance theory-based estimation of distribution algorithm

Abstract: The introduction of learning into the search mechanisms of optimization algorithms has been nominated as one of the viable approaches for dealing with complex optimization problems, in particular multi-objective ones. One form of carrying out this hybridization process is to use multi-objective optimization estimation of distribution algorithms (MOEDAs). However, it has been pointed out that current MOEDAs have an intrinsic shortcoming in their model-building algorithms that hampers their performan…

Cited by 9 publications (4 citation statements)
References 47 publications
“…In most EDAs that do not restrict the dependence relationships between variables, the joint probability distribution is estimated by a Bayesian network (Section II-C) learned from data. EDAs have also been developed with the probability distribution estimated from log-linear probability models [39], probabilistic principal component analysis [40], Kikuchi approximations [41], Markov networks [42], [43], Markov chains [44], copulas and vines [45], a reinforcement learning-based method [46], Gaussian adaptive resonance theory neural networks [47], growing neural gas networks [48], restricted Boltzmann machines [49], [50], [51], and, in the deep learning area, from autoencoders [52], variational autoencoders [53], [54], and generative adversarial networks [55]. Model selection in EDAs is a more complex problem.…”
Section: Initial Population of Candidate Solutions
confidence: 99%
“…Most of these algorithms, EDAs included, simplify the problem by reducing the m-dimensional space to a scalar value with fitness functions such as the convergence indicator, the Pareto-optimal front coverage indicator, the hypervolume indicator, and the unary additive ε-indicator. This is the strategy followed by EDAs based on neural networks [47], [48], [54], [51], on probabilistic models [82], [103], [107], or on a Parzen estimator [108].…”
Section: E. Multiobjective EDAs
confidence: 99%
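The indicator-based scalarization described in the excerpt above can be illustrated with the hypervolume in two objectives. A minimal sketch, assuming minimization; the front and reference point below are invented for illustration and are not taken from the cited papers:

```python
# Sketch of the 2-D hypervolume indicator (minimization assumed): the area
# of objective space dominated by the front and bounded by a reference point.
def hypervolume_2d(front, ref):
    # Sweep points by increasing first objective; each non-dominated point
    # contributes a rectangle from its position up to the reference point.
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:  # skip points dominated by those already processed
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Illustrative three-point front with reference point (4, 4).
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # → 6.0
```

A larger hypervolume means the front covers more of the dominated region, which is what makes it usable as a scalar fitness in the indicator-based EDAs listed above.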
“…The estimation of distribution algorithm (EDA), a combination of statistical learning and evolutionary algorithms, has been claimed as a paradigm shift in the field of evolutionary computation [7]. An EDA estimates the probability distribution of the candidate solutions through a probability model built from a set of superior solutions; new solutions are then generated by sampling the distribution encoded by this model.…”
Section: Introduction
confidence: 99%
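The build-then-sample loop described in this excerpt can be sketched with the simplest EDA variant, a univariate marginal model (UMDA-style) on a binary toy problem. The `onemax` objective and all parameter values are illustrative assumptions, not the algorithm of the paper under review:

```python
import numpy as np

def onemax(x):
    # Toy objective: count of ones; the optimum is the all-ones string.
    return int(x.sum())

def umda(n_bits=20, pop_size=100, n_select=50, n_gens=30, seed=0):
    rng = np.random.default_rng(seed)
    # Probability model: one Bernoulli marginal per bit, initially uniform.
    p = np.full(n_bits, 0.5)
    for _ in range(n_gens):
        # Sample a population from the distribution encoded by the model.
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fitness = np.array([onemax(x) for x in pop])
        # Re-estimate the model from the set of superior solutions.
        elite = pop[np.argsort(fitness)[-n_select:]]
        p = np.clip(elite.mean(axis=0), 0.05, 0.95)  # keep sampling diversity
    return (p > 0.5).astype(int)

best = umda()
print(onemax(best))  # → 20 (the all-ones optimum) under these settings
```

Multivariate EDAs replace the independent marginals here with richer models — Bayesian networks, mixtures, or the neural-network models discussed elsewhere in this report — but the estimate/sample loop is the same.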
“…The model-building growing neural gas (MB-GNG) algorithm [Martí et al., 2011] uses a specific single-layer neural network, called growing neural gas, to determine the locations of the mixture components, which are Gaussian distributions. The approach is further extended in [Martí et al., 2012] by using adaptive resonance theory and employing a hypervolume indicator-based selection method [Bader and Zitzler, 2011].…”
Section: A Survey of Multi-objective EDAs
confidence: 99%