2008
DOI: 10.1016/j.ejor.2006.06.051

Matching inductive search bias and problem structure in continuous Estimation-of-Distribution Algorithms

Abstract: Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, contin…
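The technique under discussion is simple to state: repeatedly select the better solutions, fit a probability density (commonly a maximum-likelihood Gaussian) to them, and sample the next population from the fitted model. The following minimal Python sketch is illustrative, not the paper's code; when started far from the optimum it also exhibits the inefficiency the abstract alludes to:

```python
import numpy as np

def sphere(x):
    """Benchmark objective to minimize; an illustrative choice, not from the paper."""
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
dim, pop, tau = 10, 200, 0.3
mu, cov = np.full(dim, 5.0), np.eye(dim)   # deliberately far-off start

for gen in range(100):
    x = rng.multivariate_normal(mu, cov, size=pop)          # sample the model
    sel = x[np.argsort(sphere(x))[:int(tau * pop)]]         # truncation selection
    mu, cov = sel.mean(axis=0), np.cov(sel, rowvar=False)   # ML re-estimation

# The maximum-likelihood variance contracts faster than the mean can travel,
# so the run typically stalls well short of the optimum at 0.
print(sphere(mu))
```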

Cited by 44 publications (31 citation statements)
References 28 publications
“…[10][11][12][13] The particular EA that we employ is iMAMaLGaM (incremental Multi-objective Adapted Maximum Likelihood Gaussian Model mixture), in which the underlying probabilistic model is a Gaussian mixture distribution [4]. In related work, iMAMaLGaM was shown to have excellent performance, converging to high-quality approximations of the optimal Pareto front on well-known benchmark problems.…”
Section: Multi-objective Optimization Algorithm
Mentioning confidence: 99%
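The excerpt above concerns a multi-objective EDA that approximates a Pareto front. As a minimal sketch of the selection notion underlying such algorithms (not iMAMaLGaM itself; all names below are illustrative), this Python function filters a population down to its non-dominated set:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(objectives):
    """Return indices of the non-dominated (Pareto-optimal) points.

    objectives: (n, m) array, one row of m objective values per solution.
    A multi-objective EDA would fit its probabilistic model to (a subset
    of) these survivors.
    """
    objectives = np.asarray(objectives)
    return [i for i, a in enumerate(objectives)
            if not any(dominates(objectives[j], a)
                       for j in range(len(objectives)) if j != i)]

# Example: two conflicting objectives f1(x) = x^2 and f2(x) = (x - 2)^2;
# exactly the solutions with 0 <= x <= 2 are Pareto-optimal.
x = np.linspace(-1.0, 3.0, 9)
print(non_dominated(np.column_stack([x**2, (x - 2.0)**2])))
```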
“…the Gaussian distribution should fit the local neighborhood much better than in the case of EMNA or CMA-ES; 2. it works with individuals marked only with select/discard labels and does not need the fitness value of each of them (as is the case in [1]); 3. it estimates a ‘reasonable’ Gaussian even from a small number of individuals (far fewer than the D(D + 3)/2 + 1 needed by [1]).…”
Section: Summary and Future Work
Mentioning confidence: 99%
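The D(D + 3)/2 + 1 figure is one more than the number of free parameters of a full D-dimensional Gaussian (D mean entries plus D(D + 1)/2 distinct covariance entries); below that sample size the maximum-likelihood covariance is singular. One common workaround, shown here as an illustrative sketch rather than the cited method, is to shrink the sample covariance toward a scaled-identity target (the fixed shrinkage weight is an assumption, not a tuned value):

```python
import numpy as np

def shrunk_gaussian_fit(X, shrinkage=0.2):
    """Fit a Gaussian to the rows of X with covariance shrinkage.

    Blending the sample covariance with a scaled identity keeps the
    estimate positive definite even for tiny samples, at the cost of
    some bias toward spherical shapes.
    """
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    centered = X - mean
    sample_cov = centered.T @ centered / max(len(X) - 1, 1)
    target = np.eye(X.shape[1]) * np.trace(sample_cov) / X.shape[1]
    return mean, (1.0 - shrinkage) * sample_cov + shrinkage * target

# Only 5 points in D = 5 dimensions: the ML covariance would be singular,
# but the shrunk estimate is still invertible and usable for sampling.
rng = np.random.default_rng(0)
mean, cov = shrunk_gaussian_fit(rng.normal(size=(5, 5)))
samples = rng.multivariate_normal(mean, cov, size=10)
```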
“…1(a) for an example with the Estimation of Multivariate Normal distribution Algorithm (EMNA)). Without imposing limits on the minimal ‘size’ of the Gaussian, the variance of the distribution in the direction of the fitness-function gradient quickly decreases, and the algorithm can thus get stuck even on a slope of the fitness function [3].…”
Section: Introduction
Mentioning confidence: 99%
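This pathology is easy to reproduce. In the 1-D sketch below (an illustration of the effect, not code from any of the cited papers), a maximum-likelihood Gaussian EDA with truncation selection maximizes the linear function f(x) = x, yet stalls on the slope:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0          # initial Gaussian model
pop, tau = 200, 0.5           # population size and truncation fraction

for gen in range(30):
    x = rng.normal(mu, sigma, size=pop)
    # Truncation selection on f(x) = x: keep the best tau * pop samples.
    selected = np.sort(x)[-int(tau * pop):]
    # Maximum-likelihood re-estimation from the selected individuals.
    mu, sigma = selected.mean(), selected.std()

print(mu, sigma)  # sigma has collapsed; mu stalled despite f being unbounded
```

With tau = 0.5 the standard deviation of the selected (upper-half) portion of a Gaussian is about 0.6 times the original, so sigma contracts geometrically and the mean's total displacement is bounded by a geometric series: here mu stalls near 2.0.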
“…In [21] and [22], Adaptive Variance Scaling (AVS) and Correlation-Triggered AVS (CT-AVS) were proposed for use with the normal pdf in IDEA. The essential idea of AVS is to scale the covariance matrix by an adaptive (and positive) coefficient c_AVS so as to enlarge the area of exploration.…”
Section: Negative Variance of EGNA
Mentioning confidence: 99%
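As a sketch of the adaptive variance-scaling idea (the update constants, the lower bound of 1, and the improvement test are illustrative assumptions, not the exact rule from [21], [22]), the sampling covariance is multiplied by c_AVS, which grows while the best fitness keeps improving and decays otherwise:

```python
import numpy as np

def sphere(x):
    """Sphere function (minimization); illustrative benchmark."""
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(2)
dim, pop, tau = 5, 100, 0.3
eta_inc, eta_dec = 1.1, 0.9     # illustrative update factors
mu, cov = np.full(dim, 5.0), np.eye(dim)
c_avs, best = 1.0, np.inf       # adaptive variance-scaling coefficient

for gen in range(100):
    # Sample from the scaled model: the ML covariance times c_avs.
    x = rng.multivariate_normal(mu, c_avs * cov, size=pop)
    fx = sphere(x)
    sel = x[np.argsort(fx)[:int(tau * pop)]]
    mu, cov = sel.mean(axis=0), np.cov(sel, rowvar=False)
    # AVS update: expand the scaling while progress is made, contract otherwise.
    if fx.min() < best:
        best = fx.min()
        c_avs *= eta_inc
    else:
        c_avs *= eta_dec
    c_avs = max(c_avs, 1.0)     # never sample tighter than the ML estimate

print(best)
```

The point of the floor at 1.0 is that scaling is only ever used to counteract the premature variance contraction shown above, never to shrink the model further.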