Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation 2006
DOI: 10.1145/1143997.1144071
The correlation-triggered adaptive variance scaling IDEA

Abstract: It has previously been shown analytically and experimentally that continuous Estimation of Distribution Algorithms (EDAs) based on the normal pdf can easily suffer from premature convergence. This paper takes a principled first step towards solving this problem. First, prerequisites for the successful use of search distributions in EDAs are presented. Then, an adaptive variance scaling scheme is introduced that aims at reducing the risk of premature convergence. Integrating the scheme into the iterated density-…

Cited by 64 publications (57 citation statements) | References 20 publications
“…9 are known and can be computed theoretically (see [13]). The behavior of the simple Gaussian EDA with truncation selection in 1D space can be modelled by using statistics for the truncated normal distribution ([13], [10]). In one iteration of the EDA, the variance of the population changes in the following way:…”
Section: Model of EDA Behaviour in 1D
confidence: 99%
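The equation the quote truncates is the standard truncated-normal variance identity that such 1D models build on. The sketch below is a minimal illustration of that model, not the cited papers' exact equations: when fitness is monotone in x, selecting the best τ-fraction of a normal population truncates one tail, and the population variance shrinks by a fixed factor c(τ) < 1 each generation, which is the multiplicative decay behind premature convergence.

```python
# Hedged sketch: one-generation variance change of a 1D Gaussian EDA
# with truncation selection, modelled via truncated-normal statistics.
from scipy.stats import norm

def variance_decay_factor(tau):
    """Multiplicative change of the population variance after selecting
    the best tau-fraction of a normally distributed population
    (fitness monotone in x, so selection truncates one tail)."""
    a = norm.ppf(1.0 - tau)          # truncation point
    lam = norm.pdf(a) / tau          # ratio phi(a)/tau
    return 1.0 + a * lam - lam**2    # Var[X | X > a] / Var[X], always < 1

for tau in (0.1, 0.3, 0.5):
    print(f"tau={tau}: variance shrinks by factor {variance_decay_factor(tau):.3f}")
```

For τ = 0.5 the factor is 1 − 2/π ≈ 0.363, so without corrective scaling the variance collapses geometrically even on a slope, where it should be maintained.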
“…Adaptive variance scaling (AVS), i.e. enlarging the variance when better solutions were found and shrinking the variance in case of no improvement, was used along with various techniques to trigger the AVS only on the slope of the fitness function in [10] and [11].…”
Section: Introduction
confidence: 99%
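The AVS rule described in this quote is simple enough to state as code. The following is a minimal sketch under stated assumptions: the growth factor eta and the clamping bounds are illustrative choices, not the published parameter settings, and the multiplier is applied to the estimated (co)variance at sampling time.

```python
# Hedged sketch of adaptive variance scaling (AVS): a scalar multiplier
# on the estimated variance grows when the best fitness improves and
# shrinks when it does not. eta, c_min, c_max are illustrative values.
def update_avs(c_avs, improved, eta=1.1, c_min=0.1, c_max=10.0):
    """Return the new variance-scaling coefficient."""
    c_avs = c_avs * eta if improved else c_avs / eta
    return min(max(c_avs, c_min), c_max)

# Usage inside an EDA loop (sampling with covariance c_avs * Sigma):
# c_avs = update_avs(c_avs, best_fitness_new < best_fitness_old)
```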
“…UMDA [14], Compact Genetic Algorithm [10], Population-Based Incremental Learning [1], Relative Entropy [13], CrossEntropy [5] and Estimation of Multivariate Normal Algorithms (EMNA) [11] (our main inspiration), which combine (i) the current distribution (possibly), (ii) statistical properties of selected points, into a new distribution. We show in this paper that forgetting the old estimate and only using the new points is a good idea in the case of λ large; in particular, premature convergence as pointed out in [20,7,12,15] does not occur if λ >> 1 points are distributed on the search space with non-degenerated variance, and troubles around variance estimates for small sample size as in [6] are by definition not relevant for us. Its advantages are as follows for λ large: (i) it's very simple and parameter free; the reduced number of parameters is an advantage of mutative self adaptation in front of cumulative step-size adaptation, but we show here that yet fewer parameters (0!)…”
Section: Statistical
confidence: 90%
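The "forget the old estimate" strategy this quote advocates for large λ amounts to fitting the next Gaussian purely to the selected points, with no blending against the previous distribution. A minimal sketch follows; the names lam and mu follow common (μ, λ)-ES usage and are assumptions, not the cited papers' notation.

```python
# Hedged sketch of EMNA-style re-estimation from selected points only:
# sample lambda points, keep the best mu, refit mean and covariance
# from the survivors alone (no mixing with the old distribution).
import numpy as np

def emna_step(mean, cov, fitness, lam=200, mu=50, rng=None):
    """One generation of a forget-the-past Gaussian EDA (minimisation)."""
    rng = rng or np.random.default_rng()
    pop = rng.multivariate_normal(mean, cov, size=lam)
    best = pop[np.argsort([fitness(x) for x in pop])[:mu]]
    return best.mean(axis=0), np.cov(best, rowvar=False)
```

With λ much larger than the dimension, the sample covariance of the μ survivors is well-conditioned, which is why the small-sample variance issues the quote mentions do not arise in that regime.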
“…In [21] and [22], Adaptive Variance Scaling (AVS) and Correlation Triggered AVS (CT-AVS) were proposed to use along with normal pdf in IDEA. The essential idea of AVS is to scale covariance matrix with an adaptive (also positive) coefficient c AVS to help increase the area of exploration.…”
Section: Negative Variance of EGNA
confidence: 99%
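The correlation trigger that distinguishes CT-AVS from plain AVS can be sketched as a test of whether the search is still on a slope: if the fitness of the selected solutions correlates strongly with their density under the current normal model, the optimum is not yet inside the sampled region and scaling should stay active. The threshold value and the use of Pearson correlation below are illustrative assumptions, not the published trigger definition.

```python
# Hedged sketch of a correlation trigger for CT-AVS: apply the c_AVS
# covariance scaling only while selected-solution fitness correlates
# with model density (i.e., the search is traversing a slope).
import numpy as np
from scipy.stats import multivariate_normal, pearsonr

def on_slope(selected, fitness_values, mean, cov, threshold=0.55):
    """Trigger test: True -> apply c_AVS scaling, False -> plain MLE."""
    dens = multivariate_normal(mean, cov).pdf(selected)
    corr, _ = pearsonr(dens, fitness_values)
    return abs(corr) > threshold
```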