2013
DOI: 10.1016/j.asoc.2012.11.049

Regularized continuous estimation of distribution algorithms

Cited by 29 publications
(16 citation statements)
references
References 41 publications
“…For an easy problem, a small value of NP is sufficient, but for difficult problems a large value of NP is recommended in order to avoid being trapped in a local optimum. A large NP may provide better optimization at the cost of more computation [16]. However, the appropriate value may vary from problem to problem.…”
Section: The Selection of NP and NB (mentioning, confidence: 99%)
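
To make the heuristic in the excerpt above concrete, here is a minimal sketch of a dimension-scaled population-size rule. The 5·D to 10·D range is a rule of thumb commonly quoted in the differential-evolution literature; the exact multipliers and the `suggest_np` helper are illustrative assumptions, not the method of the cited paper.

```python
# Illustrative population-size heuristic: scale NP with problem
# dimension, using a larger multiplier for difficult (multimodal)
# problems to reduce the risk of premature convergence.

def suggest_np(dimension: int, difficult: bool = False) -> int:
    """Return a population size NP proportional to the problem dimension."""
    multiplier = 10 if difficult else 5  # assumed rule-of-thumb multipliers
    return multiplier * dimension

print(suggest_np(30))                  # 150 for an easy 30-D problem
print(suggest_np(30, difficult=True))  # 300 for a hard 30-D problem
```
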
“…There has been much research on EDAs. Karshenas and Santana [16] adopt regularization methods to improve the performance of conventional EDAs, testing on benchmarks with 100 dimensions. Valdez and Hernández [17] adopt a Gaussian model to approximate the Boltzmann distribution, obtaining the model by minimizing the Kullback-Leibler divergence rather than directly computing the mean and variance of the Gaussian; their test suite goes up to 50 dimensions.…”
Section: Introduction (mentioning, confidence: 99%)
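
For context, the sketch below shows the basic continuous Gaussian EDA loop that excerpts like the one above assume: sample a population from a multivariate Gaussian, keep the best fraction, and refit the mean and covariance from the selected solutions. This is a generic textbook formulation, not the specific algorithm of [16] or [17]; the objective, population size, and elite fraction are assumptions for illustration.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)  # toy objective: minimize ||x||^2

rng = np.random.default_rng(0)
dim, pop_size, elite_frac, iters = 10, 100, 0.3, 50

mean = rng.uniform(-5.0, 5.0, dim)
cov = np.eye(dim) * 4.0

for _ in range(iters):
    pop = rng.multivariate_normal(mean, cov, size=pop_size)
    fitness = sphere(pop)
    elite = pop[np.argsort(fitness)[: int(elite_frac * pop_size)]]
    mean = elite.mean(axis=0)
    # Maximum-likelihood covariance of the selected solutions; in high
    # dimensions this estimate is where regularization (as in [16])
    # would enter. The small jitter keeps the matrix positive definite.
    cov = np.cov(elite, rowvar=False) + 1e-8 * np.eye(dim)

print("final mean (should approach the origin):", np.round(mean, 3))
```
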
“…Yet, due to the high dimensionality of the covariance matrix and the limited number of samples, the weighted maximum-likelihood (WMLE) estimate of the covariance matrix is unreliable, with high variance. This over-fitted estimate of the covariance biases the upper-level policy strongly toward a specific region of the parameter space, which often causes premature convergence [7]. Alternatively, we can estimate only a diagonal covariance matrix, which has fewer parameters [8]; yet such a solution has high bias and may lead to slow learning, as it neglects the correlations between the parameters.…”
Section: Introduction (mentioning, confidence: 99%)
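
The following sketch illustrates the variance/bias trade-off described in this excerpt (it is illustrative, not the cited authors' code): with fewer samples than dimensions, the full sample covariance is noisy and rank-deficient, while a diagonal estimate is stable but discards all correlations. The true covariance used here is an arbitrary correlated test matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_samples = 50, 20  # more dimensions than samples

# Assumed ground truth: unit variances with pairwise correlation 0.5.
true_cov = np.full((dim, dim), 0.5) + 0.5 * np.eye(dim)
samples = rng.multivariate_normal(np.zeros(dim), true_cov, size=n_samples)

# High-variance estimator: full sample covariance (rank <= n_samples - 1).
full_cov = np.cov(samples, rowvar=False)
# High-bias estimator: diagonal only, ignoring all correlations.
diag_cov = np.diag(np.var(samples, axis=0, ddof=1))

print("full-cov Frobenius error:", np.linalg.norm(full_cov - true_cov))
print("diag-cov Frobenius error:", np.linalg.norm(diag_cov - true_cov))
print("rank of full estimate:", np.linalg.matrix_rank(full_cov))
```
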
“…Another solution is to use regularization techniques for estimating the covariance matrix. Standard regularization techniques such as covariance shrinkage [9], [7] are based on a convex combination of different estimators, e.g., the high-variance estimator given by the sample covariance matrix and the high-bias estimator given by the diagonal covariance matrix. Yet, policy search algorithms have a big advantage when estimating the covariance matrix.…”
Section: Introduction (mentioning, confidence: 99%)
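
A minimal sketch of the shrinkage estimator this excerpt describes: a convex combination of the sample covariance and its diagonal. The fixed weight `lam` is an assumption for illustration; practical methods such as Ledoit-Wolf shrinkage choose it analytically from the data.

```python
import numpy as np

def shrink_covariance(samples: np.ndarray, lam: float = 0.3) -> np.ndarray:
    """Return (1 - lam) * S + lam * diag(S), where S is the sample covariance."""
    s = np.cov(samples, rowvar=False)
    return (1.0 - lam) * s + lam * np.diag(np.diag(s))

rng = np.random.default_rng(2)
x = rng.normal(size=(20, 50))  # 20 samples in 50 dimensions

sigma = shrink_covariance(x)
# Shrinking toward the diagonal keeps the estimate positive definite
# and well-conditioned even though the sample covariance is singular:
print("min eigenvalue:", np.linalg.eigvalsh(sigma).min())
```
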
“…Other existing statistical methods have also been applied to control the amount of covariance/dependency modeling in EDAs. In [93], regularization techniques were adopted in EDAs. The resulting algorithm is able to solve high-dimensional problems with comparable solution quality using much smaller populations.…”
Section: Issues Related to EDAs and Their Remedies (mentioning, confidence: 99%)