It has previously been shown analytically and experimentally that continuous Estimation of Distribution Algorithms (EDAs) based on the normal pdf can easily suffer from premature convergence. This paper takes a principled first step towards solving this problem. First, prerequisites for the successful use of search distributions in EDAs are presented. Then, an adaptive variance scaling scheme is introduced that aims at reducing the risk of premature convergence. Integrating the scheme into the iterated density-estimation evolutionary algorithm (IDEA) yields the correlation-triggered adaptive variance scaling IDEA (CT-AVS-IDEA). The CT-AVS-IDEA is compared to the original IDEA and the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES) on a wide range of unimodal test problems by means of a scalability analysis. It is found that the average number of fitness evaluations grows subquadratically with the dimensionality, competitively with the CMA-ES. In addition, CT-AVS-IDEA is indeed found to enlarge the class of problems that continuous EDAs can solve reliably.
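As a rough illustration of the variance-scaling idea, the sketch below runs a maximum-likelihood Gaussian EDA whose sampling covariance is multiplied by an adaptive factor that grows after an improvement of the best fitness and shrinks otherwise. It is not the paper's CT-AVS-IDEA: the trigger (plain fitness improvement instead of the correlation test), the update constants, and all function names are assumptions made for brevity.

```python
import numpy as np

def gaussian_eda_avs(f, dim, pop_size=100, tau=0.35, generations=200, seed=0):
    """Minimal Gaussian EDA with a simple adaptive variance-scaling multiplier.

    Illustrative sketch only; the trigger and update rules are simplified
    stand-ins for the correlation-triggered scheme described in the paper.
    """
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5.0, 5.0, dim)
    cov = np.eye(dim)
    c_avs = 1.0                                   # variance-scaling multiplier
    best = np.inf
    for _ in range(generations):
        pop = rng.multivariate_normal(mean, c_avs * cov, pop_size)
        fit = np.array([f(x) for x in pop])
        order = np.argsort(fit)
        sel = pop[order[: int(tau * pop_size)]]   # truncation selection
        # Maximum-likelihood re-estimation of the normal search distribution.
        mean = sel.mean(axis=0)
        cov = np.cov(sel, rowvar=False) + 1e-12 * np.eye(dim)
        # Adaptive variance scaling: enlarge the sampling variance after an
        # improvement, otherwise shrink it back towards the ML estimate.
        if fit[order[0]] < best:
            best = fit[order[0]]
            c_avs *= 1.1
        else:
            c_avs = max(1.0, c_avs / 1.1)
    return mean, best

# Example: minimize the 10-dimensional sphere function.
print(gaussian_eda_avs(lambda x: float(np.sum(x * x)), dim=10)[1])
```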
We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black-box optimization benchmarking (BBOB) framework and compared to a variant with incremental model building (iAMaLGaM). We study the implications of factorizing the covariance matrix in the Gaussian distribution so that only a few or no covariances are used. Further, AMaLGaM and iAMaLGaM are evaluated on the noisy BBOB problems, and we assess how well multiple evaluations per solution can average out noise. Experimental evidence suggests that parameter-free AMaLGaM can solve a wide range of problems efficiently with perceived polynomial scalability, including multimodal problems, obtaining the best or near-best results among all algorithms tested in 2009 on functions such as the step-ellipsoid and Katsuuras, but failing to locate the optimum within the time limit on the skew Rastrigin-Bueche separable and Lunacek bi-Rastrigin functions in higher dimensions. AMaLGaM is found to be more robust to noise than iAMaLGaM due to its larger required population size. Using few or no covariances hinders the EDA in dealing with rotations of the search space. Finally, noise averaging is found to be less efficient than the direct application of the EDA unless the noise is uniformly distributed. AMaLGaM was among the best-performing algorithms submitted to the BBOB workshop in 2009.
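The noise-averaging strategy evaluated above (multiple evaluations per solution) amounts to wrapping a noisy objective so that every call returns the mean of k independent evaluations; the helper below is a hypothetical illustration of that wrapper, not code from AMaLGaM.

```python
import numpy as np

def averaged(f, k):
    """Return a wrapper of the noisy objective f that averages k evaluations.

    Hypothetical helper illustrating the noise-averaging strategy; f is
    assumed to return an independently noisy value on every call.
    """
    return lambda x: float(np.mean([f(x) for _ in range(k)]))

# Example: a sphere function with additive Gaussian noise, averaged over 5 calls.
rng = np.random.default_rng(0)
noisy_sphere = lambda x: float(np.sum(np.asarray(x) ** 2) + rng.normal(0.0, 0.1))
print(averaged(noisy_sphere, k=5)([1.0, 2.0]))
```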
A consensus is beginning to emerge that the next phase of artificial intelligence (AI) induction in business organizations will require humans to work with AI in a variety of work arrangements. This article explores the issues related to human capabilities to work with AI. A key to working in many such arrangements is the ability to delegate work to the entity that can do it most efficiently. Modern AI can do a remarkable job of efficient delegation to humans because it knows what it knows well and what it does not. Humans, on the other hand, are poor judges of their metaknowledge and are not good at delegating knowledge work to AI; this might prove to be a major stumbling block to creating work environments where humans and AI work together. Humans have often created machines to serve them. The sentiment is perhaps exemplified by Oscar Wilde’s statement that “civilization requires slaves…. Human slavery is wrong, insecure and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends.” However, the time has come when humans might switch roles with machines. Our study highlights the capabilities that humans need in order to work effectively with AI and still remain in control rather than merely being directed.
Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, continuous EDAs will fail to perform efficient optimization on even some of the simplest problems. We reconsider the most important lessons to be learned in the design of EDAs and subsequently show how we can use this knowledge to extend continuous EDAs that were obtained by straightforward adaptation from the discrete domain so as to obtain an improvement in performance. Experimental results are presented to illustrate this improvement and to additionally confirm experimentally that a proper adaptation of discrete EDAs to the continuous case indeed requires careful consideration.
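The failure mode alluded to above can be made concrete with a toy experiment: on a simple one-dimensional slope (an assumed setup, maximized with 30% truncation selection), plain maximum-likelihood re-estimation of a normal distribution shrinks the variance by a roughly constant factor per generation, so the mean can only travel a bounded total distance and the search stalls far from any optimum.

```python
import numpy as np

# Toy illustration of premature convergence on the slope f(x) = x (maximized):
# with truncation selection and ML re-estimation, the variance contracts
# geometrically and the mean stalls after a bounded amount of progress.
rng = np.random.default_rng(1)
mean, var = 0.0, 1.0
for generation in range(30):
    pop = rng.normal(mean, np.sqrt(var), 200)
    sel = np.sort(pop)[-60:]                 # keep the best 30% (largest x)
    mean, var = sel.mean(), sel.var()        # maximum-likelihood re-estimation
print(f"mean after 30 generations: {mean:.3f}, variance: {var:.2e}")
```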