The increasing complexity of real-world optimization problems poses new challenges to evolutionary computation. In response to these challenges, distributed evolutionary computation has received considerable attention over the past decade. This article provides a comprehensive survey of state-of-the-art distributed evolutionary algorithms and models, which are classified into two groups according to their task-division mechanism. Population-distributed models are presented with master-slave, island, cellular, hierarchical, and pool architectures, which parallelize an evolution task at the population, individual, or operation level. Dimension-distributed models include coevolution and multi-agent models, which focus on dimension reduction. Insights into the models, such as synchronization, homogeneity, communication, topology, speedup, and advantages and disadvantages, are also presented and discussed. The study of these models helps guide the future development of new and/or improved algorithms. Recent hotspots in this area are also highlighted, including cloud and MapReduce-based implementations, GPU and CUDA-based implementations, distributed evolutionary multiobjective optimization, and real-world applications. Finally, a number of future research directions are discussed, leading to the conclusion that the development of distributed evolutionary computation will continue to flourish.
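Of the population-distributed architectures named above, the island model is among the most widely implemented. Below is a minimal, illustrative sketch of an island-model evolutionary algorithm with ring migration; the function names, parameter values, and migration policy are assumptions made for illustration and are not taken from any specific surveyed algorithm.

```python
# Minimal sketch of an island (population-distributed) model with ring migration.
# All names and parameters are illustrative assumptions.
import random

def evolve_step(pop, fitness, mutation_scale=0.1):
    """One generation on a single island: truncation selection + Gaussian mutation."""
    ranked = sorted(pop, key=fitness)
    parents = ranked[:max(1, len(pop) // 2)]
    children = [list(ranked[0])]                       # keep the current best (elitism)
    while len(children) < len(pop):
        parent = random.choice(parents)
        children.append([x + random.gauss(0.0, mutation_scale) for x in parent])
    return children

def island_model(fitness, dim=10, islands=4, pop_size=20,
                 generations=100, migration_interval=10):
    # Initialize independent subpopulations (islands).
    pops = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(islands)]
    for g in range(generations):
        pops = [evolve_step(p, fitness) for p in pops]
        if (g + 1) % migration_interval == 0:
            # Ring migration: each island sends its best individual to the next island,
            # where it replaces that island's worst individual.
            best = [min(p, key=fitness) for p in pops]
            for i in range(islands):
                dest = pops[(i + 1) % islands]
                worst = max(range(len(dest)), key=lambda j: fitness(dest[j]))
                dest[worst] = list(best[i])
    return min((ind for p in pops for ind in p), key=fitness)

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)
    print(sphere(island_model(sphere)))
```

In an actual distributed deployment, each island would typically run as a separate process or node and migration would go over a communication layer (e.g. message passing); the synchronous loop above only illustrates how the task is divided at the population level.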
In pedagogy, teachers usually separate mixed-level students into different levels, treat them differently, and teach them in accordance with their cognitive and learning abilities. Inspired by this idea, we regard the particles in a swarm as mixed-level students and propose a level-based learning swarm optimizer (LLSO) for large-scale optimization, which remains considerably challenging in evolutionary computation. First, a level-based learning strategy is introduced, which separates particles into a number of levels according to their fitness values and treats particles in different levels differently. Then, a new exemplar selection strategy is designed that randomly selects two predominant particles from two different higher levels in the current swarm to guide the learning of each particle. The cooperation between these two strategies provides a substantial diversity enhancement for the optimizer. Further, the exploration and exploitation abilities of the optimizer are analyzed both theoretically and empirically in comparison with two popular particle swarm optimizers. Extensive comparisons with several state-of-the-art algorithms on two widely used sets of large-scale benchmark functions confirm the competitive performance of the proposed optimizer in both solution quality and computational efficiency. Finally, comparison experiments on problems with dimensionality increasing from 200 to 2000 further substantiate the good scalability of the developed optimizer.
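To make the level-based learning strategy concrete, the following is a minimal sketch of one swarm update step under the scheme described above: particles are sorted by fitness into levels, the best level is left unchanged, and each remaining particle learns from two exemplars drawn at random from two different better levels. The specific velocity-update form, the parameter phi, and all function names are assumptions made for illustration, not the paper's exact formulation.

```python
# Sketch of a level-based learning update step. The update rule below is an
# assumed, simplified form; only the level partitioning and the selection of
# two exemplars from two better levels follow the description above.
import random

def llso_step(swarm, velocities, fitness, num_levels=4, phi=0.4):
    order = sorted(range(len(swarm)), key=lambda i: fitness(swarm[i]))   # best first
    level_size = len(swarm) // num_levels
    levels = [order[k * level_size:(k + 1) * level_size] for k in range(num_levels)]
    levels[-1].extend(order[num_levels * level_size:])    # remainder goes to the last level

    for lvl in range(1, num_levels):                      # the best level is not updated
        for i in levels[lvl]:
            if lvl >= 2:
                l1, l2 = sorted(random.sample(range(lvl), 2))   # two distinct better levels
            else:
                l1 = l2 = 0                                     # only one better level exists
            e1 = swarm[random.choice(levels[l1])]               # exemplar from the better level
            e2 = swarm[random.choice(levels[l2])]
            for d in range(len(swarm[i])):
                r1, r2, r3 = random.random(), random.random(), random.random()
                velocities[i][d] = (r1 * velocities[i][d]
                                    + r2 * (e1[d] - swarm[i][d])
                                    + phi * r3 * (e2[d] - swarm[i][d]))
                swarm[i][d] += velocities[i][d]
    return swarm, velocities

if __name__ == "__main__":
    dim, n = 30, 40
    sphere = lambda x: sum(v * v for v in x)
    swarm = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    for _ in range(200):
        swarm, vel = llso_step(swarm, vel, sphere)
    print(min(sphere(p) for p in swarm))
```

Because exemplars always come from better levels and are re-sampled for every particle, the sketch preserves the diversity-enhancing effect described in the abstract: particles in lower levels are pulled toward different predominant particles rather than a single global best.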
Seeking multiple optima simultaneously, which is the goal of multimodal optimization, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization (ACO) algorithms to preserve high diversity, this paper extends ACO algorithms to multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ACO algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed that takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for the ants to construct new solutions. Then, to enhance exploitation, a local search scheme based on a Gaussian distribution is self-adaptively performed around the seeds of the niches. Together, these strategies afford the proposed algorithm a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with those of several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with large numbers of local optima.
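As one illustration of the components listed above, the sketch below shows a simple Gaussian local search applied around the seed (best member) of each niche. The fixed step size sigma, the trial budget, and the function names are assumptions for illustration only; the paper's self-adaptive step-size control and its niching details are not reproduced here.

```python
# Illustrative sketch of a Gaussian local search around niche seeds (minimization).
# sigma, trials, and bounds are assumed values, not the paper's settings.
import random

def gaussian_local_search(seed, fitness, sigma=0.05, trials=20, bounds=(-5, 5)):
    """Sample candidates from a Gaussian centred on a niche seed and keep the best."""
    best, best_f = list(seed), fitness(seed)
    for _ in range(trials):
        cand = [min(max(x + random.gauss(0.0, sigma), bounds[0]), bounds[1])
                for x in seed]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

def refine_niches(niches, fitness):
    """Apply the local search to the seed (best member) of every niche."""
    return [gaussian_local_search(min(n, key=fitness), fitness) for n in niches]
```

In the full algorithm, the standard deviation and the search budget would be adapted per niche rather than fixed as above, so that well-converged niches are exploited more finely than newly formed ones.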