Cloud workflow scheduling is a significant topic in both commercial and industrial applications. However, the growing scale of workflows has made such scheduling problems increasingly challenging. Many current algorithms deal with small- or medium-scale problems (e.g., fewer than 1000 tasks) and, due to the curse of dimensionality, have difficulty providing satisfactory solutions for large-scale problems. To this end, this article proposes a dynamic group learning distributed particle swarm optimization (DGLDPSO) for large-scale optimization and extends it to large-scale cloud workflow scheduling. DGLDPSO is efficient for large-scale optimization owing to the following two advantages. First, the entire population is divided into many groups, and these groups are coevolved using the master-slave multigroup distributed model, forming a distributed PSO (DPSO) that enhances algorithm diversity. Second, a dynamic group learning (DGL) strategy is adopted in DPSO to balance diversity and convergence. When applying DGLDPSO to large-scale cloud workflow scheduling, an adaptive renumber strategy (ARS) is further developed to relate solutions to the resource characteristics.
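As a rough illustration of the master-slave multigroup idea, the minimal Python sketch below evolves a swarm that is randomly regrouped every iteration, with each group tracking its own group best. The sphere function stands in for a workflow cost model, and the group count and PSO coefficients are illustrative assumptions; the paper's DGL strategy and ARS are not reproduced here.

```python
import numpy as np

def sphere(x):
    # Toy objective standing in for a workflow scheduling cost model.
    return float(np.sum(x ** 2))

def grouped_pso(f, dim=50, pop=60, groups=6, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (pop, dim))
    v = np.zeros((pop, dim))
    pbest = x.copy()
    pfit = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        order = rng.permutation(pop)             # regroup each iteration
        for g in np.array_split(order, groups):  # each group acts as a "slave"
            gbest = pbest[g[np.argmin(pfit[g])]] # group-local best solution
            r1, r2 = rng.random((2, len(g), dim))
            v[g] = 0.7 * v[g] + 1.5 * r1 * (pbest[g] - x[g]) + 1.5 * r2 * (gbest - x[g])
            x[g] += v[g]
            fit = np.apply_along_axis(f, 1, x[g])
            improved = fit < pfit[g]
            pbest[g[improved]] = x[g][improved]
            pfit[g[improved]] = fit[improved]
    return pbest[np.argmin(pfit)], float(pfit.min())

best, value = grouped_pso(sphere)
print(f"best fitness: {value:.4e}")
```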
The multimodal optimization problem (MMOP), which aims to locate multiple optimal solutions simultaneously, is one of the most challenging problems in optimization. There are two general goals in solving MMOPs: one is to maintain population diversity so as to locate as many global optima as possible, while the other is to increase the accuracy of the solutions found. To achieve these two goals, a novel dual-strategy differential evolution (DSDE) with affinity propagation clustering (APC) is proposed in this paper. The novelties and advantages of DSDE include the following three aspects. First, a dual-strategy mutation scheme is designed to balance exploration and exploitation when generating offspring. Second, an adaptive selection mechanism based on APC is proposed to choose diverse individuals from different optimal regions so as to locate as many peaks as possible. Third, an archive technique is applied to detect and protect stagnated and converged individuals; these individuals are stored in the archive to preserve the promising solutions found and are reinitialized to explore new areas. The experimental results show that the proposed DSDE algorithm is better than, or at least comparable to, state-of-the-art multimodal algorithms when evaluated on the benchmark problems from CEC2013, in terms of locating more global optima, obtaining higher-accuracy solutions, and converging faster.
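A minimal sketch of one way a dual-strategy mutation can balance exploration and exploitation is given below. The assignment rule (the worse half of the population uses the explorative DE/rand/1, the better half the exploitative DE/best/1) is an assumption for illustration; the paper's APC-based selection and archive technique are not reproduced.

```python
import numpy as np

def dual_strategy_mutation(pop, fitness, F=0.5, rng=None):
    # Worse-than-median individuals explore (DE/rand/1);
    # better ones exploit around the population best (DE/best/1).
    if rng is None:
        rng = np.random.default_rng()
    n, _ = pop.shape
    best = pop[np.argmin(fitness)]
    median_fit = np.median(fitness)
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        if fitness[i] > median_fit:
            mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])  # DE/rand/1
        else:
            mutants[i] = best + F * (pop[r1] - pop[r2])     # DE/best/1
    return mutants

rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, (20, 10))
fitness = np.sum(pop ** 2, axis=1)
print(dual_strategy_mutation(pop, fitness, rng=rng).shape)  # (20, 10)
```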
Large-scale optimization has become a significant and challenging research topic in the evolutionary computation (EC) community. Although many improved EC algorithms have been proposed for large-scale optimization, slow convergence in the huge search space and entrapment in local optima among massive suboptima remain key challenges. Targeting these two issues, this article proposes an adaptive granularity learning distributed particle swarm optimization (AGLDPSO) built on machine-learning techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR). In AGLDPSO, a master-slave multisubpopulation distributed model is adopted, where the entire population is divided into multiple subpopulations that are co-evolved. Compared with other large-scale optimization algorithms with single-population evolution or a centralized mechanism, the multisubpopulation distributed co-evolution mechanism fully exchanges evolutionary information among different subpopulations to further enhance population diversity. Furthermore, we propose an adaptive granularity learning strategy (AGLS) based on LSH and LR. The AGLS helps determine an appropriate subpopulation size to control the learning granularity of the distributed subpopulations in different evolutionary states, balancing the exploration ability needed to escape massive suboptima against the exploitation ability needed to converge in the huge search space.
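The bucketing idea behind LSH-based clustering can be sketched briefly: particles whose positions fall on the same side of a set of random hyperplanes share a hash key and land in the same bucket. The plane count below is an illustrative assumption, and AGLDPSO's actual clustering analysis and LR-based granularity controller are more involved.

```python
import numpy as np

def lsh_buckets(positions, n_planes=4, seed=0):
    # Sign pattern against random hyperplanes -> integer bucket key.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, positions.shape[1]))
    signs = positions @ planes.T > 0            # (pop, n_planes) booleans
    keys = signs @ (1 << np.arange(n_planes))   # pack sign bits into an int
    buckets = {}
    for idx, key in enumerate(keys):
        buckets.setdefault(int(key), []).append(idx)
    return buckets

swarm = np.random.default_rng(2).uniform(-1.0, 1.0, (30, 100))
for key, members in sorted(lsh_buckets(swarm).items()):
    print(f"bucket {key:2d}: {len(members)} particles")
```

Nearby particles tend to share a key, so bucket membership gives a cheap clustering signal without pairwise distance computations.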
Due to the increasing complexity of optimization problems, distributed differential evolution (DDE) has become a promising approach for global optimization. However, like centralized algorithms, DDE faces the difficulty of strategy selection and parameter setting. To deal with such problems effectively, this article proposes an adaptive DDE (ADDE) to relieve the sensitivity to strategies and parameters. In ADDE, three populations, called the exploration population, the exploitation population, and the balance population, are co-evolved concurrently using the master-slave multipopulation distributed framework. Each population adaptively chooses a suitable mutation strategy based on evolutionary state estimation, making full use of the feedback information from both individuals and the corresponding population as a whole. Besides, historical successful experience and best-solution improvement are collected and used to adaptively update the individual parameters (amplification factor F and crossover rate CR) and the population parameter (population size N), respectively. The performance of ADDE is evaluated on all 30 widely used benchmark functions from the CEC 2014 test suite and all 22 widely used real-world application problems from the CEC 2011 test suite. The experimental results show that ADDE is significantly superior to other state-of-the-art DDE and adaptive differential evolution variants.
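The flavor of updating F and CR from historical successful experience can be sketched as below, in the spirit of JADE/SHADE-style success-history adaptation: parameter values that produced improving offspring recenter the sampling distributions for the next generation (Lehmer mean for F, arithmetic mean for CR). ADDE's exact update rules and its population-size adaptation are not reproduced, and the learning rate c is an illustrative assumption.

```python
import numpy as np

def adapt_parameters(success_F, success_CR, mu_F, mu_CR, c=0.1):
    # Shift the location parameters toward values that recently succeeded.
    if success_F:
        sf = np.asarray(success_F, dtype=float)
        mu_F = (1 - c) * mu_F + c * float((sf ** 2).sum() / sf.sum())  # Lehmer mean
    if success_CR:
        mu_CR = (1 - c) * mu_CR + c * float(np.mean(success_CR))
    return mu_F, mu_CR

def sample_parameters(mu_F, mu_CR, rng):
    # Heavy-tailed sampling for F, Gaussian sampling for CR, both clipped.
    F = float(np.clip(rng.standard_cauchy() * 0.1 + mu_F, 0.05, 1.0))
    CR = float(np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0))
    return F, CR

rng = np.random.default_rng(3)
mu_F, mu_CR = adapt_parameters([0.7, 0.9], [0.4, 0.6], 0.5, 0.5)
print(sample_parameters(mu_F, mu_CR, rng))
```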
Targeting dynamic optimization problems (DOPs), this paper develops a novel general distributed multiple populations (DMP) framework for evolutionary algorithms (EAs). DMP employs six strategies designed at three levels (i.e., population level, subpopulation level, and individual level) to deal with different kinds of DOPs. First, the population-level subpopulation division estimation strategy in the initialization phase rationally divides the whole population into several subpopulations to sufficiently explore distinct subareas of the search space. Then, during the steady evolutionary process, diversity preservation at the individual and population levels accelerates the responsiveness of the whole population to a new landscape, while subpopulation-level self-learning of elitist individuals promotes the exploitation of promising areas. Moreover, at the subpopulation level, an archive quality assurance technique avoids repeatedly exploring the same peaks by storing the locations of different peaks with low redundancy. When a landscape variation occurs, at the population level, historical information containing excellent evolutionary patterns is recorded to better guide population evolution in the new environment. The DMP framework is easy to implement in various EAs owing to its good generality and its independence from the operators and parameters of the embedded algorithm. Four DMP-EAs are implemented in this paper, whose base algorithms are particle swarm optimization (PSO) and differential evolution (DE) with different settings. The performance of the four proposed DMP-EAs is evaluated on all the widely used complex DOP benchmarks from CEC 2009. The testing results indicate that the DMP-EAs generally significantly outperform many state-of-the-art dynamic EAs (DEAs) on most DOP benchmarks.

Index Terms: Dynamic optimization problem (DOP), distributed multiple populations (DMP) framework, multi-level diversity preservation, adaptive historical information utilization, dynamic evolutionary algorithm (DEA)
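One common ingredient of such frameworks, detecting a landscape change and responding while preserving historical information, can be sketched as follows. The sentinel-based detector, the keep-one-elite response, and all constants are illustrative assumptions; none of DMP's six level-specific strategies are reproduced here.

```python
import numpy as np

def detect_change(f, sentinel, last_value, tol=1e-12):
    # Re-evaluate a fixed sentinel solution; a shifted value signals a change.
    return abs(f(sentinel) - last_value) > tol

def respond_to_change(subpops, f, bounds, rng):
    lo, hi = bounds
    for pop in subpops:                          # each subpopulation reacts locally
        fit = np.apply_along_axis(f, 1, pop)
        elite = pop[np.argmin(fit)].copy()       # preserve historical information
        pop[:] = rng.uniform(lo, hi, pop.shape)  # re-randomize for diversity
        pop[0] = elite                           # reinsert the elite
    return subpops

rng = np.random.default_rng(4)
subpops = [rng.uniform(-5.0, 5.0, (10, 8)) for _ in range(3)]
sentinel = np.zeros(8)
f_old = lambda x: float(np.sum((x - 1.0) ** 2))  # objective before the change
f_new = lambda x: float(np.sum((x - 2.0) ** 2))  # the landscape has moved
if detect_change(f_new, sentinel, last_value=f_old(sentinel)):
    subpops = respond_to_change(subpops, f_new, (-5.0, 5.0), rng)
print([p.shape for p in subpops])
```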