Cloud workflow scheduling is significantly challenging due to not only the large scale of workflows but also the elasticity and heterogeneity of cloud resources. Moreover, the pricing model of clouds makes execution time and execution cost two critical issues in scheduling. This paper models cloud workflow scheduling as a multiobjective optimization problem that optimizes both execution time and execution cost. A novel multiobjective ant colony system based on a co-evolutionary multiple-populations-for-multiple-objectives framework is proposed, which adopts two colonies to deal with these two objectives, respectively. The proposed approach incorporates three novel designs to efficiently handle the multiobjective challenges: 1) a new pheromone update rule based on a set of nondominated solutions from a global archive, which guides each colony to search its own objective sufficiently; 2) a complementary heuristic strategy that prevents a colony from focusing only on its single objective and cooperates with the pheromone update rule to balance the search between both objectives; and 3) an elite study strategy that improves the solution quality of the global archive to further approach the global Pareto front. Experimental simulations are conducted on five types of real-world scientific workflows, taking into account the properties of the Amazon EC2 cloud platform. The results show that the proposed algorithm outperforms both state-of-the-art multiobjective optimization approaches and constrained optimization approaches.
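The pheromone update rule above draws on a global archive of nondominated (time, cost) solutions. As an illustration of the underlying Pareto-archive mechanism (a minimal sketch with hypothetical names, not the paper's implementation), such an archive can be maintained as follows:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b; objectives (time, cost) are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate into the nondominated archive, dropping entries it dominates."""
    if any(dominates(s, candidate) for s in archive):
        return archive  # candidate is dominated; archive unchanged
    return [s for s in archive if not dominates(candidate, s)] + [candidate]
```

Each colony would then bias its pheromone deposits toward archive members that score well on its own objective, which is how the two colonies share search information without merging their goals.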
The artificial potential field approach is an efficient path-planning method. However, to deal with the local-stable-point problem in complex environments, the potential field must be modified, which increases the complexity of the algorithm. This study combines an improved black-hole potential field with reinforcement learning to resolve local-stable-point scenarios. The black-hole potential field serves as the environment for a reinforcement learning algorithm: agents automatically adapt to the environment and learn how to use basic environmental information to find targets. Moreover, trained agents adapt to varying environments through curriculum learning. A visualization of the avoidance process demonstrates how agents avoid obstacles and reach the target. Our method is evaluated in static and dynamic experiments, and the results show that agents automatically learn how to escape local stability points without prior knowledge.
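The local-stable-point problem arises in the classical potential field formulation, where an attractive term toward the goal and repulsive terms around obstacles can sum to a local minimum away from the target. A sketch of that standard formulation (the textbook attractive/repulsive model, not the paper's black-hole variant) clarifies what the learning agent must escape:

```python
import math

def attractive_potential(pos, goal, k_att=1.0):
    """Quadratic attractive potential pulling the agent toward the goal."""
    return 0.5 * k_att * ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2)

def repulsive_potential(pos, obstacle, k_rep=1.0, d0=2.0):
    """Repulsive potential, active only within influence distance d0 of an obstacle."""
    d = math.hypot(pos[0] - obstacle[0], pos[1] - obstacle[1])
    if d >= d0 or d == 0.0:
        return 0.0
    return 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2

def total_potential(pos, goal, obstacles):
    """Total field: a local minimum away from the goal is a local stable point."""
    return attractive_potential(pos, goal) + sum(
        repulsive_potential(pos, o) for o in obstacles
    )
```

When an obstacle lies between the agent and the goal, gradient descent on `total_potential` can stall at such a minimum; the reinforcement learning agent instead learns a policy that moves through or around these regions.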
The multimodal optimization problem (MMOP), which aims to locate multiple optimal solutions simultaneously, is one of the most challenging problems in optimization. There are two general goals in solving MMOPs: one is to maintain population diversity so as to locate as many global optima as possible, while the other is to increase the accuracy of the solutions found. To achieve these two goals, a novel dual-strategy differential evolution (DSDE) with affinity propagation clustering (APC) is proposed in this paper. The novelties and advantages of DSDE include the following three aspects. First, a dual-strategy mutation scheme is designed to balance exploration and exploitation when generating offspring. Second, an adaptive selection mechanism based on APC is proposed to choose diverse individuals from different optimal regions, so as to locate as many peaks as possible. Third, an archive technique is applied to detect and protect stagnated and converged individuals: these individuals are stored in the archive to preserve the promising solutions found, and are reinitialized to explore new areas. The experimental results show that the proposed DSDE algorithm is better than, or at least comparable to, state-of-the-art multimodal algorithms on the CEC2013 benchmark problems in terms of locating more global optima, obtaining higher-accuracy solutions, and converging faster.
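A dual-strategy mutation scheme of this kind is commonly built from one explorative and one exploitative differential evolution operator. The sketch below uses the standard DE/rand/1 and DE/best/1 mutations as stand-ins (illustrative only; the abstract does not specify DSDE's exact operators):

```python
import random

def mutate_rand1(pop, i, F=0.5):
    """Explorative DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3)."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    dims = len(pop[i])
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dims)]

def mutate_best1(pop, fitness, i, F=0.5):
    """Exploitative DE/best/1 mutation: v = x_best + F * (x_r1 - x_r2)."""
    best = max(range(len(pop)), key=lambda j: fitness[j])
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    dims = len(pop[i])
    return [pop[best][d] + F * (pop[r1][d] - pop[r2][d]) for d in range(dims)]
```

In a dual-strategy scheme, exploring individuals (e.g., those far from any located peak) would use the rand-based operator, while individuals refining a peak would use the best-based one.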