Many evolutionary algorithms have been proposed for multi- and many-objective optimization problems; however, balancing convergence and diversity remains a challenge. In this paper, we propose a modified particle swarm optimization algorithm based on a decomposition framework with a different ideal point on each reference vector, called MPSO/DD, for many-objective optimization problems. In MPSO/DD, the decomposition strategy maintains the diversity of the population, while the ideal point on each reference vector drives the population to converge faster toward the optimal front. The position of each individual is updated by learning from the demonstrators in its neighborhood that lie closer to the ideal point along the reference vector. Eight state-of-the-art evolutionary multi- and many-objective optimization algorithms are compared with MPSO/DD on many-objective optimization problems. Experimental results on seven DTLZ test problems with 3, 5, 8, 10, 15 and 20 objectives demonstrate the efficiency of the proposed method on problems with high-dimensional objective spaces.
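The demonstrator-selection idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the normalization, and the use of projected distance along the reference vector as the selection criterion are assumptions made for clarity.

```python
import numpy as np

def demonstrator_index(objs, ref_vec, ideal_point):
    """Pick the neighborhood member closest to the ideal point along a
    reference vector (illustrative sketch of the MPSO/DD selection idea).

    objs        : (n, m) objective vectors of the neighborhood members
    ref_vec     : (m,) reference vector
    ideal_point : (m,) ideal point attached to this reference vector
    """
    v = ref_vec / np.linalg.norm(ref_vec)   # normalize the reference vector
    diff = objs - ideal_point               # translate objectives to the ideal point
    proj = diff @ v                         # projected distance along the vector
    return int(np.argmin(proj))             # demonstrator = smallest distance
```

A particle would then be attracted toward `objs[demonstrator_index(...)]` in its velocity update, in place of (or in addition to) the usual personal/global best terms.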
It has been widely recognized that efficient training of neural networks (NNs) is crucial to classification performance. While a series of gradient-based approaches have been extensively developed, they are prone to becoming trapped in local optima and are sensitive to hyper-parameters. Owing to their high robustness and wide applicability, evolutionary algorithms (EAs) have been regarded as a promising alternative for training NNs in recent years. However, EAs suffer from the curse of dimensionality and are inefficient at training deep NNs. Inheriting the advantages of both gradient-based approaches and EAs, this paper proposes a gradient-guided evolutionary approach to train deep NNs. The proposed approach introduces a novel genetic operator that optimizes the weights in the search space, with the search direction determined by the gradient of the weights. Moreover, network sparsity is considered in the proposed approach, which greatly reduces network complexity and alleviates overfitting. Experimental results on single-layer NNs, deep-layer NNs, recurrent NNs, and convolutional NNs demonstrate the effectiveness of the proposed approach. In short, this work not only introduces a novel approach for training deep NNs, but also enhances the performance of evolutionary algorithms in solving large-scale optimization problems.
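The gradient-guided operator combined with sparsification can be sketched as below. This is a simplified stand-in, not the paper's actual operator: the function name, parameters, noise term, and magnitude-based pruning rule are all assumptions chosen to illustrate the general idea of biasing an evolutionary step by the gradient while enforcing sparsity.

```python
import numpy as np

def gradient_guided_mutation(weights, grad, lr=0.1, noise_scale=0.01,
                             sparsity=0.5, rng=None):
    """Illustrative gradient-guided mutation: step against the gradient,
    add exploratory noise, then zero out the smallest-magnitude weights."""
    rng = np.random.default_rng() if rng is None else rng
    # gradient-guided step plus evolutionary exploration noise
    child = weights - lr * grad + noise_scale * rng.standard_normal(weights.shape)
    # enforce network sparsity: prune the smallest-magnitude fraction of weights
    k = int(sparsity * child.size)
    if k > 0:
        thresh = np.partition(np.abs(child).ravel(), k - 1)[k - 1]
        child = np.where(np.abs(child) <= thresh, 0.0, child)
    return child
```

In an EA loop, each offspring's weight vector would be produced by this operator and then evaluated by the network's loss, so that selection still acts on full training performance.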
Surrogate-assisted meta-heuristic algorithms have shown good performance in solving computationally expensive problems within a limited computational budget. Compared with methods that rely on a single surrogate model, surrogate ensembles have proved more efficient at finding good solutions. In this paper, we propose a bi-stage surrogate-assisted hybrid algorithm for expensive optimization problems. The framework consists of two stages. In the first stage, a number of global searches are conducted in sequence to explore different sub-spaces of the decision space, and the solution with the maximum uncertainty in the final generation of each global search is evaluated using the exact expensive objective function to improve the accuracy of the approximation in the corresponding sub-space. In the second stage, a local search is added to exploit the sub-space in which the best position found so far is located, seeking a better solution for exact expensive evaluation. The local and global searches in the second stage alternate to balance exploration and exploitation, and two different meta-heuristic algorithms are used for the global and local searches, respectively. To evaluate the performance of the proposed method, we conduct experiments on seven benchmark problems, the Lennard–Jones potential problem, and a constrained test problem, and compare with five state-of-the-art methods for solving expensive problems. The experimental results show that the proposed method obtains better results, especially on high-dimensional problems.
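The two-stage control flow described above can be sketched in miniature. This is a toy skeleton, not the paper's algorithm: random candidate sampling stands in for the two meta-heuristic searches, distance to the archive stands in for model uncertainty, and a nearest-neighbor lookup stands in for the surrogate ensemble. All names and parameters are illustrative.

```python
import numpy as np

def bi_stage_search(f, lb, ub, budget=30, n_global=3, seed=0):
    """Toy sketch of the bi-stage framework: stage 1 explores sub-spaces
    and exactly evaluates each global search's most uncertain candidate;
    stage 2 alternates local exploitation with global exploration."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(5, dim))        # initial exact evaluations
    y = np.array([f(x) for x in X])
    spent = len(y)

    def uncertainty(c):          # stand-in uncertainty: distance to the archive
        return np.min(np.linalg.norm(X - c, axis=1))

    def nn_predict(c):           # stand-in surrogate: nearest-neighbor value
        return y[np.argmin(np.linalg.norm(X - c, axis=1))]

    # stage 1: several "global searches", each around a random sub-space center
    for _ in range(n_global):
        if spent >= budget:
            break
        center = rng.uniform(lb, ub, size=dim)
        cand = np.clip(center + 0.2 * (ub - lb) *
                       rng.standard_normal((20, dim)), lb, ub)
        x_new = cand[np.argmax([uncertainty(c) for c in cand])]
        X = np.vstack([X, x_new]); y = np.append(y, f(x_new)); spent += 1

    # stage 2: local and global searches take turns
    use_local = True
    while spent < budget:
        best = X[np.argmin(y)]
        if use_local:            # exploit around the incumbent best
            cand = np.clip(best + 0.05 * (ub - lb) *
                           rng.standard_normal((20, dim)), lb, ub)
            x_new = min(cand, key=nn_predict)
        else:                    # explore where the model is least certain
            cand = rng.uniform(lb, ub, size=(20, dim))
            x_new = max(cand, key=uncertainty)
        X = np.vstack([X, x_new]); y = np.append(y, f(x_new)); spent += 1
        use_local = not use_local
    return X[np.argmin(y)], float(y.min())
```

The essential point the sketch preserves is the budget accounting: every call to `f` is an exact expensive evaluation, and both stages spend that budget only on candidates pre-screened by the (here trivial) surrogate and uncertainty measures.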