2019
DOI: 10.1109/access.2018.2885947
Adaptive Differential Evolution With Evolution Memory for Multiobjective Optimization

Abstract: In this paper, a multiobjective differential evolution (MODE) algorithm is developed by incorporating the memory mechanism of particle swarm optimization. That is, the personal-best concept is used in the MODE to memorize the evolution of each solution by maintaining a set of non-dominated solutions found by that solution. Besides the adaptive selection of multiple mutation operators, which is often adopted in the MODE, an adaptive refining method is used to improve the global external archive. The MODE is…
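The abstract's central idea, borrowing the personal-best memory of PSO, is that each individual keeps its own archive of non-dominated solutions it has discovered. A minimal sketch of that per-individual archive update, assuming a minimization problem (the function names and structure are illustrative, not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_personal_archive(archive, objectives):
    """Insert a new objective vector into an individual's personal archive,
    keeping only mutually non-dominated entries (the 'personal best' set)."""
    if any(dominates(old, objectives) for old in archive):
        return archive  # new point is dominated; archive unchanged
    # drop entries the new point dominates, then add it
    kept = [old for old in archive if not dominates(objectives, old)]
    kept.append(objectives)
    return kept
```

In a full MODE loop, this per-individual set would seed the memory-based mutation, while a separate global external archive collects the non-dominated front across the whole population.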

Cited by 7 publications (2 citation statements)
References 39 publications (48 reference statements)
“…This is due to the internal behaviour of the optimization process: it is stopped by an internal convergence criterion rather than by finding a global minimum [3]. More information on, and applications of, the different optimization algorithms can be found in [10,21,2,22,26,7,40,35], among others. Therefore, considering that the convergence of the method depends on the Robin parameters, as presented in [25], the initial Robin parameter (calculated as the condensed stiffness on the interface, estimated from an initial strip of elements, and presented in Figure 5) can be modified by means of two factors: α G for the global model and α L for the local fine model, as shown in Eq.…”
Section: Description of the Optimization Process
confidence: 99%
“…In [40], a cooperative DE framework is designed for constrained MOPs, in which multiple DE operators are run in different subpopulations, each optimizing its own constrained subproblem. In [41], four DE operators are combined, and a sliding window is adopted in [42] to provide the reward for each DE operator according to its improvement on the subproblems. Similarly, four DE operator pools are presented in [43], each pool containing two DE operators with complementary search patterns, to provide improved search ability.…”
Section: Indicator-Based MOEAs
confidence: 99%
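The sliding-window reward scheme this citation statement describes can be sketched as follows: each DE operator accumulates its recent per-generation improvements in a fixed-length window, and selection favors the operator with the best windowed average. All names and the ε-greedy choice here are assumptions for illustration, not the cited papers' code:

```python
import random
from collections import deque

class SlidingWindowSelector:
    """Adaptive operator selection: credit each operator with its recent
    improvements over a sliding window and pick the best-performing one."""

    def __init__(self, operators, window=50, eps=0.1):
        self.ops = list(operators)
        # one fixed-length reward window per operator
        self.windows = {op: deque(maxlen=window) for op in self.ops}
        self.eps = eps  # small exploration rate so no operator starves

    def select(self, rng=random):
        if rng.random() < self.eps:
            return rng.choice(self.ops)  # explore
        # exploit: highest mean reward over the window (0 if no history yet)
        def score(op):
            w = self.windows[op]
            return sum(w) / len(w) if w else 0.0
        return max(self.ops, key=score)

    def reward(self, op, improvement):
        """Record the fitness improvement attributed to operator op."""
        self.windows[op].append(improvement)
```

Because old rewards fall out of the window, the selector tracks which mutation strategy is currently effective rather than which was best on average over the whole run.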