2018
DOI: 10.1007/s10732-018-9369-x

Evaluating selection methods on hyper-heuristic multi-objective particle swarm optimization

Cited by 14 publications (9 citation statements)
References 49 publications
“…Many multi-objective particle swarm optimization (MOPSO) algorithms have been proposed in the literature to solve multi-objective optimization problems [13] and [37][38][39][40]. Many MOPSO algorithms concentrate on procedures for choosing the global best, and some MOPSOs use fixed values for the inertia and acceleration coefficients.…”
Section: Multi-objective Particle Swarm Optimization (mentioning)
confidence: 99%
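The excerpt above distinguishes MOPSO variants by how they choose the global best. A minimal sketch of one common approach, selecting a leader from an external archive of non-dominated solutions by binary tournament on crowding distance (the function name and its arguments are illustrative, not taken from the cited papers):

```python
import random

def select_leader(archive, crowding_distance):
    """Binary tournament on crowding distance: sample two members of the
    external archive of non-dominated solutions and return the one that
    lies in the less crowded region of objective space."""
    i, j = random.sample(range(len(archive)), 2)
    return archive[i] if crowding_distance[i] >= crowding_distance[j] else archive[j]
```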
“…Particle Swarm Optimization (PSO), an effective metaheuristic optimization method, simulates the social behavior of animals [27]. For many challenging optimization problems it outperforms other optimization methods, converging more quickly and more robustly [28][29][30][31][32][33][34][35][36][37][38][39][40]. Bonyadi and Michalewicz [28] reported a comprehensive study of the limitations, modifications and applications of PSO variants for optimization problems.…”
Section: Introduction (mentioning)
confidence: 99%
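For reference, the canonical PSO update that these MOPSO variants build on, with fixed inertia and acceleration coefficients of the kind the first excerpt mentions (the constants are typical textbook defaults, not values from the cited studies):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle; x, v, pbest and
    gbest are equal-length lists of floats, w is the inertia weight, and
    c1/c2 are the cognitive and social acceleration coefficients."""
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
         for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v
```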
“…In their MOHH, the HLH selects suitable operators through a 'reward' mechanism. Castro et al. [59] integrated a HH into MOPSO to select a proper combination of leader and archiving methods, and designed four selection strategies (CF, MAB, SR and roulette wheel). Zhang et al. [60] applied extreme value credit to reward operators and probability matching to select suitable operators.…”
Section: Hyper-heuristic Review (mentioning)
confidence: 99%
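The selection strategies this excerpt names share a common skeleton: keep a quality score per low-level heuristic, turn the scores into selection probabilities, and update the scores from a reward. A minimal roulette-wheel selector with probability-matching credit assignment, in the spirit of what [60] describes; the alpha and p_min defaults are illustrative, not values from the cited papers:

```python
import random

class ProbabilityMatching:
    """Roulette-wheel selection over low-level heuristics (LLHs) with
    probability-matching updates of each heuristic's quality estimate."""

    def __init__(self, n_heuristics, alpha=0.3, p_min=0.05):
        self.q = [1.0] * n_heuristics   # quality estimate per LLH
        self.alpha = alpha              # adaptation rate for credit updates
        self.p_min = p_min              # exploration floor per heuristic

    def probabilities(self):
        total = sum(self.q)
        k = len(self.q)
        return [self.p_min + (1 - k * self.p_min) * qi / total for qi in self.q]

    def select(self):
        """Spin the roulette wheel and return the index of the chosen LLH."""
        return random.choices(range(len(self.q)), weights=self.probabilities())[0]

    def reward(self, i, r):
        """Credit assignment: move the quality of LLH i toward reward r."""
        self.q[i] += self.alpha * (r - self.q[i])
```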
“…Based on the development of MOHHs in Table 2, LLHs can be operators or metaheuristics, and thus two main modules can be obtained: (1) MOEA-based hyper-heuristics (MOHH-I), which use operators (i.e., crossover, mutation, and domain-specific operators) or components (such as the leader selection methods and archiving strategies in MOEAs [65,67,68,79]) as LLHs, together with corresponding elitist selection strategies, such as the NSGA-II ranking mechanism [54,57,59,60,74,78,81,82] and Pareto strength (SPEA2) [57,78], as acceptance criteria. Here, HLHs can access the objective values [60], and methods that operate in objective space rather than solution space are independent of the domain problem.…”
Section: Literature Review (mentioning)
confidence: 99%
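A schematic of the MOHH-I control loop this excerpt describes, with operators as LLHs and elitist environmental selection (e.g., NSGA-II ranking) as the acceptance step. All arguments are placeholders for the components the excerpt names, and the survival-rate credit is an illustrative choice, not the mechanism of any cited paper:

```python
def mohh_step(population, llhs, selector, elitist_select):
    """One iteration of an operator-level MOHH (MOHH-I): the HLH picks an
    operator LLH, applies it, and elitist selection acts as acceptance.
    llhs is a list of callables mapping a population to offspring;
    elitist_select(pool, n) stands in for e.g. NSGA-II selection."""
    i = selector.select()               # HLH: choose a low-level heuristic
    offspring = llhs[i](population)     # LLH: e.g. crossover or mutation
    survivors = elitist_select(population + offspring, len(population))
    # Illustrative credit: fraction of offspring surviving into the next
    # population, fed back to the selection strategy (e.g. the
    # ProbabilityMatching sketch above).
    rate = sum(1 for s in offspring if s in survivors) / max(len(offspring), 1)
    selector.reward(i, rate)
    return survivors
```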
“…Here, HLHs can access the objective values [60], and methods that operate in objective space rather than solution space are independent of the domain problem. (2) Basic MOHH-II, which uses MOEAs as LLHs, e.g., Li et al. [60,83] and Maashi et al. [61,64], with corresponding acceptance criteria such as AM [61,64], improving and equal [67,68,79], and GDA and LA [64,83]. However, this classification is not shared by other studies.…”
Section: Literature Review (mentioning)
confidence: 99%
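The move-acceptance criteria this excerpt lists are simple enough to state directly. Hedged sketches for a minimized scalar cost; AM, improving-and-equal, GDA and LA follow their standard textbook formulations, and the water-level handling and history length are illustrative defaults, not parameters from the cited papers:

```python
def accept_all_moves(new_cost, current_cost):
    """All Moves (AM): accept every candidate unconditionally."""
    return True

def accept_improving_or_equal(new_cost, current_cost):
    """Improving and equal: accept candidates no worse than the incumbent."""
    return new_cost <= current_cost

def accept_great_deluge(new_cost, water_level):
    """Great Deluge (GDA): accept anything at or below the falling water
    level; the caller lowers water_level each iteration by a fixed decay."""
    return new_cost <= water_level

class LateAcceptance:
    """Late Acceptance (LA): compare the candidate against the cost the
    search had a fixed number of iterations ago."""

    def __init__(self, initial_cost, history_length=50):
        self.history = [initial_cost] * history_length
        self.current = initial_cost
        self.t = 0

    def accept(self, new_cost):
        idx = self.t % len(self.history)
        ok = new_cost <= self.history[idx] or new_cost <= self.current
        if ok:
            self.current = new_cost
        self.history[idx] = self.current
        self.t += 1
        return ok
```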