2021
DOI: 10.1371/journal.pone.0251204

Political optimizer with interpolation strategy for global optimization

Abstract: Political optimizer (PO) is a relatively new, state-of-the-art meta-heuristic optimization technique for global optimization problems as well as real-world engineering optimization; it mimics the multi-staged process of politics in human society. However, owing to a greedy strategy in the election phase and an inappropriate balance between global exploration and local exploitation in the party-switching stage, it suffers from stagnation in local optima and low convergence accuracy. To overcome such drawbac…

Cited by 14 publications (4 citation statements)
References: 68 publications

“…Basetti et al. (2021) proposed a Quasi-Oppositional-Based Political Optimizer (QOPO) by incorporating quasi-opposition-based learning (QOBL) (Tizhoosh, 2005) to improve the exploration and convergence capability of the political optimizer, and utilized it to solve the Economic Emission Load Dispatch Problem with Valve-Point Loading. Zhu et al. (2021) proposed seven variants of PO with different interpolation and refraction learning strategies. Xu et al. (2022) proposed an improved political optimizer, namely the Quantum Nelder-Mead Political Optimizer (QNMPO), to solve performance optimization in photovoltaic systems.…”
Section: Parliamentary (mentioning)
Confidence: 99%
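The quasi-opposition-based learning step mentioned in the statement above admits a compact illustration. The sketch below is a minimal Python rendering of the standard QOBL rule (Tizhoosh, 2005): for a variable x in [lb, ub], the opposite point is lb + ub - x, and the quasi-opposite point is sampled uniformly between the interval centre and that opposite point. The function names are my own and the sketch is not taken from the QOPO paper itself.

```python
import random

def quasi_opposite(x, lb, ub):
    """Quasi-opposition-based learning (QOBL) for one decision variable.

    For x in [lb, ub], the opposite point is lb + ub - x; the
    quasi-opposite point is drawn uniformly between the interval
    centre (lb + ub) / 2 and that opposite point.
    """
    centre = (lb + ub) / 2.0
    opposite = lb + ub - x
    lo, hi = min(centre, opposite), max(centre, opposite)
    return random.uniform(lo, hi)

def quasi_opposite_solution(x, lb, ub):
    """Apply QOBL component-wise to a candidate solution."""
    return [quasi_opposite(xi, li, ui) for xi, li, ui in zip(x, lb, ub)]

# Example: a 3-dimensional candidate in [-10, 10]^3.
x = [2.5, -7.1, 0.3]
print(quasi_opposite_solution(x, [-10.0] * 3, [10.0] * 3))
```

A quasi-oppositional variant would typically evaluate both the original candidate and its quasi-opposite and keep the fitter of the two, which is how QOBL is generally used to widen exploration.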
“…PO-Based Parameter Optimization. Finally, the PO algorithm is applied for the optimal adjustment of the parameters contained in the CFNN model [18,19]. The PO approach is inspired by the Western political process and comprises two features.…”
Section: The Proposed Model (mentioning)
Confidence: 99%
“…Askari et al. modified each stage of PO to improve the exploration ability and balance of the algorithm, because PO was found to converge prematurely on complex problems [34]. Zhu et al. integrated PO with quadratic interpolation, advanced quadratic interpolation, cubic interpolation, Lagrange interpolation, Newton interpolation, and refraction learning, and proposed a sequence of novel PO variants [35].…”
Section: Introduction (mentioning)
Confidence: 99%
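Among the interpolation strategies listed in the statement above, quadratic interpolation is the easiest to illustrate: a parabola is fitted through three candidate positions (per decision variable) using their fitness values, and its vertex is taken as a new trial point. The sketch below is a minimal, generic Python illustration of that textbook formula, not the exact operator proposed by Zhu et al. [35]; the function name quadratic_interpolation_point and the fallback rule for a vanishing denominator are assumptions of this sketch.

```python
def quadratic_interpolation_point(a, b, c, fa, fb, fc, eps=1e-12):
    """Vertex of the parabola fitted through (a, fa), (b, fb), (c, fc).

    a, b, c are three values of one decision variable and fa, fb, fc
    are the fitness values of the corresponding candidate solutions.
    Falls back to the point with the best (lowest) fitness when the
    three samples are (nearly) degenerate and the denominator vanishes.
    """
    num = (b**2 - c**2) * fa + (c**2 - a**2) * fb + (a**2 - b**2) * fc
    den = (b - c) * fa + (c - a) * fb + (a - b) * fc
    if abs(den) < eps:
        return min((fa, a), (fb, b), (fc, c))[1]
    return 0.5 * num / den

# Sanity check: f(x) = (x - 3)^2 sampled at 0, 1 and 5 -> vertex at x = 3.
f = lambda x: (x - 3.0) ** 2
print(quadratic_interpolation_point(0.0, 1.0, 5.0, f(0.0), f(1.0), f(5.0)))
```

In an interpolation-enhanced PO variant, such a trial point would usually be compared against the current population member and accepted only if it improves the fitness; the cubic, Lagrange, and Newton strategies follow the same idea with more sample points.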