2023
DOI: 10.1016/j.eswa.2022.119246
QQLMPA: A quasi-opposition learning and Q-learning based marine predators algorithm

Cited by 41 publications (8 citation statements)
References 44 publications
“…Zhao et al. [185] introduced another modified version of MPA that uses quasi-opposition-based learning and Q-learning to increase population diversity, enhance global search ability, and avoid becoming trapped in local optima. Their algorithm is called QQLMPA.…”
Section: Variants of Marine Predators Algorithm
confidence: 99%
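For context, quasi-opposition-based learning augments a population by sampling, for each candidate, a point between the search-space centre and the candidate's opposite point. The following is a minimal sketch of that generation step, a generic illustration rather than the QQLMPA authors' exact implementation:

```python
import numpy as np

def quasi_opposite(x, lb, ub, rng):
    """Quasi-opposite point of candidate x in the box [lb, ub]:
    a uniform sample between the centre c = (lb + ub) / 2 and the
    opposite point xo = lb + ub - x."""
    c = (lb + ub) / 2.0
    xo = lb + ub - x                              # opposite point
    lo, hi = np.minimum(c, xo), np.maximum(c, xo)
    return rng.uniform(lo, hi)                    # quasi-opposite point

# Example: quasi-opposite of a 5-dimensional candidate in [-10, 10]^5.
rng = np.random.default_rng(0)
lb, ub = np.full(5, -10.0), np.full(5, 10.0)
x = rng.uniform(lb, ub)
x_qo = quasi_opposite(x, lb, ub, rng)
```

Evaluating both x and its quasi-opposite and keeping the fitter one is the usual way such points are used to diversify a population.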
“…| Modified MPA | Integrated the sine-cosine algorithm (SCA) with the MPA for feature selection problems [12]
Opposition-based learning | Modified MPA | Random opposition-based learning (ROBL) was integrated with the MPA to enhance its global search ability [33]
Arabic opinion mining | Modified MPA | MPA is proposed to determine the nature of opinions in the Arabic language in order to select the most pertinent terms
Green space detection | Original MPA | Proposes a remote sensing and data-driven solution for urban green space detection at a regional scale via the employment of MPA and SVM [84]
Prediction of metal prices | Modified MPA | MPA is proposed to enhance the prediction performance and overcome the limitations of individual models [55]
Biological clustering | Modified MPA | A new binary EO was utilized and combined with the MPA to address the biological clustering problem and cluster multi-omics datasets [74]
Ridge regression | Original MPA | t-Distribution MPA is proposed to replace the traditional gradient-dependent method in optimizing the loss function in ridge regression [97]
Forecasting | Modified MPA | An improved MPA version applied to an adaptive neuro-fuzzy inference system (ANFIS) model to forecast the number of infected people in four countries [22]
Global optimization:
Engineering optimization | Original MPA | MPA is enhanced by emphasizing its local search capabilities using the Nelder-Mead algorithm [126]
Continuous optimization | Modified MPA | Combining the MPA with chaotic maps to enhance its exploitation ability, striking the right balance between exploitation and exploration [103]
Continuous optimization | Modified MPA | Combining MPA with quasi-opposition-based learning and Q-learning to increase population diversity, enhance global search ability, and avoid traps in local optima [185]
Continuous optimization | Hybrid MPA | Solving local optima stagnation and lack of population diversity by using the estimation of distribution algorithm and a Gaussian random walk strategy [145]
Multi-objective | Multi-objective MPA | Introduced another multi-objective MPA for global optimization, called MOMPA [42]
Engineering problems | Modified MPA | Enhanced the MPA performance and search behaviour around local optima [123]
Multi-objective | Multi-objective MPA | A multi-objective version of MPA, named MOMPA, was introduced for multi-objective optimization problems [95]

Another issue relates to the complexity of optimization problems: the original MPA is modelled to address only continuous, single-objective optimization problems. However, optimization problems are not limited to these types; they can be formulated as multi- or many-objective, binary or discrete, dynamic or combinatorial.…”
Section: Metabolomics
confidence: 99%
“…MPA selects the appropriate motion based on the relationship between the current iteration number and the maximum number of iterations, and it does not utilize the information generated in previous iterations, which increases the computational cost and running time and reduces the convergence speed. To address this shortcoming, reinforcement learning (Q-learning) can be used to fully exploit the iteration information, improving the convergence speed of MPA and preventing it from prematurely falling into the exploitation phase [26]. In the Q-learning algorithm, the Q-table is updated according to the Bellman equation (Equation (8)) to gain experience.…”
Section: QLMPA Optimization Principles and Processes
confidence: 99%
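The Bellman-style Q-table update the citing paper labels Equation (8) is the standard tabular Q-learning rule. Below is a minimal sketch; the state/action sizes and hyperparameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Tabular Q-learning update:
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
n_states, n_actions = 3, 3            # hypothetical: e.g. one state/action per MPA phase
alpha, gamma = 0.1, 0.9               # learning rate, discount factor (assumed)
Q = np.zeros((n_states, n_actions))   # Q-table, initialised to zero

def q_update(Q, s, a, r, s_next):
    """One Q-table update after taking action a in state s,
    receiving reward r, and landing in state s_next."""
    td_target = r + gamma * np.max(Q[s_next])   # Bellman target
    Q[s, a] += alpha * (td_target - Q[s, a])    # move toward the target
    return Q

# Example: reward 1.0 when the chosen move improved the best fitness.
Q = q_update(Q, s=0, a=2, r=1.0, s_next=1)
```

Looking up the highest-valued action for the current state then replaces the purely iteration-count-based phase selection of the standard MPA.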
“…Furthermore, an optimization algorithm must be used to determine the global optimum of the surrogate (agent) model and the corresponding parameter combinations. Since the Marine Predators Algorithm (MPA) has significant advantages in engineering optimization problems [24,25] and, when combined with the Q-learning algorithm, converges faster than the standard MPA [26], QLMPA can be applied to optimize the heat extraction performance of the SFCBHE.…”
Section: Introduction
confidence: 99%
“…Recently, some swarm intelligence algorithms have gained popularity as effective search tools for hyperparameter optimization, including those inspired by marine predators [15], firefly colonies [16], and bird flocks, owing to their self-organization, parallel operation, flexibility, and robustness [17]. For example, the traditional particle swarm optimization (PSO) algorithm has been used to tune the parameters of a BPNN for PV power prediction [18].…”
Section: Introduction
confidence: 99%
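To make the swarm-based tuning loop described above concrete, here is a minimal particle swarm optimization sketch; the objective, bounds, and coefficient values are illustrative assumptions, not the configuration of the cited PV study:

```python
import numpy as np

def pso_minimize(f, lb, ub, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best
    and the swarm's global best (inertia/acceleration values assumed)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                    # keep inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Example: tune two hypothetical BPNN hyperparameters (learning rate,
# hidden size) against a toy objective standing in for validation error.
best, best_f = pso_minimize(lambda p: (p[0] - 0.01)**2 + (p[1] - 32)**2,
                            lb=np.array([1e-4, 4.0]),
                            ub=np.array([0.1, 128.0]))
```

In practice the toy objective would be replaced by a training-plus-validation run of the network being tuned, which is where the bulk of the computational cost lies.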