2021
DOI: 10.3390/sym13122364

Harris Hawks Optimization with Multi-Strategy Search and Application

Abstract: In the basic HHO algorithm, the probability of choosing between the different search methods is symmetric: about 0.5 over the interval from 0 to 1. The optimal solution from the previous iteration affects the current solution, the linear search for prey yields a single search result, and the optimal position is updated relatively few times overall. These factors limit the Harris Hawks optimization algorithm: for example, it easily falls into a local optimum and its convergence efficiency is low. …
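For context, below is a minimal NumPy sketch of the exploration phase of the basic HHO algorithm that the abstract refers to: the branch variable q is drawn uniformly from [0, 1], so the two exploration strategies are selected with the symmetric ~0.5 probability. Function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def hho_exploration_step(X, X_rabbit, lb, ub, rng=None):
    """One exploration update of the basic HHO algorithm (illustrative names).

    q is drawn uniformly from [0, 1], so each of the two exploration
    strategies is chosen with probability about 0.5; this is the symmetric
    choice described in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, dim = X.shape
    X_mean = X.mean(axis=0)
    X_new = np.empty_like(X)
    for i in range(n):
        q = rng.random()
        r1, r2, r3, r4 = rng.random(4)
        if q >= 0.5:
            # Perch based on a randomly selected hawk from the population.
            X_rand = X[rng.integers(n)]
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:
            # Perch relative to the best solution (rabbit) and the swarm mean.
            X_new[i] = (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)
```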

Cited by 12 publications (3 citation statements)
References 91 publications
“…Jiao et al [162] proposed a multistrategy search HHO using the Least Squares Support Vector Machine (LSSVM). They used the Gauss chaotic method as the initialization method.…”
Section: Other HHO Variants
confidence: 99%
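For illustration, a short sketch of chaotic population initialization follows. The Gauss/mouse map form x_{k+1} = frac(1/x_k) is assumed here; the cited work may use a different parameterisation, and the linear mapping of the chaotic sequence onto the search bounds is likewise only illustrative.

```python
import numpy as np

def gauss_chaotic_init(pop_size, dim, lb, ub, x0=0.7):
    """Chaotic population initialization (illustrative sketch).

    Assumes the Gauss/mouse map x_{k+1} = frac(1 / x_k), with x_{k+1} = 0
    when x_k = 0; the cited work may use another parameterisation.
    The chaotic sequence in (0, 1) is mapped linearly onto [lb, ub].
    """
    seq = np.empty(pop_size * dim)
    x = float(x0)
    for k in range(seq.size):
        x = 0.0 if x == 0.0 else (1.0 / x) % 1.0
        seq[k] = x
    return lb + seq.reshape(pop_size, dim) * (ub - lb)
```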
“…With continued research on deep learning, Alexey Bochkovskiy et al. proposed the YOLOv4 model in 2020 [10], which uses the CSPDarknet53 backbone network and the Mosaic data-augmentation method and tunes the model parameters through self-adversarial training to improve stability and accuracy. Trained on the MS COCO dataset, it achieved 43.5% AP, but the model's complexity and heavy computation lead to long training times, and it is prone to overfitting on small-scale datasets. Afterwards, some teams further improved YOLOv4 by replacing the CSPBlock module with the ResBlock-D module [11], aiming to increase training speed, reduce complexity, and support real-time detection, but it remains difficult to balance model size against detection accuracy.…”
Section: Introduction
confidence: 99%
“…It can expand the predation area and increase the randomness of the algorithm [20]. To obtain a better search effect, Shangbin Jiao et al. [21] added a nonlinear weight w(t) to the HHO search process, which improved the early global search ability. The Harris hawk search strategy with nonlinear weights is therefore introduced into the WOA search-predation stage, and its expression is as follows [21]:…”
Section: Introducing the Harris Hawk Strategy
confidence: 99%
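The quoted excerpt breaks off before the actual expression, so it is not reproduced here. Purely as an illustration of how a nonlinear weight w(t) can enter an HHO-style position update, a sketch with an assumed quadratic decay is shown below; neither the weight formula nor the function names come from [21].

```python
import numpy as np

def nonlinear_weight(t, t_max, w_start=1.0, w_end=0.1):
    # Assumed quadratic decay from w_start to w_end; NOT the expression
    # from [21], which is not reproduced in the excerpt above.
    return w_end + (w_start - w_end) * (1.0 - t / t_max) ** 2

def weighted_soft_besiege(X_i, X_rabbit, E, t, t_max, rng=None):
    """HHO soft-besiege update with the prey (rabbit) position scaled by w(t)."""
    rng = np.random.default_rng() if rng is None else rng
    w = nonlinear_weight(t, t_max)
    J = 2.0 * (1.0 - rng.random())  # random jump strength of the prey
    return (w * X_rabbit - X_i) - E * np.abs(J * w * X_rabbit - X_i)
```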