2022
DOI: 10.1007/s11063-022-10804-x
Improved Sparrow Search Algorithm with the Extreme Learning Machine and Its Application for Prediction

Cited by 12 publications (11 citation statements)
References 30 publications
“…In this formula, S is the step size and L(S) is the probability of moving with step size S. Then, the Mantegna method is used to generate random step sizes following the Lévy distribution [12]:…”
Section: Self-adaptive Levy Flight Strategy Improves Dung Beetle Stea...mentioning
confidence: 99%
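The Mantegna step-size generation mentioned above can be sketched as follows. This is a minimal illustration, not code from the paper; the function name, the default exponent β = 1.5, and the use of NumPy are assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def mantegna_levy_step(beta=1.5, size=1, rng=None):
    """Generate Levy-distributed random step sizes via the Mantegna method.

    Draws u ~ N(0, sigma_u^2) and v ~ N(0, 1), then returns
    s = u / |v|^(1/beta), where sigma_u is chosen so that s follows
    a Levy-stable distribution with index beta.
    """
    rng = np.random.default_rng(rng)
    # Standard deviation of u from the Mantegna formula.
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

In a metaheuristic such as the improved dung beetle or sparrow search algorithms, these steps would scale the position update so that frequent short moves are mixed with occasional long jumps.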
“…To avoid accidental errors, three models (IPOA-ELM, POA-ELM, and ELM) are run on the data set. After 40 runs, the mean, maximum, and minimum values are taken and compared with the ISSA-ELM model in [32].…”
Section: Ipoa-elm Engineering Applicationsmentioning
confidence: 99%
“…To address these issues, a single Hidden Layer (HL) feedforward neural network model called the Extreme Learning Machine (ELM) has been proposed, which has significant advantages in parameter setting [7][8]. Operational efficiency refers to the resource utilization and execution efficiency a model exhibits during task execution. ELM uses randomly initialized hidden-layer parameters and obtains the output-layer weights analytically, eliminating the need for iterative parameter updates and greatly reducing the model's running time, making it an efficient neural network model [9]. However, ELM still faces challenges in classification, such as model uncertainty due to random parameter generation and the inability to guarantee optimal classification performance.…”
Section: Introductionmentioning
confidence: 99%
“…The weight W and bias B from the input layer to the first HL are randomly initialized. The output matrix of the first HL is shown in formula (9).…”
mentioning
confidence: 99%
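Formula (9) itself is not reproduced in the excerpt. In the standard ELM notation, the output matrix of the first hidden layer for N samples x_j and L hidden nodes with activation g typically takes the form below; this is a reconstruction from the standard formulation, not a quote from the paper:

```latex
H = \begin{bmatrix}
g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\
\vdots & \ddots & \vdots \\
g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L)
\end{bmatrix}_{N \times L}
```

Each row is one sample's hidden-layer response, and the output weights are then solved against H analytically.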