2023
DOI: 10.1007/s10462-023-10478-4
A Review of multilayer extreme learning machine neural networks

Cited by 12 publications (5 citation statements) | References 144 publications
“…Huang et al. presented a mathematical model for a single-hidden-layer feedforward ELM, as shown in figure 2, that contains Ñ neurons in the hidden layer and provides an output through the function f_Ñ(·), which is represented by the following mathematical expression [38]…”
Section: Extreme Learning Machine (ELM) (mentioning)
confidence: 99%
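The single-hidden-layer ELM described in the quotation can be sketched numerically: hidden weights and biases are drawn at random and never trained, and only the output weights are fitted analytically. This is a minimal illustration of the standard algorithm, not the cited authors' implementation; the toy dataset and all dimensions are invented for the example.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Train a single-hidden-layer ELM: random hidden layer, analytic output weights."""
    n_features = X.shape[1]
    # Hidden-layer weights and biases are random and stay fixed.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T             # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
T = (X[:, 0] ** 2 + X[:, 1]).reshape(-1, 1)  # toy regression target
W, b, beta = elm_train(X, T, n_hidden=50, rng=rng)
pred = elm_predict(X, W, b, beta)
```

Because the hidden layer is never iterated over, the entire "training" step is the single pseudoinverse solve, which is what makes ELM non-iterative.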
“…The following equation denotes the relationship between the function G_i(·) and the activation function g(·): G_i(·) applies g(·) either to an affine combination of the input, for additive neurons, or through a radial basis function, respectively [38]…”
Section: Extreme Learning Machine (ELM) (mentioning)
confidence: 99%
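The two hidden-node types named in the quotation can be made concrete. In the standard ELM formulation an additive node computes G_i(x) = g(a·x + b), while an RBF node evaluates a radial basis function of the distance to a center; the Gaussian form below is one common choice, used here purely as an illustration (all variable names and values are invented).

```python
import numpy as np

def additive_node(x, a, b, g=np.tanh):
    """Additive hidden node: G_i(x) = g(a . x + b)."""
    return g(a @ x + b)

def rbf_node(x, a, b):
    """Gaussian RBF hidden node: G_i(x) = exp(-b * ||x - a||^2), center a, impact factor b > 0."""
    return np.exp(-b * np.sum((x - a) ** 2))

x = np.array([0.5, -0.2])   # input sample
a = np.array([0.1, 0.3])    # random weight vector / RBF center
print(additive_node(x, a, b=0.2))
print(rbf_node(x, a, b=1.0))
```

The same activation g(·) thus plays two roles: composed with an inner product for additive nodes, or with a distance for RBF nodes.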
“…These highlight the need to investigate further the number of neurons in the hidden layer in future studies. Recently, two reviews of multilayer ELM neural networks [13,14] have been presented, highlighting the importance of implementing parallel and distributed computing to address big data problems with this variant. Finally, Patil and Sharma [12] provide a review of theories, algorithms, and applications of ELM, while Huérfano-Maldonado et al [11] present a comprehensive review of medical image processing with ELM.…”
Section: ELM Based on Metaheuristics (mentioning)
confidence: 99%
“…The matrices H^T H + (1/C) I and H H^T + (1/C) I in (6) are non-singular, since both H^T H and H H^T are positive semidefinite symmetric matrices and C > 0. When we remove the direct links from inputs to outputs in RVFL, the literature often presents this feedforward neural network as an ELM [55,56].…”
Section: Random Vector Functional Link Network (mentioning)
confidence: 99%
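The two regularized matrices in the quotation correspond to two equivalent closed forms for the output weights, via the identity (HᵀH + (1/C)I)⁻¹Hᵀ = Hᵀ(HHᵀ + (1/C)I)⁻¹. A small numerical check (C, the dimensions, and the random data are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_hidden, n_outputs, C = 120, 30, 2, 10.0
H = rng.normal(size=(n_samples, n_hidden))   # hidden-layer output matrix
T = rng.normal(size=(n_samples, n_outputs))  # target matrix
I_h = np.eye(n_hidden)
I_n = np.eye(n_samples)

# "Primal" form: beta = (H^T H + (1/C) I)^{-1} H^T T
beta_primal = np.linalg.solve(H.T @ H + I_h / C, H.T @ T)
# "Dual" form:   beta = H^T (H H^T + (1/C) I)^{-1} T
beta_dual = H.T @ np.linalg.solve(H @ H.T + I_n / C, T)

print(np.allclose(beta_primal, beta_dual))  # prints True
```

In practice one inverts whichever matrix is smaller: the n_hidden-sized one when samples outnumber hidden nodes, the n_samples-sized one otherwise.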
“…• ELM and RVFL networks: These two non-iterative architectures learned nonlinear features thanks to the sigmoid function ϕ(z) = 1/(1 + exp(−z)), chosen for its efficiency in this class of algorithms [56,66]. The random weights and biases in the hidden layer followed a uniform distribution in the [−1, 1] range [55].…”
Section: Selection of Learning Models Based on Neural Network (mentioning)
confidence: 99%
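The distinction between the two architectures in the quotation, RVFL with direct input-to-output links versus ELM without them, can be sketched as follows. The sigmoid activation and the uniform [−1, 1] initialization follow the quoted setup; the data and sizes are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def random_layer(X, n_hidden, rng):
    """Random hidden layer shared by ELM and RVFL: fixed uniform weights in [-1, 1]."""
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    return sigmoid(X @ W + b)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
T = rng.normal(size=(100, 1))
H = random_layer(X, n_hidden=20, rng=rng)

# ELM: output weights fit on the hidden features only.
beta_elm = np.linalg.pinv(H) @ T
# RVFL: direct links concatenate the raw inputs to the hidden features.
D = np.hstack([X, H])
beta_rvfl = np.linalg.pinv(D) @ T

print(beta_elm.shape, beta_rvfl.shape)  # (20, 1) (23, 1)
```

Removing the X block from the concatenation reduces the RVFL readout to the ELM one, which is exactly the relationship the quoted passage describes.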