2019
DOI: 10.1016/j.neucom.2019.04.092
Hybrid extreme learning machine approach for heterogeneous neural networks

Abstract: In this paper, a hybrid learning approach, which combines the extreme learning machine (ELM) with a genetic algorithm (GA), is proposed. The utilization of this hybrid algorithm enables the creation of heterogeneous single layer neural networks (SLNNs) with better generalization ability than traditional ELM in terms of lower mean square error (MSE) for regression problems or higher accuracy for classification problems. The architecture of this method is not limited to traditional linear neurons, where each inp…
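The hybrid described in the abstract can be illustrated with a minimal sketch: a GA searches over the hidden-layer parameters of an ELM, while the output weights are solved analytically by least squares at every fitness evaluation. The population size, mutation scale, toy data, and network size below are illustrative assumptions, not the paper's actual encoding or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): y = x^2 on [-1, 1]
X = np.linspace(-1, 1, 100).reshape(-1, 1)
y = X.ravel() ** 2

L = 10            # hidden neurons (hypothetical choice)
POP, GENS = 20, 30  # GA population size and generations (hypothetical)

def elm_mse(params):
    """Fitness: train an ELM with the given hidden parameters, return its MSE.

    An individual encodes the input weights (first L genes) and the
    hidden biases (last L genes); the output weights are not evolved,
    they are solved analytically via the Moore-Penrose pseudoinverse.
    """
    W = params[:L].reshape(1, L)
    b = params[L:]
    H = np.tanh(X @ W + b)           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y     # analytic output weights
    return np.mean((H @ beta - y) ** 2)

# Initial population of hidden-layer parameter vectors
pop = rng.normal(size=(POP, 2 * L))

for _ in range(GENS):
    fitness = np.array([elm_mse(ind) for ind in pop])
    order = np.argsort(fitness)
    parents = pop[order[:POP // 2]]  # truncation selection: keep the best half
    # Offspring: arithmetic crossover of random parent pairs + Gaussian mutation
    idx = rng.integers(0, len(parents), size=(POP - len(parents), 2))
    children = parents[idx].mean(axis=1) \
        + 0.1 * rng.normal(size=(POP - len(parents), 2 * L))
    pop = np.vstack([parents, children])

best = pop[np.argmin([elm_mse(ind) for ind in pop])]
```

This is only a sketch of the general GA+ELM idea: because the output weights are recomputed in closed form inside the fitness function, the GA only has to explore the space of hidden-layer parameters.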

Cited by 24 publications (11 citation statements)
References 60 publications
“…For example, the poor performance indices and low learning rate of the BP algorithm, along with how easily it becomes trapped in a local optimum, limit the compensation accuracy of this method. The simplified approximation and iterative estimation in the analytical compensation method reduce prediction accuracy and incur high on-line computational complexity, which is not suitable for an on-orbit embedded system. The bivariate polynomial compensation template is valid only for some specific Gaussian width cases, so its application range is limited. Finally, the problem of scientifically setting the penalty factor and kernel parameter in LSSVR remains unsettled, making model training more difficult because of a time-consuming parameter-selection process. ELM [24,25] has attracted attention in robot control [26], human face recognition [27], medical diagnosis [28], sales forecasting [29], and protein structure prediction [30], among other fields, due to its simple training process and excellent generalization ability compared to other traditional algorithms [31]. However, the randomness of the input weights and hidden-layer biases becomes a bottleneck restricting the stability and prediction accuracy of the ELM network.…”
Section: Methodsmentioning
confidence: 99%
“…The variable b_j is the hidden-layer threshold; β_j is the connection weight between the hidden layer and the output layer. The variable O_j is the output of the network [25].…”
Section: Extreme Learning Machinementioning
confidence: 99%
“…The ELM algorithm, which was initially proposed by Guangbin Huang in 2004 (Christou et al., 2018), is characterized by randomly or artificially assigned hidden-layer weights and very fast training and prediction phases. The hidden-layer weights do not need to be updated; the learning process computes only the output weights.…”
Section: Extreme Learning Machinementioning
confidence: 99%
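The training procedure the statements above describe — random hidden weights and biases that are never updated, with only the output weights computed analytically — can be sketched in a few lines. The sigmoid activation, network size, and toy regression data here are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): y = sin(x) on [0, pi]
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

L = 50  # number of hidden neurons (hypothetical choice)

# Step 1: randomly assign input weights W and hidden biases b; these are fixed
W = rng.normal(size=(X.shape[1], L))
b = rng.normal(size=L)

# Step 2: compute the hidden-layer output matrix H with a sigmoid activation
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Step 3: solve the output weights beta in one shot via the
# Moore-Penrose pseudoinverse (no iterative weight updates)
beta = np.linalg.pinv(H) @ y

y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
```

Because step 3 is a closed-form least-squares solve rather than an iterative optimization, training is very fast — which is the property the citing papers highlight, alongside the drawback that the random W and b in step 1 make the result depend on the random draw.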