2020
DOI: 10.1155/2020/9175106

Smoothing L0 Regularization for Extreme Learning Machine

Abstract: Extreme learning machine (ELM) has been put forward for single hidden layer feedforward networks. Because of its powerful modeling ability and the fact that it requires little human intervention, the ELM algorithm has been used widely in both regression and classification experiments. However, in order to achieve the required accuracy, it needs many more hidden nodes than conventional neural networks typically require. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization.
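For orientation, a minimal, hypothetical sketch of the basic ELM training procedure (random input weights, output weights solved in closed form by least squares) is given below; it is illustrative only and is not the smoothing-L0 algorithm proposed in the paper:

```python
# Minimal ELM sketch for regression (illustrative, not the paper's method):
# input weights and biases are drawn at random and never trained; only the
# output weights beta are fitted, via the Moore-Penrose pseudoinverse.
import numpy as np

def train_elm(X, y, n_hidden=40, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a noisy sine curve.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).standard_normal(200)
W, b, beta = train_elm(X, y)
print("train MSE:", np.mean((predict_elm(X, W, b, beta) - y) ** 2))
```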

Cited by 9 publications (6 citation statements) · References 28 publications
“…Yang et al. proposed an ELM algorithm with a smoothing regularizer for improving the compactness of the network and confirmed experimentally that the proposed algorithm performed better in prediction and network sparsity [11]. Fan et al. suggested a new efficient extreme learning machine algorithm with smoothing L0 regularization and experimentally demonstrated that the proposed algorithm had fewer hidden nodes and better generalization performance [12]. Zhang et al. proposed a directional algorithm for convex nonlinear second-order conic programming.…”
Section: Related Work
confidence: 96%
“…However, the regularization term is not differentiable at the origin, which makes theoretical analysis difficult. To overcome this defect, a smoothing technique has been proposed [5,14,19,23].…”
Section: Introduction
confidence: 99%
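For context on the smoothing technique mentioned in the statement above, one commonly used differentiable surrogate for the L0 penalty on the output weights is shown below; this is an illustrative choice, and the cited works may use a different smoothing function:

```latex
% A smooth surrogate for the zero-norm of the output weights
% \beta = (\beta_1, \dots, \beta_L); \sigma > 0 controls how closely the
% surrogate approximates the true L0 count, recovering it as \sigma \to 0^{+}.
\|\beta\|_{0} \;\approx\; \sum_{j=1}^{L} \left( 1 - e^{-\beta_j^{2}/\sigma^{2}} \right),
\qquad
\lim_{\sigma \to 0^{+}} \sum_{j=1}^{L} \left( 1 - e^{-\beta_j^{2}/\sigma^{2}} \right) = \|\beta\|_{0}.
```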
“…However, since the input parameters are generated randomly and the ELM requires a large number of hidden neurons, the amplitude of the output weights will be large when the hidden-layer output matrix is ill-conditioned, which causes the trained model to fall into a local minimum and exhibit overfitting [21]. In [22,23], ELMs based on different regularizations were proposed to effectively overcome the overfitting phenomenon. The accuracy and effectiveness of the ELM algorithm largely depend on the internal parameters of the model.…”
Section: Introduction
confidence: 99%
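As an illustration of the regularized ELM mentioned in the statement above, the sketch below assumes a standard L2 (ridge) penalty rather than the specific regularizers of the cited works; adding a multiple of the identity before inverting keeps the output weights bounded even when the hidden-layer output matrix H is ill-conditioned:

```python
# Sketch of an L2-regularized ELM output-weight solve (assumed ridge form,
# not the specific regularizers of the cited works):
# beta = (H^T H + lam * I)^{-1} H^T y.
import numpy as np

def solve_output_weights(H, y, lam=1e-2):
    n_hidden = H.shape[1]
    A = H.T @ H + lam * np.eye(n_hidden)  # regularization keeps A well-conditioned
    return np.linalg.solve(A, H.T @ y)    # bounded weights even if H is ill-conditioned
```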