2020
DOI: 10.1155/2020/3604579
BD-ELM: A Regularized Extreme Learning Machine Using Biased DropConnect and Biased Dropout

Abstract: In order to prevent overfitting and improve the generalization performance of the Extreme Learning Machine (ELM), this paper proposes a new regularization method, Biased DropConnect, and a new regularized ELM (BD-ELM) that uses both Biased DropConnect and Biased Dropout. Like Biased Dropout on hidden nodes, Biased DropConnect exploits differences between connection weights to retain more of the network's information after dropping. Regular Dropout and DropConnect set the connection weights and outp…
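As a rough illustration of the biased idea, here is a minimal sketch of a Biased DropConnect-style mask, assuming a simple magnitude-based biasing rule (the `drop_prob` and `bias` parameters and the median split are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def biased_dropconnect(W, drop_prob=0.5, bias=0.3, rng=None):
    """Sketch of a biased DropConnect mask: connections with larger
    weight magnitudes are kept with higher probability, so more of the
    network's information survives the drop. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    mag = np.abs(W)
    # Hypothetical biasing rule: above-median weights get a lower drop
    # probability, below-median weights get a higher one.
    p = np.where(mag >= np.median(mag), drop_prob - bias, drop_prob + bias)
    p = np.clip(p, 0.0, 1.0)
    mask = rng.random(W.shape) >= p  # True = keep the connection
    return W * mask
```

Unlike plain DropConnect's uniform drop rate, the bias lets connections carrying larger weights survive more often, which is the intuition behind keeping "more information of the network after dropping."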

Cited by 7 publications (7 citation statements)
References 16 publications (21 reference statements)
“…The Extreme Learning Machine is a fast learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs) [17]. The ELM network structure is shown in Fig.…”
Section: Methods
confidence: 99%
“…The input weights and the hidden layer biases are determined randomly and only the output layer is trained [6, 17].…”
Section: Methods
confidence: 99%
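For context, a minimal sketch of that training procedure (the sigmoid activation and the pseudoinverse solve are standard choices assumed here, not details taken from the quoted papers):

```python
import numpy as np

def train_elm(X, T, n_hidden=100, rng=None):
    """Basic ELM training: hidden-layer parameters are drawn once at
    random and never updated; only the output weights are solved."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                             # closed-form output weights
    return W, b, beta
```

Only `beta` is learned; prediction is simply the hidden-layer output of a new `X` multiplied by `beta`, which is why ELM training is so much faster than gradient-based methods.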
“…The authors of ELM claim that assigning weights in ELM takes significantly less time than standard network training methods, with comparable or even better performance. Extreme learning machines have been used as a baseline for many further models, such as stacked ELM [51], online sequential ELM [14], LRF-ELM designed for image classification [17], and biased dropout ELM [25]. ELM can also be considered a non-recurrent equivalent of reservoir computing (RC) models [40].…”
Section: Motivation
confidence: 99%
“…However, nodes with smaller weights tend to learn the noise in the data, resulting in poor generalization. Reducing overfitting while maintaining enough hidden nodes to capture nonlinear input-output relationships in ELM has received significant attention in recent years (Yu et al., 2014; Shukla et al.; Feng et al., 2017; Zhou et al., 2018; Duan et al., 2018; Lai et al., 2020).…”
Section: Regularization for Robust Estimation for Hidden Node Selection
confidence: 99%
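A standard remedy in this line of work is to add an L2 (ridge) penalty when solving for the output weights, beta = (H^T H + C I)^-1 H^T T, which shrinks the contribution of noisy hidden nodes rather than relying on node count alone. A minimal sketch of that solve (the regularization constant C is an illustrative assumption; the papers cited above each use their own schemes):

```python
import numpy as np

def elm_output_weights_ridge(H, T, C=1e-2):
    """Ridge-regularized ELM output weights: beta = (H^T H + C*I)^-1 H^T T.
    C is a hypothetical regularization constant; larger C means stronger
    shrinkage of the output weights and hence less overfitting."""
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + C * np.eye(n_hidden), H.T @ T)
```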