2015
DOI: 10.1016/j.neucom.2014.01.072
Binary/ternary extreme learning machines

Abstract: In this paper, a new hidden layer construction method for Extreme Learning Machines (ELMs) is investigated, aimed at generating a diverse set of weights. The paper proposes two new ELM variants: Binary ELM, with a weight initialization scheme based on {0, 1}-weights; and Ternary ELM, with a weight initialization scheme based on {-1, 0, 1}-weights. The motivation behind this approach is that these features will be from very different subspaces and therefore each neuron extracts more diverse infor…
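A minimal sketch of the idea the abstract describes. All function and parameter names below are illustrative, not from the paper (in particular, `density` is an assumed knob; the paper selects per-neuron sparsity more carefully): each hidden neuron gets ternary {-1, 0, 1} input weights, so different neurons draw on different input subspaces.

```python
import numpy as np

def ternary_hidden_layer(n_inputs, n_neurons, density=0.5, rng=None):
    """Ternary {-1, 0, 1} weight initialization sketch for an ELM hidden layer.
    `density` is the assumed fraction of non-zero weights per neuron."""
    rng = np.random.default_rng() if rng is None else rng
    W = np.zeros((n_inputs, n_neurons))
    for j in range(n_neurons):
        k = max(1, int(density * n_inputs))          # non-zeros for this neuron
        idx = rng.choice(n_inputs, size=k, replace=False)
        W[idx, j] = rng.choice([-1.0, 1.0], size=k)  # random signs on chosen inputs
    return W

def elm_hidden_output(X, W, b):
    # Hidden-layer output H = sigmoid(XW + b); the ELM output weights are
    # then solved by least squares on H, as usual.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))
```

A binary {0, 1} variant would simply set the chosen entries to 1 instead of random signs.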

Cited by 38 publications (19 citation statements)
References 21 publications
“…By adjusting the value of γ, the regularized version of ELM can prevent itself from overfitting [39,57]. Furthermore, adding a linear component in the ELM might be helpful in some cases, making the model as:…”
Section: Regularized ELM with Linear Components
confidence: 99%
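The statement above refers to the ridge-regularized ELM with a linear component. As a hedged sketch under standard assumptions (the function name is illustrative): the raw inputs X are appended to the hidden-layer outputs H, and the output weights solve a ridge problem with regularization parameter γ, i.e. β = (HᵀH + γI)⁻¹HᵀT on the augmented H.

```python
import numpy as np

def regularized_elm_fit(H, X, T, gamma=1e-2):
    """Output weights for a regularized ELM with a linear component.
    H: hidden-layer outputs (N x L), X: raw inputs (N x d), T: targets (N x m).
    Minimal sketch: solves (Ha'Ha + gamma*I) beta = Ha'T for the augmented Ha."""
    Ha = np.hstack([H, X])                       # append the linear component
    A = Ha.T @ Ha + gamma * np.eye(Ha.shape[1])  # ridge-regularized normal matrix
    beta = np.linalg.solve(A, Ha.T @ T)
    return beta
```

Larger γ shrinks β and guards against overfitting, which is the adjustment the citing paper describes.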
“…Online sequential extreme learning machine (OS-ELM) [13], a sequential modification of the ELM proposed by Huang et al [14], can handle sequentially arriving data with good generalization and fast learning speed. It is widely used in many applications, especially in the field of fault diagnosis [15][16][17]. However, although OS-ELM works effectively on online sequential data, it tends to give poorer classification results, especially lower accuracy on the minority class [18], when applied to severely imbalanced data.…”
Section: Introduction
confidence: 99%
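The OS-ELM mentioned above folds in new data without retraining from scratch, via a recursive least-squares update. A minimal sketch of one sequential step (names are illustrative; this follows the standard RLS form, not code from the cited papers):

```python
import numpy as np

def os_elm_update(P, beta, H_new, T_new):
    """One OS-ELM sequential update (recursive least-squares sketch).
    P: current inverse covariance of hidden outputs, beta: current output
    weights; H_new/T_new: hidden outputs and targets of the new chunk."""
    k = H_new.shape[0]
    S = np.eye(k) + H_new @ P @ H_new.T            # innovation matrix
    P = P - P @ H_new.T @ np.linalg.solve(S, H_new @ P)
    beta = beta + P @ H_new.T @ (T_new - H_new @ beta)
    return P, beta
```

After initializing P and beta on a first batch, each call updates the solution to match the batch least-squares fit on all data seen so far.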
“…Ref. [5] proposed two weight initialization schemes, i.e., binary ELM based on {0, 1}-weights and ternary ELM based on {-1, 0, 1}-weights, to improve the diversity of neurons in the hidden layer. For binary/ternary ELMs, the necessary optimizations are also required to select the better…”
Section: Introduction
confidence: 99%