Efficient extreme learning machine via very sparse random projection
2018 | DOI: 10.1007/s00500-018-3128-7

Cited by 21 publications (7 citation statements) | References 34 publications
“…Chen et al. [20] proved that a very sparse matrix can obtain results similar to a random Gaussian matrix, and x can be defined as , or even . In our work, we define for building the very sparse measurement matrix.…”
Section: Methods
confidence: 99%
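The statement above refers to the very sparse random projection construction, where most entries of the measurement matrix are zero and the remaining ones are ±1. The exact sparsity formulas quoted by the citing paper were lost in extraction; the sketch below is a minimal, generic implementation of the standard {-1, 0, +1} scheme (in the style of Li, Hastie and Church), with the sparsity parameter `s` and all variable names chosen for illustration only.

```python
import numpy as np

def very_sparse_projection(d_in, d_out, s, seed=0):
    """Very sparse random projection matrix (generic sketch).

    Entries are +sqrt(s) with prob 1/(2s), 0 with prob 1 - 1/s,
    and -sqrt(s) with prob 1/(2s). Larger s means a sparser matrix;
    s = sqrt(d_in) or d_in / log(d_in) are common "very sparse" choices.
    """
    rng = np.random.default_rng(seed)
    p = 1.0 / (2.0 * s)
    # Draw each entry from {-1, 0, +1} with the probabilities above.
    R = rng.choice([-1.0, 0.0, 1.0], size=(d_in, d_out),
                   p=[p, 1.0 - 2.0 * p, p])
    return np.sqrt(s) * R

# Toy usage: project 10,000-dim data down to 200 dimensions.
d_in, d_out = 10_000, 200
s = int(np.sqrt(d_in))             # one common "very sparse" setting
R = very_sparse_projection(d_in, d_out, s)
X = np.random.default_rng(1).standard_normal((5, d_in))
X_low = X @ R / np.sqrt(d_out)     # low-dimensional embedding
```

Because most entries of R are zero, the projection can be computed with far fewer multiplications than a dense Gaussian matrix while preserving pairwise distances to a similar degree.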
“…This may easily lead to the memory-overflow issue in applications with large and/or N. Instead of directly calculating the inverse using the Newton-Raphson method with the full training data, we propose a Quasi-Newton (QN) method that uses the gradient to approximate it for SBELM. In QN-SBELM, the approximated inverse can be plugged into formula (8) to update the ARD prior, and the full training data no longer needs to be loaded into memory beforehand. Therefore, it is scalable to large problems.…”
Section: Inverse-free SBELM
confidence: 99%
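The passage describes approximating a matrix inverse from gradient information instead of computing it explicitly. The exact QN-SBELM update is not recoverable from the excerpt (the matrix symbols were dropped in extraction); as a generic illustration of the idea, the sketch below shows a standard BFGS-style inverse-Hessian update, which maintains an approximate inverse using only parameter steps and gradient differences.

```python
import numpy as np

def bfgs_inverse_update(H_inv, s, y):
    """One BFGS update of an approximate inverse Hessian.

    H_inv : current approximation of the inverse Hessian
    s     : parameter step    (x_new - x_old)
    y     : gradient change   (grad_new - grad_old)
    Only gradients are needed, so the exact inverse is never formed.
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H_inv @ V.T + rho * np.outer(s, s)

# Toy usage: start from the identity and apply one update.
H_inv = np.eye(4)
s_step = np.array([0.10, -0.20, 0.05, 0.00])
y_diff = np.array([0.30, -0.10, 0.20, 0.05])
H_inv = bfgs_inverse_update(H_inv, s_step, y_diff)
```

The appeal of such updates in the quoted context is that they avoid both the explicit inversion and the need to hold the full training data in memory at once.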
“…Many ELM variants have been developed to address particular problems. Examples include ELMs for online sequential learning [3], ELMs for imbalanced problems [4], ELMs for semi-supervised learning [5], ELMs for unsupervised learning [6], ELMs for compressive learning [8], etc. However, as analyzed in [9,10], most of them still suffer from heavy overfitting and large model sizes in benchmark applications.…”
Section: Introduction
confidence: 99%
“…For real-world applications, we can increase the size of the hash vector to improve performance while Malytics still requires only light computation. Motivated by [45], we replaced the tf-simhashing weights (i.e., -1 and 1 values) with a sparse matrix containing -1, 1 and 0 [46]. We set the sparsity to 1% and the size of the tf-simhashing vector to 3000.…”
Section: Model
confidence: 99%
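The statement above swaps dense ±1 simhashing weights for a sparse {-1, 0, +1} matrix with about 1% nonzero entries and a 3000-dimensional hash vector. The sketch below illustrates that setup; the function names, the toy vocabulary size, and the assumption that nonzeros are split evenly between -1 and +1 are illustrative choices, not details taken from the cited paper.

```python
import numpy as np

def sparse_simhash_weights(vocab_size, out_dim=3000, density=0.01, seed=0):
    """Sparse {-1, 0, +1} weight matrix replacing dense +/-1 weights.

    density=0.01 means roughly 1% of entries are nonzero, split evenly
    between -1 and +1 (assumed interpretation of the '1% sparsity').
    """
    rng = np.random.default_rng(seed)
    half = density / 2.0
    return rng.choice([-1.0, 0.0, 1.0], size=(vocab_size, out_dim),
                      p=[half, 1.0 - density, half])

def tf_simhash(tf_vector, W):
    """Project a term-frequency vector and binarise by sign."""
    return np.sign(tf_vector @ W)

# Toy usage with a hypothetical 5,000-term vocabulary.
W = sparse_simhash_weights(5_000)
tf = np.zeros(5_000)
tf[[3, 17, 999]] = [2, 1, 5]      # toy term-frequency counts
h = tf_simhash(tf, W)             # 3000-dim hash vector
```

With only ~1% of the weights nonzero, the projection cost drops roughly a hundredfold relative to the dense ±1 matrix, which is why the hash vector can be made larger at little extra cost.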