GPU-accelerated and parallelized ELM ensembles for large-scale regression
2011
DOI: 10.1016/j.neucom.2010.11.034

Cited by 151 publications (45 citation statements)
References 14 publications
“…Meanwhile, when a more complex model and more variables are introduced, the time for the training process, such as feature selection, might increase significantly. Thus the parallelization of those algorithms, such as ELM-based feature selection, might be a promising topic for future implementation [59]. Multi-step-ahead predictions could be studied in the future, which would be able to provide timely forecasts for the public.…”
Section: Future Work and Discussion (mentioning)
confidence: 99%
“…The multilayer learning architecture that uses the ELM auto-encoder [11] and subnetwork nodes [12] expands ELM from a single-layer structure to a multilayer structure. Recent applications of ELM also include machine vision [13,14], ensemble learning [15,16], sparse learning [17,18], big data applications [19,20], etc.…”
Section: Smooth Average Congested (mentioning)
confidence: 99%
“…(6), it can be seen that a large part of the HAT-matrix consists of H†, the Moore-Penrose generalized inverse of the matrix H. Therefore, by explicitly computing H† and reusing it to compute the LOO error MSE_PRESS, model structure selection of the ELM comes at very low overhead. A detailed description of this approach can be found in [17]. In summary, the algorithm for training and LOO-based model structure selection of the ELM is stated in Algorithm 2.…”
Section: Efficient LOO Computation and Model Selection (mentioning)
confidence: 99%
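The reuse of H† described in this citation statement can be illustrated with a short sketch. The following is a minimal NumPy example, not the cited implementation; the tanh activation, the hidden-layer size, and the function name elm_press_loo are illustrative assumptions. Only the ingredients named in the text above are taken as given: the output weights come from the Moore-Penrose pseudoinverse H†, the diagonal of the HAT matrix H·H† is reused, and the PRESS leave-one-out error is the mean of (e_i / (1 − hat_ii))².

    import numpy as np

    def elm_press_loo(X, y, n_hidden=100, seed=0):
        # Minimal ELM sketch: random hidden layer, output weights via the
        # Moore-Penrose pseudoinverse H†, and the PRESS leave-one-out MSE
        # obtained by reusing H†, so LOO comes at very low extra cost.
        rng = np.random.default_rng(seed)
        n_samples, n_features = X.shape

        # Random input weights and biases are drawn once and kept fixed.
        W = rng.standard_normal((n_features, n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)              # hidden-layer output matrix

        H_pinv = np.linalg.pinv(H)          # H† (Moore-Penrose pseudoinverse)
        beta = H_pinv @ y                   # output weights

        # diag(H H†): HAT-matrix diagonal, reusing the already computed H†.
        hat_diag = np.einsum('ij,ji->i', H, H_pinv)

        residuals = y - H @ beta
        mse_press = np.mean((residuals / (1.0 - hat_diag)) ** 2)
        return W, b, beta, mse_press

Model structure selection then amounts to calling such a routine for several candidate hidden-layer sizes and keeping the size with the lowest MSE_PRESS, which is cheap because H† is computed only once per candidate model.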