2016
DOI: 10.1016/j.neucom.2015.10.033
Cholesky factorization based online regularized and kernelized extreme learning machines with forgetting mechanism

Cited by 14 publications (13 citation statements) · References 21 publications
“…The experiment compares the performance of the KELM-ML algorithm with existing multi-label classification learning algorithms: Rank-SVM [9], ML-KNN [8], and ECC [6]. KELM-ML is deployed on a distributed cluster built on the Hadoop-2.7.3 platform.…”
Section: Results
confidence: 99%
“…Certainly, the parameter selection of the KELM-ML algorithm is time-consuming. For ECC, the ensemble size is set to 10 and the sampling ratio to 67% [6]. The cross-validation method is used in the training process.…”
Section: Results
confidence: 99%
“…Zhou et al. and Javier et al. [26][27][28] applied ELM to remote sensing images. Zhou et al. [29] proposed various improvements to ELM to solve problems in online continuous-data applications. Recently, researchers have combined ELM with dimensionality reduction techniques in applications.…”
Section: Introduction
confidence: 99%
“…RELM achieves better generalization by introducing parameters that weigh structural risk against empirical risk [12][13][14][15]. For the single-hidden-layer RELM model with multiple inputs and a single output, the literature [16] designed a Cholesky factorization method for the regularized output weight matrix. In the learning and forgetting process over the sample sequence, the Cholesky factor is computed recursively as samples are added and deleted one by one, after which the output weights are adjusted, while the network structure remains fixed.…”
Section: Introduction
confidence: 99%
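The mechanism summarized in this statement, regularized ELM output weights obtained from a Cholesky factorization of the regularized Gram matrix, with samples learned and forgotten one at a time, can be illustrated with a minimal sketch. Assumptions not taken from the paper: sigmoid hidden nodes, scalar targets, illustrative class and method names, and recomputing the Cholesky factor after each rank-one change rather than updating it recursively as the cited work does.

import numpy as np

class OnlineRELMSketch:
    # Minimal sketch of a regularized ELM whose output weights come from a
    # Cholesky factorization, with one-by-one sample addition and deletion
    # (forgetting). Names are illustrative, not from the cited paper.

    def __init__(self, n_inputs, n_hidden, C=100.0, seed=None):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)               # hidden biases
        self.A = np.eye(n_hidden) / C                        # H^T H + I/C
        self.r = np.zeros(n_hidden)                          # H^T t
        self.beta = np.zeros(n_hidden)                       # output weights

    def _hidden(self, x):
        # Sigmoid hidden-layer response for one sample.
        return 1.0 / (1.0 + np.exp(-(x @ self.W + self.b)))

    def _solve(self):
        # A = L L^T (Cholesky), then two triangular solves for beta.
        L = np.linalg.cholesky(self.A)
        z = np.linalg.solve(L, self.r)
        self.beta = np.linalg.solve(L.T, z)

    def learn_one(self, x, t):
        # Add one sample: rank-one update of A and r, then refactorize.
        h = self._hidden(x)
        self.A += np.outer(h, h)
        self.r += h * t
        self._solve()

    def forget_one(self, x, t):
        # Delete one previously learned sample: rank-one downdate of A and r.
        # A stays positive definite only if (x, t) was actually learned before.
        h = self._hidden(x)
        self.A -= np.outer(h, h)
        self.r -= h * t
        self._solve()

    def predict(self, x):
        return self._hidden(x) @ self.beta

Because only A and r change when a sample enters or leaves the window, the hidden-layer structure itself stays fixed, which matches the fixed-network-structure point made in the citation statement.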