2017
DOI: 10.1109/access.2017.2740420
Hybrid Structure-Adaptive RBF-ELM Network Classifier

Cited by 23 publications (8 citation statements)
References 29 publications
“…Except for the DM, CC, and IS datasets, all benchmark datasets are imbalanced. In each dataset, the inputs to all classifiers are appropriately scaled to [−1, 1]; the classification performance of each network is measured by the overall (η_o) and average (η_a) per-category classification accuracies [23]. Table 1 gives the description of the classification datasets.…”
Section: Results | Citation type: mentioning
confidence: 99%
“…To generate the optimal number and parameters of the RBF kernels, our previous work presented an incremental learning algorithm for the hybrid RBF-BP network (ILRBF-BP) [22] and a hybrid structure-adaptive RBF-ELM network (HSARBF-ELM) [23]. In ILRBF-BP, a potential-density clustering method is used to generate RBF kernels automatically, exploiting the global distribution information of the sample space.…”
Section: Introduction | Citation type: mentioning
confidence: 99%
“…where H⁺ denotes the pseudoinverse of the matrix H. ELM therefore offers fast learning and good generalization performance, and it has been extensively applied in many fields [16][17][18][19][20][21][22][23][24][25][26][27][28][29]. Huang et al. first proposed the primary concept and theory of ELM in [17].…”
Section: Introduction | Citation type: mentioning
confidence: 99%
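The closed-form solution quoted above (output weights as the pseudoinverse of the hidden-layer output matrix times the targets) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names, the sigmoid activation, and the uniform weight initialization are assumptions for demonstration.

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, seed=0):
    """Basic ELM training sketch: random hidden layer, closed-form output weights.

    The input-to-hidden weights W and biases b are drawn at random and never
    tuned; only the output weights are solved, as beta = H^+ T, where H^+ is
    the pseudoinverse of the hidden-layer output matrix H.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ T             # beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times the learned output weights."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

With enough hidden units the random-feature matrix H almost surely has full row rank on a small training set, so the pseudoinverse solution interpolates the targets exactly; this is what makes training a single linear solve rather than an iterative optimization.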
“…In [24], Huang et al. applied ELM to regression and multiclass classification. Wen et al. proposed a hybrid structure-adaptive radial basis function-extreme learning machine (HSARBF-ELM) network in [25]. Based on acoustic feature transfer learning, Deng et al. used ELM for recognizing emotions from whispered speech [26].…”
Section: Introduction | Citation type: mentioning
confidence: 99%
“…The key feature of ELM is that the input-to-hidden-layer parameters can be assigned at random and need not be tuned; this greatly reduces the training cost while largely retaining strong generalization ability [8], [22]-[24]. Meanwhile, ELM with the output weights solved by regularized least squares has been proved to maintain a unique optimal fixed point [25], [26], and stability proofs for the random setting of the hidden-layer parameters have been given in [14], [15].…”
Section: Introduction | Citation type: mentioning
confidence: 99%
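The regularized least-squares variant mentioned in this last statement replaces the bare pseudoinverse with a ridge solve, which guarantees a unique, well-conditioned solution even when H has deficient rank. A minimal sketch, assuming the common formulation beta = (HᵀH + I/C)⁻¹ HᵀT with regularization parameter C (the function name and default C are illustrative):

```python
import numpy as np

def elm_output_weights_ridge(H, T, C=1e3):
    """Regularized least-squares solve for ELM output weights.

    Solves beta = (H^T H + I/C)^{-1} H^T T. The ridge term I/C keeps the
    normal-equations matrix invertible, so the solution is unique even
    when H^T H is singular or ill-conditioned.
    """
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(n) / C, H.T @ T)
```

As C grows the ridge term vanishes and the solution approaches the plain least-squares (pseudoinverse) answer; smaller C trades training fit for stability, which is the mechanism behind the uniqueness result the citing paper refers to.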