2017
DOI: 10.1155/2017/7479140

Mexican Hat Wavelet Kernel ELM for Multiclass Classification

Abstract: Kernel extreme learning machine (KELM) is a feedforward neural network widely used in classification problems. To some extent, it mitigates the existing problems of invalid nodes and high computational complexity in ELM. However, the traditional KELM classifier usually yields low test accuracy on multiclass classification problems. To address this problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier succes…
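The abstract combines two standard ingredients: the closed-form KELM solution β = (I/C + Ω)⁻¹T over a kernel matrix Ω, and a translation-invariant wavelet kernel built as a per-dimension product of the Mexican hat mother wavelet ψ(u) = (1 − u²)exp(−u²/2). The sketch below is an illustrative reconstruction under those standard definitions, not the paper's own code; the class name, the dilation parameter `a`, and the ±1 one-hot target encoding are assumptions.

```python
import numpy as np

def mexican_hat_kernel(X, Y, a=1.0):
    """Product-form Mexican hat wavelet kernel (illustrative):
    K(x, y) = prod_i psi((x_i - y_i) / a), psi(u) = (1 - u^2) * exp(-u^2 / 2)."""
    # Pairwise per-dimension differences, shape (n_x, n_y, d).
    U = (X[:, None, :] - Y[None, :, :]) / a
    psi = (1.0 - U ** 2) * np.exp(-U ** 2 / 2.0)
    return psi.prod(axis=-1)

class WaveletKELM:
    """Minimal KELM: solve (I/C + Omega) beta = T, predict via kernel rows."""
    def __init__(self, C=1.0, a=1.0):
        self.C, self.a = C, a

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        y = np.asarray(y)
        self.classes = np.unique(y)
        # One-hot targets in {-1, +1}, one column per class (assumed encoding).
        T = np.where(y[:, None] == self.classes[None, :], 1.0, -1.0)
        Omega = mexican_hat_kernel(self.X, self.X, self.a)
        n = len(self.X)
        # Closed-form KELM output weights; I/C regularizes the kernel matrix.
        self.beta = np.linalg.solve(np.eye(n) / self.C + Omega, T)
        return self

    def predict(self, X):
        K = mexican_hat_kernel(np.asarray(X, float), self.X, self.a)
        return self.classes[np.argmax(K @ self.beta, axis=1)]
```

A quick sanity check: fitting two well-separated 2-D clusters and predicting the training points back recovers their labels, since the kernel equals 1 at zero lag and decays with distance.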


Cited by 7 publications (2 citation statements)
References 17 publications
“…However, the ELM algorithm uses linear weighted mapping to train the calibration data sets. For nonlinear samples, the accuracy of ELM is reduced [28]. In addition, the ELM algorithm randomly selects the weights and uses the trial and error method to determine the optimal number of neurons, which will affect the stability and rigor of the calibration algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
“…In addition, KELM has the advantages of fast convergence and good generalization. In recent years, the algorithm has been applied widely (Wang et al, 2019;Wang, Song, & Ma, 2017;Zhao et al, 2017).…”
Section: Introduction (mentioning)
confidence: 99%