2021
DOI: 10.1016/j.eswa.2021.115293
A novel extreme learning machine based kNN classification method for dealing with big data

Cited by 79 publications (26 citation statements)
References 44 publications
“…The k-NN algorithm assumes that related things are located nearby. Therefore, the success of the classification depends heavily on the value of k and the distance metric, which must be chosen before using k-NN [ 41 ]. Here, we have used a grid search algorithm to obtain the optimal hyper-parameters, where the classification accuracy was used as the optimization metric.…”
Section: Methods
confidence: 99%
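The statement above notes that k-NN's accuracy hinges on the choice of k and the distance metric, tuned here by grid search with accuracy as the objective. A minimal sketch of that tuning loop, using an invented toy dataset and leave-one-out accuracy (not the paper's data or protocol):

```python
from collections import Counter
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train_X, train_y, query, k, dist):
    # Sort training points by distance to the query and vote among the k nearest.
    neighbors = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

def loo_accuracy(X, y, k, dist):
    # Leave-one-out: classify each point using all the other points as training data.
    hits = 0
    for i in range(len(X)):
        pred = knn_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i], k, dist)
        hits += pred == y[i]
    return hits / len(X)

# Toy 2-D data: two well-separated clusters (illustrative only).
X = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (3.0, 3.1), (3.2, 2.9), (2.9, 3.2)]
y = [0, 0, 0, 1, 1, 1]

# Grid search over k and the distance metric, scored by classification accuracy.
best = max(
    ((k, name, loo_accuracy(X, y, k, d))
     for k in (1, 3, 5)
     for name, d in (("euclidean", euclidean), ("manhattan", manhattan))),
    key=lambda t: t[2],
)
print(best)
```

Note how k = 5 fails here: leaving one point out leaves only two same-cluster neighbors against three from the other cluster, so the majority vote flips — exactly the sensitivity to k the quoted passage warns about.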
“…This algorithm is one of the simplest, yet still one of the most efficient, classification algorithms. Its basic idea is that similar objects lie at a very short distance from each other [19].…”
Section: Methods
confidence: 99%
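The "similar objects lie close together" idea boils down to a majority vote among the nearest training points. A self-contained sketch with invented data:

```python
from collections import Counter
import math

def knn_classify(points, labels, query, k=3):
    """Label a query by majority vote among its k nearest training points."""
    by_distance = sorted(
        range(len(points)),
        key=lambda i: math.dist(points[i], query),  # Euclidean distance
    )
    votes = Counter(labels[i] for i in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy data: "similar objects" cluster together.
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]

print(knn_classify(points, labels, (0.5, 0.5)))  # near the "a" cluster
print(knn_classify(points, labels, (5.5, 5.5)))  # near the "b" cluster
```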
“…Since the selection and trade-off of the window function is an obstacle to applying the Parzen algorithm, the KNN algorithm, which builds a posterior probability from distance or similarity measures between sequence elements, generalizes better. [13][14][15] The principle of a KNN-based prediction algorithm is to construct a sequence of segment vectors, find the K nearest neighbors by a distance or similarity measure, and obtain a new predicted value through some averaging or weighting scheme. Because non-parametric prediction methods are simple and intuitive, their error is low when the sample is large enough, and their predictions under special conditions (faults or singular points) are more accurate than parametric modeling.…”
Section: Introduction
confidence: 99%
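The KNN-based prediction scheme described in this excerpt — match the latest segment vector against earlier segments, find the K closest, and weight the values that followed them — can be sketched as below. The series, window length, and inverse-distance weighting are illustrative assumptions, not details from the cited work:

```python
import math

def knn_forecast(series, window=3, k=2):
    """Predict the next value of `series` by matching its last `window` values
    against all earlier segments and distance-weighting the values that
    followed the k closest matches."""
    query = series[-window:]
    candidates = []
    for i in range(len(series) - window):
        segment = series[i:i + window]
        dist = math.dist(segment, query)
        candidates.append((dist, series[i + window]))  # value that followed this segment
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    # Inverse-distance weighting; epsilon avoids division by zero on exact matches.
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)

# A noiseless period-4 series: the final window (2, 3, 4) has exact earlier
# matches, each of which was followed by 1, so the forecast is 1.0.
series = [1, 2, 3, 4] * 4
print(knn_forecast(series))  # → 1.0
```

With k = 1 and exact matches this reduces to plain nearest-neighbor lookup; larger k with weighting trades responsiveness for noise robustness, the averaging step the excerpt refers to.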