2018
DOI: 10.1088/1757-899x/318/1/012046
An Improvement To The k-Nearest Neighbor Classifier For ECG Database

Cited by 7 publications
(8 citation statements)
References 8 publications
“…That is, kNN finds the k nearest instances to the query instance and determines its class by taking the single most common class label among them. The potential of the kNN classification model has been shown in ECG-based biometric recognition systems, yet it has some practical limitations [356], [357]: (i) it has high storage requirements; (ii) the classification is sensitive to the local distribution of the training samples, which may result in unstable performance; and (iii) there is no principled way to choose k other than cross-validation or a similarly computationally expensive method.…”
Section: KNN Classification Models (mentioning, confidence: 99%)
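The majority-vote rule described in the excerpt can be sketched in a few lines (a minimal illustration on assumed toy data; `knn_predict` is a hypothetical helper, not the paper's implementation):

```python
from collections import Counter
import math

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest
    training instances, using Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    # The predicted class is the single most common neighbor label.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy example with two well-separated classes.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_predict(train, (0.05, 0.1), k=3))  # "A"
```

Note that the whole training set must be kept in memory at prediction time, which is exactly the storage limitation the excerpt points out.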
“…Notably, choosing a proper value of k is crucial, as it influences the classification performance of the kNN algorithm. For example, consider why a kNN classifier may misclassify a sample: 1) kNN is an instance-based machine learning algorithm that applies the training set directly at classification time, making the result sensitive to the local distribution of the training samples and hence prone to unstable performance [357], [358]; and 2) for the basic kNN algorithm, the density of data clusters influences performance, which can lead to wrong decisions [357], [358]. In [359], a kNN, a linear SVM, and a neural network were used as classifier models for ECG-based human recognition on the MIT-BIH and ECG-ID databases.…”
Section: KNN Classification Models (mentioning, confidence: 99%)
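The cross-validation fallback mentioned above for choosing k can be sketched as a leave-one-out search over candidate values (an illustrative sketch with hypothetical helpers `knn_predict` and `choose_k_loocv` on toy data):

```python
from collections import Counter
import math

def knn_predict(train, query, k):
    """Majority vote among the k nearest training instances."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

def choose_k_loocv(train, candidates):
    """Pick k by leave-one-out cross-validation: classify each sample
    against all the others and keep the k with the highest accuracy.
    This costs on the order of len(candidates) * n^2 distance
    evaluations, which is why the excerpt calls it expensive."""
    best_k, best_acc = None, -1.0
    for k in candidates:
        hits = sum(
            knn_predict(train[:i] + train[i + 1:], x, k) == y
            for i, (x, y) in enumerate(train)
        )
        acc = hits / len(train)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k, best_acc

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(choose_k_loocv(train, candidates=[1, 3]))
```

On this tiny set, k = 1 wins: with k = 3 every held-out point is outvoted by the opposite cluster, illustrating how cluster density can drive the choice of k.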
“…In classical KNN [28], given a set U of patients described by a set A of features, a normalization function is first applied to the subset N ⊂ A of numerical features to scale them to the interval [0, 1], preventing features with large values from outweighing those with smaller values. A normalized feature…”
Section: Preliminaries (mentioning, confidence: 99%)
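The [0, 1] scaling step described here is ordinary min-max normalization, which can be sketched as follows (a minimal illustration; `minmax_normalize` is a hypothetical helper):

```python
def minmax_normalize(values):
    """Scale a numerical feature column to [0, 1] so that features
    with large raw ranges do not dominate distance computations."""
    lo, hi = min(values), max(values)
    if hi == lo:               # constant column: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 35, 50, 65, 80]
print(minmax_normalize(ages))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```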
“…It uses rough set theory (RST) techniques to handle categorical features and classical distance metrics to handle numerical features. As such, KNNV does not convert categorical feature values into numbers to make all the features numerical and then use distance metrics to identify nearest neighbors, as is done in classical KNN (see, for example, [28]–[30]). By applying RST techniques to categorical features, KNNV solves two problems at once: the incompleteness of those features and the vagueness of the proper value of K.…”
Section: KNNV Algorithm (mentioning, confidence: 99%)
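For contrast with KNNV's rough-set treatment, mixed categorical and numerical features can also be compared without numeric encoding by a heterogeneous metric in the style of HEOM (a sketch of that general technique, not of KNNV or the cited papers; `heterogeneous_distance` and the patient tuple layout are hypothetical):

```python
import math

def heterogeneous_distance(x, y, is_categorical):
    """HEOM-style distance over mixed features: a 0/1 overlap term for
    categorical positions, and a squared difference (features assumed
    pre-normalized to [0, 1]) for numerical positions."""
    total = 0.0
    for xi, yi, cat in zip(x, y, is_categorical):
        if cat:
            total += 0.0 if xi == yi else 1.0   # exact-match test
        else:
            total += (xi - yi) ** 2             # numeric contribution
    return math.sqrt(total)

# Patients described by (sex, normalized age): one categorical
# feature and one numerical feature.
print(heterogeneous_distance(("m", 0.2), ("m", 0.5), (True, False)))
```

Such a metric still leaves the choice of K and the handling of missing categorical values open, which is the gap the quoted KNNV approach targets with RST instead.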