2012
DOI: 10.1016/j.patrec.2011.10.021

An affinity-based new local distance function and similarity measure for kNN algorithm

Cited by 57 publications (26 citation statements)
References 25 publications
“…However, Bhattacharyya [24] proposes a bound on the optimal k, namely k < √m. In binary classification, one can restrict the range of k to odd values in order to avoid ties in equation (2.1). The class posterior distribution, p(ω_j|x), offers an alternative to the optimal Bayesian classification approach, which requires complete knowledge of the underlying data-generation mechanisms.…”
Section: K-Nearest Neighbors (kNN) Algorithm
confidence: 99%
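The voting rule and the k-selection heuristic described above can be sketched in a few lines. This is a minimal illustration, not the cited authors' code: the Euclidean distance, the synthetic two-blob data, and the function name `knn_predict` are all hypothetical choices made here, while the restriction to odd k below √m follows the bound attributed to Bhattacharyya [24].

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k):
    # Distances from the query to all m training points (Euclidean, assumed)
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Majority vote over the neighbors' labels
    votes = np.bincount(y_train[nearest])
    return int(np.argmax(votes))

# Hypothetical data: two Gaussian blobs, m = 100 training points
rng = np.random.default_rng(0)
m = 100
X = np.vstack([rng.normal(0, 1, (m // 2, 2)),   # class 0 around (0, 0)
               rng.normal(3, 1, (m // 2, 2))])  # class 1 around (3, 3)
y = np.array([0] * (m // 2) + [1] * (m // 2))

# Candidate k values: odd (avoids binary-vote ties) and below sqrt(m)
ks = [k for k in range(1, int(np.sqrt(m)), 2)]
pred = knn_predict(X, y, np.array([3.0, 3.0]), ks[-1])
```

With odd k, the two-class vote in the snippet can never split evenly, which is exactly the tie-avoidance argument the quoted passage makes.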
“…As in [4], the time complexity of Algorithm 1 can be analyzed as follows: as with SemiBoost, almost all of the algorithm's CPU time is spent computing the confidence levels and training the classifier. More specifically, the time complexity of each step can be analyzed as follows…”
Section: End Algorithm
confidence: 99%
“…The classification error rate of the nearest neighbor rule is no more than twice the Bayes error [10] when the number of training instances is sufficiently large. Even in the nearest neighbor classifier, which has no training phase, without any prior knowledge of the query instance it is more likely that the nearest neighbor is a prototype from the majority class.…”
Section: Introduction
confidence: 99%
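The majority-class bias mentioned in the last statement can be checked empirically. The sketch below is a hypothetical simulation constructed here (the 90/10 split and the identical class distributions are assumptions, not from the cited paper): when both classes are drawn from the same distribution but one class contributes far more prototypes, the single nearest neighbor of a random query is usually a majority-class point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced training set: both classes drawn from the SAME distribution,
# so any preference for class 0 comes purely from its larger prototype count
X = np.vstack([rng.normal(0, 1, (90, 2)),   # majority class (0)
               rng.normal(0, 1, (10, 2))])  # minority class (1)
y = np.array([0] * 90 + [1] * 10)

# For random queries from that distribution, count how often the
# single nearest neighbor belongs to the majority class
queries = rng.normal(0, 1, (1000, 2))
hits = 0
for q in queries:
    nn = np.argmin(np.linalg.norm(X - q, axis=1))
    hits += int(y[nn] == 0)
frac_major = hits / len(queries)
```

The fraction comes out close to the 90% prototype share, illustrating why, absent prior knowledge about the query, a 1-NN prediction leans toward the major class.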