2017
DOI: 10.22214/ijraset.2017.8166
A Review of various KNN Techniques

Abstract: K-Nearest Neighbor is a highly efficient classification algorithm thanks to several key features: it is easy to use and implement, requires little training time, and is robust to noisy training data. Like other algorithms, however, it has shortcomings that cannot be ignored: computational complexity, large memory requirements for large training datasets, the curse of dimensionality, and equal weighting of all attributes. Many researchers have developed techniques to overcome these shortcomings. In thi…

Cited by 11 publications (7 citation statements)
References 34 publications
“…In K-NN classification, the class of a query is determined from its nearest neighbors among the example data. K-NN searches for the training patterns closest to the query and assigns the unknown data point accordingly [27]. In this classification, we used Euclidean distance as the criterion, along with 10-fold cross-validation, to predict the class label for the prediction dataset.…”
Section: Results
confidence: 99%
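The procedure quoted above (majority vote over Euclidean-nearest neighbors, evaluated with 10-fold cross-validation) can be sketched as follows. This is a minimal illustration assuming NumPy; the function names `knn_predict` and `ten_fold_accuracy` are illustrative, not from the cited works:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify one query point by majority vote among its k Euclidean-nearest neighbors."""
    # Euclidean distance from the query to every training example
    dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]                 # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                # majority class wins

def ten_fold_accuracy(X, y, k=3):
    """10-fold cross-validation: each fold serves once as the held-out test set."""
    idx = np.random.default_rng(0).permutation(len(X))
    hits = 0
    for fold in np.array_split(idx, 10):
        train = np.setdiff1d(idx, fold)             # everything outside the fold
        hits += sum(knn_predict(X[train], y[train], X[i], k) == y[i]
                    for i in fold)
    return hits / len(X)
```

For well-separated classes this reproduces the expected behavior: each held-out point is surrounded by training points of its own class, so the cross-validated accuracy is high.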
“…Before the rule was introduced for classification in Cover and Hart [1], the NN rule was mentioned in Nilsson [2] as a "minimum distance classifier" and in Sebestyen [3] as a "proximity algorithm". The kNN method is an instance-based learning algorithm [4] that classifies a test instance by choosing the parameter k (the number of nearest neighbors), computing the distance between the test instance and every training example, sorting those distances to find the k nearest neighbors, gathering the classes of those neighbors, and taking the majority class as the prediction for the test instance. kNN algorithms are termed instance-based, or lazy, learners because they make decisions by comparing the training set with the test set for each classification they perform.…”
Section: Introduction
confidence: 99%
“…They simply store the instances and do no work on them until given a test tuple. Other classification methods, such as rule-based classification, decision tree induction, classification by back-propagation, Bayesian classification, and Support Vector Machines (SVM), are examples of non-instance-based learners, otherwise known as eager learners [4,5,6].…”
Section: Introduction
confidence: 99%
“…Through the scheme it applies, k-NN is able to describe the attributes or features of a data set [8]. As an algorithm used in data mining, k-NN can be classified into structure-based k-NN and non-structure-based k-NN [9], [10]. Structure-based algorithms are less widely used because they are considered less efficient and require relatively more time to search for the parameter k at the start of the process, especially for data with many attributes; non-structure-based k-NN is therefore considered simpler and more efficient, although its accuracy depends on the method used to determine the value of k and the nearest-distance measure.…”
unclassified
“…The Euclidean-based distance measure, known as Euclidean distance, is the most commonly used nearest-distance function in k-NN [11]. Besides distance measures based on Euclidean geometry, distance functions used in the k-NN algorithm include the Minkowski distance and the Manhattan distance [10]. The performance of the k-NN algorithm is strongly influenced by several factors, including the choice of distance function, the choice of the value of k, and irrelevant attribute values [6] [3].…”
unclassified
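The three distance functions named in the excerpt above are closely related: the Minkowski distance of order p reduces to the Manhattan distance at p = 1 and the Euclidean distance at p = 2. A small sketch, assuming NumPy, with illustrative function names:

```python
import numpy as np

def minkowski(a, b, p):
    """Minkowski distance of order p between two points."""
    # p = 1 gives Manhattan distance, p = 2 gives Euclidean distance
    return float((np.abs(np.asarray(a) - np.asarray(b)) ** p).sum() ** (1.0 / p))

def manhattan(a, b):
    """Sum of absolute coordinate differences (L1)."""
    return minkowski(a, b, 1)

def euclidean(a, b):
    """Straight-line distance (L2)."""
    return minkowski(a, b, 2)
```

Since the choice of p changes which neighbors count as "nearest", swapping the distance function in a k-NN classifier can change its predictions, which is one reason the excerpt lists it among the factors governing k-NN performance.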