2019 International Conference on Intelligent Computing and Control Systems (ICCS)
DOI: 10.1109/iccs45141.2019.9065747
A Brief Review of Nearest Neighbor Algorithm for Learning and Classification

Cited by 496 publications (202 citation statements)
References 3 publications
“…Selection of a relevant kernel to characterise the dataset, and the run time for large datasets, is a challenge [31]. The effective distance between the dataset and the training sample is computed.…”
Section: Figure 17 Classification of the Techniques for Vehicle and On-Road Object Detection
Citation type: mentioning; confidence: 99%
“…Here, the optimal hyperplane is the decision plane that divides the class boundaries, subject to y_n(w·x_n + b) ≥ 1. In these experiments, the SVM classifier was utilized with varying kernel functions: linear, polynomial, and radial basis function (RBF). The kNN [49] method is straightforward to understand and compute. It is also a lazy algorithm because it does not involve a training phase.…”
Section: Support Vector Machine
Citation type: mentioning; confidence: 99%
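
To make the kernel comparison in the excerpt above concrete, here is a minimal sketch (assuming scikit-learn; the dataset and variable names are illustrative stand-ins, not the cited study's setup) that trains an SVM with linear, polynomial, and RBF kernels and reports held-out accuracy:

# Minimal sketch: comparing SVM kernel choices (linear, polynomial, RBF).
# Assumes scikit-learn; the iris data is only a stand-in for the cited study's dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel)      # the margin constraint y_n(w·x_n + b) >= 1 is solved in the kernel-induced space
    clf.fit(X_train, y_train)
    print(kernel, "accuracy:", clf.score(X_test, y_test))

The choice of kernel, and its run time on large datasets, is exactly the trade-off the earlier excerpt flags as a challenge.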
“…kNN using conventional features: increasing the k value did not yield important changes in detection accuracy, owing to the distribution of similarity among the training images. Accordingly, the k-nearest neighbors algorithm takes a distance-based majority vote over the closest group of training images [49]. MLP using conventional features: it requires optimizing the number of nodes in the hidden layer with respect to the type of feature and the number of input features.…”
Citation type: mentioning; confidence: 99%
“…• k-nearest neighbor (k-NN) [29] is a non-parametric, lazy-learner algorithm used for classification and regression. k-NN stores the entire available dataset and classifies a new sample based on the feature similarity between this new case and the stored data, assigning the most numerous class among the nearest k labeled points.…”
Section: Supervised Learning
Citation type: mentioning; confidence: 99%
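
The lazy-learner behaviour described in the excerpt above can be sketched in a few lines: the "training" step merely stores the data, and every prediction computes distances to all stored points and takes a majority vote among the k nearest. This is an illustrative from-scratch version (the toy data and function name are assumptions, not the cited paper's code):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Lazy learning: no model is fitted; all work happens at query time.
    distances = np.linalg.norm(X_train - x_new, axis=1)  # distance from the query to every stored sample
    nearest = np.argsort(distances)[:k]                  # indices of the k closest training points
    votes = Counter(y_train[nearest].tolist())           # count class labels among those neighbors
    return votes.most_common(1)[0][0]                    # majority class wins

# Toy usage with illustrative data:
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.8, 5.0])))  # -> 1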
“…The k-means method is efficient, has guaranteed convergence, and is fast when running on small processors with low capabilities [26]. k-NN is a simple algorithm with good performance and requires no training [29]. Finally, DTs require little to no data-preprocessing effort and can effectively handle missing values in the data [30].…”
Section: Supervised Learning
Citation type: mentioning; confidence: 99%
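
The guaranteed convergence noted for k-means follows from each assignment/update iteration never increasing the within-cluster squared error, so Lloyd's algorithm reaches a fixed point (possibly a local optimum). A minimal NumPy sketch of that iteration, assuming Euclidean distance and random initialization (illustrative only, not the cited implementation):

import numpy as np

def kmeans(X, k=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centroids
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid.
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None, :], axis=2), axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):  # within-cluster error can no longer decrease
            break
        centers = new_centers
    return centers, labels

# Toy usage with illustrative data:
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.2], [5.1, 4.9]])
print(kmeans(X, k=2)[1])  # cluster label for each point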