2012 IEEE 13th International Conference on Information Reuse & Integration (IRI), 2012
DOI: 10.1109/iri.2012.6302995
Dynamic selection of k nearest neighbors in instance-based learning

Cited by 6 publications (3 citation statements). References 24 publications.
“…Then, the profiles were applied as inputs to advanced data classification methods: a Multilayer Perceptron, a feedforward ANN model of interconnected nodes that maps, in a non-linear way, the connections between the input and the output data [11]; a more practical Support Vector Machine (SVM) algorithm that simplifies the calculations required when the number of variables is high [12]; and Instance-Based k-Nearest Neighbour (k-NN), which assigns each instance to a group according to its similarity with the majority of its neighbours [13]. 10.21611/qirt.2018.013…”
Section: Methods
confidence: 99%
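
The excerpt above names three classifiers. As a minimal sketch of how they might be trained side by side, the following Python snippet uses scikit-learn; the synthetic data and every hyperparameter are assumptions for illustration, not values taken from the cited paper.

    # Sketch of the three classifiers named in the excerpt above.
    # All data and hyperparameters here are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
        "SVM": SVC(),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)              # train on the labeled split
        print(name, model.score(X_test, y_test)) # held-out accuracy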
“…This algorithm compares the similarity between data points and assigns the input data to the same class as the data point that is closest to it (Hulett et al., 2012; Piryonesi and El-Diraby, 2020). The distance can be calculated using a distance equation, e.g., the Euclidean distance (Hastie et al., 2009) shown in formula 1, where (x, y) are the data coordinates and (a, b) are the target coordinates between which we want to find the distance.…”
Section: K-Nearest Neighbor Algorithm
confidence: 99%
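
The "formula 1" referenced in the excerpt is not reproduced on this page; the standard two-dimensional Euclidean distance consistent with the excerpt's notation, written in LaTeX, would be

    d = \sqrt{(x - a)^2 + (y - b)^2}

where (x, y) is a data point and (a, b) is the target point.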
“…The label of an unlabeled instance is decided by the label shared by the majority of the K neighbors nearest to that instance [4]. The traditional K-Means algorithm has many shortcomings; for instance, the initial representative points are selected randomly, and updating them is sensitive to noise [2]. The clustering result is greatly affected by the randomly selected cluster centers and is often trapped in a local optimum rather than the global optimal solution.…”
Section: Introduction
confidence: 99%
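
The majority-vote rule described in this excerpt can be sketched in a few lines of Python; the function name, the Euclidean metric, and the toy data below are assumptions for illustration, not code from any cited paper.

    # Sketch of the k-NN majority-vote rule from the excerpt above.
    # Names, metric, and data are illustrative assumptions.
    import math
    from collections import Counter

    def knn_predict(train, labels, query, k=3):
        # Euclidean distance from the query to every training point.
        dists = [math.dist(p, query) for p in train]
        # Indices of the k nearest training points.
        nearest = sorted(range(len(train)), key=lambda i: dists[i])[:k]
        # The label held by the majority of those k neighbors wins.
        return Counter(labels[i] for i in nearest).most_common(1)[0][0]

    train = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
    labels = ["a", "a", "b", "b"]
    print(knn_predict(train, labels, (4.8, 5.1), k=3))  # -> "b"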