2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS)
DOI: 10.1109/comsnets53615.2022.9668540

An efficient approach to kNN algorithm for IoT devices

Cited by 7 publications (3 citation statements)
References 14 publications
“…The testing (prediction) time complexity of the kNN method is O(k₁dN) and the space complexity is O(dN), where N is the number of samples in the dataset (dataset size), d is the dimension of the dataset, and k₁ is the number of nearest neighbors [29].…”
Section: Analysis Of Computational Complexity
confidence: 99%
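The complexity figures quoted above can be checked with a brute-force kNN predictor that counts its own arithmetic work: the distance phase performs exactly one squared-difference per coordinate per training sample, i.e. d·N operations, and selecting the k₁ nearest adds the k₁ factor. This is a minimal sketch; the function and variable names are illustrative and not taken from the cited paper.

```python
from collections import Counter

def knn_predict_counted(train_X, train_y, query, k1):
    """Brute-force kNN classification that also reports the number of
    coordinate-level distance operations (should equal d * N)."""
    ops = 0
    dists = []
    for x, y in zip(train_X, train_y):
        s = 0.0
        for a, b in zip(x, query):  # d operations per training sample
            s += (a - b) ** 2
            ops += 1
        dists.append((s, y))
    nearest = sorted(dists)[:k1]    # the k1 closest training samples
    label = Counter(y for _, y in nearest).most_common(1)[0][0]
    return label, ops
```

For N = 4 two-dimensional samples, `ops` comes back as 8 = d·N, matching the O(dN) distance phase; storing `train_X` is the O(dN) space term.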
“…Given a pattern x_ω for classification and a training set E, the k-NN algorithm works by calculating the distance between x_ω and each of the elements in E, and assigning the majority class among the nearest k patterns. Due to its simplicity and good performance, the k-NN algorithm is widely used with all sorts of datasets [33][34][35], modified versions of it have been presented [36,37], and even an efficient approach for IoT devices has been proposed [38].…”
Section: Classical k-NN Algorithm
confidence: 99%
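The classical procedure described in that statement — compute the distance from x_ω to every element of E, then take the majority class among the k nearest — fits in a few lines. This is a generic sketch of the textbook algorithm, not code from any of the cited works; the names `classify_knn`, `E`, and `x_w` are illustrative.

```python
import math
from collections import Counter

def classify_knn(E, labels, x_w, k):
    """Classical k-NN: majority class among the k training patterns
    in E closest (Euclidean distance) to the query pattern x_w."""
    # one distance per element of E, paired with that element's label
    dists = sorted(zip((math.dist(e, x_w) for e in E), labels))
    # majority vote over the k nearest patterns
    return Counter(lbl for _, lbl in dists[:k]).most_common(1)[0][0]
```

For example, with E = [(0,0), (1,0), (4,4)] labelled ['a', 'a', 'b'] and query (0.5, 0), the two nearest patterns are both 'a', so the majority vote returns 'a'.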
“…It is employed in the categorization and regression of data. The important feature of any k-NN technique, whether for classification or regression, is locating the k nearest neighbors, which lets us estimate the value or class of a given point [48]. The vanishing gradient problem is a significant drawback of RNNs [49].…”
Section: A. Attacker Model
confidence: 99%
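For the regression use mentioned in that statement, the only change from classification is the aggregation step: instead of a majority vote over the k nearest neighbors, the prediction is the mean of their target values. A minimal sketch under that standard convention (the name `knn_regress` is illustrative):

```python
import math

def knn_regress(train_X, train_y, query, k):
    """k-NN regression: average the target values of the k training
    samples nearest (Euclidean distance) to the query point."""
    nearest = sorted(zip((math.dist(x, query) for x in train_X), train_y))[:k]
    return sum(y for _, y in nearest) / k
```

With training points 0.0, 1.0, 10.0 mapped to targets 1.0, 3.0, 100.0, a query at 0.5 with k = 2 averages the two nearest targets, giving (1.0 + 3.0) / 2 = 2.0.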