2022
DOI: 10.14500/aro.10880
Network Transmission Flags Data Affinity-based Classification by K-Nearest Neighbor

Abstract: This research is concerned with the data generated during a network transmission session, with the aim of understanding how to extract value from that data and use it to conduct analysis tasks. Instead of comparing all of the transmission flags for a transmission session at the same time, this paper conceptualizes the influence of each transmission flag on network-aware applications by comparing the flags one by one on their impact on the application during the transmission session, rather than…

Cited by 3 publications (1 citation statement) · References 24 publications
“…To formulate the kNN classification algorithm, assume that the pair $(x_i, f(x_i))$ contains the feature vector $x_i$ and its corresponding label $f(x_i)$, where $f(x_i) \in \{1, 2, \ldots, n\}$ and $i = 1, 2, \ldots, N$ ($n$ and $N$ are the number of classes and the number of training feature vectors, respectively). The principal idea behind kNN is to measure the distance between feature vectors so that the nearest neighbors of the tested sample decide the label of its features (Aljojo, 2022). A majority voting strategy among the $k$ nearest samples is adopted in this classifier.…”
Section: Introduction (mentioning, confidence: 99%)
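The quoted formulation maps directly onto a short implementation. Below is a minimal sketch of the kNN decision rule it describes, assuming Euclidean distance as the metric (the passage does not fix a particular one); the function and variable names are illustrative, not taken from the paper.

```python
from collections import Counter
import math

def knn_classify(train, test_x, k=3):
    """Classify test_x by majority vote among its k nearest training samples.

    `train` is a list of (x_i, f(x_i)) pairs -- a feature vector and its
    label -- mirroring the formulation in the quoted passage.
    """
    # Distance between feature vectors; Euclidean is an assumption here,
    # since the passage does not specify a metric.
    def euclidean(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    # Sort training pairs by distance to the tested sample.
    neighbors = sorted(train, key=lambda pair: euclidean(pair[0], test_x))

    # Majority vote among the k nearest labels decides the class.
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Usage: two classes (1 and 2) in a 2-D feature space.
train = [((0.0, 0.0), 1), ((0.1, 0.2), 1), ((1.0, 1.0), 2), ((0.9, 1.1), 2)]
print(knn_classify(train, (0.2, 0.1), k=3))  # -> 1
```

With k=3 the two nearby class-1 samples outvote the single class-2 neighbor, which is exactly the majority-voting behavior the citation statement attributes to the classifier.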