“…KNN labels a new instance based on the labels of its nearest neighbors. Consider two instances with $p$ features each, $x_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,p}\}$ and $x_j = \{x_{j,1}, x_{j,2}, \ldots, x_{j,p}\}$, drawn from the training data set, where $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, and $n$ is the total number of samples. A metric $d(x_i, x_j)$, which can be the Manhattan, Euclidean, chi-square, or cosine distance, among other measures, is used to compute the distance between the two instances (Sanchez et al., 2018). To label a new instance, the KNN algorithm locates the $k$ neighbors in the training data set with the smallest distances under the chosen metric; it then assigns the majority class among those $k$ nearest neighbors.…”
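To make the neighbor-search and voting steps concrete, the following is a minimal sketch of the procedure described in the excerpt, assuming Euclidean distance as the metric $d$; the function and variable names (`knn_predict`, `X_train`, etc.) are illustrative and do not come from the cited work.

```python
# Minimal k-NN classification sketch, assuming Euclidean distance.
# All names here are illustrative, not from the source.
from collections import Counter
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Label x_new by majority vote among its k nearest training instances."""
    # Distances d(x_i, x_new) from x_new to all n training instances
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k neighbors with the smallest distances
    nearest = np.argsort(dists)[:k]
    # Majority class among the k nearest neighbors
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Example: n = 4 training instances with p = 2 features
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [8.5, 9.0]])
y_train = np.array(["A", "A", "B", "B"])
print(knn_predict(X_train, y_train, np.array([1.2, 1.1]), k=3))  # -> "A"
```

Swapping the distance computation (e.g., for Manhattan distance, `np.abs(X_train - x_new).sum(axis=1)`) changes the metric without altering the neighbor-selection and voting logic.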