2019
DOI: 10.1109/access.2019.2955864

kNN-STUFF: kNN STreaming Unit for Fpgas

Abstract: This paper presents kNN STreaming Unit For Fpgas (kNN-STUFF), a modular, scalable and efficient Hardware/Software implementation of the k-Nearest Neighbors (kNN) classifier targeting System on Chip (SoC) devices. It takes advantage of custom accelerators, implemented on the reconfigurable fabric of the SoC device, to perform most of the classifier's workload, whereas the processor coordinates the accelerators and runs the remaining workload of the kNN algorithm. kNN-STUFF offers a highly flexible framework, where …

Cited by 37 publications (23 citation statements)
References 21 publications
“…As with other ML techniques, the data are divided into a training set with N samples and M features and a testing set that is used to probe the algorithm's performance. K is the number of neighbors used in the calculation [37].…”
Section: K-Nearest Neighbors
Mentioning confidence: 99%
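The statement above summarizes the standard kNN flow that kNN-STUFF accelerates. As a point of reference, a minimal software sketch of that flow (plain C++ with illustrative names; this is not the paper's hardware/software implementation) could look like the following:

```
// Minimal kNN classifier sketch (hypothetical, for illustration only):
// a training set of N samples with M features each; one test sample is
// classified by majority vote among its K nearest neighbors.
#include <algorithm>
#include <cstddef>
#include <map>
#include <vector>

struct Sample {
    std::vector<float> features; // M feature values
    int label;                   // class label (training samples only)
};

// Squared Euclidean distance; the square root is omitted because it
// does not change the ordering of the neighbors.
float squaredDistance(const std::vector<float>& a, const std::vector<float>& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

// Classify one test sample; assumes k <= training.size().
int classify(const std::vector<Sample>& training,
             const std::vector<float>& test, std::size_t k) {
    // Distance from the test sample to every training sample.
    std::vector<std::pair<float, int>> dist; // (distance, label)
    dist.reserve(training.size());
    for (const Sample& s : training)
        dist.emplace_back(squaredDistance(s.features, test), s.label);

    // Move the K smallest distances to the front.
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());

    // Majority vote among the K nearest neighbors.
    std::map<int, int> votes;
    for (std::size_t i = 0; i < k; ++i)
        ++votes[dist[i].second];
    return std::max_element(votes.begin(), votes.end(),
                            [](const auto& l, const auto& r) {
                                return l.second < r.second;
                            })->first;
}
```

In a hardware/software split like the one the abstract describes, the distance loop is the natural candidate for the custom accelerators, while a lightweight step such as the final vote can plausibly remain on the processor.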
“…For the first step, there are several methods to calculate the distance between two data points (e.g., Euclidean distance and Manhattan distance, among others). The most common distance used in KNN is the Euclidean distance, which uses the sum of squared differences, as represented in (10) [37].…”
Section: K-Nearest Neighbors
Mentioning confidence: 99%
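For reference, the Euclidean distance that this statement refers to, between a test sample x and a training sample y with M features each, can be written as follows (the symbol names are ours; "(10)" is the equation number in the citing paper):

$$ d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{M} (x_i - y_i)^2} $$

Because the square root is monotonic, implementations often rank neighbors by the sum of squared differences alone, which avoids the root computation entirely.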
“…To reduce the access overhead imposed by DRAM, we fetch the samples in bursts to reduce the number of memory accesses. This technique has shown its efficiency in [13] and [19]. First, the distance between a testing sample and all training samples is computed.…”
Section: B. Selection-Based kNN Architecture
Mentioning confidence: 99%
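The burst-fetching idea in this statement can be sketched as below (a hypothetical illustration, assuming M features per sample and a burst size of BURST samples; it does not reproduce the actual accelerator interface of kNN-STUFF or the citing paper). In HLS tool flows, a memcpy from a DRAM-mapped pointer into a local array is commonly synthesized as a burst transfer:

```
// Hypothetical burst-fetching sketch: instead of issuing one DRAM access
// per feature, copy a whole burst of training samples into fast local
// memory (e.g., FPGA BRAM), then compute distances from the local copy.
#include <cstring>

constexpr int M = 8;      // features per sample (assumed)
constexpr int BURST = 64; // training samples fetched per burst (assumed)

void knnDistances(const float* dram_train, // training set in DRAM, n*M floats
                  const float* test,       // one test sample, M floats
                  float* dram_dist,        // output distances, n floats
                  int n) {                 // number of training samples
    float local[BURST * M]; // on-chip buffer

    for (int base = 0; base < n; base += BURST) {
        int count = (n - base < BURST) ? (n - base) : BURST;

        // One wide burst read instead of count*M scattered DRAM accesses.
        std::memcpy(local, dram_train + base * M, count * M * sizeof(float));

        // Distances are now computed from local memory only.
        for (int s = 0; s < count; ++s) {
            float sum = 0.0f;
            for (int f = 0; f < M; ++f) {
                float d = local[s * M + f] - test[f];
                sum += d * d;
            }
            dram_dist[base + s] = sum;
        }
    }
}
```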
“…Other authors target hardware accelerators and present significant performance speedups and energy-efficient solutions for both standard and approximate kNN implementations. For example, recent approaches using FPGAs include implementations of a standard kNN [49] and of a distance-based hashing approximate kNN [50]. These achieve significant results that are, however, currently not possible to replicate in wearable devices and smartphones, due to the typical lack of FPGA-based hardware acceleration in these devices.…”
Section: Related Work
Mentioning confidence: 99%