2009 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery
DOI: 10.1109/cyberc.2009.5399145

A CUDA-based parallel implementation of K-nearest neighbor algorithm

Cited by 26 publications (21 citation statements)
References 8 publications
“…Since KNN is a fundamental algorithm used in a variety of machine learning contexts, a significant amount of time has gone into making it run fast on parallel hardware [5], [6], [7]. However, most of this work has been done to reduce algorithmic complexity or parallelization in a shared memory setting.…”
Section: Introduction
confidence: 99%
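The citation statements above refer to running brute-force KNN on parallel hardware, which is the setting of the cited CUDA paper. As a rough illustration only (this is not the authors' implementation; the kernel name, thread layout, array names, and the host-side k-selection step are assumptions), a minimal sketch computes all query-to-reference squared distances with one GPU thread per pair and then picks the k smallest per query:

```cuda
// Minimal brute-force GPU KNN sketch (illustrative, not the cited paper's code).
#include <cstdio>
#include <vector>
#include <algorithm>
#include <numeric>
#include <cuda_runtime.h>

// One thread per (query, reference) pair: squared Euclidean distance in `dim` dimensions.
__global__ void pairwise_dist2(const float* ref, int n_ref,
                               const float* qry, int n_qry,
                               int dim, float* dist2)
{
    int r = blockIdx.x * blockDim.x + threadIdx.x;  // reference point index
    int q = blockIdx.y;                             // query point index
    if (r >= n_ref || q >= n_qry) return;

    float acc = 0.0f;
    for (int d = 0; d < dim; ++d) {
        float diff = ref[r * dim + d] - qry[q * dim + d];
        acc += diff * diff;
    }
    dist2[q * n_ref + r] = acc;
}

int main()
{
    const int n_ref = 1024, n_qry = 4, dim = 8, k = 5;
    std::vector<float> h_ref(n_ref * dim), h_qry(n_qry * dim);
    for (size_t i = 0; i < h_ref.size(); ++i) h_ref[i] = (float)(i % 97) * 0.01f;
    for (size_t i = 0; i < h_qry.size(); ++i) h_qry[i] = (float)(i % 13) * 0.05f;

    float *d_ref, *d_qry, *d_dist;
    cudaMalloc(&d_ref, h_ref.size() * sizeof(float));
    cudaMalloc(&d_qry, h_qry.size() * sizeof(float));
    cudaMalloc(&d_dist, (size_t)n_qry * n_ref * sizeof(float));
    cudaMemcpy(d_ref, h_ref.data(), h_ref.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_qry, h_qry.data(), h_qry.size() * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(256), grid((n_ref + 255) / 256, n_qry);
    pairwise_dist2<<<grid, block>>>(d_ref, n_ref, d_qry, n_qry, dim, d_dist);

    std::vector<float> h_dist((size_t)n_qry * n_ref);
    cudaMemcpy(h_dist.data(), d_dist, h_dist.size() * sizeof(float), cudaMemcpyDeviceToHost);

    // Host-side selection of the k nearest references per query; a GPU
    // partial-sort/selection kernel would replace this step in practice.
    for (int q = 0; q < n_qry; ++q) {
        std::vector<int> idx(n_ref);
        std::iota(idx.begin(), idx.end(), 0);
        std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                          [&](int a, int b) {
                              return h_dist[q * n_ref + a] < h_dist[q * n_ref + b];
                          });
        printf("query %d nearest: ", q);
        for (int i = 0; i < k; ++i) printf("%d ", idx[i]);
        printf("\n");
    }

    cudaFree(d_ref); cudaFree(d_qry); cudaFree(d_dist);
    return 0;
}
```

The distance step is embarrassingly parallel and maps naturally to the GPU, which is why most of the speedup in CUDA KNN implementations comes from it; the selection of the k smallest distances is the harder part to parallelize well.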
“…Such data structures work well in a variety of scientific applications where the dimensionality of data is not very high, and are critical for large data sets where we cannot afford to scan the entire data set for each query. Furthermore, parallel algorithms that can utilize multiple cores of a single shared memory machine to speed up kd-tree based KNN have been developed [5], [6], [7]. However, there has not been much work on parallelizing this in a distributed setting.…”
Section: Introduction
confidence: 99%
“…General parallel algorithms have been devised for sorting [26], clustering [8,27], classification [28,29], and neural networks training [30]. Our work also takes a parallel design and is implemented cooperatively on the CPU and GPU, referencing the implementation of the previous work.…”
Section: Related Work
confidence: 99%
“…In addition, increased use of technology has resulted in a wealth of digital trails generated by learners, providing large volumes of trace data collected during the learning process (Knight et al, 2013). Many data mining algorithms have implementations adapted for this big-data environment, for example, Decision Tree (Ben-Haim & Tom-Tov, 2010), k-NN (Liang et al, 2009), Neural Networks (Gu et al, 2013), SVM and regression (Luo et al, 2012), and supporting tools are available (Prekopcsák et al, 2011), facilitating quick analysis and feedback (Siemens & Long, 2011). Recent developments in learning analytics frameworks (e.g., the learning warehouse, Buckingham Shum & Deakin Crick, 2012) illustrate the potential for learning analytics to support automation of the full life cycle from data gathering through to deployment of recommendations and interventions based on analysis results.…”
Section: Benefits Of Greater Collaboration Between Educational Psychology…
confidence: 99%