2010 IEEE International Conference on Granular Computing
DOI: 10.1109/grc.2010.145

Efficient Parallel Algorithm for Nonlinear Dimensionality Reduction on GPU

Cited by 10 publications (6 citation statements) | References 7 publications
“…Their approach subdivides the data into cells in order to gain spatial locality and efficient parallelisation. Yeh et al. [16] developed an efficient GPU-based k-NN search using a kd-tree, performing a fast parallel radix sort to calculate the median values during kd-tree construction. Garcia et al. [17] proposed a newer GPU-based k-NN approach that uses the cuBLAS (CUDA Basic Linear Algebra Subroutines) library to compute the distance matrix in parallel, accelerating brute-force k-NN.…”
Section: Related Work
confidence: 99%
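As a minimal sketch of the cuBLAS distance-matrix technique attributed to Garcia et al. [17]: the squared Euclidean distances expand as ||q − r||² = ||q||² + ||r||² − 2 q·r, so the cross terms for all query/reference pairs can be produced by a single GEMM call and the norms added afterwards. The names (`distance_matrix`, `add_norms`), the row-major layout, and the precomputed norm vectors below are illustrative assumptions, not the authors' code.

```cuda
// Sketch of the cuBLAS distance-matrix trick: squared distances
// ||q_i - r_j||^2 = ||q_i||^2 + ||r_j||^2 - 2 q_i . r_j, where the
// cross terms come from a single GEMM. Names are illustrative.
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Completes the distances by adding the precomputed squared norms
// to a matrix that already holds the -2 * q_i . r_j cross terms.
__global__ void add_norms(float* dist, const float* qnorm,
                          const float* rnorm, int nq, int nr)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;  // reference index
    int i = blockIdx.y * blockDim.y + threadIdx.y;  // query index
    if (i < nq && j < nr)
        dist[i * nr + j] += qnorm[i] + rnorm[j];
}

// d_q: nq x d queries, d_r: nr x d references (row-major, on device);
// d_qnorm / d_rnorm hold precomputed squared row norms.
void distance_matrix(cublasHandle_t h, const float* d_q, const float* d_r,
                     const float* d_qnorm, const float* d_rnorm,
                     float* d_dist, int nq, int nr, int d)
{
    const float alpha = -2.0f, beta = 0.0f;
    // cuBLAS is column-major: computing dist^T = -2 * r * q^T there
    // yields the row-major nq x nr matrix -2 * q * r^T here.
    cublasSgemm(h, CUBLAS_OP_T, CUBLAS_OP_N, nr, nq, d,
                &alpha, d_r, d, d_q, d, &beta, d_dist, nr);

    dim3 block(16, 16);
    dim3 grid((nr + block.x - 1) / block.x, (nq + block.y - 1) / block.y);
    add_norms<<<grid, block>>>(d_dist, d_qnorm, d_rnorm, nq, nr);
}
```

Since the subsequent k-NN selection only compares distances, the final square root can be skipped entirely.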
“…The importance of using memory efficiently in CUDA cannot be overstated. There are roughly three orders of magnitude in speed between the fastest on-chip register memory and mapped host memory, which must traverse the PCIe bus, so CUDA developers must understand the most efficient way to use each kind of memory [13,17].…”
Section: Efficient CUDA Memory Usage in the Reduction Kernel
confidence: 99%
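The memory-hierarchy point above is exactly what a classic reduction kernel exploits: partial sums stay in registers and on-chip shared memory, while slow global memory is read once per element and written once per block. The following is a generic sketch under those assumptions, not the kernel from the cited paper; `block_sum` and its launch parameters are illustrative.

```cuda
// Generic shared-memory reduction sketch: global memory is read once
// per element and written once per block; all intermediate sums stay
// in registers and on-chip shared memory. Assumes blockDim.x is a
// power of two. Not the paper's kernel; names are illustrative.
#include <cuda_runtime.h>

__global__ void block_sum(const float* in, float* out, int n)
{
    extern __shared__ float sdata[];              // fast on-chip memory
    unsigned tid = threadIdx.x;
    unsigned i = blockIdx.x * blockDim.x * 2 + tid;

    // Each thread loads two elements (coalesced global reads) and
    // combines them in a register before touching shared memory.
    float v = 0.0f;
    if (i < n)              v  = in[i];
    if (i + blockDim.x < n) v += in[i + blockDim.x];
    sdata[tid] = v;
    __syncthreads();

    // Tree reduction entirely in shared memory.
    for (unsigned s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0) out[blockIdx.x] = sdata[0];     // one global write
}
```

A launch such as `block_sum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_partial, n)` leaves one partial sum per block, which a second pass (or the host) then reduces.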
“…This approach has many advantages: it is easy to implement, suitable for most cases, fast enough, and it can be parallelised and further accelerated [2]. For some other types of dataset, however, the k-NN approach may face difficulties.…”
Section: Introduction
confidence: 99%
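To illustrate why brute-force k-NN parallelises so naturally on the GPU, the sketch below assigns one thread per query point, each maintaining its K current best neighbours in a small sorted local array. This is a generic illustration under assumed names (`knn_brute_force`, the fixed `K`), not code from the cited work.

```cuda
// Generic brute-force k-NN sketch: one thread per query point, each
// keeping its K best candidates in a small sorted array. Names and
// the fixed K are illustrative, not taken from the cited work.
#include <cuda_runtime.h>
#include <cfloat>   // FLT_MAX

#define K 8         // number of neighbours (illustrative choice)

__global__ void knn_brute_force(const float* query, const float* ref,
                                int nq, int nr, int dim, int* nn_idx)
{
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= nq) return;

    float best_d[K];
    int   best_i[K];
    for (int k = 0; k < K; ++k) { best_d[k] = FLT_MAX; best_i[k] = -1; }

    for (int r = 0; r < nr; ++r) {
        // Squared Euclidean distance to reference point r.
        float d = 0.0f;
        for (int c = 0; c < dim; ++c) {
            float diff = query[q * dim + c] - ref[r * dim + c];
            d += diff * diff;
        }
        // Insert into the sorted list of the K best seen so far.
        if (d < best_d[K - 1]) {
            int k = K - 1;
            while (k > 0 && best_d[k - 1] > d) {
                best_d[k] = best_d[k - 1];
                best_i[k] = best_i[k - 1];
                --k;
            }
            best_d[k] = d;
            best_i[k] = r;
        }
    }
    for (int k = 0; k < K; ++k)
        nn_idx[q * K + k] = best_i[k];
}
```

Because every query is independent, this maps one-to-one onto GPU threads with no synchronisation, which is what makes the brute-force variant so amenable to acceleration.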