Published: 2018
DOI: 10.1016/j.engappai.2017.10.023
Accelerating nearest neighbor partitioning neural network classifier based on CUDA

Cited by 12 publications (5 citation statements)
References: 41 publications

“…7. Third, rather than evaluating the photoacoustic spectra with an extensive look-up table (i.e., spectral atlas), the classification time can be reduced by implementing an artificial neural network (ANN) to learn and match features of the acoustic spectra (i.e., bypassing PCA feature extraction), which has been successfully implemented in previous studies (Jain and Mao, 1991; Wang et al., 2018).…”
Section: Discussion (mentioning)
Confidence: 99%
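
The quote above proposes replacing a spectral look-up table with an ANN trained directly on the acoustic spectra, skipping explicit PCA feature extraction. As a rough, hypothetical sketch only (not the cited studies' implementation), the snippet below trains a small multilayer perceptron on synthetic spectra; every name, size, and class count is invented for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labelled acoustic spectra: 128 spectral bins per sample.
rng = np.random.default_rng(42)
spectra = rng.random((600, 128))
labels = rng.integers(0, 3, size=600)  # e.g., three hypothetical material classes

X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.25, random_state=0)

# The hidden layers learn spectral features directly, so no separate PCA step is run.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # chance-level here, since the data is random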
“…Authors in [50] recommended a parallel nearest neighbor partitioning (NNP) method to accelerate NNP. In their method, blocks and threads are used to evaluate potential neural networks and to perform parallel subtasks.…”
Section: Review Of Some Related Work (mentioning)
Confidence: 99%
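
The statement above refers to the indexed paper's use of GPU blocks and threads to evaluate candidate networks and partition subtasks in parallel. The sketch below is only a loose illustration of that block/thread layout, written with Numba's CUDA bindings rather than the authors' CUDA C code; it assigns one thread per (test sample, partition center) pair, all array names and sizes are invented, and it needs a CUDA-capable GPU to run.

import math
import numpy as np
from numba import cuda

@cuda.jit
def pairwise_sq_dist(test_points, centers, out):
    # One GPU thread per (test sample, partition center) pair.
    i, j = cuda.grid(2)
    if i < test_points.shape[0] and j < centers.shape[0]:
        d = 0.0
        for k in range(test_points.shape[1]):
            diff = test_points[i, k] - centers[j, k]
            d += diff * diff
        out[i, j] = d  # squared Euclidean distance to this center

n_test, n_centers, n_dim = 256, 128, 8  # illustrative sizes only
test = cuda.to_device(np.random.rand(n_test, n_dim).astype(np.float32))
cent = cuda.to_device(np.random.rand(n_centers, n_dim).astype(np.float32))
out = cuda.device_array((n_test, n_centers), dtype=np.float32)

tpb = (16, 16)                                             # threads per block
bpg = (math.ceil(n_test / 16), math.ceil(n_centers / 16))  # blocks per grid
pairwise_sq_dist[bpg, tpb](test, cent, out)
nearest = out.copy_to_host().argmin(axis=1)  # nearest partition center per test sample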
“…The performance of the proposed method, image moment anomaly detection based on Gaussian distribution model (IMA-GM), is evaluated by comparing it with three common supervised classification algorithms: logistic regression (LR) [33], support vector machine (SVM) [34] and neural network (NN) [35]. In particular, LR is based on L2 regularization and its penalty coefficient is set to 10^-5; the kernel function of SVM is a radial basis function and its kernel factor is 10; the NN has three layers, the node numbers of the input, hidden and output layers are 32, 64 and 32 respectively, and the Sigmoid activation function is used.…”
Section: B Experiments 2: Comparative Test Using Different Defect Det… (mentioning)
Confidence: 99%
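
For concreteness, here is a hedged scikit-learn sketch of the baseline configuration quoted above. Mapping the stated hyperparameters onto this API involves assumptions: whether the LR penalty coefficient corresponds to scikit-learn's C or its inverse, whether the SVM kernel factor is gamma, and the fact that input and output layer widths are inferred from the data rather than set explicitly. The data is a random placeholder.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Placeholder data with 32 features, matching the quoted input-layer width.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = rng.integers(0, 2, size=500)

models = {
    # L2-regularized logistic regression; the quoted 10^-5 is used directly as C (assumption).
    "LR": LogisticRegression(penalty="l2", C=1e-5, max_iter=1000),
    # RBF-kernel SVM; the quoted kernel factor of 10 is used as gamma (assumption).
    "SVM": SVC(kernel="rbf", gamma=10),
    # One hidden layer of 64 units with sigmoid (logistic) activation;
    # scikit-learn sizes the input and output layers from the data.
    "NN": MLPClassifier(hidden_layer_sizes=(64,), activation="logistic", max_iter=2000),
}

for name, model in models.items():
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))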