2004
DOI: 10.1016/j.patcog.2004.01.013

GPU implementation of neural networks

Cited by 326 publications (165 citation statements)
References 2 publications
“…It is not even necessary that this data be from other robots, as shown by Yang's use of general-purpose cooking videos for object and grasp recognition [79]. Regarding training time, local parallel processing [17] and increases in raw processing speed have led to significant improvements. Distributed computing offers the potential to direct more computing resources to a given problem [88] but can be limited by communication speeds [2].…”
Section: Results
confidence: 99%
“…In the 2000s, researchers began using graphical processing units (GPUs) to parallelize implementations of artificial neural networks [17]. The largest bottleneck in training neural networks is a matrix-vector multiplication step, which can be parallelized using GPUs.…”
Section: A Brief History of Deep Learning
confidence: 99%
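To make the bottleneck described above concrete, here is a minimal NumPy sketch of a single dense layer's forward pass; the layer sizes and the sigmoid activation are illustrative assumptions, not details from Oh and Jung's implementation. The `W @ x` product is the matrix-vector step that GPU implementations parallelize (for instance by swapping `numpy` for a GPU array library such as CuPy, whose array API mirrors NumPy's).

```python
import numpy as np

# Minimal sketch: a dense layer's forward pass is dominated by a
# matrix-vector product, which is the step GPUs parallelize.
# Layer sizes and the sigmoid activation are illustrative choices.
rng = np.random.default_rng(0)

n_in, n_hidden = 1024, 512
W = rng.standard_normal((n_hidden, n_in)) * 0.01  # weight matrix
b = np.zeros(n_hidden)                            # bias vector
x = rng.standard_normal(n_in)                     # one input vector

z = W @ x + b                    # the expensive matrix-vector step
h = 1.0 / (1.0 + np.exp(-z))     # sigmoid activation

print(h.shape)  # (512,)
```

On a GPU, the n_hidden row-by-vector dot products inside `W @ x` can be evaluated in parallel, which is where the speed-ups reported by the citing papers come from.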
“…(Figure 2.) Oh and Jung [6] and Luo et al. [7] both treat neural networks as matrix operations and report considerable speed-ups from using a GPU when the neural network is used on compute-intensive image processing tasks. For example, Ribeiro et al. [8] demonstrated speed-ups in the region of 170-fold when using Multiple Back-Propagation to predict bankruptcy.…”
Section: Neural Network
confidence: 99%
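As a companion to the sketch above, treating the whole network as matrix operations also means that many inputs (for instance image patches in an image-processing task) can be stacked into a single matrix, so each layer becomes one matrix-matrix product per batch. The sizes and the tanh activation below are assumptions for illustration only, not taken from the cited implementations.

```python
import numpy as np

# Sketch of "the network as matrix operations": stacking many inputs as
# columns turns per-example matrix-vector products into one large
# matrix-matrix product per layer. All sizes are illustrative.
rng = np.random.default_rng(1)

n_in, n_hidden, n_out, batch = 256, 128, 10, 4096
W1 = rng.standard_normal((n_hidden, n_in)) * 0.01
W2 = rng.standard_normal((n_out, n_hidden)) * 0.01
X = rng.standard_normal((n_in, batch))   # one input per column

H = np.tanh(W1 @ X)   # hidden activations for the whole batch at once
Y = W2 @ H            # outputs for the whole batch at once

print(Y.shape)  # (10, 4096)
```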
“…Among them, the naïve Bayes classifier assumes the data distribution to be Gaussian and finds the discriminant function using prior probability and likelihood. Finally, it derives the decision curve using the discriminant function [2]. However, this method presumes that all of the features are independent, and thus, errors can occur in the process of obtaining the discriminant function [3].…”
Section: Introduction
confidence: 99%
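For reference, the discriminant described in the quoted passage can be written in its standard Gaussian naïve Bayes form; the notation below is a textbook reconstruction, not taken from the cited works [2, 3].

```latex
% Gaussian naive Bayes discriminant for class C_k with d independent
% features (mu_{ki}, sigma_{ki}^2 are the per-class feature statistics).
\[
  g_k(\mathbf{x}) = \ln P(C_k)
  + \sum_{i=1}^{d}\left[
      -\tfrac{1}{2}\ln\bigl(2\pi\sigma_{ki}^{2}\bigr)
      - \frac{(x_i-\mu_{ki})^{2}}{2\sigma_{ki}^{2}}
    \right]
\]
% Predicted class and, for two classes, the decision curve:
\[
  \hat{y}(\mathbf{x}) = \arg\max_k\, g_k(\mathbf{x}),
  \qquad g_1(\mathbf{x}) = g_2(\mathbf{x}).
\]
```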