2016 45th International Conference on Parallel Processing (ICPP)
DOI: 10.1109/icpp.2016.15

Performance Analysis of GPU-Based Convolutional Neural Networks

Cited by 109 publications (50 citation statements)
References 6 publications
“…Table I contains a summary of all the convolutional parameters described so far. One of the challenges with convolutions is that they are computationally intensive operations, taking up 86% to 94% of execution time for CNNs [1]. For heavy workloads, convolutions are typically run on graphics processing units (GPUs), as they can perform many mathematical operations in parallel.…”
Section: II.1 Convolutions Background (mentioning)
confidence: 99%
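
To make the cost argument concrete, here is a minimal NumPy sketch of a direct (naive) 2D convolution; the shapes, names, and loop structure are illustrative assumptions of mine, not code from the cited paper or from [1].

import numpy as np

def direct_conv2d(x, w):
    # x: (C_in, H, W) input feature map, w: (C_out, C_in, K, K) filter bank
    c_in, h, wd = x.shape
    c_out, _, k, _ = w.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for co in range(c_out):               # every output channel
        for i in range(h - k + 1):        # every output row
            for j in range(wd - k + 1):   # every output column
                # K*K*C_in multiply-adds per output element
                out[co, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[co])
    return out

# Total work scales as C_out * H_out * W_out * C_in * K^2 multiply-adds,
# which is why convolutional layers dominate CNN execution time.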
“…One of the primary bottlenecks is computing the matrix multiplication required for forward propagation. In fact, over 80% of the total processing time is spent on the convolution [1]. Therefore, techniques that improve the efficiency of even forward-only propagation are in high demand and researched extensively [2,3].…”
Section: Introduction (mentioning)
confidence: 99%
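
The matrix-multiplication formulation mentioned above is commonly realized as im2col followed by a single GEMM; the sketch below is a minimal, assumed illustration (helper names such as im2col and conv2d_gemm are mine, not from the cited works).

import numpy as np

def im2col(x, k):
    # x: (C, H, W) -> column matrix of shape (C*k*k, H_out*W_out)
    c, h, w = x.shape
    h_out, w_out = h - k + 1, w - k + 1
    cols = np.empty((c * k * k, h_out * w_out))
    idx = 0
    for i in range(h_out):
        for j in range(w_out):
            cols[:, idx] = x[:, i:i+k, j:j+k].ravel()
            idx += 1
    return cols

def conv2d_gemm(x, w):
    # w: (C_out, C_in, k, k); the convolution becomes one large matrix multiply
    c_out, c_in, k, _ = w.shape
    cols = im2col(x, k)                       # (C_in*k*k, H_out*W_out)
    w_mat = w.reshape(c_out, c_in * k * k)    # (C_out, C_in*k*k)
    h_out, w_out = x.shape[1] - k + 1, x.shape[2] - k + 1
    return (w_mat @ cols).reshape(c_out, h_out, w_out)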
“…Nvprof provides us with information related to the type of kernels running on the GPU, GPU utilization, and other metrics. Our work differs from prior works that have used GPU-based profiling tools such as nvprof to analyse the performance of ConvNets [47] or existing performance benchmarks on desktop GPUs [2], in that we restrict our studies to fine-grained energy and performance measurements on the CPUs.…”
Section: Power Sampling Methods (mentioning)
confidence: 99%
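
As a rough illustration of the profiling setup described above, nvprof can wrap a GPU workload and report per-kernel times; the target script name below is a placeholder, not part of the cited study.

import subprocess

# Hypothetical invocation: by default nvprof prints a summary of GPU kernel
# times and CUDA API calls for the wrapped process; --print-gpu-trace adds a
# per-launch trace. "train.py" is a placeholder workload.
subprocess.run(["nvprof", "--print-gpu-trace", "python", "train.py"], check=True)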
“…There are several approaches to computing the convolution operation [6][7][8][9][10][11][12]. The fast Fourier transform (FFT), the Winograd minimal filtering algorithm, the look-up table, and matrix multiplication-based convolution are a few of them.…”
Section: Introduction (mentioning)
confidence: 99%
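
As one example of the alternatives listed above, an FFT-based convolution exploits the convolution theorem (pointwise multiplication in the frequency domain); the NumPy sketch below is a minimal illustration under my own assumed shapes, not an implementation from the cited works.

import numpy as np

def fft_conv2d(x, k):
    # Linear convolution via the convolution theorem: zero-pad both operands
    # to the 'full' output size, multiply their spectra pointwise, transform back.
    s0 = x.shape[0] + k.shape[0] - 1
    s1 = x.shape[1] + k.shape[1] - 1
    X = np.fft.rfft2(x, (s0, s1))
    K = np.fft.rfft2(k, (s0, s1))
    full = np.fft.irfft2(X * K, (s0, s1))        # 'full' linear convolution
    # Crop to the 'valid' region a CNN layer would produce. Note: this is true
    # convolution (flipped kernel); CNN layers typically compute cross-correlation,
    # so flip the kernel first to match them.
    return full[k.shape[0]-1 : x.shape[0], k.shape[1]-1 : x.shape[1]]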
“…This algorithm reduces the arithmetic complexity of the convolutional layer by using a minimal filtering technique. These approaches to compute the convolution can further be optimized by using different techniques and schemes [12][13][14].…”
Section: Introduction (mentioning)
confidence: 99%
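
For the Winograd minimal filtering approach mentioned above, the 1D F(2,3) case below computes two outputs of a 3-tap filter from four inputs with 4 multiplications instead of 6; this is a minimal sketch using the standard F(2,3) transform matrices, not code from the cited works.

import numpy as np

# Winograd F(2,3) transform matrices: the 1D building block of the 2D
# F(2x2, 3x3) algorithm used in convolutional layers.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G  = np.array([[1,    0,   0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0,    0,   1]], dtype=float)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    # d: 4 input samples, g: 3 filter taps -> 2 outputs
    m = (G @ g) * (BT @ d)      # only 4 elementwise multiplications
    return AT @ m

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
print(winograd_f23(d, g))                    # matches the direct computation
print(np.array([d[0:3] @ g, d[1:4] @ g]))    # direct 3-tap sliding dot product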