2019
DOI: 10.5815/ijisa.2019.05.03
Accelerating Training of Deep Neural Networks on GPU using CUDA

Abstract: The development of fast and efficient training algorithms for Deep Neural Networks has been a subject of interest in recent years, because the biggest drawback of Deep Neural Networks is their enormous computational cost and the large amount of time consumed in training their parameters. This has motivated several researchers to focus on recent advances in hardware architectures and in parallel programming models and paradigms for accelerating the training of Deep Neural Networks. We revisited th…

Cited by 5 publications (2 citation statements); references 17 publications.
“…By averaging, we get the measured probability of turning on for each given u. The approximation probability is derived from (19). Fig.…”
Section: A. Approximation Ability of Lemma
confidence: 99%
“…and GPUs [6], [7]. Due to the demand for accelerated, highly computational environments, algorithms are required that decrease execution time and improve performance [8]. Rather than shifting data and allocating memory separately on the host and the device, a single special pointer usable by both the CPU and the GPU is allocated; this is the concept of unified memory allocation [9].…”
confidence: 99%
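The unified memory idea quoted above can be sketched with CUDA's `cudaMallocManaged` API, which returns one pointer valid on both host and device; the kernel name, array size, and launch configuration here are illustrative, not taken from the cited work:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: scales each element through the shared (managed) pointer.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation visible to both CPU and GPU: no separate host/device
    // buffers and no explicit cudaMemcpy calls are needed.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes in place

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);
    cudaDeviceSynchronize();                     // wait before the CPU reads

    printf("data[0] = %f\n", data[0]);           // CPU reads the GPU result
    cudaFree(data);
    return 0;
}
```

On pre-Pascal GPUs the runtime migrates whole managed allocations around kernel launches, while Pascal and later hardware pages data on demand; either way the program text stays free of explicit host-device copies.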