2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
DOI: 10.1109/micro.2016.7783720
From high-level deep neural models to FPGAs

Cited by 392 publications (224 citation statements)
References 25 publications
“…These techniques can be integrated into Multi-CLP designs. [24] and [30] propose complete frameworks for generating FPGA-based accelerators from CNN specifications. Our Multi-CLP approach can be integrated into these frameworks to improve the performance of auto-generated accelerators.…”
Section: Related Work
confidence: 99%
“…However, recent research on DNNs is still increasing the depth of models and introducing new architectures, resulting in a higher number of parameters per network and higher computational complexity. Beyond CPUs and GPUs, FPGAs are becoming a candidate platform for energy-efficient neural network computation [12], [13], [22], [24]-[27]. Equipped with the necessary hardware for basic DNN operations, FPGAs can achieve high parallelism and exploit the properties of neural network computation to remove unnecessary logic.…”
Section: Prior Work on Accelerating DNNs for FPGAs
confidence: 99%
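To make the parallelism point concrete, below is a minimal sketch (an illustrative assumption, not code from any of the cited works) of the kind of multiply-accumulate kernel that high-level synthesis maps onto parallel FPGA logic: unrolling the loop yields one hardware MAC unit per iteration, and the folded ReLU costs a single comparator rather than general-purpose instructions. On a CPU the same C++ compiles to an ordinary sequential loop.

// Sketch only: fixed-point widths, N, and the pragma placement are assumptions.
#include <cstdint>
#include <cstdio>

constexpr int N = 16; // MACs instantiated in parallel (illustrative size)

// One output neuron: dot product of inputs and weights plus bias.
// An HLS tool can unroll the loop into N parallel multiply-accumulate
// units; g++/clang simply ignore the unknown pragma.
int32_t neuron(const int8_t in[N], const int8_t w[N], int32_t bias) {
    int32_t acc = bias;
    for (int i = 0; i < N; ++i) {
#pragma HLS UNROLL // illustrative HLS directive, a no-op for CPU compilers
        acc += static_cast<int32_t>(in[i]) * static_cast<int32_t>(w[i]);
    }
    return acc < 0 ? 0 : acc; // ReLU folded into the same pipeline stage
}

int main() {
    int8_t in[N], w[N];
    for (int i = 0; i < N; ++i) { in[i] = 1; w[i] = static_cast<int8_t>(i); }
    std::printf("%d\n", neuron(in, w, 0)); // 0 + 1 + ... + 15 = 120
    return 0;
}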
“…Prior works have shown FPGAs to be successful in accelerating the inference of pre-trained neural networks by providing custom data paths to achieve high parallelism. A vast amount of such research focuses on accelerating neural networks in the image domain [12], [13], speech recognition [14], [15] and language modelling [16]. To the best of our knowledge, similar efforts have not been made for accelerating neural networks for speech/audio synthesis.…”
Section: Introduction
confidence: 99%
“…In [17], Chen et al. used batch processing to maximise weight reuse in ConvNet layers across multiple inputs. [18] and [19] are more similar to our approach in presenting automated flows for mapping ConvNets to FPGAs. Both frameworks optimise for throughput and employ favourable batch sizes, with [19] also aiming to keep the batch size small.…”
Section: Performance Comparison
confidence: 99%
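The weight-reuse idea behind batching can be sketched in a few lines (a hypothetical toy example; the sizes, names, and the single on-chip copy are assumptions, not the design of [17]-[19]): a layer's weights are brought on chip once and then applied to every input in the batch, so off-chip weight traffic is paid once per batch instead of once per input.

// Sketch only: a fully connected layer processed batch-at-a-time.
#include <cstdio>

constexpr int kIn = 4, kOut = 3, kBatch = 8; // toy dimensions (assumptions)

struct Layer { float w[kOut][kIn]; }; // weights resident off-chip ("DRAM")

// Process the whole batch with one weight transfer: the copy into `buf`
// stands in for a single DRAM-to-BRAM load, after which the weights are
// reused kBatch times from on-chip memory.
void fc_batched(const Layer& dram, const float in[kBatch][kIn],
                float out[kBatch][kOut]) {
    Layer buf = dram;                    // one off-chip weight fetch
    for (int b = 0; b < kBatch; ++b)     // weights reused for every input
        for (int o = 0; o < kOut; ++o) {
            float acc = 0.f;
            for (int i = 0; i < kIn; ++i)
                acc += buf.w[o][i] * in[b][i];
            out[b][o] = acc;
        }
}

int main() {
    Layer l{};
    for (int o = 0; o < kOut; ++o)
        for (int i = 0; i < kIn; ++i) l.w[o][i] = 1.f;
    float in[kBatch][kIn], out[kBatch][kOut];
    for (int b = 0; b < kBatch; ++b)
        for (int i = 0; i < kIn; ++i) in[b][i] = 1.f;
    fc_batched(l, in, out);
    std::printf("%f\n", out[0][0]); // 4.0: each output sums kIn ones
    return 0;
}

The sketch also makes the trade-off visible: larger batches amortise weight transfers further but require buffering more activations and increase per-input latency, which is presumably why [19] also aims to keep the batch size small.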