2017 International Conference on Field Programmable Technology (ICFPT)
DOI: 10.1109/fpt.2017.8280150
FPGA-based training of convolutional neural networks with a reduced precision floating-point library

Cited by 15 publications (7 citation statements)
References 4 publications
“…FPGAs allow for faster development time and therefore are often used to explore various new research areas for CNNs, such as low-precision and binary networks [14,59], novel training regimes [15], and model compression through weight pruning or novel CNN structures [21,16]. In Section 7, we validate the performance of our filter matrix packing algorithm with an FPGA implementation.…”
Section: ASIC and FPGA Accelerators for CNNs
confidence: 99%
“…Recently, High-Level Synthesis libraries for supporting custom floating-point precision have been researched. These libraries are easy to use and portable compared to RTL approaches [56][57][58].…”
Section: B. Reduced Precision Tools
confidence: 99%
“…DiCecco et al [56] propose a custom-precision floating-point library (CPFP [59]) for High-Level Synthesis, and they evaluate it on a small convolutional neural network. While the custom floating-point IP employs fewer resources than single precision, the FPGA design has a lower throughput than the CPU.…”
Section: B. Reduced Precision Tools
confidence: 99%
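To make the idea behind custom-precision floating point concrete, the following is a minimal sketch (not CPFP's actual API; the function name and default width are illustrative) that rounds a float32 value to a narrower mantissa while keeping the standard sign and 8-bit exponent:

```python
import struct

def reduce_precision(x: float, mantissa_bits: int = 10) -> float:
    """Illustrative sketch: round a float32 to a narrower mantissa.

    Keeps the IEEE-754 sign bit and 8-bit exponent, and rounds the
    23-bit mantissa down to `mantissa_bits` bits (round-to-nearest,
    carry allowed to propagate into the exponent). Libraries such as
    CPFP also let the exponent width vary; this sketch does not, and
    it ignores NaN/infinity edge cases.
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    drop = 23 - mantissa_bits                       # mantissa bits to discard
    rounded = (bits + (1 << (drop - 1))) & 0xFFFFFFFF  # add half-ulp to round
    rounded = (rounded >> drop) << drop             # zero the dropped bits
    return struct.unpack('<f', struct.pack('<I', rounded))[0]
```

Values exactly representable in the narrow format (e.g. 1.5) pass through unchanged, while others pick up an error on the order of the last kept mantissa bit; this is the trade-off that lets reduced-precision designs use fewer FPGA resources per operation.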
“…Much FPGA-based work focuses on Convolutional Neural Network (CNN) acceleration to address the compute-resource and energy-consumption challenges of CNNs. The authors of [9][10][11][12] proposed accelerating CNNs on FPGAs using simplified numerical precision to reduce on-chip resource consumption. The authors of [13,14] proposed CNN architectures implemented on FPGAs with the Winograd algorithm to reduce the complexity of the convolution operation and accelerate computation.…”
Section: Related Work
confidence: 99%
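The Winograd algorithm referenced in the excerpt above trades multiplications for additions. A minimal 1-D sketch of F(2,3) — two outputs of a 3-tap filter computed with four multiplications instead of the six needed by direct convolution (function name illustrative) — is:

```python
def winograd_f23(d, g):
    """Winograd minimal filtering F(2,3): two outputs of a 3-tap FIR.

    Uses 4 multiplications instead of 6; CNN accelerators tile the 2-D
    analogue F(2x2, 3x3) over feature maps. `d` is a length-4 input
    window, `g` a length-3 filter.
    """
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Four multiplications on transformed inputs/filters
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # Inverse transform recovers the two convolution outputs
    return [m1 + m2 + m3, m2 - m3 - m4]
```

The outputs equal the direct correlations d0*g0 + d1*g1 + d2*g2 and d1*g0 + d2*g1 + d3*g2; on an FPGA the saved multiplications translate directly into fewer DSP blocks per output.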