2022
DOI: 10.21553/rev-jec.286
An FPGA-based Convolution IP Core for Deep Neural Networks Acceleration

Abstract: The development of machine learning has made a revolution in various applications such as object detection, image/video recognition, and semantic segmentation. Neural networks, a class of machine learning, play a crucial role in this process because of their remarkable improvement over traditional algorithms. However, neural networks are now going deeper and cost a significant amount of computation operations. Therefore, they usually work ineffectively in edge devices that have limited resources and low performance. In…

Cited by 5 publications (4 citation statements); references 11 publications.
“…As shown in Figure 6, according to different design concepts and requirements, FPGA-based neural network optimization technology can be roughly divided into optimization for data and operation, optimization for bandwidth, and optimization for memory and access, among others, which are introduced in detail below. [71][72][73][74][75][76][77][78], less computations [79][80][81], improve calculation speed [82][83][84][85], Winograd fast convolution algorithm [86][87][88][89][90][91], Im2col convolution optimization algorithm [92][93][94][95][96][97], pipelined design [98][99][100][101][102], Roofline model [103][104][105], ping-pong cache [106][107][108][109], input feature map reuse [110,111], filter reuse [111,112], convolutional reuse [110]…”
Section: Neural Network Optimization Technology Based on FPGA
confidence: 99%
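Among the techniques the citing survey enumerates, the Im2col convolution optimization is the most directly sketchable in code: it unrolls every sliding window of the input into a column so that convolution collapses into a single matrix multiplication, which maps well onto the MAC arrays of an FPGA accelerator. The following is a minimal single-channel sketch; the function name and NumPy implementation are illustrative, not taken from the cited papers.

```python
import numpy as np

def im2col(x, kh, kw, stride=1):
    """Unroll kh-by-kw sliding windows of a 2-D feature map into
    columns, so conv becomes one GEMM (the Im2col transform)."""
    h, w = x.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    cols = np.empty((kh * kw, oh * ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            cols[:, i * ow + j] = patch.ravel()
    return cols

# Convolution as a GEMM: flatten the kernel, multiply, reshape.
x = np.arange(16, dtype=np.float32).reshape(4, 4)
k = np.ones((3, 3), dtype=np.float32)
out = (k.ravel() @ im2col(x, 3, 3)).reshape(2, 2)
```

The trade-off, which motivates the bandwidth and memory optimizations listed alongside it, is that the unrolled matrix duplicates overlapping pixels and so inflates on-chip buffer traffic.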
“…Aiming at the data itself, a method to reduce the data accuracy and computational complexity is usually used. For example, fixed-point Roof-line model [103][104][105], ping-pong cache [106][107][108][109], input feature map reuse [110,111], filter reuse [111,112], convolutional reuse [110][111][112], time reuse or space reuse [111], standardize data access and storage [113][114][115]).…”
Section: Optimization of Data and Its Operations
confidence: 99%
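The "reduce the data accuracy" approach this statement refers to is typically fixed-point quantization: weights and activations are stored as narrow integers with an implicit binary scale, which shrinks memory traffic and lets the FPGA use cheap integer MACs. A minimal sketch of a signed 16-bit fixed-point format with 8 fractional bits follows; the bit widths and helper names are my own illustrative choices, not the paper's.

```python
import numpy as np

def to_fixed(x, frac_bits=8):
    """Quantize floats to signed 16-bit fixed point (Q7.8 here):
    scale by 2**frac_bits, round, and saturate to the int16 range."""
    scale = 1 << frac_bits
    return np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)

def from_fixed(q, frac_bits=8):
    """Recover the approximate float value for inspection."""
    return q.astype(np.float32) / (1 << frac_bits)

w = np.array([0.5, -1.25, 0.3330], dtype=np.float32)
q = to_fixed(w)
# Round-to-nearest keeps the worst-case error under 2**-(frac_bits+1).
err = np.abs(from_fixed(q) - w)
```

Choosing the fractional width is the central design decision: more fractional bits cut quantization error, fewer bits cut DSP and BRAM usage, which is exactly the accuracy/complexity trade-off the statement describes.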