2022
DOI: 10.1049/tje2.12174
An FPGA‐based JPEG preprocessing accelerator for image classification

Abstract: The FPGA‐based image classification accelerator has achieved success in many practical applications. However, most accelerators focus on solving the problem of convolution computation efficiency. End‐to‐end image classification involves many non‐convolutional operations, which can also become performance bottlenecks. Therefore, the authors propose an FPGA‐based JPEG preprocessing accelerator, which can accelerate non‐convolution operations of JPEG before feature extraction. To improve throughput and energy eff…

Cited by 2 publications (7 citation statements)
References 24 publications
“…As shown in Figure 6, according to different design concepts and requirements, FPGA-based neural network optimization technology can be roughly divided into optimization for data and operation, optimization for bandwidth, and optimization for memory and access, among others, which are introduced in detail below. [71][72][73][74][75][76][77][78], less computations [79][80][81], improve calculation speed [82][83][84][85], Winograd fast convolution algorithm [86][87][88][89][90][91], Im2col convolution optimization algorithm [92][93][94][95][96][97], pipelined design [98][99][100][101][102], Roofline model [103][104][105], ping-pong cache [106][107][108][109], input feature map reuse [110,111], filter reuse [111,112], convolutional reuse [110]…”
Section: Neural Network Optimization Technology Based on FPGA
confidence: 99%
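The citation statement above lists Im2col among the convolution optimization techniques. As a minimal sketch (our own illustration, not taken from the cited works), Im2col unrolls each sliding window of the input into a column so that convolution reduces to one matrix multiplication, which maps well to FPGA and GPU matrix engines:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a 2-D input into a column,
    so convolution becomes a single matrix multiplication."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

x = np.arange(16, dtype=np.float32).reshape(4, 4)   # toy 4x4 input
k = np.ones((3, 3), dtype=np.float32)               # toy 3x3 kernel
cols = im2col(x, 3, 3)
# Flattened kernel times patch matrix == sliding-window convolution output.
y = (k.ravel() @ cols).reshape(2, 2)
```

The trade-off the cited surveys discuss is that the patch matrix duplicates input data (higher memory traffic) in exchange for a single dense, highly parallel multiply.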
“…Aiming at the data itself, a method to reduce the data accuracy and computational complexity is usually used. For example, fixed-point Roof-line model [103][104][105], ping-pong cache [106][107][108][109], input feature map reuse [110,111], filter reuse [111,112], convolutional reuse [110][111][112], time reuse or space reuse [111], standardize data access and storage [113][114][115]).…”
Section: Optimization of Data and Its Operations
confidence: 99%
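The statement above refers to reducing data accuracy via fixed-point representation. A minimal sketch of the idea (our own illustration; the scale choice and 8-bit width are assumptions, not the cited papers' scheme) is symmetric per-tensor quantization of float weights to signed 8-bit integers:

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to signed 8-bit fixed point with a
    per-tensor scale (symmetric quantization)."""
    peak = float(np.max(np.abs(w)))
    scale = peak / 127.0 if peak > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.031], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)   # within half a quantization step of w
```

On an FPGA this kind of precision reduction lets multiply-accumulate units be built from narrow integer DSP slices instead of floating-point logic, which is the resource saving the cited works target.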