2019 29th International Conference on Field Programmable Logic and Applications (FPL)
DOI: 10.1109/fpl.2019.00037
Towards an Efficient Accelerator for DNN-Based Remote Sensing Image Segmentation on FPGAs

Abstract: Among popular techniques in remote sensing image (RSI) segmentation, Deep Neural Networks (DNNs) have gained increasing interest but often require high computation complexity, which largely limits their applicability on on-board space platforms. Therefore, various dedicated hardware designs on FPGAs have been developed to accelerate DNNs. However, designing an efficient accelerator for DNN-based segmentation algorithms is difficult, since they need to perform both convolution and deconvolution, whic…

Cited by 29 publications (21 citation statements)
References 22 publications
“…Porting a neural network to hardware accelerators has been performed for various architectures [27]. This is especially common for FPGAs [28], [29], used for example in hand tracking [30] or language processing [31]. Special attention is also paid to adapting key principles of neural network architectures, such as depth-wise convolutions for FPGAs [32] or quantization-based operations such as binary neural networks [33].…”
Section: Related Work
confidence: 99%
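The depth-wise convolutions mentioned above filter each input channel independently instead of mixing all channels, which is what cuts the multiply-accumulate count for FPGA implementations. A minimal sketch of the idea (naive, valid padding, stride 1; all names are illustrative, not from any cited accelerator):

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Naive depth-wise convolution (valid padding, stride 1).

    x:       input feature map, shape (C, H, W)
    kernels: one k x k filter per channel, shape (C, k, k)

    Each channel is filtered independently, so the MAC count per
    output pixel drops from C_in * C_out * k * k (standard conv)
    to C * k * k -- the property that makes depth-wise layers
    attractive on resource-constrained FPGAs.
    """
    C, H, W = x.shape
    _, k, _ = kernels.shape
    out = np.zeros((C, H - k + 1, W - k + 1))
    for c in range(C):                       # one filter per channel
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(x[c, i:i+k, j:j+k] * kernels[c])
    return out
```

On hardware this per-channel independence also simplifies the dataflow: each channel's pipeline needs only its own k x k weights and line buffer.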
“…Porting a neural network to hardware accelerators has been performed for various architectures [27]. This is especially common for FPGAs [28], [29], and used for example in hand tracking [30] or language processing [31]. Special attention is also on adapting key principles in neural network archi- tectures, such as depth-wise convolutions for FPGAs [32] or quantized-based operations, such as binary neural networks [33].…”
Section: Related Workmentioning
confidence: 99%
“…The main difference between traditional classification networks and segmentation networks is the deconvolution layer, so these studies focused on accelerating deconvolution, presenting hardware architectures for padding-free or zero-padding deconvolution. Liu et al. [20] and FCN-engine [23] accelerated the padding-free deconvolution of an 8-bit network on an FPGA and an application-specific integrated circuit (ASIC), respectively. Liu et al. processed 256 × 256 images at 1.79 FPS/W.…”
Section: Related Work
confidence: 99%
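The zero-padding deconvolution contrasted above can be sketched concretely: a stride-s transposed convolution is equivalent to inserting s-1 zeros between input pixels and then running an ordinary convolution. The inserted zeros are exactly the wasted MACs that padding-free dataflows avoid. A minimal reference implementation (illustrative, not any cited design):

```python
import numpy as np

def deconv2d_zero_insert(x, kernel, stride=2):
    """Deconvolution (transposed convolution) via zero insertion.

    Zeros are inserted between input pixels ("dilating" the input),
    then a plain convolution is applied with full padding. Multiplying
    by these zeros does no useful work, which is why padding-free
    architectures restructure the computation instead.
    """
    H, W = x.shape
    k = kernel.shape[0]
    # Insert stride-1 zeros between neighboring input pixels.
    up = np.zeros(((H - 1) * stride + 1, (W - 1) * stride + 1))
    up[::stride, ::stride] = x
    # Full padding so every input pixel contributes a k x k patch.
    up = np.pad(up, k - 1)
    Ho, Wo = up.shape[0] - k + 1, up.shape[1] - k + 1
    out = np.zeros((Ho, Wo))
    for i in range(Ho):
        for j in range(Wo):
            out[i, j] = np.sum(up[i:i+k, j:j+k] * kernel)
    return out
```

For a 2 × 2 input, 3 × 3 kernel, and stride 2, this upsamples to a 5 × 5 output, but most window positions touch mostly zeros.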
“…The resulting power consumption and latency are expected to be large, but existing studies have not considered this [21]-[23], [37], [39], [41]. Liu et al. [20] and Song et al. [38] store all feature maps in on-chip memory. This was possible because the former significantly reduced the image and network channel sizes, while the latter targeted a GAN whose input image and channel sizes are small.…”
Section: Related Work
confidence: 99%
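Why storing all feature maps on chip only works for small images and channel counts can be seen with back-of-envelope arithmetic. The byte count below is exact; the BRAM capacity used for comparison is an assumed, illustrative figure, not taken from any cited device:

```python
def feature_map_bytes(height, width, channels, bits=8):
    """Bytes needed to buffer one feature-map tensor on chip."""
    return height * width * channels * bits // 8

# A single 256 x 256 map with 64 channels at 8 bits already needs
# 4 MiB, comparable to the *total* BRAM of many mid-range FPGAs
# (assumed here to be on the order of a few MiB), before counting
# weights or the other layers' activations.
print(feature_map_bytes(256, 256, 64))  # 4194304 bytes = 4 MiB
```

Halving the spatial resolution or the channel count cuts this linearly in each factor, which is why the cited designs shrink one or the other to fit.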