2018
DOI: 10.1145/3242900

Optimizing CNN-based Segmentation with Deeply Customized Convolutional and Deconvolutional Architectures on FPGA

Abstract: Convolutional Neural Network (CNN)-based algorithms have been successful in solving image recognition problems, showing very large accuracy improvements. In recent years, deconvolution layers have been widely used as key components in state-of-the-art CNNs for end-to-end training and in models that support tasks such as image segmentation and super-resolution. However, deconvolution algorithms are computationally intensive, which limits their applicability to real-time applications. Particularly, there has been l…
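As a rough illustration of the deconvolution workload the abstract refers to, the following C++ sketch implements a single-channel transposed convolution; the function name, the single-channel simplification, and the no-padding output-size formula are assumptions for illustration, not details taken from the paper.

#include <vector>
#include <cstddef>

// Minimal single-channel transposed convolution (deconvolution) sketch.
// Each input pixel is multiplied by the whole K x K kernel and accumulated
// into a stride-spaced window of the output, so the MAC count grows with
// h_in * w_in * k * k -- the same order as a convolution over the upsampled map.
std::vector<float> deconv2d(const std::vector<float>& in, int h_in, int w_in,
                            const std::vector<float>& kernel, int k, int stride) {
    const int h_out = (h_in - 1) * stride + k;   // output height, no padding assumed
    const int w_out = (w_in - 1) * stride + k;   // output width, no padding assumed
    std::vector<float> out(static_cast<size_t>(h_out) * w_out, 0.0f);

    for (int y = 0; y < h_in; ++y)
        for (int x = 0; x < w_in; ++x)
            for (int ky = 0; ky < k; ++ky)
                for (int kx = 0; kx < k; ++kx)
                    // Scatter-accumulate: overlapping output windows force
                    // read-modify-write, which complicates a streaming FPGA dataflow.
                    out[(y * stride + ky) * w_out + (x * stride + kx)] +=
                        in[y * w_in + x] * kernel[ky * k + kx];
    return out;
}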

Cited by 61 publications (47 citation statements). References 25 publications.
“…The popular DNN models for semantic segmentation include SegNet, U-Net, DeepLab, FCN, etc. These models are much more computationally intensive than the classification network models such as LeNet, AlexNet and GoogleNet [6].…”
Section: A. DNN-Based RSI Segmentation
confidence: 99%
“…The most relevant work to this paper is presented in [6], which optimized the computation of both Conv and Deconv layers for image segmentation. [6] proposed an efficient method to deal with the computational inefficiency caused by deconvolution and used a design space exploration methodology to achieve the optimal resource allocation between Conv and Deconv modules. However, their design used separate modules for Conv and Deconv, which did not share DSPs for the multipliers and led to under-utilization of resources.…”
Section: B. Related Work
confidence: 99%
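To make the DSP-sharing point concrete, the sketch below (an illustrative assumption about a unified datapath, not the design of either cited work) shows that the innermost loops of convolution and deconvolution reduce to the same multiply-accumulate pattern and differ only in output addressing; the names mac_window and Mode are hypothetical.

#include <vector>

// Illustrative unified MAC loop: both Conv and Deconv innermost loops compute
// out += pixel * weight. Only the addressing (gather for Conv, scatter for
// Deconv) differs, so one pool of DSP multipliers could serve both layer types
// if the address generation is made mode-selectable.
enum class Mode { Conv, Deconv };

void mac_window(Mode mode, const std::vector<float>& in, int w_in,
                const std::vector<float>& kernel, int k, int stride,
                int y, int x, std::vector<float>& out, int w_out) {
    for (int ky = 0; ky < k; ++ky) {
        for (int kx = 0; kx < k; ++kx) {
            const float w = kernel[ky * k + kx];
            if (mode == Mode::Conv) {
                // Gather: accumulate a K x K input window into one output pixel.
                out[y * w_out + x] += in[(y + ky) * w_in + (x + kx)] * w;
            } else {
                // Scatter: spread one input pixel across a strided output window.
                out[(y * stride + ky) * w_out + (x * stride + kx)] +=
                    in[y * w_in + x] * w;
            }
        }
    }
}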
“…Data Quantization. The main benefit of accelerating CNN models on FPGAs comes from the fact that CNNs are robust to low-bitwidth quantization [11]. Instead of the default double- or single-precision floating point used on CPUs, fixed-point precision can be used in an FPGA-based CNN accelerator to achieve an efficient design optimized for performance and power efficiency [9,10].…”
Section: Optimizations
confidence: 99%
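As a hedged illustration of the fixed-point idea described in the statement above, the snippet below converts floating-point weights to signed 8-bit fixed-point values with a chosen number of fractional bits; the 8-bit width, the Q-format choice, and the helper names are assumptions for illustration, not the quantization scheme used in the cited accelerators.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Quantize a floating-point value to a signed 8-bit fixed-point number with
// frac_bits fractional bits (Q-format), saturating at the representable range.
int8_t to_fixed8(float x, int frac_bits) {
    const float scaled = std::round(x * static_cast<float>(1 << frac_bits));
    return static_cast<int8_t>(std::clamp(scaled, -128.0f, 127.0f));
}

// Recover an approximate floating-point value, e.g. to measure quantization error.
float from_fixed8(int8_t q, int frac_bits) {
    return static_cast<float>(q) / static_cast<float>(1 << frac_bits);
}

int main() {
    // Example: weights in [-1, 1) fit an 8-bit format with 6 fractional bits.
    const std::vector<float> weights = {0.731f, -0.052f, 0.009f, -0.998f};
    const int frac_bits = 6;
    for (float w : weights) {
        const int8_t q = to_fixed8(w, frac_bits);
        // On an FPGA these narrow values feed DSP multipliers directly,
        // trading a small accuracy loss for lower resource and power cost.
        const float back = from_fixed8(q, frac_bits);
        (void)back;  // a real flow would accumulate quantization-error statistics here
    }
    return 0;
}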