2021
DOI: 10.1007/s11265-021-01642-6
FPGA-Based Inter-layer Pipelined Accelerators for Filter-Wise Weight-Balanced Sparse Fully Convolutional Networks with Overlapped Tiling

Abstract: Convolutional neural networks (CNNs) exhibit state-of-the-art performance on computer-vision tasks. CNNs require high-speed, low-power, and high-accuracy hardware for various scenarios, such as edge environments. However, the number of weights is so large that embedded systems cannot store them in their limited on-chip memory. An alternative is to shrink the input image for real-time processing, but this causes a considerable drop in accuracy. Although pruned sparse CNNs and …

Cited by 6 publications (5 citation statements)
References 39 publications
“…Image classification, object detection, and semantic segmentation are just a few of the computer-vision applications where convolutional neural networks (CNNs) [8,9] have shown remarkable performance. Convolutional layers, pooling layers, and fully connected layers are the standard building blocks of a typical CNN [10].…”
Section: CNNs and FPGA-Based Pipelining
confidence: 99%
“…In [64], [262], deconvolution is replaced by bi-linear interpolation. The authors use a tiled approach: they divide the input image into smaller tiles and perform segmentation on each of them.…”
Section: Semantic Segmentation
confidence: 99%
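The tiling-plus-bilinear-interpolation scheme described in this citing statement can be sketched as follows. This is a minimal NumPy illustration, not the cited papers' implementation: the tile size, overlap width, upsampling factor, and the `segment_fn` placeholder are all assumptions made for the example.

```python
import numpy as np

def bilinear_upsample(x, factor):
    """Upsample a 2-D array by `factor` using bilinear interpolation
    (a cheap stand-in for a deconvolution layer)."""
    h, w = x.shape
    out_h, out_w = h * factor, w * factor
    # Sample coordinates in the input grid (align-corners style for simplicity).
    rows = np.linspace(0, h - 1, out_h)
    cols = np.linspace(0, w - 1, out_w)
    r0 = np.floor(rows).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(cols).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    fr = (rows - r0)[:, None]   # fractional row offsets
    fc = (cols - c0)[None, :]   # fractional column offsets
    top = x[np.ix_(r0, c0)] * (1 - fc) + x[np.ix_(r0, c1)] * fc
    bot = x[np.ix_(r1, c0)] * (1 - fc) + x[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr

def segment_tiled(image, tile, overlap, segment_fn):
    """Run `segment_fn` on overlapping tiles and stitch only each tile's
    central region, so the halo of `overlap` pixels supplies context
    without appearing in the output."""
    h, w = image.shape
    step = tile - 2 * overlap
    out = np.zeros_like(image, dtype=float)
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = max(y - overlap, 0), max(x - overlap, 0)
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            pred = segment_fn(image[y0:y1, x0:x1])
            # Keep only the valid (non-halo) centre of each tile.
            out[y:min(y + step, h), x:min(x + step, w)] = \
                pred[y - y0:y - y0 + min(step, h - y),
                     x - x0:x - x0 + min(step, w - x)]
    return out
```

The overlap means neighbouring tiles share a halo of pixels, which is what keeps predictions near tile borders from degrading when the full image cannot fit in on-chip memory at once.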
“…An FCN differs from a CNN in the last layer, where the fully connected layer is replaced by a convolutional layer. Some techniques used in [64], [262] are also applied, including filter-wise pruning, bi-linear interpolation, and handling of coordinate-memory overhead. In this case, the evaluation is on the CamVid dataset.…”
Section: Semantic Segmentation
confidence: 99%
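The filter-wise pruning mentioned here, in the weight-balanced spirit of the paper's title, can be sketched as pruning each filter independently so every filter retains the same number of nonzero weights. The magnitude criterion and the `keep_ratio` value below are assumptions for illustration, not details from the cited works.

```python
import numpy as np

def prune_filter_wise(weights, keep_ratio=0.25):
    """Keep the largest-magnitude `keep_ratio` fraction of weights in EACH filter.

    `weights` has shape (num_filters, fan_in). Pruning per filter yields the
    same nonzero count in every filter (a balanced sparse pattern), which
    simplifies load balancing across parallel hardware processing elements.
    Returns the pruned weights and the boolean keep-mask.
    """
    num_filters, fan_in = weights.shape
    keep = max(1, int(round(fan_in * keep_ratio)))
    mask = np.zeros_like(weights, dtype=bool)
    for f in range(num_filters):
        top = np.argsort(np.abs(weights[f]))[-keep:]  # indices of largest magnitudes
        mask[f, top] = True
    return weights * mask, mask
```

Because every filter ends up with an identical nonzero count, each processing element in a pipelined accelerator receives the same amount of work per output channel, avoiding the stragglers that unstructured sparsity produces.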
“…All models can reach the same or similar MAE as their non-pruned full-model counterparts. This suggests that our proposed pruning technique is a viable approach to reducing model parameters under hardware resource constraints or to decreasing hardware processing cycles [11,12].…”
Section: Neural Network Model Pruning
confidence: 99%