2020 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip40778.2020.9191028
An Efficient Accelerator Design Methodology For Deformable Convolutional Networks

Abstract: Deformable convolutional networks have demonstrated outstanding performance in object recognition tasks through effective feature extraction. Unlike standard convolution, deformable convolution decides the receptive field size using dynamically generated offsets, which leads to irregular memory accesses. In particular, the memory access pattern varies both spatially and temporally, making static optimization ineffective. Thus, a naive implementation would lead to an excessive memory footprint. In this paper,…
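The offset mechanism the abstract describes can be sketched in a few lines. The snippet below is a minimal NumPy illustration (not the paper's implementation) of one output value of a 3x3 deformable convolution: each kernel tap samples the input at a regular grid point shifted by a dynamically generated fractional offset, which is what makes the memory access pattern data-dependent. The function names and shapes here are illustrative assumptions.

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Bilinear interpolation at a fractional location (y, x).

    Reads the four integer-grid neighbours of (y, x); because (y, x)
    depends on learned offsets, these four reads are the source of the
    irregular memory access pattern the abstract describes.
    """
    h, w = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    dy, dx = y - np.floor(y), x - np.floor(x)
    return ((1 - dy) * (1 - dx) * fmap[y0, x0] +
            (1 - dy) * dx * fmap[y0, x1] +
            dy * (1 - dx) * fmap[y1, x0] +
            dy * dx * fmap[y1, x1])

def deformable_conv_point(fmap, weights, offsets, cy, cx):
    """One output value of a 3x3 deformable convolution at (cy, cx).

    `offsets` has shape (3, 3, 2): a learned (dy, dx) per kernel tap,
    produced at run time by a separate offset branch. With all offsets
    zero this reduces to a standard 3x3 convolution.
    """
    out = 0.0
    for ky in range(3):
        for kx in range(3):
            dy, dx = offsets[ky, kx]
            # Sampling location = regular grid point + dynamic offset.
            y = cy + (ky - 1) + dy
            x = cx + (kx - 1) + dx
            out += weights[ky, kx] * bilinear_sample(fmap, y, x)
    return out
```

With zero offsets and a one-hot center weight, the output is simply the input pixel under the kernel center; a uniform offset of (0.5, 0.5) instead averages the four neighbours, showing how fractional offsets trigger multi-location reads.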

Cited by 12 publications (1 citation statement)
References 23 publications (23 reference statements)
“…The authors in [17] proposed to replace the bilinear interpolation algorithm with a simple rounding strategy and to restrict the sampling locations, avoiding the buffering problems induced by dynamic memory access. Similarly, the authors in [18] proposed to modify the DCN models to reduce the receptive field size substantially, so that the sampling locations are limited to a small region and dynamic memory accesses across the whole input feature map are avoided. Although these approaches are demonstrated to be effective on existing neural network accelerators with minor model accuracy loss, they essentially impose hardware constraints on the model design and particularly limit their use in scenarios that are sensitive to model accuracy loss.…”
Section: Introduction
confidence: 99%