2018 International Conference on Field-Programmable Technology (FPT)
DOI: 10.1109/fpt.2018.00014
A Real-Time Object Detection Accelerator with Compressed SSDLite on FPGA

Cited by 55 publications (33 citation statements)
References 20 publications
“…Block-wise pruning [5,22,23] prunes weights such that the number of non-zero weights in each block is equal. Channel pruning [10,16,31] prunes a certain filter that meets a condition. Finally, kernel-level and vector-level pruning [33] prunes kernels and vectors in kernels.…”
Section: Sparseness Approach For Weight Memory Reduction
confidence: 99%
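The block-wise pruning mentioned in this statement keeps an equal number of non-zero weights in every block, which is what makes the resulting sparsity pattern hardware-friendly. A minimal NumPy sketch of that idea follows; the block size and per-block budget are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np

def blockwise_prune(weights, block_size=8, nonzeros_per_block=2):
    """Zero out weights so every block keeps the same number of non-zeros.

    block_size and nonzeros_per_block are illustrative choices, not
    parameters taken from the cited pruning papers.
    """
    flat = weights.flatten()
    pad = (-len(flat)) % block_size          # pad so blocks divide evenly
    blocks = np.concatenate([flat, np.zeros(pad)]).reshape(-1, block_size)

    mask = np.zeros_like(blocks)
    for i, block in enumerate(blocks):
        # Keep only the largest-magnitude entries in each block.
        keep = np.argsort(np.abs(block))[-nonzeros_per_block:]
        mask[i, keep] = 1.0

    return (blocks * mask).reshape(-1)[:len(flat)].reshape(weights.shape)
```

Channel pruning, by contrast, removes entire filters at once, while kernel- and vector-level pruning operate at granularities in between.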
“…The base network, feature extractor is a truncated classification network of VGG-16. The bounding box predictor is a combination of small convolutional filters used to predict the score, category and box offsets for a fixed set of default bounding boxes [23].…”
Section: Features With Algorithm 221 Object Recognition
confidence: 99%
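The quoted description of SSD's predictor, small convolutional filters producing class scores and box offsets for a fixed set of default boxes, can be illustrated with a short PyTorch-style sketch. The channel count, number of default boxes per location, and class count below are made-up examples for illustration, not the actual SSD/SSDLite configuration.

```python
import torch
import torch.nn as nn

# Illustrative numbers only: 512-channel feature map, 4 default boxes per
# location, 21 classes. The real SSD/SSDLite heads use different values.
in_channels, boxes_per_loc, num_classes = 512, 4, 21

# A 3x3 convolution predicts class scores for every default box ...
cls_head = nn.Conv2d(in_channels, boxes_per_loc * num_classes,
                     kernel_size=3, padding=1)
# ... and a second 3x3 convolution predicts the 4 box offsets per default box.
loc_head = nn.Conv2d(in_channels, boxes_per_loc * 4,
                     kernel_size=3, padding=1)

features = torch.randn(1, in_channels, 19, 19)  # one feature map from the base net
scores = cls_head(features)    # shape: (1, boxes_per_loc * num_classes, 19, 19)
offsets = loc_head(features)   # shape: (1, boxes_per_loc * 4, 19, 19)
```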
“…SSD and Yolo are characteristic for their irregularities, which results in the output being produced at different times, while the ResNet is known for its residual blocks. Each network was trained in 32-bit floating-point representation and then linearly quantised into 8-bit integer representation [4]. In total giving P training samples X as 156 and the input feature size M being 15 corresponding to the first 15 parameters in the Table 1.…”
Section: Dataset
confidence: 99%
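The linear quantization from 32-bit floating point to 8-bit integers mentioned in this statement can be sketched as a symmetric per-tensor scheme like the one below. Scaling by the tensor's maximum magnitude is an assumption for illustration; the quoted text does not specify the calibration used.

```python
import numpy as np

def linear_quantize_int8(x):
    """Symmetric linear quantization of a float32 tensor to int8.

    Scaling by the tensor's maximum magnitude is an illustrative choice;
    the cited work does not state its exact calibration.
    """
    scale = max(np.max(np.abs(x)), 1e-12) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Map int8 codes back to approximate float values.
    return q.astype(np.float32) * scale

# Quick check on random "weights": the reconstruction error stays below scale/2.
w = np.random.randn(256).astype(np.float32)
q, s = linear_quantize_int8(w)
print(np.max(np.abs(w - dequantize(q, s))))
```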
“…Field-programmable gate arrays (FPGAs) are becoming increasingly popular in the deep learning community, particularly in the acceleration of convolutional neural networks (CNNs) [4,11,5]. This acceleration is achieved by parallelising the extensive concurrency exhibited by CNNs.…”
Section: Introduction
confidence: 99%