2020 28th European Signal Processing Conference (EUSIPCO), 2021
DOI: 10.23919/eusipco47968.2020.9287739
Memory Requirement Reduction of Deep Neural Networks for Field Programmable Gate Arrays Using Low-Bit Quantization of Parameters

Cited by 4 publications (2 citation statements). References 13 publications.
“…The reduction in the size of the speech separation model was considered in [76]. The authors proposed a low-bit quantization method based on nonuniform and dynamic quantization (where the quantization parameters are adjusted according to the data).…”
Section: Techniques for the Reduction in Computational and Memory Req… (mentioning)
confidence: 99%
“…Quantization can also lead to a reduction in model size of more than 50%. In some cases (for example, [76]), this comes at the cost of a small reduction in STOI (by 2.7%). It is also possible to speed up the quantized model when integer computations are appropriately used.…”
Section: Techniques for the Reduction in Computational and Memory Req… (mentioning)
confidence: 99%
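The idea behind the nonuniform, data-dependent quantization the citing works describe can be sketched as follows. This is a minimal illustration, not the cited authors' exact method: here the quantization levels are chosen from the empirical quantiles of the weights, so densely populated regions of the weight distribution receive finer resolution than a uniform grid would give. All function names are hypothetical.

```python
# Minimal sketch of dynamic (data-dependent) nonuniform quantization.
# Levels are drawn from the quantiles of the weight distribution, so the
# codebook adapts to each layer's data; each weight is then stored as a
# low-bit index into that codebook.

def nonuniform_levels(weights, n_bits=3):
    """Pick 2**n_bits representative levels from the sorted weights."""
    n_levels = 2 ** n_bits
    w = sorted(weights)
    # Sample levels at evenly spaced quantiles of the distribution.
    return [w[round(q * (len(w) - 1) / (n_levels - 1))] for q in range(n_levels)]

def quantize(weights, levels):
    """Replace each weight with the index of its nearest level (low-bit code)."""
    return [min(range(len(levels)), key=lambda i: abs(levels[i] - x))
            for x in weights]

def dequantize(codes, levels):
    """Recover an approximation of the weights from codes and the codebook."""
    return [levels[c] for c in codes]

weights = [-0.9, -0.1, -0.05, 0.0, 0.02, 0.04, 0.3, 0.95]
levels = nonuniform_levels(weights, n_bits=2)  # 4 data-dependent levels
codes = quantize(weights, levels)              # each code fits in 2 bits
recon = dequantize(codes, levels)
```

Storing 2-bit codes in place of 32-bit floats shrinks the weight memory by roughly 16x (plus a small per-layer codebook), which is consistent with the model-size reductions of more than 50% mentioned above; on FPGAs, the integer codes also map naturally to the integer arithmetic that enables the speed-ups the excerpt refers to.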