2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00127
Low-Bit Quantization and Quantization-Aware Training for Small-Footprint Keyword Spotting

Cited by 19 publications (9 citation statements)
References 14 publications
“…For wakeword detection, the DNN posteriors are smoothed across the time dimension using a running-average sliding window of 25 frames. In Alexa wakeword model training settings, these models provide strong baseline performance for small-footprint keyword spotting [10].…”
Section: Model Architecture and Training Parameters
confidence: 99%
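To make the smoothing step concrete, the following is a minimal sketch of a causal running-average window over per-frame posteriors; the function name and the single-score input format are illustrative assumptions, not taken from the cited paper:

    import numpy as np

    def smooth_posteriors(posteriors: np.ndarray, window: int = 25) -> np.ndarray:
        # Causal running-average smoothing of per-frame keyword posteriors,
        # mirroring the 25-frame sliding window described in the excerpt above.
        smoothed = np.empty_like(posteriors, dtype=np.float64)
        for t in range(len(posteriors)):
            start = max(0, t - window + 1)        # average over the last `window` frames
            smoothed[t] = posteriors[start:t + 1].mean()
        return smoothed

    # A detector could then fire when the smoothed score crosses a threshold,
    # e.g.: detections = smooth_posteriors(frame_scores) > 0.8 (threshold is hypothetical)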
“…Several representative AC techniques, such as approximate arithmetic units [5]-[11], voltage over-scaling [12], [13], and bit-width scaling (quantization) [13]-[17], can be applied either individually or in combination to gain efficiency. However, most of them [5], [11]-[14] focus on inference, whereas [8]-[10], [15]-[17] target training. Moreover, AC techniques are conventionally applied to forward propagation only, while back propagation relies on exact computation [5], [11]-[14] or requires additional training stages [5], [11]. Although inference and forward propagation enjoy AC benefits, improving the efficiency of back propagation during training has been less explored despite its importance.…”
Section: Introduction
confidence: 99%
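As a concrete illustration of the bit-width scaling (quantization) technique mentioned above, here is a minimal fake-quantization sketch of the kind commonly used in quantization-aware training; the function name, symmetric scaling scheme, and 4-bit default are illustrative assumptions rather than the cited papers' exact method:

    import numpy as np

    def fake_quantize(x: np.ndarray, num_bits: int = 4) -> np.ndarray:
        # Uniform symmetric "fake" quantization: values are rounded to a
        # low-bit integer grid but kept in floating point, as is typical
        # in quantization-aware training.
        qmax = 2 ** (num_bits - 1) - 1            # e.g. 7 for 4-bit signed
        scale = np.max(np.abs(x)) / qmax
        if scale == 0:                            # all-zero tensor: nothing to scale
            return x.copy()
        q = np.clip(np.round(x / scale), -qmax - 1, qmax)
        return q * scale                          # dequantized low-precision values

In QAT, such a fake-quantized forward pass is typically paired with a straight-through estimator in the backward pass, so exact floating-point gradients update the underlying full-precision weights, consistent with the excerpt's observation that back propagation conventionally relies on exact computation.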