2019
DOI: 10.1007/978-3-030-38364-0_22

A Channel-Pruned and Weight-Binarized Convolutional Neural Network for Keyword Spotting

Abstract: We study channel number reduction in combination with weight binarization (1-bit weight precision) to trim a convolutional neural network for a keyword spotting (classification) task. We adopt a group-wise splitting method based on the group Lasso penalty to achieve over 50% channel sparsity while keeping the network within 0.25% of its original accuracy. We show an effective three-stage procedure to balance accuracy and sparsity in network training.
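
The abstract names two compression techniques: a group Lasso penalty that drives whole convolutional channels to zero, and 1-bit weight binarization. The sketch below is a minimal PyTorch illustration of both ideas under stated assumptions (the straight-through gradient estimator, mean-absolute-value scaling for binarization, and a placeholder task loss); it does not reproduce the paper's exact three-stage procedure or its group-wise splitting algorithm.

```python
import torch
import torch.nn as nn

# Group Lasso penalty over the output channels of a conv layer.
# Each filter (output channel) is one group; summing per-channel L2 norms
# (an L2,1 norm) pushes entire channels toward zero, enabling channel pruning.
def group_lasso_penalty(conv: nn.Conv2d) -> torch.Tensor:
    w = conv.weight                             # (out_ch, in_ch, kH, kW)
    per_channel_l2 = w.flatten(1).norm(dim=1)   # one L2 norm per output channel
    return per_channel_l2.sum()

# Assumed binarization in the BinaryConnect style: binarize weights in the
# forward pass, pass gradients straight through to the latent float weights.
class BinarizeWeight(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return w.sign() * w.abs().mean()        # 1-bit weights with a scale factor

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                      # straight-through estimator

# Training-step fragment: task loss plus a weighted group Lasso term.
conv = nn.Conv2d(1, 64, kernel_size=3)
lam = 1e-4                                      # sparsity/accuracy trade-off knob
x = torch.randn(8, 1, 40, 101)                  # e.g. a batch of MFCC feature maps
out = nn.functional.conv2d(
    x, BinarizeWeight.apply(conv.weight), conv.bias, padding=1)
loss = out.mean() + lam * group_lasso_penalty(conv)  # placeholder task loss
loss.backward()
```

Because the penalty is a sum of per-filter norms rather than an element-wise L1, the optimizer can zero out whole filters; once a channel's norm collapses, the filter and its downstream activations can be removed outright, which is what yields the reported channel sparsity.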

Cited by 2 publications (1 citation statement). References 12 publications.
“…4d), with 256 filters (f) and a kernel size of 3 (k). A one-dimensional max pooling layer with pooling size of 3 (p) was evaluated between the two convolutional layers in order to help remove variability in the time-frequency domain that exists due to speech variability within each recording [36]. Next, four FC hidden layers with reducing dimensionality (1024-512-256-128 units) were used and followed by the output layer.…”
Section: Convolutional Neural Network (CNN). Citation type: mentioning. Confidence: 99%.
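
The cited description is concrete enough to sketch. Below is a hedged PyTorch rendering of that citing paper's 1D CNN: two convolutional layers with 256 filters and kernel size 3, a max-pooling layer of size 3 between them, and four FC hidden layers of 1024-512-256-128 units before the output layer. The input feature size, class count, and the pooling used to reach a fixed-size FC input are assumptions, not taken from the quote.

```python
import torch
import torch.nn as nn

# Sketch of the CNN the citing paper describes: two 1D conv layers with
# 256 filters (f) and kernel size 3 (k), a max pool of size 3 (p) between
# them, then FC layers of 1024-512-256-128 units and an output layer.
# in_channels and num_classes below are assumptions.
class CitedCNN(nn.Module):
    def __init__(self, in_channels: int = 40, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 256, kernel_size=3), nn.ReLU(),
            nn.MaxPool1d(3),                  # pooling size of 3 (p)
            nn.Conv1d(256, 256, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # assumption: collapse time axis before FC
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),      # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: batch of 8 spectrogram-like inputs, 40 features by 101 frames.
model = CitedCNN()
print(model(torch.randn(8, 40, 101)).shape)   # torch.Size([8, 10])
```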