2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6853595
Reshaping deep neural network for fast decoding by node-pruning

Cited by 99 publications (84 citation statements)
References 10 publications
“…Automatic node and arc level restructuring algorithms are given. Different from other model restructuring works [8], [9], [10], which performed the model rebuilding at the end of neural network training, we find ways to restructure the DNN before BP to achieve significant accelerations in training speed.…”
Section: III. Automatic Model Redundancy Reduction (mentioning)
confidence: 99%
“…Different from some earlier works of node restructuring [13], [14], which were applied to small scale shallow neural networks with the motivation of alleviating the over-fitting problem, our previous work [10] investigated how node restructuring can help reduce the complexity of a large scale DNN. Several manual node pruning approaches were performed on well trained DNNs to accelerate the decoding.…”
Section: B. Node Restructuring (mentioning)
confidence: 99%
“…Optimizations like those found in [34] could also be applied to a network obtained through this approach for further speed-up. Note that unlike weight matrix decomposition and node pruning [40] approaches, soft target training is considerably more flexible because the input feature type and NN architecture are not restricted to be the same as the teacher. Furthermore, an ensemble of networks could be used to relabel inputs for the small network.…”
Section: Training a Small DNN with Soft Targets from a Large DNN (mentioning)
confidence: 99%
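The soft-target training mentioned in this excerpt can be sketched as follows: the large (teacher) DNN relabels training inputs with its output posteriors, and the small (student) DNN is trained to match them. The temperature T and the cross-entropy form below are common distillation choices assumed here for illustration, not details taken from the cited work.

import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the student's outputs and the teacher's
    # softened posteriors, averaged over the mini-batch.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()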