2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280335

Automatic model redundancy reduction for fast back-propagation for deep neural networks in speech recognition

Abstract: Although deep neural networks (DNNs) have achieved great performance gains, the immense computational cost of DNN model training has become a major obstacle to utilizing massive speech data for DNN training. Previous research on DNN training acceleration mostly focused on hardware-based parallelization. In this paper, node pruning and arc restructuring are proposed to explore model redundancy after a novel lightly discriminative pretraining process. With some measures of node/arc importance, model redundancies are…
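The node-pruning idea in the abstract can be illustrated with a short sketch. The paper's code is not shown here, and its actual node-importance measure is not specified in this excerpt; the measure below (mean absolute activation over a held-out batch) and the function name `prune_nodes` are assumptions for illustration only.

```python
import numpy as np

def prune_nodes(W_in, b, W_out, activations, keep_ratio=0.5):
    """Drop the least-important hidden nodes of one DNN layer.

    Importance here is the mean absolute activation of each node
    over a batch of data -- one plausible choice, not necessarily
    the measure used in the paper.
    """
    importance = np.abs(activations).mean(axis=0)      # one score per node
    n_keep = max(1, int(keep_ratio * importance.size))
    keep = np.argsort(importance)[-n_keep:]            # indices of survivors
    # Remove each pruned node's incoming weights/bias and outgoing weights.
    return W_in[:, keep], b[keep], W_out[keep, :]

# Toy usage: a 4-node hidden layer pruned to 2 nodes.
rng = np.random.default_rng(0)
W_in, b = rng.normal(size=(8, 4)), rng.normal(size=4)
W_out = rng.normal(size=(4, 3))
acts = rng.normal(size=(32, 4))                        # activations on a batch
W_in2, b2, W_out2 = prune_nodes(W_in, b, W_out, acts)
print(W_in2.shape, b2.shape, W_out2.shape)             # (8, 2) (2,) (2, 3)
```

Pruning whole nodes (rather than individual weights) keeps the weight matrices dense but smaller, which is what makes the subsequent forward and backward passes cheaper on standard hardware.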

Cited by 6 publications (2 citation statements). References 14 publications.
“…Existing research on the mechanical characteristics and spectrum of windings is mostly based on steady-state vibration signals. However, vibration sensors in transformers are often difficult to install in substations, and the acoustic signals transmitted through the oil tank from the internal vibrations of transformers are easier to monitor and can contain more comprehensive information than vibration signals [18, 19].…”
Section: Introduction
Confidence: 99%
“…Yu et al [4] incorporated a soft regularization during training to minimize the number of nonzero elements of weight matrices. He et al [5] and Qian et al [6] proposed a method based on node pruning and arc restructuring to prune DNNs for fast inference. Han et al [7] explored an iterative process to prune unimportant connections of DNNs, which reduced the storage and computation by an order of magnitude.…”
Section: Introduction
Confidence: 99%
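The iterative connection pruning attributed to Han et al. [7] in the statement above can be sketched as magnitude thresholding interleaved with retraining. The threshold rule, sparsity schedule, and the name `magnitude_prune` below are illustrative assumptions, not the authors' published code.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out (approximately) the smallest-magnitude fraction
    `sparsity` of the weights, returning the pruned matrix and a
    boolean mask of surviving connections."""
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return W, np.ones_like(W, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(W) > threshold
    return W * mask, mask

# Iterative schedule: prune a little, retrain, repeat with higher sparsity.
W = np.random.default_rng(1).normal(size=(256, 256))
mask = np.ones_like(W, dtype=bool)
for sparsity in (0.5, 0.75, 0.9):
    W, mask = magnitude_prune(W, sparsity)
    # ... retrain here with `mask` applied after every gradient step,
    # e.g. W = (W - lr * grad) * mask, so pruned connections stay zero.
    print(f"sparsity target {sparsity:.2f}: kept {mask.mean():.2%} of weights")
```

Unlike the node pruning sketched earlier, this prunes individual connections, so the resulting matrices are sparse rather than smaller; the storage and compute savings reported in [7] come from exploiting that sparsity.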