2022
DOI: 10.1109/taslp.2022.3153253

Neural Architecture Search for LF-MMI Trained Time Delay Neural Networks

Cited by 15 publications (3 citation statements)
References 67 publications

“…Hu et al. [86] applied neural architecture search (NAS) to automatically learn the two hyper-parameters of factored time-delay neural networks (TDNN-Fs), namely the left and right splicing context offsets and the dimensionality of the bottleneck linear projection at each hidden layer. These techniques included differentiable architecture search (DARTS) to integrate architecture learning with lattice-free MMI training; Gumbel-Softmax and Pipelined DARTS to reduce the confusion over candidate architectures and improve the generalization of architecture selection; and penalized DARTS to incorporate resource constraints and balance the trade-off between system performance and complexity.…”
Section: Acoustic Model of ASR for Dysarthric Speech (mentioning)
confidence: 99%
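
The statement above compresses the paper's pipeline into one sentence: relax the discrete choice among candidate TDNN-F configurations with DARTS, sharpen the selection with Gumbel-Softmax, and penalize complexity. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes PyTorch, searches only over hypothetical candidate bottleneck dimensionalities of a single factored layer, and uses a placeholder task loss where LF-MMI training would sit; all names (GumbelBottleneckSearch, candidate_dims, the penalty weight) are illustrative assumptions.

```python
# Illustrative Gumbel-Softmax DARTS-style search over TDNN-F bottleneck
# dimensionalities (a sketch under assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelBottleneckSearch(nn.Module):
    def __init__(self, in_dim, out_dim, candidate_dims=(32, 64, 128, 256)):
        super().__init__()
        # One factored (linear-bottleneck) branch per candidate dimensionality.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, d, bias=False), nn.Linear(d, out_dim))
            for d in candidate_dims
        )
        # Learnable architecture logits, one per candidate.
        self.arch_logits = nn.Parameter(torch.zeros(len(candidate_dims)))
        # Parameter count of each branch, used by the complexity penalty.
        self.register_buffer(
            "costs",
            torch.tensor(
                [sum(p.numel() for p in b.parameters()) for b in self.branches],
                dtype=torch.float,
            ),
        )

    def forward(self, x, tau=1.0):
        # hard=True yields a one-hot sample in the forward pass (one candidate
        # active at a time) while the straight-through estimator still passes
        # gradients to the architecture logits.
        w = F.gumbel_softmax(self.arch_logits, tau=tau, hard=True)
        return sum(w[i] * b(x) for i, b in enumerate(self.branches)), w

    def complexity_penalty(self):
        # Expected parameter count under the current architecture weights.
        return (F.softmax(self.arch_logits, dim=0) * self.costs).sum()

# Usage sketch: the squared-output loss is a stand-in for LF-MMI.
layer = GumbelBottleneckSearch(in_dim=512, out_dim=512)
out, w = layer(torch.randn(8, 512))
loss = out.pow(2).mean() + 1e-7 * layer.complexity_penalty()
loss.backward()
```

Sampling a one-hot architecture per step is one way to reduce the confusion between overlapping candidates that the statement mentions, while the penalty term carries the penalized-DARTS idea of trading accuracy against model size.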
“…Hu et al. [100] applied neural architecture search (NAS) to automatically learn the two hyper-parameters of factored time delay neural networks (TDNN-Fs), namely the left and right splicing context offsets and the dimensionality of the bottleneck linear projection at each hidden layer. They utilized differentiable architecture search (DARTS) to integrate architecture learning with lattice-free maximum mutual information (LF-MMI) training, Gumbel-Softmax and Pipelined DARTS to reduce confusion over candidate architectures and improve generalization of architecture selection, and penalized DARTS to incorporate resource constraints and balance the trade-off between system performance and complexity.…”
Section: Deep Learning Technologies of ASR for Dysarthric Speech (mentioning)
confidence: 99%
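
In formulas, and with notation that is mine rather than the paper's, the penalized DARTS objective that both statements describe can be sketched as follows: architecture weights α softmax-relax the choice among K candidate operations at a layer, and a complexity term C(α) weighted by λ is added to the LF-MMI training loss.

```latex
% Illustrative notation, not the paper's exact formulation.
% Softmax relaxation over K candidate operations o_k at a layer:
\bar{o}(x) = \sum_{k=1}^{K} \frac{\exp(\alpha_k)}{\sum_{j=1}^{K} \exp(\alpha_j)}\, o_k(x)
% Penalized objective: LF-MMI loss plus a resource/complexity term,
% e.g. the expected parameter count under the architecture weights:
\min_{\alpha}\ \mathcal{L}_{\mathrm{LF\text{-}MMI}}\big(w^{*}(\alpha), \alpha\big) + \lambda\, C(\alpha)
```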
“…the NAS was used to automatically learn two hyper-parameters of the factored TDNN, namely the left and right splicing context offsets and the linear projection dimensionality of each hidden layer, allowing the TDNN to be searched efficiently across different systems through parameter sharing. Experimental results show that the word error rate and model size are greatly reduced and speech recognition performance is improved [9]. [1] used a TDNN to predict the active power demand on a P4 bus in Presidente Prudente.…”
Section: Introduction (mentioning)
confidence: 99%