6th Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU 2018) 2018
DOI: 10.21437/sltu.2018-44
Improved Language Identification Using Stacked SDC Features and Residual Neural Network

Cited by 8 publications (1 citation statement). References 0 publications.
“…For the sequence models, recently transformer architectures are outperforming the RNN and LSTM based models for speech processing applications [80]. Residual networks, allowing better gradient flow for longer neural networks, became popular for LID tasks with sufficiently large corpus [161,162,192]. In Fig.…”
Section: DNN (mentioning)
Confidence: 99%