Interspeech 2023
DOI: 10.21437/interspeech.2023-2310

A Compact End-to-End Model with Local and Global Context for Spoken Language Identification

Cited by 2 publications (1 citation statement)
References 0 publications
“…Both models have been introduced into language recognition due to their outstanding performance [23][24][25][26]. While CNN-based models are effective in speech processing tasks, their limited kernel size often restricts them to capturing only local context, which is particularly disadvantageous for LID tasks, where the quantity of available context is crucial [27].…”
Section: Introduction
Confidence: 99%