Interspeech 2022
DOI: 10.21437/interspeech.2022-11401

Multitask Learning for Low Resource Spoken Language Understanding

Abstract: We explore the benefits that multitask learning offers to speech processing by training models on dual objectives: automatic speech recognition paired with intent classification or sentiment classification. Our models, although of modest size, show improvements over models trained end-to-end on intent classification. We compare different settings to find the optimal disposition of each task module relative to the others. Finally, we study the performance of the models in a low-resource scenario by training the …
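To illustrate the dual-objective setup the abstract describes, the following is a minimal sketch of a multitask loss that combines an ASR objective with an intent-classification objective over a shared speech encoder. The model layout, layer sizes, use of CTC for the ASR branch, and the loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a dual-objective (multitask) model for spoken language
# understanding: a shared encoder feeds an ASR head (CTC) and an
# intent-classification head (cross-entropy). All names and sizes
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultitaskSLU(nn.Module):
    def __init__(self, n_mels=80, hidden=256, vocab_size=32, n_intents=10):
        super().__init__()
        # Shared encoder over log-mel features of shape (batch, time, n_mels).
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        # ASR head: frame-wise token logits for CTC (blank index 0).
        self.asr_head = nn.Linear(2 * hidden, vocab_size)
        # Intent head: utterance-level logits from mean-pooled encoder states.
        self.intent_head = nn.Linear(2 * hidden, n_intents)

    def forward(self, feats):
        enc, _ = self.encoder(feats)                   # (B, T, 2*hidden)
        asr_logits = self.asr_head(enc)                # (B, T, vocab)
        intent_logits = self.intent_head(enc.mean(1))  # (B, n_intents)
        return asr_logits, intent_logits


def multitask_loss(asr_logits, intent_logits, tokens, token_lens,
                   feat_lens, intents, ctc_weight=0.5):
    """Weighted sum of a CTC loss (ASR) and a cross-entropy loss (intent)."""
    log_probs = F.log_softmax(asr_logits, dim=-1).transpose(0, 1)  # (T, B, V)
    ctc = F.ctc_loss(log_probs, tokens, feat_lens, token_lens, blank=0)
    ce = F.cross_entropy(intent_logits, intents)
    return ctc_weight * ctc + (1.0 - ctc_weight) * ce


if __name__ == "__main__":
    # Toy forward/backward pass on random data to show both training signals.
    model = MultitaskSLU()
    feats = torch.randn(4, 120, 80)                     # 4 utterances
    tokens = torch.randint(1, 32, (4, 20))              # target token ids (no blank)
    token_lens = torch.full((4,), 20, dtype=torch.long)
    feat_lens = torch.full((4,), 120, dtype=torch.long)
    intents = torch.randint(0, 10, (4,))
    asr_logits, intent_logits = model(feats)
    loss = multitask_loss(asr_logits, intent_logits, tokens,
                          token_lens, feat_lens, intents)
    loss.backward()
    print(float(loss))
```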

Cited by 3 publications
References 27 publications