Interspeech 2019
DOI: 10.21437/interspeech.2019-1832
Curriculum-Based Transfer Learning for an Effective End-to-End Spoken Language Understanding and Domain Portability

Abstract: We present an end-to-end approach to extract semantic concepts directly from the speech audio signal. To overcome the lack of data available for this spoken language understanding approach, we investigate the use of a transfer learning strategy based on the principles of curriculum learning. This approach allows us to exploit out-of-domain data that can help to prepare a fully neural architecture. Experiments are carried out on the French MEDIA and PORTMEDIA corpora and show that this end-to-end SLU approach re…
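The curriculum-based transfer learning strategy described in the abstract amounts to training the same network in successive stages, ordered from generic out-of-domain data toward the target SLU task, each stage starting from the previous stage's weights. A minimal sketch of such a stage scheduler (corpus names and ranks are illustrative assumptions, not taken from the paper):

```python
def curriculum_stages(corpora):
    """Order training corpora from most generic to most task-specific,
    mirroring a curriculum-based transfer learning schedule: each stage
    would initialise the model with the weights learned in the previous one.

    `corpora` maps a corpus name to its task-specificity rank
    (0 = most generic, higher = closer to the target task).
    """
    return [name for name, rank in sorted(corpora.items(), key=lambda kv: kv[1])]


# Hypothetical ordering: generic ASR data first, the target SLU corpus last.
stages = curriculum_stages({"MEDIA (SLU)": 2, "generic ASR": 0, "NER": 1})
```

In a full pipeline, each entry of `stages` would drive one fine-tuning round over the corresponding corpus before moving to the next, harder stage.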

Cited by 39 publications (58 citation statements) · References 21 publications
“…4 SLU performances are given in Table 3. Our results can be compared with some previous works [4,3]. We note however that results reported in [4,3] are obtained with models trained with much more data exploiting NER tasks with transfer learning.…”
Section: Results
Confidence: 57%
“…We evaluate both ASR (Word Error Rate) and SLU (Concept Error Rate) results on MEDIA corpus (Dev and Test).

State-of-the-art models   Data    Dev    Test
E2E SLU [4]               300h    30.1   27.0
E2E Baseline [3]          41.5h   -      39.8
E2E SLU [3]               500h    -      23.7
E2E SLU + curr. [3]       500h    -      16.4

ASR results are presented in Table 2.…”
Section: Results
Confidence: 99%
“…Nowadays there is a growing research interest in end-to-end systems for various SLU tasks [23][24][25][26][27][28][29][30][31]. In this work, similarly to [26,29], end-to-end training of signal-to-concept models is performed through the recurrent neural network (RNN) architecture and the connectionist temporal classification (CTC) loss function [32] as shown in Figure 1.…”
Section: End-to-end Signal-to-concept Neural Architecture
Confidence: 99%
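The citation above trains signal-to-concept models with the connectionist temporal classification (CTC) loss, whose output is read by collapsing per-frame labels: consecutive repeats are merged, then blank symbols are dropped. A minimal sketch of that collapsing rule (the `_` blank symbol is an assumption for illustration; a real system works on per-frame argmax indices of the network's output):

```python
BLANK = "_"  # assumed blank symbol for this sketch


def ctc_collapse(frame_labels):
    """Apply the CTC collapsing rule to a per-frame label sequence:
    merge consecutive identical labels, then drop blanks.
    This is the decoding step paired with CTC-trained models."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return out


# Frame-level hypothesis → collapsed label sequence.
decoded = ctc_collapse(list("__hh_ee_ll_ll_oo"))  # → ['h', 'e', 'l', 'l', 'o']
```

The blank symbol is what lets CTC emit genuinely repeated labels (the two `l`s survive because a blank separates the two runs).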