2019
DOI: 10.1109/lsp.2019.2895284
Utterance Generation With Variational Auto-Encoder for Slot Filling in Spoken Language Understanding

Cited by 11 publications (13 citation statements)
References 7 publications
“…Comparably, the Variational Auto-Encoder (VAE) can generate more varied utterances by adding randomness to the decoding conditions in both the training phase and the test phase. Thus, it is used in some data augmentation methods [12,13]. It is worth noting that all of the above methods add extra parameters to the decoder to predict slot labels for the generated utterances.…”
Section: Related Work
confidence: 99%
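The mechanism this statement describes can be illustrated with a minimal PyTorch sketch. This is not the cited paper's exact architecture; the layer sizes, vocabulary, and greedy decoding loop are illustrative assumptions. The point is where the randomness enters: a latent code z is sampled (via the reparameterization trick during training, from the prior at test time) and conditions the decoder, so repeated draws yield varied utterances.

```python
# Minimal sketch of a VAE utterance generator (illustrative, not the
# cited paper's exact model). Sizes and vocabulary are assumptions.
import torch
import torch.nn as nn

class UtteranceVAE(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)       # posterior mean
        self.to_logvar = nn.Linear(hid_dim, z_dim)   # posterior log-variance
        self.z_to_h = nn.Linear(z_dim, hid_dim)      # init decoder state from z
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        # Encode the utterance into a Gaussian posterior q(z | x).
        _, (h, _) = self.encoder(self.embed(tokens))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization: the random eps is the source of diversity.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        logits, _ = self.decoder(self.embed(tokens), (h0, c0))
        return self.out(logits), mu, logvar

    @torch.no_grad()
    def generate(self, bos_id=1, max_len=20):
        # At test time, sample z from the prior N(0, I); each draw
        # conditions the decoder differently, giving varied utterances.
        z = torch.randn(1, self.to_mu.out_features)
        h = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        c = torch.zeros_like(h)
        tok, out = torch.tensor([[bos_id]]), []
        for _ in range(max_len):
            step, (h, c) = self.decoder(self.embed(tok), (h, c))
            tok = self.out(step).argmax(-1)   # greedy decoding (assumption)
            out.append(tok.item())
        return out
```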
“…Most of the previous work on SLU data augmentation [8,9,10,11] focuses on context augmentation and ignores the importance of different slot values. Although some methods [7,12,13] try to augment slot values and contexts simultaneously with a generative method, their augmented data do not actually contain many new slot values, since they only draw on the knowledge in the few training examples. To augment new slot values, we try to use pretrained language models by exploiting the latent language knowledge in these models.…”
Section: Introduction
confidence: 99%
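One way to exploit a pretrained language model's latent knowledge for new slot values, as this statement suggests, is masked-token prediction. A minimal sketch using the HuggingFace Transformers fill-mask pipeline follows; the template utterance, the slot position, and the choice of bert-base-uncased are assumptions, not the citing paper's method.

```python
# Sketch: propose new slot values by masking the slot position and
# letting a pretrained masked LM fill it in (illustrative assumption).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical utterance with the destination slot value masked out.
template = "book a flight to [MASK] tomorrow morning"
for candidate in fill_mask(template, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```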
“…Evaluation Following previous work (Shin, Yoo, and Lee 2019; Hou et al 2018), we compute the F1-score as the evaluation metric with the conlleval script. 4 Implementation We built our Cluster2Cluster model on the transformer implementation by Wolf et al (2019).…”
Section: Cross Expansion
confidence: 99%
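The conlleval metric scores whole entity chunks rather than individual tags. A minimal sketch of the same chunk-level F1 follows, with seqeval swapped in for the original conlleval Perl script; the BIO sequences are made-up examples.

```python
# Sketch of conlleval-style chunk-level F1, using seqeval as a Python
# stand-in for the conlleval script. Example sequences are invented.
from seqeval.metrics import f1_score

y_true = [["O", "B-fromloc", "O", "B-toloc", "I-toloc"]]
y_pred = [["O", "B-fromloc", "O", "B-toloc", "O"]]

# Scoring is per entity chunk: the truncated "toloc" span counts as a
# miss, not a partial match, so only one of two gold chunks is matched.
print(f1_score(y_true, y_pred))  # 0.5
```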
“…Following previous work (Shin, Yoo, and Lee 2019; Hou et al 2018), we conduct experiments with a Bi-LSTM as the slot-filling model and train it on both the original training data and the data augmented by the different augmentation methods. We use the same Bi-LSTM implementation as previous work.…”
Section: Cross Expansion
confidence: 99%
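For reference, the kind of Bi-LSTM tagger used as the downstream slot-filling model can be sketched in a few lines of PyTorch; the sizes and label count here are illustrative assumptions, not the cited implementation.

```python
# Minimal sketch of a Bi-LSTM slot-filling tagger (illustrative).
import torch
import torch.nn as nn

class BiLSTMSlotFiller(nn.Module):
    def __init__(self, vocab_size=1000, num_labels=10, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM reads the utterance in both directions,
        # so every token's representation sees both-side context.
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hid_dim, num_labels)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.classifier(hidden)  # per-token slot-label logits

# Trained with cross-entropy on original plus augmented data.
model = BiLSTMSlotFiller()
logits = model(torch.randint(0, 1000, (2, 12)))  # batch=2, seq_len=12
print(logits.shape)  # torch.Size([2, 12, 10])
```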