Interspeech 2019
DOI: 10.21437/interspeech.2019-1226
Slot Filling with Weighted Multi-Encoders for Out-of-Domain Values

Abstract: This paper proposes a new method for slot filling of out-of-domain (OOD) slot values, which are not included in the training data, in spoken dialogue systems. Word embeddings have been proposed to estimate the OOD slot values included in the word embedding model from keyword information. At the same time, context information is an important clue for estimation because the values in a given slot tend to appear in similar contexts. The proper use of either or both keyword and context information depends on the se…
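The abstract describes blending keyword-based and context-based estimates of a slot value. As a minimal sketch (not the authors' implementation — the weighting scheme, function name, and vectors below are illustrative assumptions), the core idea of weighting two encoder outputs can be written as:

```python
# Hypothetical sketch: combine a keyword encoder's output and a context
# encoder's output with a scalar weight alpha. In the paper this weight
# would be learned; here it is fixed for illustration.
import numpy as np

def weighted_combine(keyword_vec, context_vec, alpha):
    """Blend keyword- and context-based slot-value representations.

    alpha in [0, 1] weights the keyword encoder; (1 - alpha) weights
    the context encoder.
    """
    return alpha * keyword_vec + (1.0 - alpha) * context_vec

kw = np.array([0.9, 0.1, 0.0])   # hypothetical keyword-encoder output
ctx = np.array([0.2, 0.5, 0.3])  # hypothetical context-encoder output
print(weighted_combine(kw, ctx, alpha=0.7))
```

A learned gate (e.g. a sigmoid over both inputs) would let the model shift between keyword and context cues per utterance, which matches the abstract's point that the proper mix is input-dependent.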

Cited by 4 publications (2 citation statements)
References 24 publications
“…This paper also introduced a new dataset for this problem that is publicly available, which is used in our work as one of the datasets for evaluating model performance. In [11,12] the authors proposed augmenting the training data for the slot-filling model by injecting noise tokens into the regular slot values, forcing the model to learn the context of the respective slots. This augmentation improves the model's robustness in predicting the correct span on unseen slot values.…”
Section: NLU Gap Prediction
confidence: 99%
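The augmentation described in the citation statement above — injecting noise tokens into slot values so the model must rely on surrounding context — can be sketched as follows. This is an illustrative reconstruction, not the implementation from [11,12]; the token, function name, and span format are assumptions.

```python
# Hedged sketch of noise-token augmentation: with probability p, overwrite
# a slot value's tokens with a placeholder so the tagger cannot memorise
# the value and must use the context around it instead.
import random

NOISE_TOKEN = "<unk>"  # illustrative placeholder, not a name from the paper

def augment_utterance(tokens, slot_spans, p=0.5, rng=random):
    """With probability p per slot span, replace its tokens with NOISE_TOKEN.

    slot_spans is a list of [start, end) token-index pairs marking slot values.
    """
    out = list(tokens)
    for start, end in slot_spans:
        if rng.random() < p:
            for i in range(start, end):
                out[i] = NOISE_TOKEN
    return out

tokens = ["book", "a", "table", "at", "luigi's", "bistro"]
# Slot value "luigi's bistro" occupies token indices 4 and 5.
print(augment_utterance(tokens, [(4, 6)], p=1.0))
# → ['book', 'a', 'table', 'at', '<unk>', '<unk>']
```

Training on a mix of original and noised utterances is what pushes the tagger to predict the span from context, which is the robustness-to-unseen-values effect the citing work describes.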
“…In previous years, several studies were carried out to improve and evaluate the performance of NLU tasks. Earlier works evaluated the impact of using different techniques to improve slot filling [4,5,6] and intent detection [7,8]. Interestingly, research in joint NLU (jointly learning both intent detection and slot filling) achieved better results in both tasks [9,10].…”
Section: Introduction
confidence: 99%