2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
DOI: 10.1109/asru46091.2019.9003735
Joint Learning of Word and Label Embeddings for Sequence Labelling in Spoken Language Understanding

Abstract: We propose an architecture to jointly learn word and label embeddings for slot filling in spoken language understanding. The proposed approach encodes labels using a combination of word embeddings and straightforward word-label associations from the training data. Compared to state-of-the-art methods, our approach does not require label embeddings as part of the input and therefore lends itself nicely to a wide range of model architectures. In addition, our architecture computes contextual distances between …
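The abstract describes building label embeddings from word embeddings together with word-label associations observed in the training data. A minimal sketch of one such scheme, assuming the label embedding is the mean of the embeddings of words tagged with that label (the specific labels, words, and vectors below are illustrative, not from the paper):

```python
import numpy as np

# Toy word embeddings; in the paper these are learned jointly with the model.
word_emb = {
    "boston": np.array([0.9, 0.1, 0.0]),
    "denver": np.array([0.8, 0.2, 0.1]),
    "monday": np.array([0.0, 0.9, 0.3]),
}

# Word-label associations from training data: slot label -> words tagged with it.
label_words = {
    "B-fromloc": ["boston", "denver"],
    "B-depart_date": ["monday"],
}

def label_embedding(label):
    """Encode a label as the mean embedding of the words associated with it."""
    vecs = [word_emb[w] for w in label_words[label]]
    return np.mean(vecs, axis=0)

emb = label_embedding("B-fromloc")  # mean of the "boston" and "denver" vectors
```

Because the label representation is derived from word embeddings rather than supplied as an extra model input, such a scheme can be bolted onto many sequence-labelling architectures, which is the flexibility the abstract highlights.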

Cited by 4 publications (2 citation statements)
References 14 publications
“…Chen et al [34] exploit label embeddings for intent detection, but their approach is not suitable for slot filling. Some works choose to extract label embeddings from data samples [38], [39], [40], [41] by exploiting the words of values tagged with a semantic label, but they focus only on the corresponding values. We consider both the value and the context information in the slot exemplar encoding.…”
Section: Related Work
confidence: 99%
“…Since SLU has been shown to exert a significant influence on the final performance of dialogue systems [1], improving SLU performance is a crucial problem that attracts much attention in both academia and industry. Traditionally, SLU is trained in a supervised way with sufficient labeled data, achieving excellent performance [2,3,4]. Unfortunately, it is difficult and expensive to acquire enough labeled data in practice.…”
Section: Introduction
confidence: 99%