Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1214

A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding

Abstract: Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. The two tasks are closely tied and the slots often highly depend on the intent. In this paper, we propose a novel framework for SLU to better incorporate the intent information, which further guides the slot filling. In our framework, we adopt a joint model with Stack-Propagation which can directly use the intent information as input for slot filling, thus to capture the intent semantic knowledge. In…
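To make the stacked design concrete, here is a minimal PyTorch-style sketch of a Stack-Propagation-style joint model, assuming a shared BiLSTM encoder, a token-level intent classifier, and an intent embedding concatenated into the slot-decoder input; module names and dimensions are illustrative and not the authors' released implementation.

```python
# Illustrative sketch of Stack-Propagation-style SLU (not the authors' code).
# Assumptions: shared BiLSTM encoder, token-level intent head, predicted intent
# embedded and fed into the slot decoder together with the encoder states.
import torch
import torch.nn as nn

class StackPropagationSLU(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_intents, n_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, n_intents)   # token-level intent
        self.intent_embed = nn.Embedding(n_intents, emb_dim)      # embed predicted intent
        self.slot_decoder = nn.LSTM(2 * hidden_dim + emb_dim, hidden_dim,
                                    batch_first=True)
        self.slot_head = nn.Linear(hidden_dim, n_slots)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))       # (B, T, 2H)
        intent_logits = self.intent_head(h)           # token-level intent logits
        intent_pred = intent_logits.argmax(dim=-1)    # hard intent label per token
        # Stack-propagation step: the predicted intent is an explicit input to
        # the slot decoder (the intent head itself is trained via its own loss).
        slot_in = torch.cat([h, self.intent_embed(intent_pred)], dim=-1)
        slot_hidden, _ = self.slot_decoder(slot_in)
        slot_logits = self.slot_head(slot_hidden)     # per-token slot logits
        # Utterance-level intent obtained by voting over token-level predictions.
        utt_intent = intent_pred.mode(dim=-1).values
        return intent_logits, slot_logits, utt_intent
```

The final line mirrors the paper's token-level voting idea: each token predicts an intent, and the utterance-level intent is the majority label, which makes the intent signal available at every slot-filling position.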

Cited by 231 publications (231 citation statements)
References 23 publications
“…In the self-attentive encoder, following Qin et al. (2019), we use BiLSTM with the self-attention mechanism to leverage both the advantages of temporal features within word orders and contextual information. A bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997) consists of two LSTM layers.…”
Section: Self-attentive Encoder
confidence: 99%
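A rough sketch of the BiLSTM-plus-self-attention encoder described in the quoted passage is below; the single-head scaled dot-product attention and the concatenation of the two representations are assumptions made for illustration, not the exact configuration of the cited work.

```python
# Sketch of a self-attentive encoder: BiLSTM for temporal features, plus
# self-attention for contextual features, outputs concatenated per token.
import torch
import torch.nn as nn

class SelfAttentiveEncoder(nn.Module):
    def __init__(self, emb_dim, hidden_dim, attn_dim):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.q = nn.Linear(emb_dim, attn_dim)
        self.k = nn.Linear(emb_dim, attn_dim)
        self.v = nn.Linear(emb_dim, attn_dim)

    def forward(self, x):                          # x: (B, T, emb_dim)
        h, _ = self.bilstm(x)                      # temporal features   (B, T, 2H)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / (self.k(x).size(-1) ** 0.5)
        attn = torch.softmax(scores, dim=-1)       # attention weights   (B, T, T)
        c = attn @ self.v(x)                       # contextual features (B, T, A)
        return torch.cat([h, c], dim=-1)           # combined encoding   (B, T, 2H+A)
```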
“…One is the ATIS dataset (Hemphill et al., 1990) and the other is the SNIPS dataset (Coucke et al., 2018), which are widely used as benchmarks in SLU research. Both datasets follow the same format and partition as in Goo et al. (2018) and Qin et al. (2019).…”
Section: Datasets
confidence: 99%
“…Taking inspiration from the same, we use CT-BERT as part of our architecture. A variety of slot-filling approaches have been built on top of these deep learning advancements (Kurata et al., 2016; Qin et al., 2019). The proposed baseline for our task (Zong et al., 2020) modifies the BERT model for the slot-filling problem, inspired by Baldini Soares et al. (2019).…”
Section: Related Work
confidence: 99%