2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7953246
End-to-end joint learning of natural language understanding and dialogue manager

Abstract: Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance. Conventional approaches aggregate separate models of natural language understanding (NLU) and system action prediction (SAP) as a pipeline that is sensitive to noisy outputs of error-prone NLU. To address the issues, we propose an end-to-end deep recurrent neural network with limited contextual dialogue memory by jointly training …
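The abstract's core idea — one shared utterance encoding feeding both an NLU head and a system-action head, optimized against a single summed loss instead of a pipeline — can be sketched as follows. This is a minimal NumPy illustration, not the paper's recurrent architecture with contextual dialogue memory; the feed-forward encoder, all dimensions, and the weight names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: 8-dim utterance encoding, 4 intents, 3 system actions.
D, N_INTENTS, N_ACTIONS = 8, 4, 3

# One shared encoder and two task-specific heads (random init for the sketch).
W_enc = rng.normal(size=(D, D))
W_nlu = rng.normal(size=(N_INTENTS, D))
W_sap = rng.normal(size=(N_ACTIONS, D))

def joint_forward(x):
    """Encode the utterance once; both heads read the same hidden state,
    so NLU errors and action prediction are trained against jointly."""
    h = np.tanh(W_enc @ x)
    return softmax(W_nlu @ h), softmax(W_sap @ h)

def joint_loss(x, intent_id, action_id):
    """Single end-to-end objective: sum of the two cross-entropies."""
    p_intent, p_action = joint_forward(x)
    return -np.log(p_intent[intent_id]) - np.log(p_action[action_id])

x = rng.normal(size=D)  # stand-in for an encoded user utterance
loss = joint_loss(x, intent_id=1, action_id=2)
```

Because gradients of the summed loss flow through the shared encoder, the action head is trained on the same noisy intermediate representation it will see at test time — the contrast with a pipeline that trains NLU in isolation.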

Cited by 70 publications (55 citation statements)
References 27 publications
“…However, the development of a user simulator was complex and it took considerable time to build an appropriate user policy. Additionally, some studies [4,5,14,16] relied on considerable supervised data. Reference [16] proposed an end-to-end model by jointly training NLU and DM with supervised learning.…”
Section: Related Work
confidence: 99%
“…Additionally, some studies [4,5,14,16] relied on considerable supervised data. Reference [16] proposed an end-to-end model by jointly training NLU and DM with supervised learning. References [4,5,14] applied the demonstration data to speed up the convergence in a supervised manner.…”
Section: Related Work
confidence: 99%
“…Training probabilistic models to predict such intents has been aided in recent years by combinations of deep neural network models [2,6] and distributed word representations in the form of word embeddings [3,4]. Despite these improvements, building a system that performs adequately in a production role remains a substantial challenge, even when we focus on the NLU component only and leave aside the dialogue state management [9].…”
Section: Introduction
confidence: 99%
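The statement above describes intent prediction built from word embeddings feeding a probabilistic classifier. A minimal sketch of that setup — averaged word embeddings scored by a softmax layer — is shown below; the tiny vocabulary, the two intent labels, the embedding dimension, and the random weights are all hypothetical stand-ins, not taken from any cited system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-word vocabulary with 4-dim embeddings, and two intents.
EMB = {w: rng.normal(size=4) for w in ["book", "a", "flight", "play", "music"]}
INTENTS = ["book_flight", "play_music"]
W = rng.normal(size=(len(INTENTS), 4))  # intent-classifier weights

def predict_intent(tokens):
    """Average the token embeddings into one utterance vector,
    then turn the linear scores into a probability per intent."""
    x = np.mean([EMB[t] for t in tokens], axis=0)
    scores = W @ x
    e = np.exp(scores - scores.max())
    return dict(zip(INTENTS, e / e.sum()))

probs = predict_intent(["book", "a", "flight"])
```

Real systems replace the averaging step with the deep (often recurrent) encoders the citation mentions, but the embedding-in, intent-distribution-out contract is the same.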
“…End-to-end learning overcomes these problems by training SDSs using the data obtained from the available dialogues. End-to-end learning has been applied to dialogue systems such as short response generation [1], open domain conversational dialogue systems [2], task-oriented dialogue systems [3], and the joint training of understanding models with a dialogue manager to predict system actions [4].…”
Section: Introduction
confidence: 99%