Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1002

Incremental Transformer with Deliberation Decoder for Document Grounded Conversations

Abstract: Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we propose a novel Transformer-based architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge […]
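The abstract names the two components without detailing them; the minimal PyTorch-style sketch below shows one plausible reading of that dataflow (an encoder state updated incrementally with each utterance plus the document knowledge, followed by a two-pass decoder). It is an assumption-laden illustration, not the authors' released code; all module and argument names are invented here.

```python
# Illustrative sketch only (NOT the authors' implementation): one plausible
# reading of "Incremental Transformer" + "Deliberation Decoder".
import torch
import torch.nn as nn

class IncrementalEncoder(nn.Module):
    """Re-encodes a running dialogue state together with each new utterance
    and the grounding document (hypothetical module, names invented)."""
    def __init__(self, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, history_state, utterance, knowledge):
        # Fuse the running state, the new utterance, and the document, then re-encode.
        fused = torch.cat([history_state, utterance, knowledge], dim=1)
        return self.encoder(fused)

class TwoPassDecoder(nn.Module):
    """First pass drafts a response from the dialogue context; the second pass
    re-decodes while attending to the document knowledge (deliberation)."""
    def __init__(self, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        self.first_pass = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.second_pass = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)

    def forward(self, tgt, context_memory, knowledge_memory):
        draft = self.first_pass(tgt, context_memory)
        return self.second_pass(draft, knowledge_memory)
```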

Cited by 93 publications (85 citation statements)
References 25 publications

“…Performance was evaluated in terms of concept error rate (CER) and concept value error rate (CVER) on the MEDIA test dataset.…”
Section: Results (mentioning)
confidence: 99%
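For readers unfamiliar with these metrics: concept error rate is conventionally computed like word error rate, i.e. an edit distance over concept sequences normalized by the reference length (CVER additionally requires the associated values to match). The sketch below assumes that conventional definition; it is not the cited paper's scoring tool, and the example concept labels are invented.

```python
# Generic CER sketch: Levenshtein distance over concept sequences divided by
# the reference length (WER-style definition, assumed rather than taken from
# the cited paper's scorer).
def concept_error_rate(reference, hypothesis):
    # reference, hypothesis: lists of concept labels (strings)
    R, H = len(reference), len(hypothesis)
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i
    for j in range(H + 1):
        d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[R][H] / max(R, 1)

# One substitution over four reference concepts -> CER = 0.25
print(concept_error_rate(["command", "hotel-city", "time", "price"],
                         ["command", "hotel-city", "date", "price"]))
```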
“…The task of a spoken language understanding (SLU) system is to detect fragments of semantic knowledge in speech data. Popular models are built from frames describing relations between entities and their properties [1][2][3]. The SLU system instantiates a predefined set of frame structures, called concepts, that can be mentioned in a sentence or a dialogue turn.…”
Section: Introduction (mentioning)
confidence: 99%
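As a concrete illustration of what "instantiating a concept" for a dialogue turn might look like, the toy structure below pairs a concept label with a normalized value and a span; the field names and the example label are invented for this sketch, not taken from the MEDIA annotation guide.

```python
# Toy illustration of a concept instantiated from a dialogue turn.
# Field names and the example label are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Concept:
    label: str   # concept name from the predefined frame inventory
    value: str   # normalized value extracted from the utterance
    span: tuple  # (start, end) token offsets within the turn

# e.g. "a room in Paris" -> a city concept with value "Paris"
mention = Concept(label="location-city", value="Paris", span=(3, 4))
print(mention)
```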
“…The deliberation mechanism has succeeded in improving the performance of single-task learning [39], [40]. Y. Xia et al. proposed deliberation networks for word sequence generation and demonstrated their effectiveness in machine translation and text summarization [39].…”
Section: Related Work (mentioning)
confidence: 99%
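In control-flow terms, deliberation reduces to decoding twice, with the second pass conditioned on the first-pass draft as well as the source. The toy function below captures only that structure; `first_pass` and `second_pass` stand in for arbitrary seq2seq decoders and are not a real API.

```python
# Control-flow sketch of two-pass (deliberation) generation, following the
# high-level description in [39]; the callables are placeholders.
def deliberate(source, first_pass, second_pass):
    draft = first_pass(source)          # pass 1: draft an output from the source
    return second_pass(source, draft)   # pass 2: refine, also seeing the draft

# Toy usage with string "models":
draft_model = lambda s: s.upper()
refine_model = lambda s, d: d.strip() + "."
print(deliberate(" hello world ", draft_model, refine_model))  # -> HELLO WORLD.
```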
“…trains the Seq2Seq network simultaneously with a discriminant classifier, which measures the difference between the human-generated response and the machine-generated response, and introduces an approximate embedding layer to solve the non-differentiability caused by sampling-based output decoding in the Seq2Seq generation steps. • BigLM-24 (code and models available at https://github.com/lipiji/Guyu): a language model with both pre-training and fine-tuning procedures [26]. BigLM-24 is the standard GPT-2 model with 345 million parameters (1024 dimensions, 24 layers, 16 heads).…”
Section: Comparison Models (mentioning)
confidence: 99%
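The quoted architecture size (1024 dimensions, 24 layers, 16 heads, roughly 345 million parameters) corresponds to the GPT-2 "medium" shape. The snippet below expresses that shape with the Hugging Face GPT2Config for concreteness; it is not the Guyu repository's own configuration file.

```python
# GPT-2 "medium"-sized shape matching the figures quoted above; an equivalent
# specification for illustration, not the Guyu repo's config.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(n_embd=1024, n_layer=24, n_head=16)
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```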
“…Reference [25] proposed an NMT model built on multi-head attention; other works were inspired by this paper. Reference [26] proposed an incremental Transformer with a deliberation decoder for the task of document-grounded conversations. Reference [27] proposed a Transformer-based model to address open-domain dialogue over multi-turn unstructured text facts.…”
(mentioning)
confidence: 99%
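Since [25] is characterized above only as an NMT model built on multi-head attention, a textbook sketch of the mechanism may help: scaled dot-product attention computed in parallel heads and concatenated. The learned input/output projections are omitted for brevity, and the sketch is not tied to any of the cited systems.

```python
# Textbook multi-head attention (scaled dot-product attention per head,
# heads concatenated); projection matrices omitted for brevity.
import torch
import torch.nn.functional as F

def multi_head_attention(q, k, v, num_heads):
    # q, k, v: (batch, seq_len, d_model); d_model must be divisible by num_heads
    b, t, d = q.shape
    dh = d // num_heads
    split = lambda x: x.view(b, -1, num_heads, dh).transpose(1, 2)  # (b, h, t, dh)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(-2, -1) / dh ** 0.5   # scaled dot product
    attn = F.softmax(scores, dim=-1)
    out = attn @ v                                 # (b, h, t, dh)
    return out.transpose(1, 2).reshape(b, t, d)    # concatenate heads

x = torch.randn(2, 5, 64)
print(multi_head_attention(x, x, x, num_heads=8).shape)  # torch.Size([2, 5, 64])
```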