Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.272

Knowledge-Grounded Dialogue Generation with Pre-trained Language Models

Abstract: We study knowledge-grounded dialogue generation with pre-trained language models. To leverage the redundant external knowledge under capacity constraint, we propose equipping response generation defined by a pre-trained language model with a knowledge selection module, and an unsupervised approach to jointly optimizing knowledge selection and response generation with unlabeled dialogues. Empirical results on two benchmarks indicate that our model can significantly outperform state-of-the-art methods in both automatic evaluation and human judgment.
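The abstract describes the approach only at a high level. The sketch below, in PyTorch, illustrates one plausible reading of the knowledge selection idea: score candidate knowledge snippets against an encoded dialogue context and pick a snippet to ground generation on. All names here (`KnowledgeSelector`, the bilinear scorer) are hypothetical; the paper's actual architecture and its unsupervised joint training objective are more involved.

```python
import torch
import torch.nn as nn

class KnowledgeSelector(nn.Module):
    """Hypothetical selector: scores knowledge snippets against the context."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Bilinear(hidden_size, hidden_size, 1)

    def forward(self, context_vec: torch.Tensor, knowledge_vecs: torch.Tensor):
        # context_vec:    (batch, hidden)  pooled dialogue-context encoding
        # knowledge_vecs: (batch, n_snippets, hidden)  encoded candidate snippets
        ctx = context_vec.unsqueeze(1).expand_as(knowledge_vecs).contiguous()
        logits = self.scorer(ctx, knowledge_vecs.contiguous()).squeeze(-1)
        return torch.softmax(logits, dim=-1)  # selection distribution over snippets

# Toy usage with random tensors standing in for encoder outputs.
selector = KnowledgeSelector(hidden_size=768)
probs = selector(torch.randn(2, 768), torch.randn(2, 5, 768))  # (2, 5)
chosen = probs.argmax(dim=-1)  # snippet indices to prepend to the LM input
```

In the paper, selection and generation are trained jointly without knowledge labels; the sketch shows the selector in isolation.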

Cited by 126 publications (161 citation statements) · References 53 publications
“…Automated and human evaluation results on three different datasets demonstrate substantial improvements (§5.1 and §5.2). Specifically, we achieve an improvement of 19.7 BLEU-4 points over Zhao et al. (2020b) on the dialogue generation task. Additionally, significant gains in BLEU-4 are observed over the BART-based baseline.…”
Section: Introduction
confidence: 90%
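The quoted gains are reported in BLEU-4, i.e., BLEU computed with uniform weights over 1- to 4-gram precisions. The citing papers do not specify their exact implementation; below is a minimal sketch using NLTK (tokenization and smoothing choices shift absolute scores, so treat it as illustrative only).

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One list of reference tokenizations per hypothesis.
references = [[["the", "cat", "sat", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "sat", "on", "a", "mat"]]

bleu4 = corpus_bleu(
    references, hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),        # uniform 1- to 4-gram weights
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {100 * bleu4:.1f}")  # papers usually report a 0-100 scale
```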
“…A dialogue manager is used to combine the vocabulary distributions provided by these three components. Zhao et al. (2020b) propose a knowledge selection module integrated with pre-trained language models for this task. Cao et al. (2020) use the pre-trained language model GPT-2 (Radford et al., 2019) and explore various attention fusion techniques for persona-based dialogue generation (Zhang et al., 2018b).…”
Section: Related Work
confidence: 99%
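The first quoted sentence describes a dialogue manager that combines vocabulary distributions from three components. A common realization is a gated mixture over per-component distributions; the sketch below is a generic version of that idea (the gating setup and component names are assumptions, not the cited system's design).

```python
import torch

def mix_vocab_distributions(dists: list[torch.Tensor], weights: torch.Tensor):
    """Combine per-component vocabulary distributions into a single one.

    dists:   list of (batch, vocab) probability tensors, e.g. from a context
             decoder, a knowledge component, and a persona component.
    weights: (batch, n_components) mixture weights that sum to 1, typically
             produced by a small gating network over the decoder state.
    """
    stacked = torch.stack(dists, dim=1)                   # (batch, n_comp, vocab)
    return (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (batch, vocab)

# Toy usage: three components, batch of 2, vocabulary of 10.
dists = [torch.softmax(torch.randn(2, 10), dim=-1) for _ in range(3)]
weights = torch.softmax(torch.randn(2, 3), dim=-1)        # gate output
p_vocab = mix_vocab_distributions(dists, weights)         # rows sum to 1
```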
“…To address this issue, a line of research proposes to incorporate external knowledge into the generation process. Most work in this line first retrieves knowledge via a search or retrieval step and then reranks the retrieved knowledge snippets (Ghazvininejad et al., 2018; Young et al., 2018; Zhou et al., 2018b; Gopalakrishnan et al., 2019; Zhao et al., 2020). In our work, we propose neural entity recognition and linking to identify and resolve entities more accurately, in order to obtain more relevant knowledge for knowledge-grounded response generation.…”
Section: The First Two Authors Have Equal Contribution
confidence: 99%
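The quoted pipeline, retrieve first and then rerank, can be sketched as two stages. Below is a simplified stand-in: TF-IDF retrieval with scikit-learn, followed by a trivial overlap-based reranker where the cited systems would use a learned (often neural) scorer. Function and parameter names are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_then_rerank(query: str, snippets: list[str], k: int = 20, top_n: int = 3):
    """Stage 1: TF-IDF retrieval. Stage 2: rerank the retrieved candidates."""
    vec = TfidfVectorizer().fit(snippets + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(snippets))[0]
    candidates = sorted(range(len(snippets)), key=lambda i: -scores[i])[:k]
    # Rerank by token overlap; real systems use a learned scorer here.
    q_tokens = set(query.lower().split())
    reranked = sorted(candidates,
                      key=lambda i: -len(q_tokens & set(snippets[i].lower().split())))
    return [snippets[i] for i in reranked[:top_n]]

# Toy usage.
knowledge = ["Paris is the capital of France.",
             "The Eiffel Tower is in Paris.",
             "Berlin is the capital of Germany."]
print(retrieve_then_rerank("What is the capital of France?", knowledge, top_n=1))
```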
“…Research in dialogue generation has rapidly evolved from sequence-to-sequence (Sutskever et al., 2014) and Transformer models (Vaswani et al., 2017) to approaches built on pre-trained models such as BERT (Devlin et al., 2019), XLNet, and T5 (Raffel et al., 2020). More recently, it has included techniques that use knowledge, in addition to the original posts, to improve the quality of the generated responses (Ghazvininejad et al., 2018; Moghe et al., 2018; Dinan et al., 2019; Galley et al., 2019; Lian et al., 2019; Zheng and Zhou, 2019; Zhao et al., 2020a; Zhao et al., 2020b). This approach is referred to as…”
Section: Introduction
confidence: 99%