Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.275

Bridging the Gap between Prior and Posterior Knowledge Selection for Knowledge-Grounded Dialogue Generation

Abstract: Knowledge selection plays an important role in knowledge-grounded dialogue, a challenging task that aims to generate more informative responses by leveraging external knowledge. Recently, latent variable models have been proposed to deal with the diversity of knowledge selection by using both prior and posterior distributions over knowledge, and they achieve promising performance. However, these models suffer from a huge gap between prior and posterior knowledge selection. Firstly, the prior selection module may not …
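
The scheme the abstract refers to pairs a prior distribution over candidate knowledge, conditioned on the dialogue context alone, with a posterior that additionally sees the gold response, and pulls the two together at training time. Below is a minimal PyTorch sketch of that general prior/posterior idea; the module names, projections, and dimensions are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorPosteriorSelector(nn.Module):
    """Illustrative prior/posterior knowledge selection (not the paper's exact model)."""

    def __init__(self, hidden: int):
        super().__init__()
        # The prior scores knowledge from the context alone;
        # the posterior additionally conditions on the gold response.
        self.prior_proj = nn.Linear(hidden, hidden)
        self.post_proj = nn.Linear(2 * hidden, hidden)

    def forward(self, ctx, resp, knowledge):
        # ctx: (B, H) context encoding; resp: (B, H) response encoding;
        # knowledge: (B, K, H) encodings of K candidate knowledge sentences.
        prior_logits = torch.bmm(knowledge, self.prior_proj(ctx).unsqueeze(2)).squeeze(2)
        post_query = self.post_proj(torch.cat([ctx, resp], dim=-1))
        post_logits = torch.bmm(knowledge, post_query.unsqueeze(2)).squeeze(2)
        # KL(posterior || prior) pulls the context-only prior toward the
        # response-aware posterior, shrinking the selection gap during training.
        kl = F.kl_div(F.log_softmax(prior_logits, dim=-1),
                      F.softmax(post_logits, dim=-1),
                      reduction="batchmean")
        return prior_logits, post_logits, kl

At inference time only the prior is available (there is no response yet), which is why a large prior/posterior gap directly hurts knowledge selection accuracy.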

Cited by 66 publications (57 citation statements)
References 33 publications

“…Automatic Evaluation. We automatically evaluate knowledge selection with accuracy (Acc), and response generation with perplexity (PPL), unigram F1 (R-1), and bigram F1 (R-2), which are commonly used in this task (Dinan et al., 2019; Kim et al., 2020a; Chen et al., 2020b). We also remove all the punctuation and the articles (a, an, the) to compute the R-1 and R-2 scores, as Kim et al. (2020a) do.…”
Section: Discussion (mentioning, confidence: 99%)
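
For concreteness, the R-1/R-2 computation this excerpt describes can be reproduced roughly as follows. The excerpt only specifies that punctuation and the articles a/an/the are removed before computing unigram and bigram F1, so the whitespace tokenization and the exact punctuation regex here are assumptions.

import re
from collections import Counter

ARTICLES = {"a", "an", "the"}

def normalize(text: str) -> list:
    # Lowercase, strip punctuation, drop the articles a/an/the (assumed preprocessing).
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return [t for t in text.split() if t not in ARTICLES]

def ngram_f1(hyp: str, ref: str, n: int) -> float:
    # Clipped n-gram overlap between hypothesis and reference, reported as F1.
    h, r = normalize(hyp), normalize(ref)
    h_ngrams = Counter(tuple(h[i:i + n]) for i in range(len(h) - n + 1))
    r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
    overlap = sum((h_ngrams & r_ngrams).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h_ngrams.values())
    recall = overlap / sum(r_ngrams.values())
    return 2 * precision * recall / (precision + recall)

# R-1 and R-2 for one hypothesis/reference pair:
#   ngram_f1(hyp, ref, 1), ngram_f1(hyp, ref, 2)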
“…Knowledge Distillation: We further alleviate the noisy-labeling problem of distant supervision via Knowledge Distillation (KD), as shown in Figure 1 (b). Following (Tian et al., 2020; Chen et al., 2020b), the teacher takes the context and response as input and generates the distribution over knowledge selection as a soft target. Compared with the student, i.e., the standard knowledge selection module described in Section 2.4, the teacher has the gold response as an additional input.…”
Section: Distilled Distant Supervision Loss (mentioning, confidence: 99%)
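
A minimal sketch of the distillation step this excerpt describes: the response-aware teacher's distribution over knowledge candidates serves as a soft target for the context-only student. The temperature value and the T² scaling follow standard distillation practice and are assumptions, not details from the excerpt.

import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 temperature: float = 2.0) -> torch.Tensor:
    # Soft targets from the teacher, which also saw the gold response;
    # detach so gradients do not flow into the teacher.
    soft_targets = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in standard distillation.
    return F.kl_div(log_student, soft_targets,
                    reduction="batchmean") * temperature ** 2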
“…In addition to the works that enrich the contents of open-domain conversations via controllable generation (Lin et al., 2020; Madotto et al., 2020b), the knowledge-grounded dialogue task aims to offer more informative conversation by leveraging an external knowledge source (Dinan et al., 2018). Relevant knowledge selection is the key to improving the whole system, and very recently, latent variable models have been attracting more attention for this purpose (Lian et al., 2019; Liu et al., 2019b; Kim et al., 2020; Chen et al., 2020; Xu et al., 2021).…”
Section: Related Work (mentioning, confidence: 99%)
“…Recently, there has been increasing interest in vision-language tasks, such as image captioning (Anderson et al., 2016, 2018; Cornia et al., 2020) and visual question answering (Ren et al., 2015a; Gao et al., 2015; Lu et al., 2016; Anderson et al., 2018). In the real world, our conversations (Chen et al., 2020b, 2019) usually have multiple turns. As an extension of conventional single-turn visual question answering, Das et al. (2017) introduce a multi-turn visual question answering task named visual dialogue, which aims to …”
Section: Introduction (mentioning, confidence: 99%)