Findings of the Association for Computational Linguistics: EACL 2023
DOI: 10.18653/v1/2023.findings-eacl.155
Generative Knowledge Selection for Knowledge-Grounded Dialogues

Weiwei Sun,
Pengjie Ren,
Zhaochun Ren

Abstract: Knowledge selection is the key to knowledge-grounded dialogue (KGD), which aims to select an appropriate knowledge snippet to be used in the utterance based on the dialogue history. Previous studies mainly employ a classification approach that labels each candidate snippet as "relevant" or "irrelevant" independently. However, such approaches neglect the interactions between snippets, making it difficult to infer the meaning of individual snippets. Moreover, they lack modeling of the discourse structure of dialogue-…
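The contrast the abstract draws — scoring each snippet independently versus reading all snippets jointly and generating the chosen snippet's identifier before the response — can be sketched as follows. This is a toy illustration, not the paper's implementation: the function names, the `<k{i}>` identifier format, and the scoring/generation stubs are assumptions made for the example.

```python
# Toy sketch of the two knowledge-selection styles described in the abstract.
# All names and the identifier scheme are illustrative assumptions.

def classify_selection(history, snippets, score):
    """Classification approach: score each candidate snippet independently
    against the dialogue history; snippets never interact with each other."""
    return max(range(len(snippets)), key=lambda i: score(history, snippets[i]))

def generative_selection(history, snippets, generate):
    """Generative approach (as described in the abstract): the model reads
    ALL snippets in one sequence and emits the selected snippet's identifier
    token before generating the response."""
    prompt = history + " " + " ".join(f"<k{i}> {s}" for i, s in enumerate(snippets))
    output = generate(prompt)           # e.g. "<k2> Sure, the Eiffel Tower ..."
    identifier = output.split()[0]      # first generated token is the identifier
    return int(identifier.strip("<k>")), output
```

In the generative variant the selection decision is conditioned on every snippet at once, which is how joint encoding enables snippet-snippet interaction that independent classification cannot capture.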

Cited by 3 publications (1 citation statement)
References 20 publications
“…demonstrated this by implementing the MemNet concept (Dinan et al., 2019b) with BART. Lotfi et al. (2021) proposed an unsupervised knowledge selection method based on BART, which back-propagates the generation loss into a knowledge fusion module to train the selector. Sun et al. (2023a) used a fully generative approach augmented by explicit dialogue-knowledge connections, which generates the selected knowledge's identifier before the response.…”
(mentioning; confidence: 99%)