Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI 2021
DOI: 10.18653/v1/2021.nlp4convai-1.24

Teach Me What to Say and I Will Learn What to Pick: Unsupervised Knowledge Selection Through Response Generation with Pretrained Generative Models

Abstract: Knowledge Grounded Conversation Models (KGCM) are usually based on a selection/retrieval module and a generation module, trained separately or simultaneously, with or without having access to a 'gold' knowledge option. With the introduction of large pre-trained generative models, the selection and generation parts have become more and more entangled, shifting the focus towards enhancing knowledge incorporation (from multiple sources) instead of trying to pick the best knowledge option. These approaches, however, …
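The abstract's central idea, using the generation model itself as the source of an unsupervised selection signal, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the BART checkpoint, the "knowledge + history" prompt format, and the scoring loop are all assumptions. It scores each knowledge candidate by the likelihood a pretrained seq2seq model assigns to the gold response when conditioned on that candidate, then picks the best-scoring one.

```python
# Minimal sketch (illustrative, not the paper's exact method): use a pretrained
# seq2seq model as an unsupervised knowledge selector by scoring each candidate
# with the likelihood of the gold response given that candidate.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")  # assumed checkpoint
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

def knowledge_score(history: str, knowledge: str, response: str) -> float:
    """Negative mean token NLL of the response given knowledge + history."""
    inputs = tokenizer(knowledge + " " + history, return_tensors="pt",
                       truncation=True, max_length=512)
    labels = tokenizer(response, return_tensors="pt",
                       truncation=True, max_length=128).input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean cross-entropy per token
    return -loss.item()  # higher = candidate better explains the response

def select_knowledge(history: str, candidates: list[str], response: str) -> int:
    """Return the index of the candidate that best supports the gold response."""
    return max(range(len(candidates)),
               key=lambda i: knowledge_score(history, candidates[i], response))
```

A ranking produced this way needs no 'gold' knowledge labels, which is what makes the selection signal unsupervised in the sense the title describes.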

Cited by 5 publications (6 citation statements)
References 16 publications
“…Classification methods:

Method                           Seen   Unseen  Single  Multi
SKT (Kim et al., 2020)           26.8   18.3    29.2    39.2
DukeNet (Meng et al., 2020)      26.4   19.6    30.0    40.3
DiffKS (Zheng et al., 2020)      25.5   19.7    33.0    -
KnowledGPT (Zhao et al., 2020b)  28.0   25.4    -       -
MIKe (Meng et al., 2021)         28.4   21.5    31.9    41.8
K-Mine (Lotfi et al., 2021)      29.7   28.3    31.7    -
TAKE                             28.8   25.8    -       -
…”
Section: Seen Unseen Single Multi
confidence: 99%
“…CS2 introduces a novel approach for unsupervised knowledge selection in knowledge-grounded dialogue (Lotfi et al., 2021) which, while eliminating the need for annotated knowledge pools, achieves SOTA results in knowledge selection among models of comparable size. The work won the Best Paper award at the 3rd NLP4ConvAI (NLP for Conversational AI) workshop at EMNLP 2021.…”
Section: Contents
confidence: 99%
“…Starting with GPT (Radford et al., 2018), generative pre-training already achieved SoTA results in generic and personalised open-domain dialogue (Wolf et al., 2019), but it was the introduction of encoder-decoder pre-trained models like BART (Lewis et al., 2020a) and T5 (Raffel et al., 2020) …

Method                           Approach                                 Seen   Unseen
(Li et al., 2019b)               Incremental transformer                  -      -
PostKS (Lian et al., 2019)       Posterior signal                         22.5   15.8
KIC (Lin et al., 2020)           Soft selection in decoder                -      -
DKS                              Posterior signal + topic drift           -      -
SKT (Kim et al., 2020)           Sequential latent knowledge selection    26.8   18.3
DiffKS                           Difference aware                         25.6   20.1
DukeNet (Meng et al., 2020)      Knowledge tracking & shifting            26.4   19.6
SKT+                             SKT + posterior signal + distillation    27.7   19.4
MIKe (Meng et al., 2021)         Initiative aware                         28.4   21.5
SKT-KG (Zhan et al., 2021b)      Knowledge transition with CRF            26     -
K-Mine* (Lotfi et al., 2021)     Posterior signal via generation          27.9   27.0
CoLV (Zhan et al., 2021a)        Collaborative latent spaces              30.1   18.9
DIALKI                           Dialogue-knowledge contextualization     32.9   35.5
DSG (Li et al., 2022)            Document semantic graph                  29.4   30.8
TAKE                             Modeling topic shift                     28.8   25.8
CorefDiffs† (Xu et al., 2022c)   Co-referential and differential flow     42.4   41.4
GENKS (Sun et al., …)            …                                        …      …

With the incremental integration of the KS and RG tasks, knowledge grounding has gradually become a fine-grained process which can happen at the token level while decoding. RAG (Lewis et al., 2020b) and FiD (Izacard and Grave, 2020), both developed originally for abstractive QA, are examples of this approach, which leaves the final, fine-grained knowledge selection to the decoder.…”
Section: The Knowledge Selection Bottleneck
confidence: 99%
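To make the last point concrete, here is a hedged sketch of the Fusion-in-Decoder pattern the statement refers to: each passage is encoded independently together with the dialogue context, the encoder states are concatenated, and the decoder attends over all of them at once, so fine-grained knowledge selection happens implicitly while decoding. The T5 checkpoint, prompt format, and helper below are illustrative assumptions, not Izacard and Grave's released implementation.

```python
# Simplified Fusion-in-Decoder (FiD) sketch; re-implements the idea for clarity
# and is NOT the original FiD code. Assumes a HuggingFace T5 checkpoint.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tok = T5Tokenizer.from_pretrained("t5-small")  # assumed checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def fid_generate(context: str, passages: list[str], max_new_tokens: int = 40) -> str:
    # Encode each (context, passage) pair independently, as one batch.
    texts = [f"context: {context} passage: {p}" for p in passages]
    enc = tok(texts, return_tensors="pt", padding=True, truncation=True, max_length=256)
    with torch.no_grad():
        hidden = model.encoder(input_ids=enc.input_ids,
                               attention_mask=enc.attention_mask).last_hidden_state
    # Fuse: flatten the per-passage states into one long sequence for the decoder.
    fused = hidden.reshape(1, -1, hidden.size(-1))   # (1, n_passages * seq_len, dim)
    mask = enc.attention_mask.reshape(1, -1)         # matching cross-attention mask
    out = model.generate(encoder_outputs=BaseModelOutput(last_hidden_state=fused),
                         attention_mask=mask, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)
```

Because the decoder's cross-attention spans all passages at once, no hard selection step is needed; grounding is decided token by token, which is exactly the shift the citing passage describes.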