Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.320
Retrieval Augmentation Reduces Hallucination in Conversation

Abstract: Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue models often suffer from factual incorrectness and hallucination of knowledge (Roller et al., 2021). In this work we explore the use of neural-retrieval-in-the-loop architectures, recently shown to be effective in open-domain QA (Lewis et al., 2020b; Izacard and Grave, 2021b), for knowledge-grounded dialogue, a task that is arguably more challenging as it requires querying based on complex multi-turn dialogue context a…
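As a rough illustration of the retrieval-in-the-loop setup the abstract refers to, the sketch below wires together the publicly released RAG checkpoint of Lewis et al. (2020b) through the Hugging Face transformers library. Flattening the multi-turn dialogue into a single query string is a simplifying assumption made here for brevity; forming good retrieval queries from multi-turn context is precisely the challenge the paper investigates.

```python
# Minimal retrieval-augmented generation sketch (standard Hugging Face RAG
# example, not the paper's own retriever/generator variants).
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# use_dummy_dataset avoids downloading the full Wikipedia index for this demo.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# Assumption for illustration: the multi-turn dialogue context is flattened
# into one query string before retrieval.
dialogue = [
    "I love classic science fiction.",
    "Who wrote the Foundation series?",
]
query = " ".join(dialogue)

inputs = tokenizer(query, return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

The generator conditions on retrieved passages rather than on parametric memory alone, which is the mechanism the paper evaluates as a way to reduce hallucinated facts in dialogue.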

Cited by 173 publications (153 citation statements) · References 30 publications
“…First, decoders could attend to the wrong part of the encoded input source [172]. This leads the generated output to contain mixed-up facts between two similar entities [40, 158]. Second, the design of the decoding strategy itself can contribute to hallucinations.…”
Section: Erroneous Decoding (mentioning, confidence: 99%)
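To make the second point concrete, here is a minimal sketch of how decoding strategy alone changes output: the same model and prompt, decoded greedily versus with high-temperature sampling. The GPT-2 checkpoint is used purely as an illustration and is not a model from the cited works.

```python
# Same model, same prompt: only the decoding strategy differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The capital city of Australia is", return_tensors="pt")

# Greedy decoding: deterministic, picks the highest-probability token.
greedy = model.generate(**inputs, max_new_tokens=10, do_sample=False)
# High-temperature sampling: flattens the distribution, so lower-probability
# (and potentially ungrounded) continuations become more likely.
sampled = model.generate(
    **inputs, max_new_tokens=10, do_sample=True, temperature=1.5
)

print("greedy :", tok.decode(greedy[0], skip_special_tokens=True))
print("sampled:", tok.decode(sampled[0], skip_special_tokens=True))
```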
“…For instance, Wang et al. [184] propose PARENT-T, which simplifies PARENT by only using table content as the reference. Similarly, Knowledge F1 [158], a variant of unigram F1, is proposed for knowledge-grounded dialogue tasks to measure the overlap between the model's generation and the knowledge used to ground the dialogue during dataset collection.…”
Section: Statistical Metric (mentioning, confidence: 99%)
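A minimal sketch of that metric follows: standard unigram F1, but scored against the grounding knowledge rather than a gold response. Whitespace tokenization and lowercasing are simplifying assumptions here; the published implementation normalizes text before computing overlap.

```python
# Knowledge F1 sketch: unigram F1 between a generated response and the
# knowledge passage used to ground the dialogue turn.
from collections import Counter

def unigram_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Token-level overlap, counting each shared token at most min(count) times.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# For Knowledge F1, the reference is the grounding knowledge, not a response.
knowledge = "The Foundation series was written by Isaac Asimov."
response = "It was written by Isaac Asimov."
print(f"Knowledge F1: {unigram_f1(response, knowledge):.3f}")
```

A high score indicates the response reuses the grounding knowledge; a low score on an otherwise fluent response is a signal the model may be hallucinating.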