Findings of the Association for Computational Linguistics: ACL 2023
DOI: 10.18653/v1/2023.findings-acl.275

RHO: Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding

Cited by 9 publications (3 citation statements)
References: 0 publications
“…Hallucination is a prevalent problem in generative AI, especially in contexts where the accuracy of information is critical, such as misinformation and disinformation on social media. Much research in recent years has been dedicated to its detection [178] and mitigation [179], [180]. Unexpectedly, an alternate perspective has emerged that frames AI hallucination as non-harmful in special cases [160], as even hallucinating LLMs can be used collaboratively as partners to provide research leads, support creative writing, and alleviate writer's block.…”
Section: B. Mitigating Hallucination (mentioning)
confidence: 99%
“…Measuring Hallucinations. Hallucination by language models, i.e., the generation of content that is either non-factual or not supported by evidence, has been studied and reported in various fields (Ji et al., 2023b; Bang et al., 2023), such as machine translation (Raunak et al., 2021), abstractive summarization (Maynez et al., 2020; Lee et al., 2022), open-domain dialogue (Ji et al., 2023c), question answering (Lin et al., 2022), and image captioning (Rohrbach et al., 2018). Recently developed LLMs such as Bing Chat or perplexity.ai even serve as generative search engines, although their seemingly fluent and informative responses are not always verifiable (Liu et al., 2023b).…”
Section: Related Work (mentioning)
confidence: 99%
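
To make the "not supported by evidence" notion in the snippet above concrete, here is a minimal sketch of a token-overlap grounding check. This is an illustration only, not the metric used in any of the cited works; the function names, stopword list, and example strings are all ad hoc assumptions.

```python
# Naive faithfulness proxy: what fraction of a generated response's
# content words appear in the grounding evidence. Illustrative only;
# not the metric of any work cited above.

import re

# Ad hoc stopword list (assumption, not from any cited paper).
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "to",
             "and", "or", "in", "on", "that", "it", "as", "for", "at"}

def content_words(text: str) -> set[str]:
    """Lowercase, split on non-letter characters, and drop stopwords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def support_score(response: str, evidence: str) -> float:
    """Fraction of the response's content words found in the evidence.

    1.0 means every content word is grounded; low scores flag
    potentially unsupported (hallucinated) content.
    """
    resp = content_words(response)
    if not resp:
        return 1.0  # an empty response asserts nothing
    return len(resp & content_words(evidence)) / len(resp)

if __name__ == "__main__":
    evidence = "RHO was presented at Findings of ACL 2023."
    print(support_score("RHO was presented at ACL 2023.", evidence))  # high
    print(support_score("RHO won the best paper award.", evidence))   # low
```

A real detector would use entailment or retrieval-based checks rather than lexical overlap, which misses paraphrases and rewards copying; the sketch only illustrates the grounded-vs-unsupported distinction.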
“…In contrast, GenExpan performs better, benefiting from the given contextual corpus and targeted retrieval-augmentation strategies. On the other hand, GPT-4 is prone to haphazardly generating non-existent entities (e.g., fake mobile phone brands), which is referred to as the hallucination problem in recent work [8, 45]. We are currently unable to solve this issue by simple output post-processing.…”
Section: Methods (mentioning)
confidence: 99%
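
For context on the "simple output post-processing" the snippet above says is insufficient, here is a hedged sketch of what such a filter might look like: discarding generated entities absent from a known-entity list. All names and data here are hypothetical, not from the cited paper.

```python
# Illustrative post-processing filter: keep only generated entities
# that appear in a trusted known-entity set. Hypothetical sketch; not
# the cited paper's method.

def filter_generated_entities(candidates: list[str],
                              known_entities: set[str]) -> list[str]:
    """Keep candidates found (case-insensitively) in the known set."""
    known = {e.lower() for e in known_entities}
    return [c for c in candidates if c.lower() in known]

if __name__ == "__main__":
    known = {"Samsung", "Xiaomi", "Huawei"}          # trusted entity list
    generated = ["Samsung", "Xiaomi", "Novaphone"]   # "Novaphone" is invented
    print(filter_generated_entities(generated, known))
    # -> ['Samsung', 'Xiaomi']
```

The sketch also shows why post-processing alone falls short: it only works when the entity universe is closed, so a genuinely new but valid entity outside the list would be wrongly discarded, while a fabricated entity that happens to match a known string would slip through.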