Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.236

Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization

Cited by 39 publications (41 citation statements: 0 supporting, 41 mentioning, 0 contrasting). References: 0 publications.
“…Second, it is important to explore generative capabilities with qualitative metrics (Figure 2 in the appendix illustrates retrieved text and answers generated by rag-end2end and rag-original). This could be aligned with research areas like measuring factual consistency (Kryściński et al., 2019; Cao et al., 2022) and hallucinations (Cao et al., 2022; Shuster et al., 2021; Nie et al., 2019) of generative language models. Future work could explore whether updating the retriever and document embeddings during the training phase could improve factual consistency and reduce hallucinations in final generations.…”
Section: Discussion (mentioning)
confidence: 66%
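The statement above treats factual consistency and hallucination as measurable properties of generated text. As a rough, illustrative sketch only (not the metric of Kryściński et al., 2019 or Cao et al., 2022, and not part of the indexed paper), an entailment-based consistency check could be prototyped with an off-the-shelf NLI model; the model choice, input handling, and scoring below are assumptions made for this example.

```python
# Illustrative sketch only: use an off-the-shelf NLI model as a rough
# factual-consistency check between a source document and a summary sentence.
# Model name and scoring scheme are assumptions, not the cited papers' metrics.
# Requires a recent transformers version (top_k=None returns all label scores).
from transformers import pipeline

# roberta-large-mnli predicts CONTRADICTION / NEUTRAL / ENTAILMENT
# for a (premise, hypothesis) pair.
nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_score(source: str, summary_sentence: str) -> float:
    """Probability that the source (premise) entails the summary sentence (hypothesis)."""
    scores = nli({"text": source, "text_pair": summary_sentence}, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

source = "The company reported a 12% rise in quarterly revenue, driven by cloud sales."
faithful = "Quarterly revenue rose by 12%."
hallucinated = "Quarterly revenue fell by 12%."

print(entailment_score(source, faithful))      # expected: high (supported by the source)
print(entailment_score(source, hallucinated))  # expected: low (contradicts the source)
```

In such a setup, a high entailment probability suggests the summary sentence is supported by the source, while a low one flags a candidate hallucination that would still need the kind of world-knowledge check discussed by Cao et al. (2022).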
“…These three types of hallucinations are well documented in the literature studying generative model hallucinations [18,32,41,67]. We add to this previous literature by showing how such hallucinations occur in this reading context.…”
Section: E Factuality in Generated Summaries (mentioning)
confidence: 56%
“…The extent and kind of hallucinations in our summaries can tell us what risk such hallucinations pose and how much effort an expert must invest to make the summaries publishable. For example, if the majority of hallucinations are new but correct information (a common type of hallucination [18]), then they pose less of a risk and require less expert knowledge to fix than if the hallucinations instead reverse the direction of a found effect (another type of hallucination [32]). We generated summaries with no restriction on hallucinated content.…”
Section: E Factuality in Generated Summaries (mentioning)
confidence: 99%
“…• Summaries suffer from hallucinations, i.e., information leaked into the output from outside the source text. However, Cao et al. (2022) find that much hallucinated content is in fact consistent with world knowledge.…”
Section: Challenges of Abstractive Summarization (mentioning)
confidence: 75%