2024
DOI: 10.1162/tacl_a_00695
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization

George Chrysostomou,
Zhixue Zhao,
Miles Williams
et al.

Abstract: Despite the remarkable performance of generative large language models (LLMs) on abstractive summarization, they face two significant challenges: their considerable size and tendency to hallucinate. Hallucinations are concerning because they erode reliability and raise safety issues. Pruning is a technique that reduces model size by removing redundant weights, enabling more efficient sparse inference. Pruned models yield downstream task performance comparable to the original, making them ideal alternatives whe…
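The abstract describes pruning as removing redundant weights to shrink the model. As a minimal sketch of one common variant, unstructured magnitude pruning (an illustrative assumption; the paper's specific pruning methods are not detailed in this excerpt), the smallest-magnitude weights of a layer are zeroed to reach a target sparsity:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value, producing a sparse copy of the weight matrix.

    A toy illustration only: real LLM pruning methods score weights
    more carefully (e.g. using activations) and operate layer-wise
    at scale. Ties at the threshold may prune slightly more weights.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune 50% of a small weight matrix.
w = np.array([[0.1, -0.9], [0.5, -0.05]])
sparse_w = magnitude_prune(w, 0.5)  # zeros out 0.1 and -0.05
```

The pruned matrix keeps the large weights (-0.9 and 0.5) and replaces the two smallest-magnitude entries with zeros, which sparse inference kernels can then skip.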

Cited by 1 publication
References 52 publications