Findings of the Association for Computational Linguistics: ACL 2023
DOI: 10.18653/v1/2023.findings-acl.7

Abstractive Text Summarization Using the BRIO Training Paradigm

Abstract: Sentences produced by abstractive summarization models may be coherent and comprehensive, but they lack control and rely heavily on reference summaries. The BRIO training paradigm assumes a non-deterministic distribution to reduce the model's dependence on reference summaries and improve model performance during inference. This paper presents a straightforward but effective technique to improve abstractive summaries by fine-tuning pre-trained language models and training them with the BRIO paradigm. We build …
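Since the abstract centers on the BRIO training paradigm, a brief illustration of its core idea may help: BRIO augments standard maximum-likelihood training with a contrastive ranking loss over model-generated candidate summaries, so that the probabilities the model assigns to candidates correlate with their quality rather than tracking a single reference. The sketch below is a minimal, assumption-laden rendering of that ranking term, not the authors' implementation: the function name `brio_ranking_loss`, the default margin, and the assumption that candidates arrive pre-sorted by ROUGE against the reference are all illustrative.

```python
import torch
import torch.nn.functional as F

def brio_ranking_loss(cand_log_probs: torch.Tensor, margin: float = 0.001) -> torch.Tensor:
    """Pairwise margin ranking loss over candidate summaries.

    cand_log_probs: length-normalized log-probabilities the model assigns
    to each candidate summary, sorted so index 0 is the candidate with the
    highest ROUGE against the reference (the sorting step is assumed to
    happen upstream and is not shown here).
    """
    loss = cand_log_probs.new_zeros(())
    n = cand_log_probs.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # A better-ranked candidate (i) should out-score a worse-ranked
            # one (j) by a margin that grows with the rank gap.
            gap = margin * (j - i)
            loss = loss + F.relu(gap - (cand_log_probs[i] - cand_log_probs[j]))
    return loss
```

In BRIO-style training this ranking term is typically added to the usual cross-entropy (reference-likelihood) loss with a weighting coefficient, so the model retains its generation ability while learning to order its own candidates by quality.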

Cited by 3 publications
References 18 publications