Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.476

Inference Time Style Control for Summarization

Abstract: How to generate summaries of different styles without requiring corpora in the target styles, or training separate models? We present two novel methods that can be deployed during summary decoding on any pre-trained Transformer-based summarization model. (1) Decoder state adjustment instantly modifies decoder final states with externally trained style scorers, to iteratively refine the output against a target style. (2) Word unit prediction constrains the word usage to impose strong lexical control during generation.
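The abstract names the two decoding-time mechanisms without detailing them. Below is a minimal PyTorch sketch of the general idea, assuming a style scorer that maps a decoder state to a target-style log-score and a language-model head that projects states to vocabulary logits; all names here (adjust_decoder_state, constrain_to_word_units, the step sizes) are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn


def adjust_decoder_state(hidden, style_scorer, lm_head, steps=3, step_size=0.02):
    """Sketch of decoder state adjustment: nudge the decoder final states
    `hidden` (batch, d_model) by gradient ascent on an external style scorer,
    then project the refined states to next-token logits."""
    hidden = hidden.detach().clone().requires_grad_(True)
    for _ in range(steps):
        score = style_scorer(hidden).sum()          # target-style log-score
        (grad,) = torch.autograd.grad(score, hidden)
        hidden = (hidden + step_size * grad).detach().requires_grad_(True)
    return lm_head(hidden.detach())                 # adjusted next-token logits


def constrain_to_word_units(logits, allowed_ids):
    """Sketch of word unit prediction as hard lexical control: mask every
    token outside the predicted allowed set so it cannot be generated."""
    mask = torch.full_like(logits, float("-inf"))
    mask[:, allowed_ids] = 0.0
    return logits + mask


if __name__ == "__main__":
    d_model, vocab = 16, 100
    style_scorer = nn.Linear(d_model, 1)            # stand-in style scorer
    lm_head = nn.Linear(d_model, vocab)             # stand-in vocab projection
    h = torch.randn(2, d_model)                     # decoder final states
    logits = adjust_decoder_state(h, style_scorer, lm_head)
    logits = constrain_to_word_units(logits, allowed_ids=[3, 7, 42])
    next_tokens = logits.argmax(dim=-1)             # greedy pick within allowed set
```

In this reading, the gradient steps play the role of the iterative refinement in (1), while the additive mask realizes the strong lexical control in (2) by making disallowed tokens unselectable at decoding time.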

Cited by 10 publications (5 citation statements)
References 13 publications
“…In doing so, we also introduce an effective method to create synthetic datasets for aspect-guided opinion summarization. Our work also relates to approaches which attempt to control summarization output based on length (Kikuchi et al., 2016), content (Fan et al., 2018), style (Cao and Wang, 2021), or textual queries (Dang, 2006). Although we focus solely on aspect, our method is general and could be used to adjust additional properties of a summary such as sentiment (e.g., positive vs. negative) or style (e.g., formal vs. colloquial).…”
Section: Related Work
confidence: 99%
“…Diverse text generation has been studied in previous work (Yu et al., 2017), including in dialogue (Li et al., 2016), story generation (Fan et al., 2019), and particularly paraphrasing (Iyyer et al., 2018; Goyal and Durrett, 2020). Our method can also diversify content coverage (Gehrmann et al., 2018) and word choice (Cao and Wang, 2021).…”
Section: Related Work
confidence: 99%
“…Evaluating Alignment Algorithm. We search for hyperparameters on the Basil dataset (Fan et al., 2019) and test the algorithm on the Allsides dataset collected in Cao and Wang (2021). The Allsides dataset consists of manually aligned news articles from 251 media outlets.…”
Section: Appendix A BigNews Cleaning Steps
confidence: 99%