Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.456
Hooks in the Headline: Learning to Generate Headlines with Controlled Styles

Abstract: Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by comb…

Cited by 38 publications (43 citation statements) · References 35 publications
“…Summarizing documents into different styles is mainly studied on news articles, where one either appends style codes as extra embeddings to the encoder (Fan et al., 2018) or connects separate decoders with a shared encoder (Zhang et al., 2018). Similar to our work, Jin et al. (2020) leverage large pre-trained seq2seq models, but they modify the model architecture by adding extra style-specific parameters. Nonetheless, existing work requires training new summarizers for different target styles or modifying the model structure.…”
Section: Related Work
confidence: 99%
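The two encoder-side recipes quoted above are easy to picture in code. Below is a minimal PyTorch sketch of the first one: a style code is embedded and prepended to the token embeddings before a shared Transformer encoder. The class name, vocabulary size, and style count are illustrative assumptions, not code from the cited papers.

```python
# Minimal sketch (assumed names/sizes): the "style codes as extra
# embeddings" recipe. A learned style vector is prepended to the token
# embeddings before a standard Transformer encoder.
import torch
import torch.nn as nn

class StyleAwareEncoder(nn.Module):  # hypothetical name
    def __init__(self, vocab_size=32000, num_styles=3, d_model=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.style_emb = nn.Embedding(num_styles, d_model)  # one code per style
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)

    def forward(self, token_ids, style_id):
        # token_ids: (batch, seq_len); style_id: (batch,)
        x = self.tok_emb(token_ids)                # (batch, seq, d_model)
        s = self.style_emb(style_id).unsqueeze(1)  # (batch, 1, d_model)
        x = torch.cat([s, x], dim=1)               # prepend the style code
        return self.encoder(x)

enc = StyleAwareEncoder()
hidden = enc(torch.randint(0, 32000, (2, 16)), torch.tensor([0, 2]))
print(hidden.shape)  # torch.Size([2, 17, 512])
```

The alternative recipe (Zhang et al., 2018) would instead share an encoder like this one across several style-specific decoders, one per target style.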
“…Generating summaries with different language styles can benefit readers of varying literacy levels (Chandrasekaran et al., 2020) or interests (Jin et al., 2020). Significant progress has been made in abstractive summarization with large pre-trained Transformers (Dong et al., 2019; Lewis et al., 2020; Zhang et al., 2019; Raffel et al., 2019; Song et al., 2019).…”
Section: Introduction
confidence: 99%
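As a concrete reference point for the pre-trained Transformers mentioned above, here is a minimal sketch of abstractive headline/summary generation with BART (Lewis et al., 2020) via the Hugging Face transformers API. The checkpoint name, input text, and decoding hyperparameters are assumptions, not settings from the cited work.

```python
# Minimal sketch: abstractive summarization with a pre-trained seq2seq
# Transformer (BART). Checkpoint and generation settings are assumptions.
from transformers import BartTokenizer, BartForConditionalGeneration

name = "facebook/bart-large-cnn"  # a summarization-finetuned checkpoint
tokenizer = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)

article = "Scientists report a new battery chemistry that charges in minutes..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
ids = model.generate(**inputs, num_beams=4, max_length=32, early_stopping=True)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```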
“…Current benchmarks for style transfer focus on high-level style definitions such as transfer of sentiment (Shen et al., 2017; Lample et al., 2019), politeness (Madaan et al., 2020), formality (Rao and Tetreault, 2018; Krishna et al., 2020), writing styles (Jhamtani et al., 2017; Syed et al., 2020; Jin et al., 2020), and some other styles (Kang and Hovy, 2019). However, these focus only on high-level styles, unlike STYLEPTB.…”
Section: Related Work
confidence: 99%
“…2) SEQ2SEQ: A Seq2Seq model (Sutskever et al., 2014) with attention, trained using MLE (Jin et al., 2020).…”
Section: Baseline Models
confidence: 99%
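For readers unfamiliar with the shorthand, "trained using MLE" means maximizing the log-likelihood of the reference output token by token under teacher forcing, which is equivalent to minimizing cross-entropy. A minimal sketch of that loss computation follows; the random logits and shapes are assumptions standing in for any attention-based decoder's output.

```python
# Minimal sketch: the MLE objective for a seq2seq model is token-level
# cross-entropy against the reference under teacher forcing.
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 10, 32000
# Stand-in for decoder logits; a real model would produce these with attention.
logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
targets = torch.randint(0, vocab, (batch, seq_len))  # reference headline ids

# Negative log-likelihood averaged over all target tokens
# (in practice, pass ignore_index to mask padding positions).
loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward()
print(float(loss))
```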
“…News Headlines in NLP. Headlines are popular as a challenging source for generation tasks such as summarization (Rush et al., 2015), style transfer (Jin et al., 2020), and style-preserving translation (Joshi et al., 2013). Headlines have also been leveraged to detect political bias (Gangula et al., 2019) as well as clickbait and fake-news phenomena (Bourgonje et al., 2017).…”
Section: Related Work
confidence: 99%