Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.452
Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction

Abstract: Automatic sentence summarization produces a shorter version of a sentence, while preserving its most important information. A good summary is characterized by language fluency and high information overlap with the source sentence. We model these two aspects in an unsupervised objective function, consisting of language modeling and semantic similarity metrics. We search for a high-scoring summary by discrete optimization. Our proposed method achieves a new state-of-the-art for unsupervised sentence summarization…
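The abstract describes scoring candidate summaries with a fluency term and a semantic-similarity term, then searching for a high-scoring summary by discrete optimization over word selections. As a rough, hypothetical sketch only (not the paper's actual implementation), the idea can be illustrated with hill climbing over fixed-size word-selection masks; the `fluency` and `similarity` functions below are toy stand-ins for the language-model and semantic-similarity scores used in the paper.

```python
import random

def fluency(summary):
    # Toy stand-in for a language-model fluency score;
    # the paper uses an actual LM probability.
    return len(summary) / (1 + len(summary))

def similarity(summary, source):
    # Toy stand-in for semantic similarity: word-overlap ratio.
    overlap = len(set(summary) & set(source))
    return overlap / max(len(set(source)), 1)

def score(selected, source):
    summary = [w for i, w in enumerate(source) if i in selected]
    return fluency(summary) * similarity(summary, source)

def hill_climb(source, target_len, steps=200, seed=0):
    """Search over word selections of size target_len by swapping
    one selected position for one unselected position per step
    (the swap keeps the summary length fixed)."""
    rng = random.Random(seed)
    n = len(source)
    selected = set(rng.sample(range(n), target_len))
    best = score(selected, source)
    for _ in range(steps):
        i = rng.choice(sorted(selected))
        j = rng.choice([k for k in range(n) if k not in selected])
        cand = (selected - {i}) | {j}
        s = score(cand, source)
        if s >= best:
            selected, best = cand, s
    # Emit selected words in their original order.
    return [source[k] for k in sorted(selected)]

src = "the quick brown fox jumps over the lazy dog".split()
print(hill_climb(src, target_len=4))
```

The swap move (remove one selected word, add one unselected word) mirrors the word-level extraction setting the citing papers describe, where summary length is constrained up front rather than left to the model.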

Cited by 28 publications (31 citation statements)
References 40 publications
“…In this paper, we model text generation as a search algorithm, and design search objective and search actions specifically for text simplification. Concurrent work further shows the success of search-based unsupervised text generation for paraphrasing (Liu et al., 2020) and summarization (Schumann et al., 2020).…”
Section: Related Work (mentioning)
confidence: 88%
“…As pointed out by Guo et al. (2020) and Huang et al. (2021), a large number of samples become identical when researchers anonymize the entities in an utterance (Dong and Lapata, 2016). Schumann et al. (2020) identify a problem in the summarization task where previous benchmark settings do not properly enforce summary length, allowing "state-of-the-art" models to gain performance by generating overly lengthy summaries. These highlight the importance of properly benchmarking a task for NLP research.…”
Section: Related Work (mentioning)
confidence: 99%
“…Miao et al. (2019) propose to edit a word in a sentence sequentially by Metropolis-Hastings sampling. Schumann et al. (2020) search for the summarization of a given sentence by introducing a swapping operator on the selected words. Kumar et al. (2020) generate simplified candidate sentences by iteratively editing the given complex sentence using three simplification operations (i.e., lexical simplification, phrase extraction, deletion and reordering).…”
Section: Related Work (mentioning)
confidence: 99%