Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/P19-1628
Sentence Centrality Revisited for Unsupervised Summarization

Abstract: Single document summarization has enjoyed renewed interest in recent years thanks to the popularity of neural network models and the availability of large-scale datasets. In this paper we develop an unsupervised approach arguing that it is unrealistic to expect large-scale and high-quality training data to be available or created for different types of summaries, domains, or languages. We revisit a popular graph-based ranking algorithm and modify how node (aka sentence) centrality is computed in two ways: (a) …

Cited by 149 publications (157 citation statements) | References 33 publications
“…(2) Unsupervised extractive systems: TextRank (Mihalcea and Tarau, 2004), Lead-X. (3) Supervised abstractive and extractive systems (models trained with ground-truth summaries): PACSUM (Zheng and Lapata, 2019), PGNet (See et al., 2017), REFRESH (Narayan et al., 2018) and SUMO (Liu et al., 2019b). TED is unsupervised abstractive and therefore not directly comparable with supervised baselines.…”
Section: Baseline and Metrics (mentioning)
confidence: 99%
“…The centrality of a node (sentence) is computed by PageRank (Brin and Page, 1998) to decide whether a sentence should be included in the final summary. Zheng and Lapata (2019) advance upon TextRank by encoding sentences with BERT representations (Devlin et al., 2018) to compute pairwise similarities, building graphs with directed edges determined by the relative positions of sentences.…”
Section: Introduction (mentioning)
confidence: 99%
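To make the graph-based ranking in this excerpt concrete: in TextRank-style systems, sentences are nodes, pairwise similarities are edge weights, and PageRank scores the nodes. The sketch below is a minimal illustration, not the authors' released code; the similarity matrix `sim` is assumed to be precomputed (e.g., cosine similarity between sentence embeddings), and the damping factor and tolerance are conventional PageRank defaults rather than values from any of the cited papers.

```python
import numpy as np

def pagerank_centrality(sim, d=0.85, tol=1e-6, max_iter=100):
    """Score sentences by PageRank over a pairwise similarity matrix.

    sim: (n, n) nonnegative similarity matrix (self-similarities on the
    diagonal are ignored). Returns one centrality score per sentence.
    """
    n = sim.shape[0]
    W = sim.copy().astype(float)
    np.fill_diagonal(W, 0.0)
    # Column-normalize so each node distributes its outgoing weight.
    col_sums = W.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0           # avoid division by zero
    P = W / col_sums
    r = np.full(n, 1.0 / n)                 # uniform initial scores
    for _ in range(max_iter):
        r_next = (1 - d) / n + d * (P @ r)  # damped power iteration
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r
```

A summary is then formed by taking the top-scoring sentences up to a length budget.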
“…Then PageRank (Page et al., 1999) is employed to determine the final ranking scores for sentences. Zheng and Lapata (2019) build a directed graph by utilizing BERT (Devlin et al., 2019) to compute sentence similarities. The importance score of a sentence is the weighted sum of all its outgoing edges, where the weights for edges between the current sentence and preceding sentences are negative.…”
Section: Related Work (mentioning)
confidence: 99%
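The position-aware scoring this excerpt describes can be written as centrality(s_i) = λ1 Σ_{j&lt;i} e_ij + λ2 Σ_{j&gt;i} e_ij, where e_ij is the similarity between sentences i and j and λ1 is negative so that edges pointing back to earlier sentences are penalized. Below is a minimal sketch under that reading; the λ values are illustrative placeholders rather than the tuned values from the paper, and the similarity matrix `sim` is again assumed precomputed.

```python
import numpy as np

def directed_centrality(sim, lambda_back=-0.2, lambda_fwd=1.2):
    """Position-aware centrality: similarity to preceding sentences
    counts negatively, similarity to following sentences positively.

    sim: (n, n) pairwise sentence-similarity matrix.
    lambda_back and lambda_fwd are illustrative; the cited work tunes
    such weights (with the backward weight negative) on validation data.
    """
    n = sim.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        back = sim[i, :i].sum()        # edges to sentences j < i
        fwd = sim[i, i + 1:].sum()     # edges to sentences j > i
        scores[i] = lambda_back * back + lambda_fwd * fwd
    return scores
```

Because the first sentences of a document have few preceding neighbors to penalize them, they naturally score high, which is exactly the lead bias the next excerpt comments on.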
“…Thus, leading sentences tend to obtain high scores. Unlike Zheng and Lapata (2019), our model does not explicitly encode sentence positions and is therefore less dependent on them (as shown in our experiments).…”
Section: Related Work (mentioning)
confidence: 99%