Proceedings of the Workshop on New Frontiers in Summarization 2017
DOI: 10.18653/v1/w17-4507
Combining Graph Degeneracy and Submodularity for Unsupervised Extractive Summarization

Abstract: We present a fully unsupervised, extractive text summarization system that leverages a submodularity framework introduced by past research. The framework allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees. Our main contribution is the novel coverage reward term of the objective function optimized by the greedy algorithm. This component builds on the graph-of-words representation of text and the k-core decomposition algorithm to assign meaningful scores to word…
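The abstract describes two ingredients: word scores derived from the k-core decomposition of a graph-of-words, and greedy maximization of a monotone submodular coverage objective. The sketch below illustrates how these pieces fit together; it is not the authors' implementation, and all function names, the window size, and the toy sentences are assumptions made for this example.

```python
# Illustrative sketch (not the paper's code): score words by their core
# number in a graph-of-words, then greedily pick sentences whose newly
# covered words yield the largest marginal gain. Because each word's score
# counts only the first time it is covered, the objective is monotone
# submodular and greedy selection keeps a (1 - 1/e) approximation guarantee.
import networkx as nx


def graph_of_words(tokens, window=4):
    """Link words that co-occur within a sliding window (undirected, unweighted)."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if w != tokens[j]:  # core_number rejects self-loops
                g.add_edge(w, tokens[j])
    return g


def core_scores(tokens, window=4):
    """Assign each unique word its core number (its depth in the k-core hierarchy)."""
    return nx.core_number(graph_of_words(tokens, window))


def greedy_summary(sentences, budget=2, window=4):
    """Greedily add the sentence with the best marginal coverage gain."""
    all_tokens = [w for s in sentences for w in s.split()]
    scores = core_scores(all_tokens, window)
    covered, summary = set(), []
    remaining = list(range(len(sentences)))
    for _ in range(budget):
        # Marginal gain: total score of words not yet covered by the summary.
        gains = {i: sum(scores[w] for w in set(sentences[i].split()) - covered)
                 for i in remaining}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break
        summary.append(sentences[best])
        covered |= set(sentences[best].split())
        remaining.remove(best)
    return summary


docs = ["graph based summarization uses graph degeneracy",
        "submodular maximization selects sentences greedily",
        "graph degeneracy scores words via k core decomposition"]
print(greedy_summary(docs, budget=2))
```

The paper's actual coverage reward term is more elaborate than this toy gain function; the sketch only shows why k-core scores plug naturally into a submodular greedy loop.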

Cited by 26 publications (17 citation statements)
References 22 publications
“…Traditional extractive summarization methods are mostly unsupervised (Radev et al., 2000; Lin and Hovy, 2002; Wan, 2008; Wan and Yang, 2008; Hirao et al., 2013; Parveen et al., 2015; Yin and Pei, 2015; Li et al., 2017; Zheng and Lapata, 2019), utilizing a notion of sentence importance based on n-gram overlap with other sentences and frequency information (Nenkova and Vanderwende, 2005), relying on graph-based methods for sentence ranking (Erkan and Radev, 2004; Mihalcea and Tarau, 2004), or performing keyword extraction combined with submodular maximization (Tixier et al., 2017; Shang et al., 2018).…”
Section: Extractive Summarization
confidence: 99%
“…"..." refers to the omission of context sentences due to space limitation. extractive summarization (Radev et al., 2000; Mihalcea and Tarau, 2004; Erkan and Radev, 2004; Schluter and Søgaard, 2015; Tixier et al., 2017; Zheng and Lapata, 2019; Xu et al., 2020; Dong et al., 2020). Compared with supervised ones, unsupervised methods 1).…”
Section: Introduction
confidence: 99%
“…The two datasets annotated with extractive labels will be made public. Traditional extractive summarization methods are mostly based on explicit surface features, relying on graph-based methods (Mihalcea and Tarau, 2004), or on submodular maximization (Tixier et al., 2017). Benefiting from the success of neural sequence models in other NLP tasks, Cheng and Lapata (2016) propose a novel approach to extractive summarization based on neural networks and continuous sentence features, which outperforms traditional methods on the DailyMail dataset.…”
Section: Introduction
confidence: 99%