Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.144

Biased TextRank: Unsupervised Graph-Based Content Extraction

Abstract: We introduce Biased TextRank, a graph-based content extraction method inspired by the popular TextRank algorithm that ranks text spans according to their importance for language processing tasks and according to their relevance to an input "focus." Biased TextRank enables focused content extraction for text by modifying the random restarts in the execution of TextRank. The random restart probabilities are assigned based on the relevance of the graph nodes to the focus of the task. We present two applications o…
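The abstract's description maps to a short implementation. The following is a minimal sketch, not the authors' released code: it assumes sentence-level nodes, TF-IDF cosine similarity for both the edge weights and the focus bias, and uses networkx's personalized PageRank as the random-walk engine; the function name biased_textrank and all parameter choices are illustrative.

```python
# Minimal sketch of focus-biased sentence ranking in the spirit of Biased TextRank.
# Assumptions (not from the paper): TF-IDF cosine similarity for edge weights and
# for the bias term; networkx's personalized PageRank as the random-walk engine.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def biased_textrank(sentences, focus, damping=0.85):
    # Embed the sentences and the focus query in the same TF-IDF space.
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(sentences + [focus])
    sent_vecs, focus_vec = matrix[:-1], matrix[-1]

    # Graph: one node per sentence, edges weighted by pairwise similarity.
    sim = cosine_similarity(sent_vecs)
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > 0:
                graph.add_edge(i, j, weight=float(sim[i, j]))

    # Bias: random-restart probabilities proportional to similarity with the focus.
    bias = cosine_similarity(sent_vecs, focus_vec).ravel()
    total = float(bias.sum())
    personalization = (
        {i: float(b) / total for i, b in enumerate(bias)} if total > 0 else None
    )

    # Personalized PageRank: restarts follow the bias instead of a uniform distribution.
    scores = nx.pagerank(graph, alpha=damping, personalization=personalization, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)
```

With personalization=None the restarts are uniform and the ranking reduces to plain TextRank; the bias term is the only change.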

Cited by 23 publications (27 citation statements) | References 27 publications

Citation statements (ordered by relevance):
“…• Biased TRank: Similar to the above described baseline, but instead of TextRank we have used Biased TextRank (Kazemi et al, 2020) to rank the extracted sentence level arguments directly.…”
Section: Results
Confidence: 99%
“…Given a list of argument-mentions (arg_1, arg_2, ..., arg_m); m being the total number of mentions in the list of type t, our objective is to rank the mentions based on which argument instance imparts greater knowledge about the document's event. To compare the informativeness of the arguments, we rank the arguments using the unsupervised Biased TextRank (Kazemi et al, 2020). Biased TextRank is formally defined as:…”
Section: Inference
Confidence: 99%
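The excerpt above truncates the paper's formal definition. For orientation only, and as an assumption rather than a quotation, biased random-walk ranking of this kind is typically written as a personalized PageRank recurrence over the text graph:

$$
S(v_i) = (1 - d)\,\mathrm{bias}(v_i) + d \sum_{v_j \in \mathrm{In}(v_i)} \frac{w_{ji}}{\sum_{v_k \in \mathrm{Out}(v_j)} w_{jk}}\, S(v_j)
$$

where bias(v_i) is the node's normalized similarity to the focus (here, the argument-ranking query), w_{ji} are the similarity edge weights, and d is the damping factor; with a uniform bias this reduces to the original TextRank score.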
“…In order to enrich the given document efficiently with the document-relevant text, such text should contain the document's key context which can appear in the summarized sentence. Earlier, Erkan and Radev (2004) and Mihalcea and Tarau (2004) proposed unsupervised models of extracting key sentences, which are adopted in various recent work for their robustness (Nikolov and Hahnloser, 2020a; Zhang et al, 2020b; Kazemi et al, 2020). In contrast, an abstractive approach aims at generating summarized sentences containing novel terms that might not exist in the given document (Zhang et al, 2020a; Yang et al, 2020).…”
Section: Document-Relevant Text Generation
Confidence: 99%
“…Introduced by Kazemi et al (2020) and based on the TextRank algorithm (Mihalcea and Tarau, 2004), Biased TextRank is a targeted content extraction algorithm with a range of applications in keyword and sentence extraction. The TextRank algorithm ranks text segments for their importance by running a random walk algorithm on a graph built by including a node for each text segment (e.g., sentence), and drawing weighted edges by linking the text segments using a measure of similarity.…”
Section: Extractive: Biased TextRank
Confidence: 99%
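The graph construction this excerpt describes (one node per text segment, similarity-weighted edges, random-walk scoring) is what the sketch after the abstract builds. A hypothetical usage example follows; the passage and focus question are invented purely for illustration:

```python
# Hypothetical usage of the biased_textrank sketch shown after the abstract.
passage = [
    "The city council approved the new transit budget on Monday.",
    "Local bakeries reported record sales during the street festival.",
    "The budget allocates funds for electric buses and new bike lanes.",
    "Critics argue the plan underestimates maintenance costs.",
]
focus = "How will the transit budget be spent?"

ranking = biased_textrank(passage, focus)
print(passage[ranking[0]])  # budget-related sentences should rank above the bakery one
```

If the focus shares no vocabulary with the passage, the bias vector is zero and the sketch falls back to the unbiased ranking, i.e., the behaviour of the plain TextRank baseline.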
“…To contribute to this line of work, our paper explores two approaches to generate supporting explanations to assist with the evaluation of news. First, we investigate how an extractive method based on Biased TextRank (Kazemi et al, 2020) can be used to generate explanations. Second, we explore an abstractive method based on GPT-2, a large generative language model (Radford et al, 2019).…”
Section: Introduction
Confidence: 99%