HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization

Preprint, 2021
DOI: 10.48550/arxiv.2110.06388

Abstract: To capture the semantic graph structure from raw text, most existing summarization approaches are built on GNNs with a pre-trained model. However, these methods suffer from cumbersome procedures and inefficient computations for long-text documents. To mitigate these issues, this paper proposes HETFORMER, a Transformer-based pre-trained model with multi-granularity sparse attentions for long-text extractive summarization. Specifically, we model different types of semantic nodes in raw text as a potential heter…
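The abstract describes multi-granularity sparse attention over heterogeneous semantic nodes (for example, tokens and sentences). As a rough, illustrative sketch of that general idea (not the paper's implementation), the snippet below builds a boolean attention mask that combines a token-level sliding window with global attention at sentence-start positions; the window size and the choice of global positions are assumptions for illustration only.

# Illustrative sketch only (not HETFORMER's code): one way to combine
# two attention granularities into a single sparse attention mask.
import numpy as np

def sparse_attention_mask(seq_len, sentence_starts, window=4):
    """Return a boolean (seq_len x seq_len) mask; True = attention allowed."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # Token granularity: each token attends to a local sliding window.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True

    # Sentence granularity (assumed): sentence-start tokens attend globally,
    # and every token can attend back to them (symmetric global pattern).
    for s in sentence_starts:
        mask[s, :] = True
        mask[:, s] = True

    return mask

if __name__ == "__main__":
    m = sparse_attention_mask(seq_len=16, sentence_starts=[0, 6, 11], window=2)
    print(m.astype(int))  # visualize the sparse pattern

Because attention is restricted to a fixed-width window plus a small set of global positions, the cost per layer grows roughly linearly with sequence length rather than quadratically, which is what makes this family of patterns attractive for long documents.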

Cited by 1 publication (1 citation statement)
References: 22 publications
Citing statement: “…HAHSum [49] incorporates a hierarchical attention mechanism with heterogeneous graph representations to refine the summarization process across multiple document levels.…” (mentioning; confidence: 99%)