Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.301

Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization

Abstract: Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined unlearnable representations for them. They also considered date selection and event detection as two independent tasks, which makes it impossible to integrate their advantages and obtain a globally optimal summary. In this paper, we present a joint learning-based heterogeneous graph attention network for TLS (HeterTLS), in which date selection and event detection ar…

Cited by 6 publications (4 citation statements)
References 22 publications
“…This improved visual and text representation in the medical report generation task. Furthermore, a 3D shared subspace was also explored for representation improvement [19].…”
Section: Review Literature
confidence: 99%
“…Graph-Based Models: Although graphs are commonly used to boost text summarization (Wu et al. 2021b; You et al. 2022; Song and King 2022), only a handful of models have been proposed that use graphs to encode the documents in abstractive MDS (Li et al. 2020; Jin, Wang, and Wan 2020; Li and Zhuge 2021; Cui and Hu 2021). Most of these models leverage only homogeneous graphs, as they do not consider different edge types.…”
Section: Related Work
confidence: 99%
“…The authors would like to gratefully acknowledge the reviewers for their time and valuable comments. This paper is an extended version of the paper (You et al. 2022) accepted for publication at the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022). This extended version contains the following changes:…”
Section: Acknowledgement
confidence: 99%