2022
DOI: 10.1613/jair.1.13267

Two-phase Multi-document Event Summarization on Core Event Graphs

Abstract: Succinct event description based on multiple documents is critical to news systems as well as search engines. Unlike existing summarization or event tasks, Multi-document Event Summarization (MES) aims at query-level event sequence generation, which imposes extra constraints on event expression and conciseness. Identifying and summarizing the key event from a set of related articles is a challenging task that has not been sufficiently studied, mainly because online articles exhibit characteristics of r…

Cited by 1 publication (1 citation statement) · References 33 publications
“…Traditional natural language generation methods rely on a large corpus, which is typically expensive to train. To address this issue, researchers have explored a new paradigm of pre-trained LMs, such as BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and BART (Lewis et al., 2020; Koto et al., 2020; Chen et al., 2022; Vougiouklis et al., 2020). By incorporating additional domain-specific codes (Keskar et al., 2019), such as sentiment labels (Dathathri et al., 2020) or attribute vectors (Yu, Yu, & Sagae, 2021), the goal is to modify the pre-trained LM with little fine-tuning cost.…”
Section: Conditional Text Generation
confidence: 99%
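As a concrete illustration of the control-code idea mentioned in the citation statement, below is a minimal sketch assuming the HuggingFace transformers library and an off-the-shelf GPT-2 checkpoint. The [POSITIVE] control token and the prompt are hypothetical placeholders, not taken from the cited papers; in CTRL-style training (Keskar et al., 2019), such codes are prepended during fine-tuning so the model learns to condition on them, and a base GPT-2 will not respect the code until it has been fine-tuned that way.

# Hypothetical sketch of control-code conditioning with a pre-trained LM,
# in the spirit of CTRL (Keskar et al., 2019). Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prepend a domain-specific "control code" to steer generation; after
# fine-tuning on code-prefixed text, the model associates the code with
# the target attribute (e.g., sentiment).
control_code = "[POSITIVE]"  # hypothetical sentiment label
prompt = f"{control_code} The new phone's battery life"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The appeal of this setup, as the quoted passage notes, is cost: only a light fine-tuning pass over code-prefixed data is needed to steer an already pre-trained LM, rather than training a conditional generator from scratch.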