Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-long.555
Conditional Generation of Temporally-ordered Event Sequences

Abstract: Models of narrative schema knowledge have proven useful for a range of event-related tasks, but they typically do not capture the temporal relationships between events. We propose a single model that addresses both temporal ordering, sorting given events into the order they occurred, and event infilling, predicting new events which fit into an existing temporally-ordered sequence. We use a BART-based conditional generation model that can capture both temporality and common event co-occurrence, meaning it can be…


Cited by 11 publications (13 citation statements)
References 49 publications
“…Our case study on salience-aware discourse parsing shows the advantage of combining event-level and sentence-level salience information. We plan to use these event chain patterns on other narrative understanding and generation tasks, such as constrained story generation (Peng et al., 2018), event script generation (Lyu et al., 2021), and implicit event prediction (Lin et al., 2021; Zhou et al., 2021).…”
Section: Discussion
confidence: 99%
“…Then, story Y is completed according to plot P. We use a phrase containing a predicate to represent an event in a sentence, because an informative representation helps models capture dependencies in the context (Lin et al., 2021). We apply dependency parsing to recognize the root and its object, and retain all the words between them.…”
Section: Model Architecture
confidence: 99%
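The extraction heuristic quoted above (take the dependency root and its object, and keep every word between them) can be sketched over a pre-parsed token list. This is a minimal illustration: the `extract_event_phrase` helper, the toy parse, and the use of the `dobj` label are assumptions for the sketch, not the cited authors' code.

```python
def extract_event_phrase(tokens):
    """Extract an event phrase from a dependency parse.

    tokens: list of (text, deprel, head_index) triples.
    Returns the span from the root predicate through its direct
    object, mirroring the heuristic described in the quote above.
    """
    # Locate the root predicate of the sentence.
    root = next(i for i, (_, dep, _) in enumerate(tokens) if dep == "ROOT")
    # Locate a direct object whose head is the root, if any.
    obj = next((i for i, (_, dep, head) in enumerate(tokens)
                if dep == "dobj" and head == root), None)
    if obj is None:
        return tokens[root][0]  # fall back to the bare predicate
    lo, hi = sorted((root, obj))
    # Retain all words between (and including) root and object.
    return " ".join(text for text, _, _ in tokens[lo:hi + 1])

# Toy parse of "Tom quickly opened the old door."
parsed = [
    ("Tom", "nsubj", 2), ("quickly", "advmod", 2), ("opened", "ROOT", 2),
    ("the", "det", 5), ("old", "amod", 5), ("door", "dobj", 2),
    (".", "punct", 2),
]
print(extract_event_phrase(parsed))  # → opened the old door
```

In practice the triples would come from an off-the-shelf dependency parser; the point is only that the phrase "opened the old door" is a more informative event representation than the bare predicate "opened".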
“…Recently, significant work has been done on temporal common sense (TCS) reasoning. This work includes, but is not limited to, event duration prediction (Pan, Mulkar, and Hobbs 2006; Vashishtha, Van Durme, and White 2019), script learning (i.e., what happens next after certain events) (Li, Ding, and Liu 2018), event infilling (i.e., predicting the implicit events in a temporally-ordered event sequence) (Lin, Chambers, and Durrett 2021; Zhou et al. 2021a), and various question answering tasks based on temporal reasoning (Zhou et al. 2019; Qin et al. 2021). As human annotation of TCS is costly, a surge of work harnesses cheap supervision methods to collect large amounts of TCS data and train reasoning models on it.…”
Section: Related Work
confidence: 99%
“…As human annotation of TCS is costly, a surge of work harnesses cheap supervision methods to collect large amounts of TCS data and train reasoning models on it. For example, (Lin, Chambers, and Durrett 2021) uses a corpus of narrative documents to automatically construct data for the temporal ordering and event infilling tasks. (Zhou et al. 2020) jointly models three key dimensions of TCS (duration, frequency, and typical time) along with two auxiliary dimensions, with data likewise mined from unannotated free text.…”
Section: Related Work
confidence: 99%