Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.678

Temporal Common Sense Acquisition with Minimal Supervision

Abstract: Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation of such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TACOLM, a temporal common sense language model. Our method is shown to giv…

Cited by 59 publications (64 citation statements)
References 33 publications
“…Following the settings in previous work (Ning et al., 2019; Han et al., 2019b), we report the micro-average of precision, recall and F1 scores on test cases. On HiEve, we use the same evaluation setting as Glavaš and Šnajder (2014) and Zhou et al. (2020a), leaving 20% of the documents out for testing. We report F1 for PARENT-CHILD and CHILD-PARENT, as well as their micro-average.…”
Section: Baselines and Evaluation Protocols
confidence: 99%
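The micro-average mentioned above pools true positives, false positives and false negatives across all relation labels before computing a single precision, recall and F1. A minimal sketch of that computation, with illustrative label names (PC/CP) and toy data that are not taken from HiEve itself:

```python
# Micro-averaged precision/recall/F1 over a set of relation labels.
# Labels outside `labels` (e.g. "NONE") count as the negative class.
def micro_prf(gold, pred, labels):
    """Pool TP/FP/FN across all labels, then compute P, R and F1 once."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        if p in labels and p == g:
            tp += 1            # correct positive prediction
        else:
            if p in labels:
                fp += 1        # predicted a relation that is wrong
            if g in labels:
                fn += 1        # missed a gold relation
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 5 test cases, two positive labels.
gold = ["PC", "CP", "PC", "NONE", "CP"]
pred = ["PC", "PC", "PC", "CP", "CP"]
p, r, f = micro_prf(gold, pred, labels={"PC", "CP"})
```

Because TP, FP and FN are pooled before dividing, frequent labels dominate the micro-average, unlike a macro-average that weights each label equally.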
“…We also follow Glavaš and Šnajder (2014) to populate the annotations by computing the transitive closure of COREF and subevent relations.

F1 score:
Model                               PC     CP     Avg.
StructLR                            0.522  0.634  0.577
TACOLM (Zhou et al., 2020a)         0.485  0.494  0.489
Joint Constrained Learning (ours)   0.625  0.564  0.595
…”
Section: Baselines and Evaluation Protocols
confidence: 99%
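The transitive-closure step quoted above can be sketched as a fixed-point computation over relation pairs: whenever (a, b) and (b, c) hold, (a, c) is added until nothing changes. This sketch covers only the transitive part (e.g. subevent PARENT-CHILD chains); merging COREF clusters would need an extra step. Event ids are hypothetical, not from HiEve:

```python
# Compute the transitive closure of a binary relation given as pairs.
def transitive_closure(pairs):
    """Repeatedly add (a, d) whenever (a, b) and (b, d) are both present,
    until a fixed point is reached."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))   # chain the two edges
                    changed = True
    return closure

# Toy PARENT-CHILD edges: e1 -> e2 -> e3 implies e1 -> e3.
parent_child = {("e1", "e2"), ("e2", "e3")}
closed = transitive_closure(parent_child)
```

The O(n^2)-per-pass loop is fine for annotation-sized relation sets; for large graphs a reachability algorithm such as Floyd–Warshall would be the usual choice.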
“…In particular, aspectual features (Vendler, 1957; Smith, 2013) have proved useful. Concurrently with our work, Zhou et al. (2020) also utilize unlabeled data. Unlike our work, they focus on temporal commonsense acquisition in a more general setting (for frequency, typical time, duration, etc.)…”
Section: Additional Related Work
confidence: 99%
“…Although (1) does not explicitly mention how long the waiting lasted, one can reasonably guess that it lasted somewhere between minutes and hours, and certainly not months or years. Zhou et al. (2020) note that common sense inference is required to reach such conclusions about an event's duration, and that text may even contain reporting biases when highlighting rarities (Schubert, 2002; Van Durme, 2011; Zhang et al., 2017; Tandon et al., 2018), potentially making such knowledge hard to learn with common language modeling-based methods. Popular NLI datasets contain hypotheses elicited from humans (Bowman et al., 2015; Williams et al., 2018).…”
Section: Motivation
confidence: 99%
“…Natural language supports various forms of temporal reasoning, including reasoning about the chronology and duration of events, and many Natural Language Understanding (NLU) tasks and models have been employed for understanding and capturing different aspects of temporal reasoning (UzZaman et al., 2013; Llorens et al., 2015; Mostafazadeh et al., 2016; Reimers et al., 2016; Tourille et al., 2017; Ning et al., 2017, 2018a; Meng and Rumshisky, 2018; Ning et al., 2018b; Han et al., 2019; Naik et al., 2019; Vashishtha et al., 2019; Zhou et al., 2019, 2020). More broadly, the ability to perform temporal reasoning is important for understanding narratives (Nakhimovsky, 1987; Jung et al., 2011; Cheng et al., 2013), answering questions (Bruce, 1972; Khashabi, 2019), and summarizing events (Jung et al., 2011; Wang et al., 2018).…”
Section: Introduction
confidence: 99%