Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1108

A Structured Learning Approach to Temporal Relation Extraction

Abstract: Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learn…

Cited by 99 publications (129 citation statements)
References 23 publications

“…This is technically the same with Do et al. (2012), or Ning et al. (2017) without its structured learning component. We added gold TT to both gold and system prediction.…”
mentioning
confidence: 58%
“…To get s_ee(·) and s_et(·), we trained classifiers using the averaged perceptron algorithm (Freund and Schapire, 1998) and the same set of features used in (Do et al., 2012; Ning et al., 2017), and then used the soft-max scores in those scoring functions. For example, that means…”
Section: Scoring Functions
mentioning
confidence: 99%
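
The statement above describes training pairwise scoring functions with an averaged perceptron and then reading off soft-max scores. As a rough illustration only (the class, feature names, and labels below are hypothetical and not taken from the cited papers), such a scorer might be sketched as:

```python
import math

def softmax(scores):
    # Numerically stable soft-max over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

class AveragedPerceptron:
    """Multiclass averaged perceptron over sparse binary features."""

    def __init__(self, labels):
        self.labels = list(labels)
        self.w = {l: {} for l in self.labels}       # current weights
        self.totals = {l: {} for l in self.labels}  # summed weights for averaging
        self.steps = 0

    def score(self, feats, label):
        wl = self.w[label]
        return sum(wl.get(f, 0.0) for f in feats)

    def predict(self, feats):
        return max(self.labels, key=lambda l: self.score(feats, l))

    def update(self, feats, gold):
        # Standard perceptron update, plus accumulation for the average.
        self.steps += 1
        pred = self.predict(feats)
        if pred != gold:
            for f in feats:
                self.w[gold][f] = self.w[gold].get(f, 0.0) + 1.0
                self.w[pred][f] = self.w[pred].get(f, 0.0) - 1.0
        for l in self.labels:
            for f, v in self.w[l].items():
                self.totals[l][f] = self.totals[l].get(f, 0.0) + v

    def average(self):
        # Replace weights with their running average (reduces overfitting).
        for l in self.labels:
            for f in self.totals[l]:
                self.w[l][f] = self.totals[l][f] / self.steps

    def probs(self, feats):
        # Soft-max over per-label scores, usable as a scoring function s(.).
        raw = [self.score(feats, l) for l in self.labels]
        return dict(zip(self.labels, softmax(raw)))
```

The soft-max turns the perceptron's unbounded per-label scores into a probability-like distribution, which is convenient when the scores feed into a downstream joint inference step.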
“…annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task, as annotators could easily overlook some facts (Bethard et al., 2007; Ning et al., 2017), which made both modeling and evaluation extremely difficult in previous event temporal relation research. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task (Ning et al., 2017; Cheng and Miyao, 2017; Meng and Rumshisky, 2018). Recent data construction efforts such as MATRES (Ning et al., 2018a) further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements.…”
Section: Temporal Relation Data
mentioning
confidence: 99%
“…As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier (Verhagen et al., 2007, 2010; UzZaman et al., 2013; Ning et al., 2017; Meng and Rumshisky, 2018). Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig.…”
Section: Introduction
mentioning
confidence: 99%
“…Early computational attempts at TempRel extraction include Mani et al. (2006); Chambers et al. (2007); Bethard et al. (2007); Verhagen and Pustejovsky (2008), which aimed at building classic learning algorithms (e.g., perceptron, SVM, and logistic regression) using hand-engineered features extracted for each pair of events. The frontier was later pushed forward through continuous efforts in a series of SemEval workshops (Verhagen et al., 2007, 2010; UzZaman et al., 2013; Bethard et al., 2015, 2016), and significant progress was made in terms of data annotation (Styler IV et al., 2014; Mostafazadeh et al., 2016; O'Gorman et al., 2016), structured inference (Chambers and Jurafsky, 2008a; Do et al., 2012; Ning et al., 2018a), and structured machine learning (Yoshikawa et al., 2009; Leeuwenberg and Moens, 2017; Ning et al., 2017).…”
Section: Related Work
mentioning
confidence: 99%