Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.461

Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction

Abstract: Extracting event temporal relations is a critical task for information extraction and plays an important role in natural language understanding. Prior systems leverage deep learning and pre-trained language models to improve the performance of the task. However, these systems often suffer from two shortcomings: 1) when performing maximum a posteriori (MAP) inference based on neural models, previous systems only used structured knowledge that is assumed to be absolutely correct, i.e., hard constraints; 2) biase…
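The hard-versus-soft constraint distinction raised in the abstract can be illustrated with a toy MAP inference over three event pairs. This is a minimal sketch, not the paper's method: the label set, local scores, and penalty value below are all invented, and the exhaustive search stands in for the integer-programming solvers such systems typically use.

```python
from itertools import product

# Toy label set and a transitivity rule: if A BEFORE B and B BEFORE C,
# then A BEFORE C must hold (a classic structured-knowledge constraint).
LABELS = ["BEFORE", "AFTER", "VAGUE"]

# Hypothetical local scores (e.g., log-probabilities from a neural model)
# for the three event pairs (A,B), (B,C), (A,C).
scores = {
    ("A", "B"): {"BEFORE": -0.1, "AFTER": -2.0, "VAGUE": -1.5},
    ("B", "C"): {"BEFORE": -0.2, "AFTER": -1.8, "VAGUE": -1.0},
    ("A", "C"): {"BEFORE": -1.2, "AFTER": -0.3, "VAGUE": -1.1},
}

def violates_transitivity(r_ab, r_bc, r_ac):
    """BEFORE composed with BEFORE must yield BEFORE (same for AFTER)."""
    if r_ab == "BEFORE" and r_bc == "BEFORE" and r_ac != "BEFORE":
        return True
    if r_ab == "AFTER" and r_bc == "AFTER" and r_ac != "AFTER":
        return True
    return False

def map_inference(penalty=None):
    """Exhaustive MAP over the triangle. penalty=None -> hard constraint
    (violating assignments are forbidden); a float -> soft constraint
    (violating assignments merely pay a cost)."""
    best, best_score = None, float("-inf")
    for r_ab, r_bc, r_ac in product(LABELS, repeat=3):
        total = (scores[("A", "B")][r_ab]
                 + scores[("B", "C")][r_bc]
                 + scores[("A", "C")][r_ac])
        if violates_transitivity(r_ab, r_bc, r_ac):
            if penalty is None:
                continue          # hard constraint: drop the assignment
            total -= penalty      # soft constraint: pay a penalty instead
        if total > best_score:
            best, best_score = (r_ab, r_bc, r_ac), total
    return best

hard = map_inference()              # -> ("BEFORE", "VAGUE", "AFTER")
soft = map_inference(penalty=0.5)   # -> ("BEFORE", "BEFORE", "AFTER")
```

With these invented scores, the hard-constraint solution must abandon the locally best but inconsistent assignment, while a small soft penalty lets the model keep it, which is the trade-off the abstract alludes to.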

Cited by 33 publications (20 citation statements)
References 32 publications
“…Similar to prior temporal relation extraction systems (Han et al., 2019, 2020), we use a BiLSTM layer to encode the token embeddings from the BERT model. The encoded sentences are then passed into four independent Multilayer Perceptrons (MLP), which we will describe below.…”
Section: Sentence Encoder
confidence: 99%
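The encoder pipeline this citation statement describes (BERT token embeddings → BiLSTM → four independent MLP heads) can be sketched as follows. This is a shape-level illustration under stated assumptions: the hand-rolled NumPy LSTM, the hidden sizes, and the four head output sizes are all placeholders, not the cited systems' actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(x, W, U, b, reverse=False):
    """Forward pass of a single-direction LSTM over x of shape (T, d_in).
    W: (4h, d_in), U: (4h, h), b: (4h,). Returns per-token states (T, h)."""
    T = x.shape[0]
    h_dim = U.shape[1]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    out = np.zeros((T, h_dim))
    steps = reversed(range(T)) if reverse else range(T)
    for t in steps:
        z = W @ x[t] + U @ h + b
        i, f, g, o = np.split(z, 4)       # input, forget, cell, output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        out[t] = h
    return out

def mlp(x, W1, b1, W2, b2):
    """Two-layer perceptron with a ReLU hidden activation."""
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def init(*shape):
    return rng.normal(scale=0.1, size=shape)

d_bert, h, T = 768, 64, 6                  # illustrative dimensions
tokens = rng.normal(size=(T, d_bert))      # stand-in for BERT token embeddings

# BiLSTM: run one LSTM left-to-right, one right-to-left, concatenate.
fwd = lstm_pass(tokens, init(4 * h, d_bert), init(4 * h, h), np.zeros(4 * h))
bwd = lstm_pass(tokens, init(4 * h, d_bert), init(4 * h, h), np.zeros(4 * h),
                reverse=True)
encoded = np.concatenate([fwd, bwd], axis=-1)   # shape (T, 2h)

# Four independent MLP heads over the encoded tokens; the output sizes
# below are invented placeholders, not the cited systems' label counts.
head_sizes = (2, 2, 4, 4)
outputs = [
    mlp(encoded, init(2 * h, 128), np.zeros(128), init(128, n), np.zeros(n))
    for n in head_sizes
]
```

The key structural point is that all four heads read the same shared BiLSTM encoding, so the token representation is trained jointly while the heads stay independent.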
“…Structured learning approaches are also proposed to enhance performance by incorporating more knowledge (Ning et al., 2017; Han et al., 2019). Recent research efforts further explore the power of pre-trained Transformer-based models (Vaswani et al., 2017) on temporal relation extraction by proposing extensions and variants (Wang et al., 2019; Yang et al., 2019; Han et al., 2019, 2020). The latest state-of-the-art performance has been achieved by end-to-end, multi-task approaches that conduct both event annotation and temporal relation extraction jointly (Han et al., 2019, 2020; Lin et al., 2020).…”
Section: Introduction
confidence: 99%
“…Here, R_H = {SUBSUPER, SUPERSUB, COSUPER} denotes the set of relation labels defined in the subevent relation extraction task (Wang et al., 2020a; Yao et al., 2020). R_T = {BEFORE, AFTER, EQUAL} denotes the set of temporal relations (Han et al., 2020). R_C = {CAUSE, CAUSEDBY} denotes the set of causal relations (Ning et al., 2018).…”
Section: Problem Formulation
confidence: 99%
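The three relation label sets from this formulation can be written down directly; the inverse map below is an illustrative addition (each relation's converse when the event pair's argument order is flipped), not part of the quoted formulation.

```python
# Relation label sets as given in the passage.
R_H = {"SUBSUPER", "SUPERSUB", "COSUPER"}  # subevent relations
R_T = {"BEFORE", "AFTER", "EQUAL"}         # temporal relations
R_C = {"CAUSE", "CAUSEDBY"}                # causal relations

ALL_RELATIONS = R_H | R_T | R_C            # the three sets are disjoint

# Illustrative converse map: swapping the event pair (a, b) -> (b, a)
# flips each relation to its inverse; EQUAL and COSUPER are assumed
# symmetric here.
INVERSE = {
    "SUBSUPER": "SUPERSUB", "SUPERSUB": "SUBSUPER", "COSUPER": "COSUPER",
    "BEFORE": "AFTER", "AFTER": "BEFORE", "EQUAL": "EQUAL",
    "CAUSE": "CAUSEDBY", "CAUSEDBY": "CAUSE",
}
```

Converse maps like this are commonly used to symmetrize training data or to enforce consistency between the predictions for (a, b) and (b, a).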
“…For event extraction, some systems only provide results within a certain predefined ontology such as AIDA (Li et al., 2019), and some works utilize data from multiple modalities (Li et al., 2020a,b). Some works can handle novel events (Xiang and Wang, 2019; Ahmad et al., 2021; Han et al., 2020b), but they are either restricted to a certain domain (Yang et al., 2018) or lack performance superiority because of their lexico-syntactic rule-based algorithms (Valenzuela-Escárcega et al., 2015). For temporal information detection, a neural-based temporal relation extraction system with knowledge injection has been proposed.…”
Section: Related Work
confidence: 99%