Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/753
Extracting Entities and Events as a Single Task Using a Transition-Based Neural Model

Abstract: The task of event extraction comprises several subtasks: detecting entity mentions, event triggers, and argument roles. Traditional methods solve them as a pipeline, which does not exploit task correlations for their mutual benefit. There have been recent efforts toward building a joint model for all tasks. However, due to technical challenges, there has not been work predicting the joint output structure as a single task. We build a first model to this end using a neural transition-based framework, i…
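To make the transition-based idea concrete, here is a minimal sketch of how a single action sequence can build entities, triggers, and argument links in one pass over a sentence. The action inventory (SHIFT/ENTITY/TRIGGER/ARG) and the hand-written action sequence below are illustrative assumptions, not the paper's actual transition system, which learns to predict actions with a neural model.

```python
# Toy transition-based extractor: one action sequence jointly produces
# entity mentions, event triggers, and argument roles.
# The action set and the example actions are assumptions for illustration.

def run_transitions(tokens, actions):
    """Apply (action, label) pairs to a token buffer and collect structure."""
    entities, triggers, arguments = [], [], []
    i = 0  # pointer into the token buffer
    for act, label in actions:
        if act == "SHIFT":        # consume a token without new structure
            i += 1
        elif act == "ENTITY":     # mark current token as an entity mention
            entities.append((tokens[i], label))
            i += 1
        elif act == "TRIGGER":    # mark current token as an event trigger
            triggers.append((tokens[i], label))
            i += 1
        elif act == "ARG":        # link most recent entity to most recent trigger
            arguments.append((triggers[-1][0], entities[-1][0], label))
    return entities, triggers, arguments

tokens = ["Smith", "was", "arrested", "yesterday"]
actions = [("ENTITY", "PER"), ("SHIFT", None),
           ("TRIGGER", "Justice:Arrest"), ("ARG", "Person"),
           ("SHIFT", None)]
ents, trigs, args = run_transitions(tokens, actions)
# ents  -> [("Smith", "PER")]
# trigs -> [("arrested", "Justice:Arrest")]
# args  -> [("arrested", "Smith", "Person")]
```

Because every subtask's decision is an action in one shared sequence, the model can condition later decisions (argument roles) on earlier ones (entities and triggers) without a pipeline.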

Cited by 54 publications (39 citation statements)
References 17 publications
“…In particular, event argument extraction generally consists of two steps: first identifying entities and their general semantic class with trained models (Wadden et al., 2019) or a parser (Sha et al., 2018), then assigning argument roles (or no role) to each entity. Although joint models (Yang and Mitchell, 2016; Nguyen and Nguyen, 2019; Zhang et al., 2019a; Lin et al., 2020) have been proposed to mitigate this issue, error propagation (Li et al., 2013) still occurs during event argument extraction.…”
Section: Event Type
Citation type: mentioning, confidence: 99%
“…methods include integer linear programming models [46, 47], feature-based structured learning models [7, 48], and neural network models [13, 40, 49]. Methods that consider all subtasks simultaneously include contextualized span representations [19] and interactive two-channel neural networks [20].…”
Section: PLOS ONE
Citation type: mentioning, confidence: 99%
“…Previous studies have shown that joint learning of entities and relations [7-10] and of entities and events [11-13] can lead to better extraction performance than pipelined methods [14-17]. Because joint learning is effective at integrating interactive information between tasks and alleviating the problem of error propagation, there has been work inferring all subtasks with a single model, such as perceptron-based structural prediction [18], contextualized span representations [19], and two-channel neural networks [20].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
“…EAE is one of the two subtasks in EE (the other one is ED) that has been approached early by the feature-based models (Ahn, 2006; Ji and Grishman, 2008; Patwardhan and Riloff, 2009; Liao and Grishman, 2010a,b; Riedel and McCallum, 2011; Hong et al., 2011; McClosky et al., 2011; Li et al., 2013; Miwa et al., 2014; Yang and Mitchell, 2016). The recent work on EE has focused on deep learning to improve the models' performance (Chen et al., 2015; Sha et al., 2018; Zhang et al., 2019; Yang et al., 2019; Nguyen and Nguyen, 2019; Zhang et al., 2020). Among the two subtasks of EE, while ED has been studied extensively by the recent deep learning work (Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen et al., 2016g; Liu et al., 2018a; Zhao et al., 2018; Wang et al., 2019a; Lai et al., 2020c), EAE has been relatively less explored.…”
Section: Related Work
Citation type: mentioning, confidence: 99%