Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1522

Exploring Pre-trained Language Models for Event Extraction and Generation

Abstract: Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is often laborious to create and limited in size. Therefore, in addition to the difficulty of event extraction itself, insufficient training data hinders the learning process as well. To promote event extraction, we first propose an event extraction model that overcomes the roles-overlap problem by separating argument prediction by role. Moreover, to address the problem of insufficient trainin…
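The abstract's idea of "separating argument prediction by role" can be illustrated with a minimal sketch: one independent binary classifier per argument role, so a single token span may be assigned several roles at once (the roles-overlap case a single multi-class classifier cannot express). All names, shapes, and the random "contextual vectors" below are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: contextual token vectors, as a BERT-style encoder
# would produce for one sentence (here just random placeholders).
seq_len, hidden = 6, 16
token_vecs = rng.normal(size=(seq_len, hidden))

# One independent binary classifier per argument role, so a token can be
# predicted as an argument of several roles simultaneously (roles overlap).
roles = ["Attacker", "Target", "Instrument"]
weights = {r: rng.normal(size=hidden) for r in roles}
biases = {r: 0.0 for r in roles}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_roles(vecs, threshold=0.5):
    """For each role, mark tokens whose sigmoid score exceeds the threshold."""
    out = {}
    for r in roles:
        scores = sigmoid(vecs @ weights[r] + biases[r])
        out[r] = (scores > threshold).astype(int).tolist()
    return out

preds = predict_roles(token_vecs)
```

Because each role has its own decision boundary, overlapping assignments (e.g. the same token flagged for both "Target" and "Instrument") are representable by construction.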

Cited by 262 publications (165 citation statements). References 18 publications.
“…Thanks to the powerful language expression capabilities of the BERT model, a few layers of simple fully connected networks can achieve good results in the trigger extraction subtask. We follow previous research work [13] to obtain triggers, and then assign argument roles via EE-DGCNN proposed in this paper.…”
Section: Methods
confidence: 99%
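The excerpt above notes that "a few layers of simple fully connected networks" on top of BERT suffice for trigger extraction. A minimal sketch of such a head is below: per-token label distributions from a small feed-forward network over contextual vectors. The shapes, label set, and random inputs are illustrative assumptions, not the cited EE-DGCNN or PLMEE code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes: contextual token vectors as a BERT encoder would emit.
seq_len, hidden, n_labels = 8, 16, 4  # labels: O plus 3 event types

token_vecs = rng.normal(size=(seq_len, hidden))

# "A few layers of simple fully connected networks": two dense layers + softmax.
W1 = rng.normal(size=(hidden, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, n_labels)) * 0.1
b2 = np.zeros(n_labels)

def classify_triggers(vecs):
    """Per-token trigger-label distribution from a small feed-forward head."""
    h = np.maximum(vecs @ W1 + b1, 0.0)           # ReLU hidden layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)   # softmax over labels

probs = classify_triggers(token_vecs)
labels = probs.argmax(axis=1)                     # predicted label per token
```

In practice the head would be trained jointly with (or on top of) the fine-tuned encoder; the sketch only shows the forward pass the excerpt alludes to.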
“…After the pre-trained language model appeared, its ability to express semantic information attracted the attention of researchers in the field of event extraction. The PLMEE [13] method proposed by Yang et al applied a pre-trained language model as a method to capture word features directly and obtained a large performance improvement.…”
Section: Related Work, A. Event Extraction
confidence: 99%
“…[1] describes a novel training setup called matching the blanks and couples it with BERT [6] to produce useful relation representations that are particularly effective in low-resource regimes. [33] proposes a method to automatically generate labeled data by editing prototypes, and screens out generated samples by ranking their quality.…”
Section: Related Work, 2.1 Sample Shortage Problems in ED Tasks
confidence: 99%
“…• PLMEE [46]: a pipelined method using BERT as the backbone.
• Joint3EE [47]: a joint method that performs predictions for entity mentions, event triggers, and arguments based on shared hidden representations.…”
Section: Overall Performance
confidence: 99%