Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.713
A Joint Neural Model for Information Extraction with Global Features

Abstract: Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions. For example, a VICTIM of a DIE event is likely to be a VICTIM of an ATTACK event in the same sentence. In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, ONEIE, that aims to extract the globally optimal IE result as a graph from an input sentence…
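The following is a minimal sketch, not the authors' code, of the scoring idea the abstract describes: a candidate IE graph is scored by its local classifier scores plus a weighted count of global features that capture cross-subtask and cross-instance patterns (such as the VICTIM example above). The class names, the `GLOBAL_FEATURES` list, and the tuple layouts are illustrative assumptions.

```python
# Illustrative sketch of global-feature scoring for an IE graph (assumption,
# not the ONEIE implementation). Nodes are (span, type, local_score) tuples,
# edges are (head, tail, label, local_score) tuples.
from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def local_score(self) -> float:
        # Sum of the local classifier scores of all predicted nodes and edges.
        return sum(s for *_, s in self.nodes) + sum(s for *_, s in self.edges)

# Hypothetical global feature: count entity mentions that fill the VICTIM role
# for more than one trigger in the same sentence (ordered pairs, for brevity).
GLOBAL_FEATURES = [
    lambda g: sum(1 for (h1, t1, l1, _) in g.edges
                    for (h2, t2, l2, _) in g.edges
                    if l1 == "VICTIM" and l2 == "VICTIM" and t1 == t2 and h1 != h2),
]

def global_score(graph: Graph, weights: list[float]) -> float:
    """Local scores plus a weighted sum of global feature counts."""
    return graph.local_score() + sum(w * f(graph) for w, f in zip(weights, GLOBAL_FEATURES))
```

In this sketch the feature weights would be learned jointly with the local classifiers, and decoding could search (e.g., with a beam over partially built graphs) for the candidate graph with the highest global score.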

Cited by 269 publications (276 citation statements) · References 23 publications
“…In particular, event argument extraction generally consists of two steps: first identifying entities and their general semantic class with trained models (Wadden et al., 2019) or a parser (Sha et al., 2018), then assigning argument roles (or no role) to each entity. Although joint models (Yang and Mitchell, 2016; Nguyen and Nguyen, 2019; Zhang et al., 2019a; Lin et al., 2020) have been proposed to mitigate this issue, error propagation (Li et al., 2013) still occurs during event argument extraction.…”
Section: Event Type
confidence: 99%
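A tiny sketch, assuming the two-step pipeline the excerpt describes (it is not code from any of the cited papers): entities are identified first, then each entity gets an argument role or no role for a given trigger. An entity missed in the first step can never receive a role in the second, which is the error propagation the excerpt refers to. Both functions are hypothetical placeholders.

```python
# Placeholder two-step argument extraction pipeline (illustrative assumption).
def identify_entities(sentence: str) -> list[str]:
    # Stand-in for a trained entity model or parser.
    return ["the victim", "downtown Baghdad"]

def assign_roles(trigger: str, entities: list[str]) -> dict[str, str]:
    # Stand-in for a role classifier: maps each entity to a role or "NONE".
    return {e: ("VICTIM" if "victim" in e else "PLACE") for e in entities}

sentence = "The victim died in downtown Baghdad."
roles = assign_roles("died", identify_entities(sentence))
# If identify_entities misses "the victim", no VICTIM role can be recovered here.
```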
“…DYGIE++ (Wadden et al., 2019) is a BERT-based framework that models text spans and captures within-sentence and cross-sentence context. OneIE (Lin et al., 2020) is a joint neural model for extraction with global features. In Table 2, we present the comparison of models' performance on trigger detection.…”
Section: Evaluation on ACE Event Extraction
confidence: 99%
“…We use the Automatic Content Extraction (ACE) 2005 dataset, the widely used dataset with annotated instances of 7 entity types, 6 relation types, 33 event types, and 22 argument roles. We follow our recent work on ACE IE (Lin et al., 2020) to split the data. We consider the training set as historical data to train the LM, and the test set as our target data to induce schema for target scenarios.…”
Section: Dataset
confidence: 99%
“…The instance graphs of the target data set are constructed from manual annotations. For historical data, we construct event instance graphs from both manual annotations (Historical_ann) and system extraction results (Historical_sys) from the state-of-the-art IE model (Lin et al., 2020). We perform cross-document entity coreference resolution by applying an entity linker (Pan et al., 2017) for both annotated and system-generated instance graphs.…”
Section: Dataset
confidence: 99%
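A rough sketch of what building such instance graphs could look like, under the assumptions that events arrive as dictionaries with an id, a type, and (role, mention) arguments, and that an entity linker maps each mention to a KB id so coreferent mentions collapse into one node. This is not the authors' implementation; networkx and all field names are illustrative choices.

```python
# Illustrative construction of an event instance graph with linked entities.
import networkx as nx

def build_instance_graph(events, link):
    """events: iterable of {"id": ..., "type": ..., "args": [(role, mention), ...]};
    link(mention) returns a KB id, so coreferent mentions share one entity node."""
    g = nx.MultiDiGraph()
    for ev in events:
        g.add_node(ev["id"], kind="event", type=ev["type"])
        for role, mention in ev["args"]:
            entity = link(mention)          # cross-document coreference via the linker
            g.add_node(entity, kind="entity")
            g.add_edge(ev["id"], entity, role=role)
    return g

# Example: two documents whose "Baghdad" mentions link to the same KB entry
# end up sharing a single entity node across their event instance graphs.
```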