Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1030

Cross-lingual Structure Transfer for Relation and Event Extraction

Abstract: The identification of complex semantic structures such as events and entity relations, already a challenging Information Extraction task, is doubly difficult from sources written in under-resourced and under-annotated languages. We investigate the suitability of cross-lingual structure transfer techniques for these tasks. We exploit relation- and event-relevant language-universal features, leveraging both symbolic (including part-of-speech and dependency path) and distributional (including type representation and contextualized representation) information. […]

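As a concrete illustration of the feature combination the abstract describes, here is a minimal sketch, assuming PyTorch, of concatenating symbolic (POS tag, entity type) and distributional (multilingual word vector) features into one shared token representation. All class names, dimensions, and vocabulary sizes are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn

class UniversalTokenEncoder(nn.Module):
    """Concatenates language-universal symbolic and distributional features."""
    def __init__(self, n_pos_tags=18, n_entity_types=8,
                 pos_dim=16, type_dim=16, word_dim=300):
        super().__init__()
        # Universal POS tags and entity types are shared across languages,
        # so embeddings learned on the source language transfer directly.
        self.pos_emb = nn.Embedding(n_pos_tags, pos_dim)
        self.type_emb = nn.Embedding(n_entity_types, type_dim)
        self.out_dim = word_dim + pos_dim + type_dim

    def forward(self, word_vecs, pos_ids, type_ids):
        # word_vecs: (batch, seq, word_dim) pretrained multilingual vectors
        # pos_ids, type_ids: (batch, seq) integer feature ids
        feats = [word_vecs, self.pos_emb(pos_ids), self.type_emb(type_ids)]
        return torch.cat(feats, dim=-1)  # (batch, seq, out_dim)
```

Because every component of this representation is language-universal, a classifier trained on top of it in the source language can, in principle, be applied unchanged to target-language inputs.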
Cited by 63 publications (79 citation statements). References 52 publications.
“…For the future, instead of using the RoBERTa baseline model for the self-training experiments, we could run several iterations by retraining on the data produced by our best self-trained model(s); this could be a good avenue for further improvements. In addition, we plan to extend our work by moving to other languages beyond English (we currently have not tried this due to lack of data) using cross-lingual models (Subburathinam et al., 2019), applying other architectures like CNNs (Nguyen and Grishman, 2015), incorporating tree structure in our models (Miwa and Bansal, 2016), and/or by jointly performing event recognition and temporal ordering (Li and Ji, 2014; Katiyar and Cardie, 2017).…”
Section: Discussion
Confidence: 99%
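The iterative retraining this excerpt proposes is a standard self-training loop. A minimal sketch, where train, pseudo_label, and select_confident are hypothetical helpers standing in for the authors' unspecified pipeline:

```python
def self_train(labeled, unlabeled, rounds=3, threshold=0.9):
    # train(), pseudo_label(), select_confident() are hypothetical helpers,
    # not functions from the cited work.
    model = train(labeled)                      # e.g., a RoBERTa baseline
    for _ in range(rounds):
        preds = pseudo_label(model, unlabeled)  # model labels raw text
        confident = select_confident(preds, threshold)
        model = train(labeled + confident)      # retrain on augmented data
    return model
```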
“…Specifically, for data-driven extraction methods, we will present how constrained learning and structured prediction are incorporated to improve the tasks by enforcing logical consistency among different categories of event-event relations. We will also cover various cross-domain (Huang et al., 2018), cross-lingual (Subburathinam et al., 2019), and cross-media (Li et al., 2020a) structure transfer approaches for event extraction. This part is estimated to be 40 minutes.…”
Section: Event-centric Information Extraction [40min]
Confidence: 99%
“…Encoding Syntax for Language Transfer: universal language syntax, e.g., part-of-speech (POS) tags, dependency parse structure, and relations, has been shown to be helpful for cross-lingual transfer (Kozhevnikov and Titov, 2013; Pražák and Konopík, 2017; Wu et al., 2017; Subburathinam et al., 2019; Liu et al., 2019; Xie et al., 2020; Ahmad et al., 2021). Many of these prior works utilized graph neural networks (GNNs) to encode the dependency graph structure of the input sequences.…”
Section: Related Work
Confidence: 99%
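A minimal sketch of the shared idea behind these works, one graph-convolution layer propagating token states along dependency edges (assuming PyTorch; this is an illustrative layer, not any cited paper's implementation):

```python
import torch
import torch.nn as nn

class DependencyGCNLayer(nn.Module):
    """One GCN layer over a dependency-parse adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # h:   (batch, seq, in_dim) token representations
        # adj: (batch, seq, seq) dependency edges plus self-loops, as floats
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        # Average each token's neighbor states along parse edges,
        # then apply a shared linear transform and nonlinearity.
        return torch.relu(self.linear(torch.bmm(adj, h) / deg))
```

Because dependency trees follow (approximately) universal annotation conventions across languages, the parse graph is a natural transfer signal: the same layer can encode source- and target-language sentences alike.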