Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.319

Alignment-free Cross-lingual Semantic Role Labeling

Abstract: Cross-lingual semantic role labeling (SRL) aims at leveraging resources in a source language to minimize the effort required to construct annotations or models for a new target language. Recent approaches rely on word alignments, machine translation engines, or preprocessing tools such as parsers or taggers. We propose a cross-lingual SRL model which only requires annotations in a source language and access to raw text in the form of a parallel corpus. The backbone of our model is an LSTM-based semantic role l…
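The abstract names an LSTM-based semantic role labeler as the model's backbone. As a rough illustration only, here is a minimal per-token tagger of that general shape in PyTorch; the dimensions, bidirectionality, and argmax decoding are assumptions for the sketch, not details from the paper:

```python
# Minimal sketch of an LSTM-based semantic role labeler, the kind of
# backbone the abstract describes. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMRoleLabeler(nn.Module):
    def __init__(self, vocab_size, num_roles, emb_dim=100, hidden=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_roles)

    def forward(self, token_ids):                  # (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)                    # per-token role logits

# Toy forward pass: one 5-token sentence, 10 role labels.
model = LSTMRoleLabeler(vocab_size=1000, num_roles=10)
logits = model(torch.randint(0, 1000, (1, 5)))
print(logits.argmax(-1))  # predicted role id per token
```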

Cited by 6 publications (5 citation statements)
References 30 publications
“…Instead of dealing with heterogeneous linguistic theories, another line of research consists in actively studying the effect of using a single formalism across multiple languages through annotation projection or other transfer techniques (Akbik et al., 2015, 2016; Daza and Frank, 2019; Cai and Lapata, 2020; Daza and Frank, 2020). However, such approaches often rely on word aligners and/or automatic translation tools which may introduce a considerable amount of noise, especially in low-resource languages.…”
Section: Introduction
confidence: 99%
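To make the noise concern concrete, here is a minimal sketch of annotation projection over word alignments. The data structures and the `project_roles` helper are hypothetical; real pipelines obtain the alignment links from tools such as fast_align or GIZA++:

```python
# Sketch of annotation projection for cross-lingual SRL (hypothetical
# helper, not from any cited paper).

def project_roles(src_roles, alignment, tgt_len):
    """Project per-token role labels onto an aligned target sentence.

    src_roles: one label per source token (e.g. "ARG0", "O").
    alignment: (src_idx, tgt_idx) links from a word aligner.
    tgt_len:   number of target tokens.
    """
    tgt_roles = ["O"] * tgt_len
    for src_idx, tgt_idx in alignment:
        label = src_roles[src_idx]
        if label != "O":
            # Aligner errors transfer directly into label noise here,
            # which is the weakness the quoted passage points out.
            tgt_roles[tgt_idx] = label
    return tgt_roles

# Toy example with one misaligned link: ARG1 lands on the wrong token.
src_roles = ["ARG0", "O", "ARG1"]
alignment = [(0, 0), (2, 1)]
print(project_roles(src_roles, alignment, tgt_len=3))
# ['ARG0', 'ARG1', 'O']
```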
“…The second type of zero-shot CLT is to expose certain target languages directly in the training process, and many techniques have been proposed within this line of work. In the task of MRC, Hsu, Liu, and Lee (2019), Lee et al. (2019), and Cui et al. (2019) obtain training corpora for target languages by utilizing translation and projecting silver labels; similar techniques are also used in other cross-lingual tasks such as SRL (Cai and Lapata 2020), POS tagging (Eskander, Muresan, and Collins 2020), and Abstract Meaning Representation (AMR) parsing (Blloshmi, Tripodi, and Navigli 2020). Other techniques such as self-learning (Xu et al. 2021) and meta-learning (Li et al. 2020; Nooralahzadeh et al. 2020) have also been proposed for CLT.…”
Section: Related Work
confidence: 99%
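The self-learning technique mentioned above can be sketched as a simple loop. `train` and `predict` are hypothetical callables standing in for a full tagger, and the confidence threshold and round count are assumed hyperparameters, not values from the cited work:

```python
# Sketch of self-learning (self-training) for zero-shot cross-lingual
# transfer, in the spirit of the quoted passage.

def self_train(source_gold, target_raw, train, predict,
               threshold=0.9, rounds=3):
    """Iteratively augment source-language gold data with confident
    silver-labeled target-language sentences, then retrain."""
    model = train(source_gold)
    for _ in range(rounds):
        silver = []
        for sent in target_raw:
            labels, confidence = predict(model, sent)
            if confidence >= threshold:   # keep only confident predictions
                silver.append((sent, labels))
        model = train(source_gold + silver)
    return model
```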
“…One method requires parallel corpora to extract alignments between source and target languages using machine translation (Padó and Lapata, 2005; Damonte and Cohen, 2017; Zhang et al., 2018), often followed by projection of semantic representations (Reddy et al., 2017). The other method is to use parameter-shared models based on cross-lingual representations such as cross-lingual word embeddings (Duong et al., 2017; Susanto and Lu, 2017; Mulcaire et al., 2018; Hershcovich et al., 2019; Cai and Lapata, 2020), pretrained multilingual models (Zhu et al., 2020; Oepen et al., 2020), and universal POS tags (Blloshmi et al., 2020). Recently, Ozaki et al. (2020), Samuel and Straka (2020), and Dou et al. (2020) conducted supervised German DRS parsing with pretrained multilingual models, but they did not explore zero-shot cross-lingual semantic parsing.…”
Section: Related Work
confidence: 99%
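As a hedged illustration of the second, parameter-shared method: a pretrained multilingual encoder plus one shared classification head is trained on source-language annotations and applied zero-shot to the target language. The model name and head size below are illustrative choices, not the setup of any cited paper:

```python
# Sketch of a parameter-shared tagger on a pretrained multilingual encoder.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
role_head = nn.Linear(encoder.config.hidden_size, 20)  # 20 = assumed labels

def tag(sentence):
    """Per-token role logits; works for any language the encoder covers."""
    inputs = tokenizer(sentence, return_tensors="pt")
    hidden = encoder(**inputs).last_hidden_state
    return role_head(hidden)

# Train role_head (and optionally the encoder) on English annotations,
# then call tag() on, e.g., German input with no German supervision.
```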