Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.582

Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing

Cited by 4 publications (9 citation statements)
References 17 publications
“…This has been adapted to the multilingual setting by optimising a model to be able to quickly adapt to new languages: by using meta-learning to fine-tune a multilingual model on a small set of (higher-resource) languages, the model can then be adapted to a new language using only a few examples (Nooralahzadeh et al., 2020). In this work, we use the Model-Agnostic Meta-Learning algorithm (MAML) (Finn et al., 2017), which has already proven useful for cross-lingual transfer of NLP tasks (Nooralahzadeh et al., 2020; Wu et al., 2020; Gu et al., 2020), including being applied to dependency parsing by Langedijk et al. (2022), whose approach we follow for our own experiments.…”
Section: Meta-learning
confidence: 99%
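
The statement above describes the general MAML recipe: adapt on a few support examples per language in an inner loop, then update the shared initialisation from the query loss. The sketch below is a minimal first-order illustration of that loop, assuming a generic PyTorch model; `sample_episode()`, `loss_fn`, and the learning rates are hypothetical placeholders, not the implementation of Langedijk et al. (2022).

```python
# Minimal first-order MAML sketch for cross-lingual adaptation (illustrative only).
import copy
import torch

def maml_meta_step(model, languages, loss_fn, inner_lr=1e-3, meta_lr=1e-4, inner_steps=1):
    """One meta-update: adapt a copy of the model on each language's support set,
    evaluate the adapted copy on the query set, and update the shared initialisation."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    meta_opt.zero_grad()
    for lang in languages:
        support, query = lang.sample_episode()        # hypothetical few-shot support/query sampler
        learner = copy.deepcopy(model)                # task-specific copy of the shared initialisation
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                  # inner loop: adapt to the support set
            inner_opt.zero_grad()
            loss_fn(learner, support).backward()
            inner_opt.step()
        query_loss = loss_fn(learner, query)          # outer loss: adapted parameters on the query set
        grads = torch.autograd.grad(query_loss, learner.parameters())
        for p, g in zip(model.parameters(), grads):   # first-order approximation: copy grads to the init
            p.grad = g.detach() if p.grad is None else p.grad + g.detach()
    meta_opt.step()                                   # update the shared initialisation
```

After meta-training, adaptation to a new language amounts to running only the inner loop on that language's few-shot support set.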
“…In total, our test data contains 233 possible arc labels. We use 8 high-resource languages for fine-tuning, based on the selection used by Langedijk et al. (2022) and Tran and Bisazza (2019): English, Arabic, Czech, Estonian, Hindi, Italian, Norwegian, and Russian.…”
Section: Data
confidence: 99%
“…that works well on the NLP problem of any language given the support set of the language. Cross-language settings have been applied to NLI and QA in X-MAML (Nooralahzadeh et al., 2020), document classification (van der Heijden et al., 2021), dependency parsing (Langedijk et al., 2021), MT (Gu et al., 2018), and ASR (Hsu et al., 2020; Winata et al., 2020a; Chen et al., 2020d; Xiao et al., 2021).…”
Section: Cross-lingual Transfer
confidence: 99%
“…Learn-to-init is chosen to be the last stage because its training objective is closest to the target of looking for good initial parameters, but it is the most computationally intensive method, and thus it is only used to change the model a little bit. The three-stage initialization has been tested in several works (Nooralahzadeh et al., 2020; Wu et al., 2021b; van der Heijden et al., 2021; Langedijk et al., 2021), but it does not always improve the performance (Wu et al., 2021b; van der Heijden et al., 2021).…”
Section: Learning To Initialize
confidence: 99%
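
As a rough illustration of the staging described above, the sketch below chains the three stages in order, with learn-to-init applied last. The stage callables and their signatures are assumptions for illustration; the cited works differ in the exact objectives used at each stage.

```python
# Hedged sketch of a three-stage initialisation pipeline (illustrative only).
def build_initialisation(model, source_languages, pretrain, finetune, learn_to_init):
    """Each stage is passed in as a callable and applied to the model in turn."""
    model = pretrain(model)                          # stage 1: generic multilingual pre-training
    model = finetune(model, source_languages)        # stage 2: conventional fine-tuning on source languages
    model = learn_to_init(model, source_languages)   # stage 3: MAML-style learn-to-init, applied last
    return model                                     # afterwards adapted per target language with few examples
```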