Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) 2019
DOI: 10.18653/v1/k19-1035
Cross-Lingual Dependency Parsing with Unlabeled Auxiliary Languages

Abstract: Cross-lingual transfer learning has become an important weapon to battle the unavailability of annotated resources for low-resource languages. One of the fundamental techniques to transfer across languages is learning language-agnostic representations, in the form of word embeddings or contextual encodings. In this work, we propose to leverage unannotated sentences from auxiliary languages to help learning language-agnostic representations. Specifically, we explore adversarial training for learning contextual …
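The adversarial training the abstract refers to is commonly realised by pitting a shared encoder against a language discriminator through a gradient-reversal layer. Below is a minimal PyTorch sketch of that general pattern, assuming a BiLSTM encoder; the module and parameter names (GradReverse, AdversarialEncoder, lambd) are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only: adversarial training with a gradient-reversal
# layer, as commonly used to learn language-agnostic encodings. Names and
# hyperparameters are assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in
    the backward pass, so the encoder learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialEncoder(nn.Module):
    def __init__(self, emb_dim, hidden_dim, num_languages, lambd=0.1):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2, batch_first=True,
                               bidirectional=True)
        self.discriminator = nn.Linear(hidden_dim, num_languages)
        self.lambd = lambd

    def forward(self, embeddings):
        # Contextual encodings that a downstream parser would consume.
        hidden, _ = self.encoder(embeddings)
        # A sentence-level summary passes through gradient reversal before
        # the discriminator tries to predict the source language.
        summary = GradReverse.apply(hidden.mean(dim=1), self.lambd)
        lang_logits = self.discriminator(summary)
        return hidden, lang_logits

# Usage idea: add a cross-entropy loss over lang_logits (language IDs of
# labelled source plus unlabelled auxiliary sentences) to the parsing loss;
# the reversed gradient pushes the encoder toward language-agnostic features.
```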

Cited by 22 publications (13 citation statements, all of type "mentioning"). References 37 publications. Citing publications span 2020–2024.
“…This finding means that if the target language has a closely related high-resource language, it may be better to transfer from that language as the source and use PPT for adaptation. Against AHMAD (Ahmad et al., 2019b), PPT performs better on 4 out of 6 distant languages. On nearby languages, the average UAS of PPT is higher, and the average LAS is on par.…”
Section: Results (citation type: mentioning)
Confidence: 96%
“…First, HE is a neural lexicalised DMV parser with normalising flow that uses a language modelling objective when fine-tuning on the unlabelled target language data (He et al., 2019). Second, AHMAD is an adversarial training method that attempts to learn language-agnostic representations (Ahmad et al., 2019b). Lastly, MENG is a constrained inference method that derives constraints from the target corpus statistics to aid inference (Meng et al., 2019).…”
Section: Comparisons (citation type: mentioning)
Confidence: 99%
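For intuition about the first comparison's adaptation idea (fine-tuning with a language-modelling objective on unlabelled target-language text), here is a generic masked-language-modelling step using Hugging Face transformers. This is a hedged stand-in, not He et al.'s normalising-flow DMV parser; the model name, placeholder corpus, and hyperparameters are assumptions for illustration.

```python
# Illustrative sketch only: one fine-tuning step with a masked-LM
# objective on raw, unannotated target-language sentences. He et al.'s
# actual system is a lexicalised DMV parser with normalising flow.
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Unlabelled target-language sentences (placeholder data).
corpus = ["Ein Beispielsatz ohne Annotation.",
          "Noch ein unannotierter Satz."]

encodings = [tokenizer(s, truncation=True) for s in corpus]
batch = collator(encodings)   # pads and randomly masks 15% of tokens
loss = model(**batch).loss    # MLM loss on the unlabelled text
loss.backward()
optimizer.step()
```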
“…Model transfer does not require parallel corpora or word alignment tools; nevertheless, it relies on accurate features such as POS tags (McDonald et al., 2013) or syntactic parse trees (Kozhevnikov and Titov, 2013) to enhance the ability to generalize across languages. Adversarial training is commonly used to extract language-agnostic features, thereby improving the performance of cross-lingual systems (Chen et al., 2019; Ahmad et al., 2019b).…”
Section: Related Work (citation type: mentioning)
Confidence: 99%