Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), 2019
DOI: 10.18653/v1/d19-6102

A Comparative Analysis of Unsupervised Language Adaptation Methods

Abstract: To overcome the lack of annotated resources in less-resourced languages, recent approaches have been proposed to perform unsupervised language adaptation. In this paper, we explore three recent proposals: Adversarial Training, Sentence Encoder Alignment and Shared-Private Architecture. We highlight the differences of these approaches in terms of unlabeled data requirements and capability to overcome additional domain shift in the data. A comparative analysis in two different tasks is conducted, namely on Senti…

Cited by 8 publications (10 citation statements) | References 26 publications
“…DANNs have been applied in many NLP tasks in the last few years, mainly to sentiment classification (e.g., Ganin et al. (2016), Li et al. (2018a), Shen et al. (2018), Rocha and Lopes Cardoso (2019), Ghoshal et al. (2020), to name a few), but recently to many other tasks as well: language identification (Li et al., 2018a), natural language inference (Rocha and Lopes Cardoso, 2019), POS tagging (Yasunaga et al., 2018), parsing (Sato et al., 2017), trigger identification (Naik and Rose, 2020), relation extraction (Fu et al., 2017; Rios et al., 2018), and other (binary) text classification tasks like relevancy identification (Alam et al., 2018a), machine reading comprehension, stance detection (Xu et al., 2019), and duplicate question detection (Shah et al., 2018). This makes DANNs the most widely used UDA approach in NLP, as illustrated in Table 1.…”
Section: Domain Adversaries
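The domain-adversarial setup these citations refer to couples a feature extractor with a domain discriminator through a gradient reversal layer (Ganin et al., 2016). Below is a minimal PyTorch sketch of that mechanism; the input dimension, layer sizes, and single-layer heads are illustrative assumptions, not the architecture of any cited paper.

```python
import torch
from torch import nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient flows back into the feature extractor,
        # pushing it toward domain-invariant representations.
        return -ctx.lambd * grad_output, None


class DANN(nn.Module):
    """Hypothetical DANN: shared features, a task head, and a domain head
    placed behind the gradient reversal layer."""

    def __init__(self, in_dim=300, feat_dim=128, n_classes=2, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, n_classes)
        self.domain_head = nn.Linear(feat_dim, n_domains)

    def forward(self, x, lambd=1.0):
        h = self.features(x)
        return self.task_head(h), self.domain_head(GradReverse.apply(h, lambd))
```

The task head is trained to be accurate while the reversed gradient from the domain head penalizes features that reveal the domain, which is the core trade-off DANN optimizes.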
“…The goal of the NLI task is to determine whether the meaning of the text fragment “Hypothesis” (H) is in an entailment, contradiction, or neither (neutral) relation to the text fragment “Text” (T) [1]. To address NLI in a cross-lingual setting, unsupervised language adaptation (ULA) techniques have been explored [7,3]. One of the largest available resources for studying language adaptation approaches for the NLI task, with data annotated in 15 languages, is the Cross-Lingual Natural Language Inference corpus (XNLI) [7].…”
Section: Related Work
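To make the three-way label scheme described in this excerpt concrete, here is a toy Python illustration with invented Text/Hypothesis pairs (the examples are hypothetical, not drawn from XNLI):

```python
# Toy Text/Hypothesis pairs illustrating the three NLI relations
pairs = [
    ("A man is playing a guitar.", "Someone is making music.", "entailment"),
    ("A man is playing a guitar.", "The room is completely silent.", "contradiction"),
    ("A man is playing a guitar.", "The man is a professional musician.", "neutral"),
]
for text, hypothesis, label in pairs:
    print(f"T: {text}\nH: {hypothesis}\n=> {label}\n")
```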
“…In this work, we employ Adversarial Training, a promising method for ULA across different languages and tasks [2,3]. Given the advantages of the method (a single encoder for many languages and no requirement for parallel sentences) compared to other proposed approaches (Encoder Alignment and Shared-Private), Adversarial Training can have a high impact on less-resourced languages.…”
Section: Adversarial Training for Cross-lingual NLI
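A hedged sketch of how such language-adversarial training typically proceeds: labeled source-language batches supervise the task head, while source and unlabeled target-language batches jointly train a language discriminator through the gradient reversal layer. This reuses the hypothetical DANN module sketched earlier with the domain head acting as a language discriminator; the uniform loss weighting and batch handling are assumptions, not the cited papers' exact recipe.

```python
import torch
from torch import nn


def train_step(model, optimizer, src_x, src_y, tgt_x, lambd=0.1):
    """One update: task loss on labeled source data, plus a language-adversarial
    loss on source and unlabeled target data (through the reversal layer)."""
    ce = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    task_logits, src_lang_logits = model(src_x, lambd)
    _, tgt_lang_logits = model(tgt_x, lambd)
    lang_logits = torch.cat([src_lang_logits, tgt_lang_logits])
    # Language labels: 0 = source language, 1 = target language
    lang_labels = torch.cat([
        torch.zeros(src_x.size(0), dtype=torch.long),
        torch.ones(tgt_x.size(0), dtype=torch.long),
    ])
    loss = ce(task_logits, src_y) + ce(lang_logits, lang_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the target-language batches need no labels and no parallel source sentences, this setup matches the advantage the excerpt highlights: a single shared encoder adapted with unlabeled target data only.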