Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), 2021
DOI: 10.18653/v1/2021.repl4nlp-1.1

Improving Cross-lingual Text Classification with Zero-shot Instance-Weighting

Abstract: Cross-lingual text classification (CLTC) is a challenging task, made harder still by the lack of labeled data in low-resource languages. In this paper, we propose zero-shot instance-weighting, a general model-agnostic zero-shot learning framework for improving CLTC by leveraging source instance weighting. It adds a module on top of pre-trained language models that computes similarity-based instance weights, thus aligning each source instance to the target language. During training, the framework utilizes…
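The abstract only sketches the mechanism, but the core idea of weighting each source instance by its similarity to the target language can be illustrated concretely. Below is a minimal PyTorch sketch assuming cosine similarity to a target-language centroid as the weighting function; the function names, the softmax normalization, and the random tensors standing in for a pre-trained multilingual encoder's sentence embeddings are all illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of zero-shot instance weighting (illustrative, not the
# authors' exact method): weight labeled source-language instances by
# similarity to unlabeled target-language embeddings, then use the
# weights in the classification loss.
import torch
import torch.nn.functional as F

def instance_weights(src_emb: torch.Tensor, tgt_emb: torch.Tensor) -> torch.Tensor:
    """Weight each source instance by cosine similarity to the centroid
    of (unlabeled) target-language embeddings. Softmax-normalize so the
    weights are positive with mean 1 (an assumed normalization)."""
    tgt_centroid = tgt_emb.mean(dim=0, keepdim=True)           # (1, d)
    sims = F.cosine_similarity(src_emb, tgt_centroid, dim=-1)  # (n_src,)
    return torch.softmax(sims, dim=0) * sims.numel()

def weighted_loss(logits: torch.Tensor, labels: torch.Tensor,
                  weights: torch.Tensor) -> torch.Tensor:
    """Per-instance cross-entropy, scaled by the instance weights."""
    per_instance = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_instance).mean()

# Toy usage: random embeddings stand in for a multilingual encoder's
# sentence representations (e.g., mBERT [CLS] vectors).
n_src, n_tgt, d, n_cls = 8, 16, 32, 3
src_emb, tgt_emb = torch.randn(n_src, d), torch.randn(n_tgt, d)
logits = torch.randn(n_src, n_cls)
labels = torch.randint(0, n_cls, (n_src,))
w = instance_weights(src_emb, tgt_emb)
print(weighted_loss(logits, labels, w))
```

Because the weighting module only rescales the loss, this scheme is model-agnostic in the sense the abstract describes: it can sit on top of any encoder-classifier pair without changing its architecture.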

Cited by 5 publications (8 citation statements)
References 31 publications
“…In the future, we will investigate more EHR-NLP tasks, including machine translation for more languages, multi-document summarization, and question answering (Li et al., 2021b). Besides, we plan to investigate better-performing NLP models for these tasks, for example, BERT-based models (Lee et al., 2020; Li et al., 2021c) and graph-based models (Li et al., 2021d).…”
Section: Discussion (mentioning)
confidence: 99%
“…The decoder architecture, relying on Transformer models, has found widespread application in various natural language generation tasks. Thus, following previous works (Li et al., 2023), our method employs a lightweight Transformer-based decoder as the generation model. Both image features and auxiliary object features should be involved in decoding.…”
Section: Prefix Decoding (mentioning)
confidence: 99%
“…We follow Karpathy (Karpathy and Fei-Fei, 2015). Evaluation Metrics. Following common settings (Tewel et al., 2022; Li et al., 2023), several metrics are considered to evaluate the generated captions, including BLEU-4 (B@4) (Papineni et al., 2002), METEOR (M) (Banerjee and Lavie, 2005), ROUGE-L (R-L) (Lin, 2004), CIDEr (C) (Vedantam, Lawrence Zitnick, and Parikh, 2015), and SPICE (S) (Anderson et al., 2016).…”
Section: Experimentation, Experimental Settings (mentioning)
confidence: 99%