Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-long.447

A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters

Abstract: Few-shot crosslingual transfer has been shown to outperform its zero-shot counterpart with pretrained encoders like multilingual BERT. Despite its growing popularity, little to no attention has been paid to standardizing and analyzing the design of few-shot experiments. In this work, we highlight a fundamental risk posed by this shortcoming, illustrating that the model exhibits a high degree of sensitivity to the selection of few shots. We conduct a large-scale experimental study on 40 sets of sampled few shots…
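The abstract's central point is methodological: scores obtained from a single sampled few-shot set can be misleading, so the study evaluates many independently sampled sets. The sketch below (a hypothetical illustration, not the authors' code) shows that protocol in outline: draw several class-balanced few-shot training sets with different seeds and report the spread of scores across them. The toy dataset, shot budget, and the stubbed evaluate() function are assumptions for illustration only.

```python
# Hypothetical sketch of the evaluation protocol suggested by the abstract:
# sample many few-shot training sets and report the score variance across them.
import random
import statistics

def sample_few_shots(examples, shots_per_class, seed):
    """Draw a class-balanced few-shot training set using a fixed random seed."""
    rng = random.Random(seed)
    by_label = {}
    for ex in examples:
        by_label.setdefault(ex["label"], []).append(ex)
    shots = []
    for pool in by_label.values():
        shots.extend(rng.sample(pool, shots_per_class))
    return shots

def evaluate(few_shot_set):
    """Placeholder: in the real study this would fine-tune a pretrained
    multilingual encoder on the shots and return target-language accuracy."""
    return random.random()  # stub only

# Toy source-language pool; 40 sampled few-shot sets, as in the study design.
train_pool = [{"text": f"sentence {i}", "label": i % 2} for i in range(1000)]
scores = [evaluate(sample_few_shots(train_pool, shots_per_class=8, seed=s))
          for s in range(40)]
print(f"mean={statistics.mean(scores):.3f}  stdev={statistics.stdev(scores):.3f}")
```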

Cited by 29 publications (44 citation statements)
References 56 publications
“…Like GPT3, discrete prompting uses natural language to describe NLU tasks. Schick and Schütze (2021), Tam et al. (2021), and Le Scao and Rush (2021) use human-designed prompts. Gao et al. (2020) leverage T5 (Raffel et al., 2020) to generate prompts.…”
Section: Related Work (mentioning)
confidence: 99%
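To make the notion of a "human-designed prompt" concrete, here is a minimal sketch of a PET-style cloze prompt, in which a masked language model's preference between two hand-chosen label words is read off as a sentiment prediction. The model name, template, and verbalizer mapping are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch of a hand-designed (discrete) cloze prompt with a verbalizer.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

template = "The movie was a complete waste of time. It was [MASK]."
verbalizer = {"great": "positive", "terrible": "negative"}  # label words -> classes

# Score only the hand-chosen label words and map them back to class labels.
scores = {}
for candidate in fill_mask(template, targets=list(verbalizer)):
    scores[verbalizer[candidate["token_str"].strip()]] = candidate["score"]
print(scores)  # the "negative" label word should receive more probability mass
```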
“…Qin and Eisner (2021) and Zhong et al. (2021) learn soft prompts eliciting more knowledge (Petroni et al., 2019) from PLMs than discrete prompts. Similar to soft prompting, but with the PLM frozen, Li and Liang (2021) propose prefix-tuning to encourage PLMs to solve generation tasks with high parameter efficiency (Houlsby et al., 2019; Zhao et al., 2020). Lester et al. (2021) demonstrate that soft prompting benefits from scaling up the number of PLM parameters.…”
Section: Related Work (mentioning)
confidence: 99%
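For contrast with discrete prompts, the following sketch illustrates the soft-prompting setup discussed here: a small matrix of trainable prompt embeddings is prepended to the input embeddings of a frozen PLM, so only the prompt parameters would be updated during training. The model name, prompt length, and helper function are assumptions made for illustration.

```python
# Minimal sketch of soft prompting: trainable prompt vectors prepended to the
# input embeddings of a frozen pretrained encoder.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # illustrative choice of PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

for p in model.parameters():   # freeze the PLM entirely;
    p.requires_grad = False    # only the soft prompt is trainable

num_prompt_tokens = 20
soft_prompt = torch.nn.Parameter(
    0.02 * torch.randn(num_prompt_tokens, model.config.hidden_size))

def encode_with_soft_prompt(sentences):
    """Prepend the soft prompt to the token embeddings and run the frozen PLM."""
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    token_embeds = model.get_input_embeddings()(batch["input_ids"])
    prompt = soft_prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
    prompt_mask = torch.ones(token_embeds.size(0), num_prompt_tokens,
                             dtype=batch["attention_mask"].dtype)
    attention_mask = torch.cat([prompt_mask, batch["attention_mask"]], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)

out = encode_with_soft_prompt(["A short example sentence."])
print(out.last_hidden_state.shape)  # (batch, num_prompt_tokens + seq_len, hidden)
```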