Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.32

Few-Shot Text Generation with Natural Language Instructions

Abstract: Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion. Moreover, when combined with regular learning from examples, this idea yields impressive few-shot results for a wide range of text classification tasks. It is also a promising direction to improve data efficiency in generative settings, but there are several challenges to using a combination of task descriptions and example-based learning for text generation. …

Cited by 62 publications (28 citation statements) | References 36 publications
“…Prompt engineering. Constructing effective discrete prompts for language models to perform NLP tasks is an active area of research (Schick and Schütze, 2021; Reynolds and McDonell, 2021; Liu et al., 2021). Such prompts are often extremely short and may not include a complete definition of complex tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Previous work has shown that mixing multiple prompting templates can improve few-shot performance for both classification (Schick and Schütze, 2021a,c; Gao et al., 2021) and generation (Schick and Schütze, 2021b). We argue that such ensembling could produce more regularized path scores by alleviating prompt sensitivity (Zhao et al., 2021).…”
Section: Instruction Ensembling (mentioning)
confidence: 66%
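
A minimal, hypothetical sketch of the template-ensembling idea described above: the same input is scored under several instruction templates, per-label scores are averaged, and the best-scoring label is returned. The templates, labels, and the `label_logprob` scorer below are placeholders for illustration only, not taken from the cited papers; a real implementation would query a language model for the log-probability of a verbalized label.

```python
# Sketch of instruction/prompt ensembling: average label scores over several
# differently worded templates to reduce sensitivity to any single prompt.

TEMPLATES = [
    "Review: {x}\nSentiment:",
    "{x}\nAll in all, the movie was",
    "Based on this review, the film sounds: {x}",
]
LABELS = ["great", "terrible"]


def label_logprob(prompt: str, label: str) -> float:
    """Placeholder scorer; a real version would ask a language model for
    log P(label | prompt). Here we return a dummy value so the sketch runs."""
    return -float(len(prompt) + len(label)) / 100.0


def ensemble_predict(x: str) -> str:
    scores = {lab: 0.0 for lab in LABELS}
    for template in TEMPLATES:
        prompt = template.format(x=x)
        for lab in LABELS:
            # average (rather than sum) so the scale is comparable across ensembles
            scores[lab] += label_logprob(prompt, lab) / len(TEMPLATES)
    return max(scores, key=scores.get)


print(ensemble_predict("A moving, beautifully shot film."))
```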
“…Few-shot learning can also be accomplished by combining textual templates ("prompts") and various forms of model finetuning, either fully updating a model's parameters, e.g. for classification (Schick & Schütze, 2021a; Schick & Schutze, 2021; Gao et al., 2021; Tam et al., 2021) or generation (Schick & Schütze, 2021b). Prompts themselves can be optimized, for example by search (Jiang et al., 2020; Shin et al., 2020), by only updating parts of the model (Logan et al., 2021), or by learning "soft prompts" (Lester et al., 2021; Li & Liang, 2021).…”
Section: Few-shot Learning (mentioning)
confidence: 99%
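
To make the "soft prompts" idea mentioned above concrete, here is a minimal sketch in the spirit of Lester et al. (2021): a small matrix of trainable prompt embeddings is prepended to the frozen model's input embeddings, and only that matrix receives gradient updates. The class name, vocabulary size, hidden size, and prompt length below are illustrative assumptions, not values from any of the cited papers.

```python
# Sketch of soft-prompt tuning: train only a prepended block of prompt vectors
# while the backbone embedding (standing in for a frozen language model) is fixed.
import torch
import torch.nn as nn


class SoftPromptWrapper(nn.Module):
    def __init__(self, embed: nn.Embedding, prompt_len: int = 20):
        super().__init__()
        self.embed = embed
        for p in self.embed.parameters():  # freeze the backbone embeddings
            p.requires_grad = False
        # trainable prompt vectors, initialized from randomly chosen token embeddings
        with torch.no_grad():
            idx = torch.randint(0, embed.num_embeddings, (prompt_len,))
            init = embed.weight[idx].clone()
        self.prompt = nn.Parameter(init)

    def forward(self, input_ids: torch.LongTensor) -> torch.Tensor:
        tok = self.embed(input_ids)                          # (batch, seq, hidden)
        prompt = self.prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        return torch.cat([prompt, tok], dim=1)               # prepend soft prompt


# usage: optimize only the prompt matrix, everything else stays frozen
embed = nn.Embedding(50257, 768)                             # illustrative sizes
wrapper = SoftPromptWrapper(embed, prompt_len=20)
optimizer = torch.optim.Adam([wrapper.prompt], lr=1e-3)
x = torch.randint(0, 50257, (2, 16))
print(wrapper(x).shape)                                      # torch.Size([2, 36, 768])
```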