2022
DOI: 10.48550/arxiv.2205.12548
Preprint

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning

Abstract: Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only few downstream data are available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompts (e.g., embeddings), which fall short of interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompts, on the other hand, are difficult to optimize, and are often…
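
The abstract describes the approach only at a high level. The snippet below is a minimal, self-contained sketch of the general recipe it alludes to (a small policy samples discrete prompt tokens, a reward scores downstream performance, and the policy is updated with REINFORCE). It is not the authors' implementation: the candidate token list, prompt length, and `toy_reward` function are illustrative placeholders; in RLPrompt the reward is derived from a frozen LM's performance on the downstream task.

```python
# Minimal sketch of RL-based discrete prompt search (NOT the RLPrompt code).
# A tiny policy samples prompt tokens from a fixed candidate vocabulary,
# receives a task reward (here a toy stand-in), and is updated with REINFORCE.
import torch
import torch.nn as nn

CANDIDATE_TOKENS = ["great", "terrible", "movie", "review", "absolutely", "overall"]
PROMPT_LEN = 4

class PromptPolicy(nn.Module):
    def __init__(self, vocab_size, prompt_len):
        super().__init__()
        # One categorical distribution per prompt position.
        self.logits = nn.Parameter(torch.zeros(prompt_len, vocab_size))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        idx = dist.sample()                  # (prompt_len,) token indices
        log_prob = dist.log_prob(idx).sum()  # joint log-prob of the sampled prompt
        return idx, log_prob

def toy_reward(prompt_tokens):
    # Stand-in for the downstream-task reward (e.g., few-shot classification
    # accuracy of a frozen LM given this prompt). Here: prefer "great movie".
    return float("great" in prompt_tokens) + float("movie" in prompt_tokens)

policy = PromptPolicy(len(CANDIDATE_TOKENS), PROMPT_LEN)
opt = torch.optim.Adam(policy.parameters(), lr=0.1)
baseline = 0.0  # running-average baseline to reduce gradient variance

for step in range(200):
    idx, log_prob = policy.sample()
    prompt = [CANDIDATE_TOKENS[i] for i in idx.tolist()]
    reward = toy_reward(prompt)
    baseline = 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline) * log_prob   # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned prompt:", [CANDIDATE_TOKENS[i] for i in policy.logits.argmax(-1).tolist()])
```

Because the optimized prompt is a sequence of real tokens rather than continuous embeddings, it remains human-readable and can in principle be reused with a different LM, which is the interpretability and reusability advantage the abstract contrasts against soft prompts.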

Cited by 16 publications (29 citation statements). References 48 publications.
“…5. The importance of fluid exchange between artificial and human intelligence in this paradigm is evinced by the rapidly growing interest in prompt engineering, i.e., an increasingly self-aware and theory-driven approach to the role that prompts play in co-creating the outputs of these types of systems (Liu et al., 2022), which has recently been extended to the optimization of text prompts by distinct AI agents (Deng et al., 2022). …”
Citation type: mentioning (confidence: 99%)
“…Furthermore, our work can also be viewed from the perspective of learning discrete prompts for language models. Past work proposes to generate knowledge pieces (Liu et al., 2022) or arbitrary textual snippets (Deng et al., 2022), which they append to the input via reinforcement learning. These works differ from ours in that their policy is conditioned solely on the input x, whereas in our case we sample critiques of machine-generated predictions based on x and ŷ.…”
Section: Adapters and Discrete Prompt Learning
Citation type: mentioning (confidence: 99%)
“…For example, CLIP [81] adopts linear probing [12,31,32,109] and full-finetuning [25,31,48,99,101,109] when transferring to downstream tasks. Prompt adaptation of CLIP [63,81,105,112,114] is motivated by the success of prefix-tuning for language models [16,22,30,45,61,78,84,85,89]. Similarly, CLIP-Adapter [21] and Tip-Adapter [111] are inspired by parameter-efficient finetuning methods [39,44,110] that optimize lightweight MLPs while freezing the encoder.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
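
For readers unfamiliar with the adapter-style methods this excerpt mentions, the sketch below shows the general shape of a CLIP-Adapter-like module: a lightweight MLP refines features from a frozen encoder and is blended back with the original features via a residual ratio. The feature dimension, reduction factor, and blending weight `alpha` are illustrative assumptions, not the cited paper's exact configuration.

```python
# Minimal sketch of a CLIP-Adapter-style module (assumed structure, not the
# cited paper's exact code): a small bottleneck MLP refines frozen image
# features, blended with the originals via a residual ratio alpha.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=512, reduction=4, alpha=0.2):
        super().__init__()
        self.alpha = alpha
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.ReLU(inplace=True),
        )

    def forward(self, feat):  # feat: frozen encoder output, shape (batch, dim)
        adapted = self.mlp(feat)
        return self.alpha * adapted + (1 - self.alpha) * feat

# Usage: only the adapter's parameters are trained; the encoder stays frozen.
feat = torch.randn(8, 512)                        # stand-in for frozen CLIP features
logits = Adapter()(feat) @ torch.randn(512, 10)   # stand-in class text embeddings
```

The design choice this illustrates is the same one the excerpt highlights: instead of full finetuning or gradient access to the backbone, only a small number of extra parameters are optimized, which is the sense in which these methods are parameter-efficient.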