2022
DOI: 10.48550/arxiv.2203.07281
Preprint
GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models

Cited by 13 publications (29 citation statements)
References 0 publications
“…In particular, the optimized prompts, though inducing strong task performance, tend to be gibberish text without clear human-understandable meaning, echoing recent research (Webson and Pavlick, 2021; Zhao et al., 2021; Prasad et al., 2022) showing that LMs making use of prompts do not necessarily follow human language patterns. Perhaps surprisingly, gibberish prompts learned with one LM can be reused in other LMs with significant performance, indicating that different pretrained LMs have grasped shared structures for prompting.…”
Section: Introduction
confidence: 66%
“…The optimization above, however, can be intractable because the discrete tokens of z are not amenable to gradient-based optimization, while the brute-force search space grows exponentially, on the order of O(V^L). Previous work either approximates gradients over z using their continuous LM embeddings (Shin et al., 2020) or tweaks human-written prompts with heuristics (Jiang et al., 2020; Mishra et al., 2021a; Prasad et al., 2022), to some success.…”
Section: The Discrete Prompt Optimization Problem
confidence: 99%
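The contrast the excerpt draws — an exponential O(V^L) brute-force space versus cheap heuristic tweaks — can be made concrete with a small sketch. The `edit_neighborhood` helper below is a hypothetical, simplified stand-in for GrIPS-style phrase-level edits (only deletion and adjacent swap; the actual method also uses paraphrasing and addition operations), not the paper's implementation:

```python
# Illustrative sketch (assumptions noted above): why exhaustive discrete prompt
# search is intractable, and why edit-based search evaluates few candidates.

def brute_force_candidates(vocab_size: int, prompt_length: int) -> int:
    """Size of the exhaustive search space: V**L token sequences."""
    return vocab_size ** prompt_length

def edit_neighborhood(phrases: list[str]) -> list[list[str]]:
    """Candidates reachable by one phrase deletion or one adjacent swap --
    a toy stand-in for phrase-level instruction edits."""
    candidates = []
    for i in range(len(phrases)):                      # delete phrase i
        candidates.append(phrases[:i] + phrases[i + 1:])
    for i in range(len(phrases) - 1):                  # swap phrases i, i+1
        swapped = phrases.copy()
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        candidates.append(swapped)
    return candidates

# A 10-token prompt over a 50k-token vocabulary: astronomically many candidates.
print(brute_force_candidates(50_000, 10))
# One edit step over a 4-phrase instruction: only 4 deletions + 3 swaps = 7.
print(len(edit_neighborhood(["Read the text.", "Think step by step.",
                             "Answer briefly.", "Use JSON."])))  # 7
```

Scoring each of the 7 neighbors on a small validation set and keeping the best (gradient-free greedy search) is the basic loop such edit-based methods iterate.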