2020
DOI: 10.48550/arxiv.2010.15980
Preprint
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts

Cited by 116 publications (158 citation statements)
References 11 publications
“…Therefore, an approach that does not require tuning the large model is highly desired, which is termed prompt-based learning. Based on the format of prompts, prompt-based learning can be categorized into two kinds: discrete prompts (Jiang et al, 2020; Yuan et al, 2021; Haviv et al, 2021; Wallace et al, 2019; Shin et al, 2020; Gao et al, 2021; Ben-David et al, 2021; Davison et al, 2019) and continuous prompts (Zhong et al, 2021; Qin and Eisner, 2021; Hambardzumyan et al, 2021; Liu et al, 2021b). A discrete prompt is usually a sequence of tokens or natural language phrases, while a continuous prompt is designed as a sequence of vectors (embeddings).…”
Section: Prompts For Pre-trained Models
confidence: 99%
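To make the discrete/continuous distinction in this excerpt concrete, here is a minimal sketch (my own illustration, not code from any of the cited papers; the model name and prompt length are arbitrary assumptions). A discrete prompt is plain text passed through the tokenizer, while a continuous prompt is a small matrix of trainable vectors prepended to the frozen model's input embeddings.

```python
# Sketch: discrete vs. continuous prompts (illustrative assumptions throughout).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Discrete prompt: a human-readable token sequence wrapped around the input.
discrete = f"Paris is the capital of {tokenizer.mask_token}."
inputs = tokenizer(discrete, return_tensors="pt")

# Continuous prompt: k trainable vectors with the model's embedding size,
# concatenated in front of the input embeddings of the (frozen) model.
k, hidden = 5, model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(1, k, hidden) * 0.02)

word_embeds = model.get_input_embeddings()(inputs["input_ids"])
full_embeds = torch.cat([soft_prompt, word_embeds], dim=1)
attn = torch.cat(
    [torch.ones(1, k, dtype=inputs["attention_mask"].dtype),
     inputs["attention_mask"]], dim=1)

# In prompt tuning, only soft_prompt would be optimized; the LM stays frozen.
outputs = model(inputs_embeds=full_embeds, attention_mask=attn)
```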
“…It is well known that prompt-based methods are sensitive to many aspects of prompts, including context (Jiang et al, 2020; Shin et al, 2020), order (Lu et al, 2021), and length, and that inappropriate prompts cause poor performance.…”
Section: Effects Of Prompt Length
confidence: 99%
“…Different from traditional approaches that encode the sentence into a set of vectors and then classify its sentiment through a fully connected layer, the prompt-based method constructs a set of templates, for example: ("I am always happy to see you, the sentence's sentiment is [MASK]"), and then asks the model to predict the [MASK] token according to the PLM's original training task. This approach has gone through various stages, from manual template construction [Jiang et al 2020], to automated search for discrete tokens [Shin et al 2020], to continuous virtual token representations [Lester et al 2021; Li and Liang 2021]. It has achieved great success in few-shot scenarios.…”
Section: Fine-tuning
confidence: 99%
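The template-and-[MASK] procedure described in this excerpt can be sketched as below (my own illustration using Hugging Face transformers, not the cited papers' exact setup; the label words "good"/"bad" and the model name are assumptions): wrap the sentence in a sentiment template, let a masked LM score the [MASK] position, and pick the class whose label word scores higher.

```python
# Sketch: prompt-based sentiment classification with a masked LM.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

sentence = "I am always happy to see you"
template = f"{sentence}. The sentence's sentiment is {tokenizer.mask_token}."

inputs = tokenizer(template, return_tensors="pt")
# Position of the [MASK] token in the input sequence.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Verbalizer: map each class to one (assumed) label word and compare scores.
label_words = {"positive": "good", "negative": "bad"}
scores = {c: logits[tokenizer.convert_tokens_to_ids(w)].item()
          for c, w in label_words.items()}
print(max(scores, key=scores.get))
```

The same skeleton covers the later stages the excerpt mentions: the template string can come from an automated search instead of being hand-written, or be replaced by trainable continuous vectors.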
“…Most works use human-written verbalizers (Schick and Schütze, 2020a), which are biased towards personal vocabulary and do not have enough coverage. Some other studies (Gao et al, 2020; Shin et al, 2020; Schick et al, 2020) design automatic verbalizer-searching methods for better verbalizer choices; however, their methods require adequate training and validation sets for optimization. Moreover, the automatically determined verbalizers are usually synonyms of the class name, which differs from our intuition of expanding the verbalizer with a set of diverse and comprehensive label words using an external KB.…”
Section: Related Work
confidence: 99%
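The expanded-verbalizer idea in this excerpt can be sketched as follows. This is an assumption-laden illustration (single-token label words, mean log-probability aggregation, arbitrary example classes and text), not the method of any particular cited paper: each class is verbalized by several label words rather than one hand-picked token, and the class score aggregates the masked LM's scores over that word set.

```python
# Sketch: an expanded verbalizer with multiple label words per class.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Label-word sets per class (e.g. drawn from a knowledge base). Assumed to be
# single tokens in the model's vocabulary for simplicity.
verbalizer = {
    "sports":   ["football", "basketball", "tennis", "athlete"],
    "politics": ["government", "election", "senate", "policy"],
}

text = f"A new bill passed the senate today. Topic: {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    log_probs = model(**inputs).logits[0, mask_pos].log_softmax(-1)

# Score each class by the mean log-probability of its label words at [MASK].
scores = {}
for cls, words in verbalizer.items():
    ids = tokenizer.convert_tokens_to_ids(words)
    scores[cls] = log_probs[torch.tensor(ids)].mean().item()

print(max(scores, key=scores.get))
```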