2022
DOI: 10.5715/jnlp.29.991
Are Prompt-based Models Clueless?

Abstract: Pretrained language models have achieved remarkable performance on most natural language benchmarks. Until recently, the dominant approach to adapting these models to downstream tasks has been finetuning with a task-specific head. However, previous work has found that these models learn to exploit spurious correlations between inputs and labels (Gururangan et al. 2018; Poliak et al. 2018; Kavumba et al. 2019). These spurious correlations may exist in the form of unique input tokens or style or annotati…

Cited by 1 publication (1 citation statement)
References 8 publications
“…An example of this category would be a test that investigates how well one pretrained model generalises with respect to an o.o.d. finetuning train-test split (Damonte and Monti, 2021; Kavumba et al., 2022; Ludwig et al., 2022). The parts of the modelling pipeline that studies with a finetune train-test locus can evaluate are the same as studies with a train-test locus, although studies that investigate the generalisation abilities of a single finetuned model instance are rare.…”

Confidence: 99%