Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.84
Probing via Prompting

Abstract: Probing is a popular method to discern what linguistic information is contained in the representations of pre-trained language models. However, the mechanism of selecting the probe model has recently been subject to intense debate, as it is not clear if the probes are merely extracting information or modeling the linguistic property themselves. To address this challenge, this paper introduces a novel model-free approach to probing, by formulating probing as a prompting task. We conduct experiments on five prob…

Cited by 8 publications (5 citation statements) · References 35 publications
“…Understanding "how linguistic concepts that were common as features in NLP systems are captured in neural networks" (Belinkov and Glass, 2019) has been the focus of many studies in recent NLP research. It has been extensively shown that pretrained Neural Language Models (NLMs) are able to capture syntax- and semantics-sensitive phenomena (Hewitt and Manning, 2019; Pimentel et al., 2020; Li et al., 2022) and that there is a correlation between a model's degree of linguistic knowledge and its ability to correctly solve a downstream task (Miaschi et al., 2020; Sarti et al., 2021), although this is still highly debated (Ravichander et al., 2021). However, it has also been demonstrated that introducing additional linguistic information during the pre-training phase can enhance models' performance (Wang et al., 2019b; Glavaš and Vulić, 2021).…”
Section: Introduction
confidence: 99%
“…The authors test a range of probes and downstream tasks at dozens of checkpoints. Li et al. (2022) argue that prompting acts as a model-free probe, thus eliminating the distinction between what the model knows and what the probe learns. They compare prompting to linear regression and MLP probing.…”
Section: Related Work
confidence: 99%
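As a point of reference for the probing-vs-prompting distinction the statement above attributes to Li et al. (2022): a conventional probe is a small classifier trained on frozen representations to predict a linguistic property. The sketch below uses synthetic stand-in data (all names and shapes are illustrative, not taken from the paper), with the key feature that only the probe's weights are updated, never the representations:

```python
# Minimal sketch of a linear "probe": softmax regression trained on frozen
# representations to predict a linguistic label (e.g., a POS-like class).
# The representations are synthetic stand-ins, not real LM activations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 tokens, 16-dim frozen representations, 3 classes.
n, d, k = 200, 16, 3
W_true = rng.normal(size=(d, k))
X = rng.normal(size=(n, d))            # frozen representations (never updated)
y = np.argmax(X @ W_true, axis=1)      # labels linearly encoded in X

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train only the probe's weights by gradient descent on cross-entropy;
# freezing X is what makes this a probe rather than fine-tuning.
W = np.zeros((d, k))
Y = np.eye(k)[y]
for _ in range(500):
    P = softmax(X @ W)
    W -= 0.1 * X.T @ (P - Y) / n

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy is usually read as evidence that the property is linearly decodable from the representations; the debate the paper addresses is whether a more expressive probe (e.g., an MLP) is extracting that information or computing it itself, a confound that prompting, being model-free, avoids.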
“…Work in this area includes tools to facilitate development of handcrafted prompts (Strobelt et al., 2022; Bach et al., 2022); algorithms to find optimal prompts through gradient-guided search (Shin et al., 2020) or exhaustive search through labels (Schick and Schütze, 2021) or both labels and templates (Gao et al., 2021); as well as studies on the effect of example order (Kumar and Talukdar, 2021; Lu et al., 2022). Hard prompts have also been used to analyze model capabilities (Garg et al., 2022; Li et al., 2022a), the role of data (Singh et al., 2022), and the nature of prompting itself (Min et al., 2022).…”
Section: Related Work
confidence: 99%