Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP 2016
DOI: 10.18653/v1/w16-2524
Probing for semantic evidence of composition by means of simple classification tasks

Abstract: We propose a diagnostic method for probing specific information captured in vector representations of sentence meaning, via simple classification tasks with strategically constructed sentence sets. We identify some key types of semantic information that we might expect to be captured in sentence composition, and illustrate example classification tasks for targeting this information.
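The probing setup the abstract describes can be sketched minimally: a simple classifier is trained on fixed sentence vectors and tested on whether it can recover a targeted semantic property. Everything below is a hedged illustration, not the paper's actual setup — the "sentence representations" are synthetic vectors in which the property is simulated as a shift along a fixed direction, and a nearest-centroid classifier stands in for the simple probe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "sentence representations": 200 vectors of dimension 50.
# Half are assumed to encode the target semantic property (label 1),
# simulated here as a shift along a fixed random direction.
dim, n = 50, 200
direction = rng.normal(size=dim)
labels = np.repeat([0, 1], n // 2)
vectors = rng.normal(size=(n, dim)) + np.outer(labels, direction)

# Split into train/test halves for the probe.
train_X, train_y = vectors[::2], labels[::2]
test_X, test_y = vectors[1::2], labels[1::2]

# Nearest-centroid "probe": if the property is easily recoverable from
# the representation, the class centroids separate and accuracy is high.
centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
preds = dists.argmin(axis=1)
accuracy = (preds == test_y).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy is read as evidence that the representation captures the targeted information; the diagnostic power comes from strategically constructing the sentence sets so that only the property of interest distinguishes the classes.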

Cited by 122 publications (107 citation statements)
References 16 publications
“…An active line of work focuses on "probing" neural representations of language. Ettinger et al. (2016, 2017) and Zhu et al. (2018), i.a., use a task-based approach similar to ours, where tasks that require a specific subset of linguistic knowledge are used to perform qualitative evaluation. Gulordava et al. (2018), Giulianelli et al. (2018), Rønning et al. (2018), and Jumelet and Hupkes (2018) make a focused contribution towards a particular linguistic phenomenon (agreement, ellipsis, negative polarity).…”
Section: Related Work
confidence: 99%
“…Many of them employ the Transformer architecture (Vaswani et al., 2017) that uses multi-head self-attention to capture context. To assess the linguistic knowledge learned by pre-trained LMs, probing task methodology suggests training supervised models on top of the word representations (Ettinger et al., 2016; Hupkes et al., 2018; Belinkov and Glass, 2019; Hewitt and Liang, 2019). Investigated linguistic aspects span morphology (Shi et al., 2016; Belinkov et al., 2017; Liu et al., 2019a), syntax (Tenney et al., 2019; Hewitt and Manning, 2019), and semantics (Conneau et al., 2018; Liu et al., 2019a).…”
Section: Related Work
confidence: 99%
“…We show that selectivity can be a guide in designing probes and interpreting probing results, complementary to random representation baselines; as of now, there is little consensus on how to design probes. Early probing papers used linear functions (Shi et al., 2016; Ettinger et al., 2016; Alain and Bengio, 2016), which are still used (Bisazza and Tump, 2018; Liu et al., 2019), but multi-layer perceptron (MLP) probes are at least as popular (Conneau et al., 2018; Adi et al., 2017; Ettinger et al., 2018). Arguments have been made for "simple" probes, e.g., that we want to find easily accessible information in a representation (Liu et al., 2019; Alain and Bengio, 2016).…”
Section: Introduction
confidence: 99%
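The "selectivity" idea mentioned in the last excerpt can be illustrated with a small sketch: the same probe is run on the real task and on a control with randomly permuted labels, and the gap between the two accuracies is reported. This is a loose simplification of the actual control-task construction, using synthetic vectors and a nearest-centroid probe purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "representations" in which the real labels correspond to a
# shift along a fixed direction; the control uses shuffled labels.
dim, n = 50, 400
direction = rng.normal(size=dim)
labels = np.repeat([0, 1], n // 2)
vectors = rng.normal(size=(n, dim)) + np.outer(labels, direction)

def centroid_probe_accuracy(X, y):
    """Fit a nearest-centroid probe on even rows, score on odd rows."""
    tr_X, tr_y, te_X, te_y = X[::2], y[::2], X[1::2], y[1::2]
    cents = np.stack([tr_X[tr_y == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(te_X[:, None, :] - cents[None, :, :], axis=2)
    return (dists.argmin(axis=1) == te_y).mean()

task_acc = centroid_probe_accuracy(vectors, labels)
control_acc = centroid_probe_accuracy(vectors, rng.permutation(labels))
selectivity = task_acc - control_acc
print(f"task {task_acc:.2f}, control {control_acc:.2f}, selectivity {selectivity:.2f}")
```

A probe that scores well on the real task but near chance on the shuffled control is selective: its accuracy reflects information in the representation rather than the probe's own capacity to memorize arbitrary mappings.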