Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.177
Probing Linguistic Systematicity

Abstract: Recently, there has been much interest in the question of whether deep natural language understanding models exhibit systematicity: generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models often generalize non-systematically. We examined the notion of systematicity from a linguistic perspective, defining a set of probes and a set of metrics to measure systematic behaviour. We also identified ways …

Cited by 46 publications (39 citation statements); references 44 publications.
“…Defining disjoint train/test splits is enough to foil truly unsystematic models (e.g., simple look-up tables). However, building on much previous work (Lake and Baroni, 2018; Hupkes et al., 2019; Yanaka et al., 2020; Bahdanau et al., 2018; Goodwin et al., 2020; Geiger et al., 2019), we contend that a randomly constructed disjoint train/test split only diagnoses the most basic level of systematicity. More difficult systematic generalization tasks will only be solved by models exhibiting more complex compositional structures.…”
Section: A Systematic Generalization Task
confidence: 92%
“…There are often strong intuitions that certain generalization tasks are only solved by models with systematic structures. These tasks are referred to as systematic generalization tasks (Lake and Baroni, 2018; Hupkes et al., 2019; Yanaka et al., 2020; Bahdanau et al., 2018; Geiger et al., 2019; Goodwin et al., 2020).…”
Section: Related Work
confidence: 99%
“…There is also a lively debate in cognitive science as to how important rule-based reasoning is for human cognition (Politzer, 2007). Yanaka et al. (2020) and Goodwin et al. (2020) are concurrent studies of systematicity in PLMs. The first shows that monotonicity inference is feasible for syntactic structures close to the ones observed during training.…”
Section: Limitations
confidence: 99%
“…Our work builds upon a large body of research intended to probe which aspects of language and meaning are being captured by large LMs. Most closely related is work that assesses whether models can perform symbolic reasoning about language, e.g., quantifiers or negation (Talmor et al., 2020; Ettinger, 2020; Wang et al., 2018), or by measuring the systematicity of models' inferences (Goodwin et al., 2020; Kim and Linzen, 2020; Yanaka et al., 2020; Warstadt et al., 2019). Such work has tended to find that LMs reason primarily contextually as opposed to abstractly.…”
Section: Related Work
confidence: 99%