Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics 2019
DOI: 10.18653/v1/w19-2913
Frequency vs. Association for Constraint Selection in Usage-Based Construction Grammar

Abstract: A usage-based Construction Grammar (CxG) posits that slot-constraints generalize from common exemplar constructions. But what is the best model of constraint generalization? This paper evaluates competing frequency-based and association-based models across eight languages using a metric derived from the Minimum Description Length paradigm. The experiments show that association-based models produce better generalizations across all languages by a significant margin.

Cited by 17 publications (34 citation statements)
References 36 publications
“…This paper evaluates two alternate CxGs for dialectometry, alongside function words and lexical features: CxG-1 (Dunn, 2018a, b) and CxG-2 (Dunn, 2019a). As described and evaluated elsewhere (Dunn, 2019a), CxG-1 relies on frequency to select candidate slot-constraints while CxG-2 relies on an association-based search algorithm. The differences between the two competing discovery-device grammars as implementations of different theories of language learning are not relevant here.…”
Section: Methods
confidence: 99%
“…CxG itself is a usage-based paradigm that views grammar as a set of overlapping constructions made up of slot-fillers defined by syntactic, semantic, and sometimes lexical constraints (Goldberg, 2006; Langacker, 2008). This paper draws on recent approaches to computational modeling of CxGs (Dunn, 2017, 2018b, 2019a), including previous applications of a discovery-device CxG to dialectometry for English (Dunn, 2018a, 2019b).…”
Section: Finding Syntactic Variants
confidence: 99%
“…For the former approach, we should mention the works of Dunn (2017, 2019) that aim at automatically inducing a set of grammatical units (Cxs) from a large corpus. On the one hand, Dunn's contributions provide a method for extracting Cxs from corpora, but on the other hand they are mainly concerned with the formal side of the constructions, and especially with the problem of how syntactic constraints are learned.…”
Section: Related Work
confidence: 99%
“…This paper combines grammar induction (Dunn, 2018a, 2018b, 2019) and text classification (Joachims, 1998) to model syntactic variation across national varieties of English. This classification-based approach is situated within the task of dialect identification (Section 2) and evaluated against other baselines for the task (Sections 7 and 8).…”
Section: Syntactic Variation Around the World
confidence: 99%
“…Past approaches to syntactic representation for this kind of task used part-of-speech n-grams (c.f., Hirst & Feiguina, 2007) or lists of function words (c.f., Argamon & Koppel, 2013) to indirectly represent grammatical patterns. Recent work (Dunn, 2018c), however, has introduced the use of full-scale syntactic representations based on grammar induction (Dunn, 2017, 2018a, 2019) within the Construction Grammar paradigm (CxG: Langacker, 2008; Goldberg, 2006). The idea is that this provides a replicable syntactic representation.…”
Section: Learning the Syntactic Feature Space
confidence: 99%