2017
DOI: 10.1515/cllt-2014-0022

A multimodel inference approach to categorical variant choice: construction, priming and frequency effects on the choice between full and contracted forms of am, are and is

Abstract: The present paper presents a multimodel inference approach to linguistic variation, expanding on prior work by Kuperman and Bresnan (2012). We argue that corpus data often present the analyst with high model selection uncertainty. This uncertainty is inevitable given that language is highly redundant: every feature is predictable…

Cited by 42 publications (29 citation statements)
References 69 publications (132 reference statements)
“…Any given feature seems predictable from many other features. Because of this redundancy, an utterance can be produced in (unobservably) different ways, which explains how individual differences and uniformity across the community can co-exist (Hurford 2000; Barth and Kapatsinski 2014; Dąbrowska 2013, 2014). Thus, while multicollinearity can be a major headache for statistical modelling (but see Harrell 2001), it may be a blessing for language learners, in that it enables speakers to behave in a way that is broadly similar to that of other speakers even when they all have different underlying grammars.…”
Section: Results (mentioning)
confidence: 99%
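The quoted passage treats multicollinearity as a modelling headache. As a minimal sketch of how one might quantify it, the snippet below simulates deliberately redundant predictors (the variable names log_freq, familiarity, word_length are hypothetical, not from the cited studies) and computes variance inflation factors with statsmodels; a VIF well above roughly 5-10 flags a predictor that is largely predictable from the others.

```python
# Multicollinearity check on simulated, deliberately correlated predictors
# (hypothetical variable names, not data from the cited studies).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
freq = rng.normal(size=500)
X = sm.add_constant(pd.DataFrame({
    "log_freq":    freq,
    "familiarity": freq + rng.normal(scale=0.2, size=500),  # near-duplicate of log_freq
    "word_length": rng.normal(size=500),                    # unrelated control
}))
for i, name in enumerate(X.columns):
    if name != "const":
        print(f"VIF({name}) = {variance_inflation_factor(X.values, i):.2f}")
```

With these settings the two redundant predictors get large VIFs while the unrelated one stays near 1, which is the kind of redundancy the quote has in mind.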
“…This, combined with the considerable differences in the performance of human participants, suggests that rather than trying to find the single "best" model, it may be more productive to develop a range of models reflecting the range of human performance (as already suggested by Lauri Carlson, cf. Arppe 2008: 208; for a practical implementation, see Barth and Kapatsinski 2014).…”
Section: Results (mentioning)
confidence: 99%
“…The few studies examining function words show that they are less affected by repeated mention (Bell et al. 2009) and more affected by predictability given the preceding context (Jurafsky et al. 2001), but both content and function words reduce in cases of high following context probability (Bell et al. 2009). Studies of contraction of the function words HAVE and BE indicate that contraction is more likely when BE or HAVE is highly probable given the context (Krug 1998; Frank and Jaeger 2008; Bresnan and Spencer 2016; Barth and Kapatsinski 2017). In all its meanings BE is a function word, allowing a controlled look at expression differences among grammatical meanings of the same word.…”
Section: Content Words vs Function Words (mentioning)
confidence: 99%
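The contraction findings quoted above hinge on the probability of BE given its context. As a toy illustration only (the word list and counts are invented, not drawn from any of the cited corpora), the conditional probability P(is | preceding word) can be estimated from bigram counts:

```python
# Toy estimate of "probability given the preceding context":
# P(is | previous word) from bigram counts over an invented word list.
from collections import Counter

tokens = ("that is what it is and there is no doubt that this "
          "is the point that he is making").split()

bigram_counts = Counter(zip(tokens, tokens[1:]))
prev_counts = Counter(tokens[:-1])

def p_given_prev(word: str, prev: str) -> float:
    """Maximum-likelihood estimate of P(word | prev) from the toy corpus."""
    return bigram_counts[(prev, word)] / prev_counts[prev] if prev_counts[prev] else 0.0

for prev in ("that", "there", "he"):
    print(f"P(is | {prev}) = {p_given_prev('is', prev):.2f}")
# The claim in the quoted studies: where this probability is high, the
# contracted form ('s) is favoured over the full form (is).
```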
“…Woodward (2006) and Hoover and Perez (2004) note that ensuring structural robustness is a hard problem, especially since results may be sensitive not just to the set of control variables, but to the particular combination of control variables, causing an exponential explosion of possible control models. Barth and Kapatsinski (2017) suggest that this is a real problem for linguistics because aspects of language are highly redundant and inter-related. Instead of committing to one “best” model for the final results, Barth and Kapatsinski (2017) suggest a “multimodel inference” approach, which assesses the hypothesized relationship in a wide range of models.…”
Section: Robustness in Cross-cultural Statistical Research (mentioning)
confidence: 99%
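As a rough illustration of the multimodel-inference idea described in this quote, the sketch below fits every subset of a few hypothetical predictors to a simulated binary contraction outcome, converts AIC differences into Akaike weights, and averages the coefficient of a focal predictor across all candidate models instead of picking a single winner. The data, variable names, and AIC-weight averaging scheme are illustrative assumptions, not the exact procedure reported in Barth and Kapatsinski (2017).

```python
# Illustrative multimodel-inference sketch on simulated data: fit all subsets
# of a few hypothetical predictors, weight models by Akaike weights, and
# average the focal coefficient across the whole model set.
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "log_freq":   rng.normal(size=n),      # frequency of the host word
    "primed":     rng.integers(0, 2, n),   # contracted form in recent context?
    "post_vowel": rng.integers(0, 2, n),   # following segment is a vowel?
})
linpred = 0.8 * data.log_freq + 0.7 * data.primed - 0.5 * data.post_vowel
data["contracted"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred.to_numpy())))

predictors = ["log_freq", "primed", "post_vowel"]
focal = "primed"                            # the hypothesized effect of interest
fits = []
for k in range(1, len(predictors) + 1):
    for subset in combinations(predictors, k):
        X = sm.add_constant(data[list(subset)])
        fits.append((subset, sm.Logit(data["contracted"], X).fit(disp=0)))

aics = np.array([fit.aic for _, fit in fits])
weights = np.exp(-(aics - aics.min()) / 2.0)
weights /= weights.sum()                    # Akaike weights, sum to 1

# Model-averaged coefficient: treated as 0 in models omitting the predictor.
avg_beta = sum(w * f.params.get(focal, 0.0) for w, (_, f) in zip(weights, fits))
importance = sum(w for w, (s, _) in zip(weights, fits) if focal in s)
print(f"models within 2 AIC units of the best: {(aics - aics.min() < 2).sum()}")
print(f"model-averaged coefficient for {focal}: {avg_beta:.3f}")
print(f"summed weight of models containing {focal}: {importance:.3f}")
```

The point of the exercise is that several models usually sit within a couple of AIC units of the best one, so conclusions about the focal effect are drawn from the weighted model set rather than from any single specification.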