2019
DOI: 10.3765/amp.v7i0.4495
Learning a Frequency-Matching Grammar together with Lexical Idiosyncrasy: MaxEnt versus Hierarchical Regression

Abstract: Experimental research has uncovered language learners’ ability to frequency-match to statistical generalizations across the lexicon, while also acquiring the idiosyncratic behavior of individual attested words. How can we model the learning of a frequency-matching grammar together with lexical idiosyncrasy? A recent approach based in the single-level regression model Maximum Entropy Harmonic Grammar makes use of general constraints that putatively capture statistical generalizations across the lexicon, as well…
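To make the contrast in the abstract concrete, here is a minimal sketch of the two model families it compares. All data are simulated and all parameter values are hypothetical, not taken from the paper: (a) a single-level, intercept-only logistic model (one shared weight for the whole lexicon, as in a general MaxEnt constraint) versus (b) a hierarchical model that adds per-word adjustments under a Gaussian prior, so individual words can frequency-match the lexicon-wide rate while retaining idiosyncratic behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (all values hypothetical): 20 words, 50 tokens each. Each
# word applies a process at its own rate, drawn around a lexicon-wide
# tendency -- a statistical generalization plus lexical idiosyncrasy.
n_words, n_obs = 20, 50
true_logits = rng.normal(loc=np.log(0.8 / 0.2), scale=1.0, size=n_words)
counts = rng.binomial(n_obs, 1 / (1 + np.exp(-true_logits)))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# (a) Single-level model: one shared weight w for the whole lexicon,
# fit by gradient ascent on the binomial log-likelihood.
w, lr = 0.0, 0.05
for _ in range(3000):
    resid = counts - n_obs * sigmoid(w)
    w += lr * resid.sum() / (n_words * n_obs)

# (b) Hierarchical model: shared weight plus per-word adjustments b_i under
# a Gaussian prior (an L2 penalty pulling them toward 0: partial pooling,
# the analogue of random intercepts in a mixed-effects regression).
w_h, b = 0.0, np.zeros(n_words)
prior_var = 1.0
for _ in range(3000):
    resid = counts - n_obs * sigmoid(w_h + b)
    w_h += lr * resid.sum() / (n_words * n_obs)
    b += lr * (resid - b / prior_var) / n_obs

pooled_rate = sigmoid(w)        # one rate for the entire lexicon
word_rates = sigmoid(w_h + b)   # partially pooled per-word rates
```

The single-level model can only report one application rate for every word; the hierarchical model shrinks each word’s rate toward the lexicon-wide tendency while still tracking its idiosyncratic propensity.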

Cited by 7 publications (15 citation statements) | References 17 publications
“…In this weighted constraint version of CBP, the R specification consists of constraint weight adjustments, rather than subrankings. Constraint resolution reduces to summing the weights of the morpheme-specific R specifications within a phase. As Zymet (2018, 2019) shows, lexically-specific weights, or multiple weights for a single constraint in the same language, may be best modeled in a hierarchical regression model, rather than the single-level logistic regression model of traditional MaxEnt. We expect that future work might show that a hierarchical regression model would be a better fit for the data presented here than the output of the MaxEnt Grammar Tool.…”
Section: Constraint Resolution
confidence: 99%
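The weight-summing idea in the quote above can be sketched as follows. The constraint names, weights, violation profiles, and the morpheme-specific adjustment are invented for illustration; they are not taken from the cited work. A candidate’s probability comes from its harmony (the negated sum of weighted violations), where the morpheme-specific adjustment is simply summed onto the general constraint weight:

```python
import math

# Hypothetical MaxEnt tableau: two candidates for one input.
general_weights = {"Max": 2.0, "*CodaCluster": 1.5}  # assumed general grammar
morpheme_adjustment = {"Max": -1.0}                  # assumed lexical idiosyncrasy

violations = {
    "faithful": {"Max": 0, "*CodaCluster": 1},
    "deleted":  {"Max": 1, "*CodaCluster": 0},
}

def harmony(cand):
    # Sum each constraint's general weight plus its morpheme-specific
    # adjustment, scaled by the candidate's violation count, then negate.
    return -sum(
        v * (general_weights[c] + morpheme_adjustment.get(c, 0.0))
        for c, v in violations[cand].items()
    )

# Standard MaxEnt: probabilities proportional to exp(harmony).
z = sum(math.exp(harmony(c)) for c in violations)
probs = {c: math.exp(harmony(c)) / z for c in violations}
```

With these assumed numbers, the adjustment lowers the effective cost of deletion for this morpheme, so the deleting candidate wins more probability than it would under the general weights alone.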
“…Methodologically we demonstrate that the precise nature of the identity avoidance effect can be revealed using hierarchical regression and statistical model comparisons (Graff and Jaeger, 2009; Zymet, 2019).…”
Section: Introduction
confidence: 92%
“…This is a subject of ongoing research (e.g. Moore-Cantwell & Pater 2016, Zymet 2018), and is beyond the scope of the current paper.…”
Section: Analysis With Indexed Constraints
confidence: 99%
“…Two further conclusions can be drawn from an examination of deletion rates in these contexts. Firstly, although deletion is by far the most common pattern, each of the elicited suffixes seems to have its own rate of deletion, in what Zymet (2018) terms ‘lexical propensities’: 78% for the predicative, 91% for the accusative and 93% for the possessive. Secondly, a given root does not always behave in the same way across morphological contexts.…”
Section: Analysis With Indexed Constraints
confidence: 99%