2016
DOI: 10.3765/amp.v3i0.3659

Morphologically-conditioned tonotactics in multilevel Maximum Entropy grammar

Abstract: This paper presents a novel approach to probabilistic morphologically-conditioned tonotactics, featuring a case study of Mende, in which tonotactics vary by lexical category. This variation in surface tone patterns is modeled via indexed weight adjustments (i.e., varying slopes) for each constraint in a Maximum Entropy Harmonic Grammar, quantifying the degree to which each lexical class follows basic tonotactic principles in a common base grammar. Approaching morphologically-conditioned phonotactics as indexed…
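As a rough sketch of the kind of model the abstract describes (the notation below is illustrative, not necessarily the paper's own), a multilevel MaxEnt grammar evaluates a candidate y for an input of lexical class j with a harmony built from a shared base weight plus a class-indexed adjustment for each constraint, then converts harmonies to probabilities in the usual MaxEnt way:

```latex
% Illustrative multilevel MaxEnt formulation; symbols are assumptions, not the paper's notation.
% w_k          : base weight of constraint k (the common base grammar)
% \sigma_{k,j} : indexed weight adjustment ("varying slope") of constraint k for lexical class j
% C_k(y)       : violations of constraint k incurred by candidate y
H_j(y) = \sum_k \left( w_k + \sigma_{k,j} \right) C_k(y)

P_j(y \mid x) = \frac{\exp\left(-H_j(y)\right)}{\sum_{z \in \mathrm{Cand}(x)} \exp\left(-H_j(z)\right)}
```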

Cited by 19 publications (7 citation statements) · References 22 publications
“…I call this model HIERARCHICAL MAXENT. Hierarchical regression is used widely to capture statistical generalizations together with idiosyncrasies in variable datasets: linguists in particular have employed random intercepts to measure by-word/lexical class idiosyncrasy (Fruehwald 2012, Zuraw & Hayes 2017, Smith & Moore-Cantwell 2017, inter alia); Shih & Inkelas (2016) and Shih (2018) even adopt the hierarchical model as a theory of learning and competence for their data.…”
Section: Hierarchical Regression Background
confidence: 99%
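The analogy to hierarchical regression with random intercepts/slopes can be made concrete with a small, self-contained sketch. The constraint names, weights, and by-class adjustments below are invented purely for illustration; in the cited work such values are fitted to data.

```python
# Minimal sketch of hierarchical ("multilevel") MaxEnt evaluation.
# All constraint names and numeric values are hypothetical, chosen only to
# show how by-class weight adjustments (analogous to random slopes) work.
import math

BASE_WEIGHTS = {"*HL": 2.0, "*RISE": 1.5}        # shared base grammar
CLASS_ADJUSTMENTS = {                             # by-class varying slopes
    "noun":      {"*HL": 0.5,  "*RISE": 0.0},
    "ideophone": {"*HL": -2.0, "*RISE": 0.0},     # laxer tonotactics for ideophones
}

def harmony(violations, lex_class):
    """Sum of (base weight + class adjustment) * violation count."""
    adj = CLASS_ADJUSTMENTS[lex_class]
    return sum((BASE_WEIGHTS[c] + adj.get(c, 0.0)) * n
               for c, n in violations.items())

def maxent_probs(candidates, lex_class):
    """MaxEnt: P(candidate) is proportional to exp(-harmony)."""
    scores = {cand: math.exp(-harmony(viols, lex_class))
              for cand, viols in candidates.items()}
    total = sum(scores.values())
    return {cand: s / total for cand, s in scores.items()}

# Toy tableau: two surface tone patterns competing for one input.
candidates = {"H-L": {"*HL": 1, "*RISE": 0}, "L-H": {"*HL": 0, "*RISE": 1}}
print(maxent_probs(candidates, "noun"))       # nouns mostly avoid the H-L pattern
print(maxent_probs(candidates, "ideophone"))  # the class adjustment flips the preference
```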
“…I will employ a scaling system in which scaling involves the addition of a penalty to the basic weight w of a constraint (Hsu & Jesney 2016, Shih & Inkelas 2016). This amount is determined by a constraint-specific scaling factor s, whose value is multiplied by a numerical value d, which is determined by the corresponding point on the hierarchy of the constraint violation.…”
Section: Scalar Constraint Analysis Of Restrictions On ṽX
confidence: 99%
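Read literally, the scaling described in this quotation yields an effective constraint weight of the form below; the symbols follow the quotation, and the numbers are invented purely for illustration.

```latex
% Scaled constraint weight as described in the quotation (numbers invented for illustration)
w_{\mathrm{eff}} = w + s \cdot d
% e.g. base weight w = 2, scaling factor s = 0.5, hierarchy value d = 3:
% w_{\mathrm{eff}} = 2 + 0.5 \cdot 3 = 3.5
```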
“…Phonological exceptions are a subset of lexical class-conditioned phonological phenomena, but differ from similar phenomena (e.g. morphosyntactic classes (Smith, 2016); ideophones (Dingemanse, 2012; Shih & Inkelas, 2016), inter alia) in terms of arbitrariness. Exceptions are only distinguishable from non-exceptions by virtue of their idiosyncratic behavior, as opposed to membership in a lexical class that is distinguishable on other grounds (e.g.…”
Section: What Is An Exception?
confidence: 99%