In Magri (2009a), I argue that a sentence such as #Some Italians come from a warm country sounds odd because it triggers the scalar implicature that not all Italians come from a warm country, which mismatches with the piece of common knowledge that all Italians come from the same country. If this proposal is on the right track, then oddness can be used as a diagnostic for scalar implicatures. In this paper, I use this diagnostic to provide one more argument that scalar implicatures are computed not only at the matrix level but also in embedded position. The argument is based on a puzzling pattern of oddness in downward-entailing environments. Some apparently unrelated facts about restrictions on temporal modification with individual-level predicates are shown to fit into the pattern.
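The mismatch driving this oddness diagnostic can be made concrete with a toy possible-worlds model. The sketch below is purely illustrative and is not the paper's formal setup: it models only three Italians, reduces "coming from the same country" to sharing a single Boolean feature ("from a warm country"), and represents meanings as sets of worlds.

```python
# Toy formalization of the oddness-as-mismatch diagnostic (illustrative
# assumptions: three individuals, one Boolean feature per individual).
from itertools import product

# A world assigns each of three Italians a value: True = "from a warm country".
worlds = set(product([True, False], repeat=3))

# Common knowledge: all Italians come from the same country (here: same value).
common_knowledge = {w for w in worlds if len(set(w)) == 1}

some = {w for w in worlds if any(w)}   # "Some Italians come from a warm country"
all_ = {w for w in worlds if all(w)}   # the stronger scalar alternative
strengthened = some - all_             # "some but not all" (with the implicature)

# The plain meaning is consistent with common knowledge ...
assert some & common_knowledge                 # the all-warm world survives
# ... but the implicature-strengthened meaning is contradictory given it,
# which is the mismatch the account ties to the sentence's oddness.
assert not (strengthened & common_knowledge)
```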
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. Two classical examples are Tesar & Smolensky's (1998) Error-Driven Constraint Demotion (EDCD) and Boersma's (1998) Gradual Learning Algorithm (GLA). Yet EDCD performs only constraint demotion, and is thus shown to predict a ranking dynamics that is too simple from a modelling perspective. The GLA performs constraint promotion too, but has been shown not to converge. This paper develops a complete theory of convergence for error-driven ranking algorithms that perform both constraint demotion and promotion. In particular, it shows that convergent constraint promotion can be achieved, with an error bound that compares well with that of EDCD, through a proper calibration of the amount by which constraints are promoted.
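To make the demotion/promotion dynamics concrete, here is a minimal sketch of a single error-driven update over numeric ranking values, in the style of the GLA. The data representation, the choice to demote every loser-preferring constraint by 1, and the particular calibration factor are simplifying assumptions of mine; the point they illustrate is that the total amount of promotion per update can be kept strictly below the total amount of demotion.

```python
def error_driven_update(values, winner_viols, loser_viols, calibration=0.5):
    """One error-driven re-ranking step with calibrated promotion (a sketch).

    values:       dict constraint -> ranking value (higher = ranked higher)
    winner_viols: dict constraint -> violations of the intended winner
    loser_viols:  dict constraint -> violations of the current (wrong) output
    """
    # Constraints that favour the winner / the loser on this piece of data.
    winner_pref = [c for c in values if loser_viols[c] > winner_viols[c]]
    loser_pref = [c for c in values if winner_viols[c] > loser_viols[c]]
    if not winner_pref or not loser_pref:
        return  # no informative error to learn from

    # Demote by 1; promote by a calibrated amount, so that (for calibration < 1)
    # total promotion stays strictly below total demotion.
    promotion = calibration * len(loser_pref) / len(winner_pref)
    for c in loser_pref:
        values[c] -= 1.0
    for c in winner_pref:
        values[c] += promotion
```

Sorting the constraints by their current values reads off the ranking that the learner would use on the next piece of data.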
Various authors have recently endorsed Harmonic Grammar (HG) as a replacement for Optimality Theory (OT). One argument for this move is that OT seems to lack close counterparts within machine learning, while HG allows methods and results from machine learning to be imported into computational phonology. Here, I prove that this argument in favor of HG and against OT is wrong: any algorithm for HG can be turned into an algorithm for OT, so HG has no computational advantages over OT. This result allows tools from machine learning to be systematically adapted to OT. As an illustration of this new toolkit for computational OT, I prove convergence for a slight variant of Boersma's (1998) (non-stochastic) Gradual Learning Algorithm.
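The reduction here runs from HG to OT, but the classical observation underlying traffic between the two frameworks is worth recalling: an OT ranking can be simulated in HG by exponentially spaced weights, provided violation counts are bounded. The sketch below illustrates that encoding; the function names and the choice of base are mine, not the paper's.

```python
def ot_ranking_to_hg_weights(ranking, max_violations):
    """Map a strict OT ranking (highest-ranked constraint first) to HG weights
    under which weighted-sum comparisons reproduce OT's lexicographic ones,
    assuming no candidate violates any constraint more than max_violations times.
    """
    base = max_violations + 1
    n = len(ranking)
    # Each weight exceeds max_violations times the sum of all lower weights.
    return {c: base ** (n - 1 - i) for i, c in enumerate(ranking)}

def hg_harmony(weights, violations):
    """HG harmony of a candidate: the negated weighted sum of its violations."""
    return -sum(weights[c] * violations[c] for c in weights)
```

For instance, with ranking = ["Max", "Dep", "NoCoda"] and max_violations = 2, the candidate with the higher hg_harmony under these weights is exactly the candidate that wins the OT comparison.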