While published linguistic judgments sometimes differ from the judgments found in large-scale formal experiments with naive participants, there is no consensus about how often these errors occur or how often formal experiments should be used in syntax and semantics research. In this article, we first present the results of a large-scale replication of Sprouse, Schütze, and Almeida's (2013) study on 100 English contrasts randomly sampled from Linguistic Inquiry 2001-2010 and tested in both a forced-choice experiment and an acceptability rating experiment. Like Sprouse, Schütze, and Almeida, we find that the effect sizes of published linguistic acceptability judgments are not uniformly large or consistent but rather form a continuum from very large effects to small or nonexistent effects. We then use this data as a prior in a Bayesian framework to propose a small n acceptability paradigm for linguistic acceptability judgments (SNAP Judgments). This proposal makes it easier and cheaper to obtain meaningful quantitative data in syntax and semantics research. Specifically, for a contrast of linguistic interest for which a researcher is confident that sentence A is better than sentence B, we recommend obtaining judgments from at least five unique participants, using at least five unique sentences of each type. If all participants in the sample agree that sentence A is better than sentence B, the researcher can be confident that a full forced-choice experiment would likely yield 75% or more agreement in favor of sentence A (with a mean of 93%). We test this proposal by sampling from the existing data and find that it gives reliable performance.
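As a rough, non-authoritative illustration of the Bayesian reasoning behind the SNAP recommendation, the sketch below computes the posterior probability that population-level agreement exceeds 75% after observing five out of five participants preferring sentence A. The Beta prior parameters here are placeholders, not the empirically derived prior reported in the article, and the calculation ignores item-level variability.

```python
# Illustrative Beta-binomial sketch of the SNAP logic (placeholder prior, not
# the article's empirically derived one): how confident can we be that
# population agreement with "A > B" is at least 75% after 5/5 participants agree?
from scipy.stats import beta

prior_a, prior_b = 2.0, 1.0        # assumed prior over the agreement rate p
n, k = 5, 5                        # five participants sampled, all prefer A

post_a, post_b = prior_a + k, prior_b + (n - k)   # conjugate posterior update

p_at_least_75 = 1 - beta.cdf(0.75, post_a, post_b)
posterior_mean = post_a / (post_a + post_b)

print(f"P(agreement >= 0.75 | 5/5 agree) = {p_at_least_75:.2f}")
print(f"Posterior mean agreement rate   = {posterior_mean:.2f}")
```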
This article presents an argument from ellipsis parallelism that traces of all types of movement receive a bound variable interpretation at LF. MaxElide, a constraint on ellipsis, is used to probe the size of parallelism domains and detect the semantic contribution of a variety of traces. The data examined reveal a detailed interaction between wh-movement from various positions, T-to-C movement, and movement of subjects. I offer an analysis based on the overlapping variable-binder relationships created by these movements. The theoretical conclusion is that Ā-, A-, and head movement all produce traces that feed interpretation. This conclusion argues directly against several proposals that deprive non-Ā movements of (certain) semantic effects: for example, proposals that head movement occurs at PF, or that A-movement does not leave traces.
This article explores ‘defective intervention’ effects in a range of A-movement constructions in English. Moving beyond an old observation that English lacks intervention in standard subject-to-subject raising constructions, I present new data showing that English does in fact display intervention in a variety of other NP-raising contexts. I explore the consequences of this expanded data set, and propose an account of intervention that aims to capture both the cross-linguistic variation between English and other languages, and the cross-constructional variation within English.
Keywords: intervention; raising; tough-movement; raising-to-object; passivization; PP-reanalysis; parallel movement; reconstruction; A-movement
APPENDIX A: RATING STUDY RESULTS
'z-bad' is the average z-score for the hypothesized bad option, and 'z-good' is the average z-score for the hypothesized good option. 'Z.diff' is the difference between z-good and z-bad and serves as the effect size. 'Beta' is the estimate from the linear mixed-effects model, reported with its standard error 'SE' and t-value 't'. 'χ²' is the chi-squared value from comparing the full model to an intercept-only model, and 'χ² p' is the p-value obtained from that comparison. 'p' is the p-value calculated directly from the t-value. 'Pred' is TRUE if the effect goes in the predicted direction; 'Sig' is TRUE if the effect is significant. Rows in yellow are rows in which the effect goes in the predicted direction but is not significant.
Column headers: EXPERIMENT, z-BAD, z-GOOD, Z.DIFF, BETA, SE, t, χ², χ² p, p, PRED, SIG
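For readers who want to reproduce the shape of this analysis, the following is a minimal sketch of the rating-study pipeline described above, using simulated ratings, hypothetical column names, and a simplified by-subject random-intercept model; it is not the authors' actual analysis code.

```python
# Minimal sketch of the rating-study analysis (assumed column names, simulated
# data, simplified random-effects structure): z-score ratings per participant,
# compute the z-good minus z-bad effect size, fit a mixed-effects model, and
# compare it to an intercept-only model with a chi-squared likelihood-ratio test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
rows = []
for subj in range(20):                      # 20 hypothetical participants
    for item in range(5):                   # 5 hypothetical items per condition
        for cond, mu in [("bad", 3.0), ("good", 5.0)]:
            rows.append({"subject": subj, "condition": cond,
                         "rating": rng.normal(mu, 1.5)})
df = pd.DataFrame(rows)

# z-score within each participant to remove individual scale bias
df["z"] = df.groupby("subject")["rating"].transform(
    lambda r: (r - r.mean()) / r.std())

# Effect size: mean z for the good option minus mean z for the bad option
z_diff = (df.loc[df.condition == "good", "z"].mean()
          - df.loc[df.condition == "bad", "z"].mean())

# Full model (fixed effect of condition, by-subject random intercept)
# versus intercept-only model, compared by likelihood ratio
full = smf.mixedlm("z ~ condition", df, groups=df["subject"]).fit(reml=False)
null = smf.mixedlm("z ~ 1", df, groups=df["subject"]).fit(reml=False)
lr = 2 * (full.llf - null.llf)

beta = full.params["condition[T.good]"]
se = full.bse["condition[T.good]"]
t = full.tvalues["condition[T.good]"]
p_chi2 = chi2.sf(lr, df=1)

print(f"Z.diff={z_diff:.2f}  beta={beta:.2f}  SE={se:.2f}  t={t:.2f}  "
      f"chi2={lr:.2f}  chi2 p={p_chi2:.3g}")
```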