While sentence processing is generally highly incremental and predictive, negation seems to be an exception to this generalization. Two-step models of negation processing claim that predicate negation is computed only after the meaning of the core proposition has been computed. Several ERP studies eliciting the N400 (an index of semantic integration or lexical expectation) have found a "negation-blind" pattern of N400 results, suggesting that negation has not been integrated into the overall sentence meaning by the time the critical word for the N400 is encountered. Recent research, however, has shown that the N400 is sensitive to the negation-modulated truth value of a sentence when negation is pragmatically licensed. We investigate the possibility that the negation-blind N400 is due to the underinformativeness of the stimuli in past experiments. We found that ERPs to simple class-exclusion statements ("A hammer is not a bird") still exhibit negation blindness, even when negation is presented in a more meaningful context. The current findings provide new support for a late/non-incremental interpretation of negation even when negation is pragmatically licensed.
This study investigates how filler-gap dependencies associated with subject position are formed in online sentence comprehension. Since Crain and Fodor (1985), “filled-gap” studies have provided evidence that the parser actively seeks to associate a wh-filler with a gap in the direct object position of a sentence wherever possible; the evidence that this same process applies to subject position is, however, more limited (Stowe, 1986; Lee, 2004). We examine the processing of complement clauses, finding that wh-dependency formation is actively attempted at the embedded subject position (e.g., Kate in Who did Lucy think Kate could drive us home to?) unless the embedded clause contains a complementizer (e.g., Who did Lucy think that Kate …?). The absence of dependency formation in the latter case demonstrates that the complementizer-trace effect (cf. *Who did Lucy think that could drive us home to mom?; Perlmutter, 1968) is, like syntactic island constraints (Ross, 1967; Keshev and Meltzer-Asscher, 2017), immediately operative in online structure building.
This study investigates pseudo-sluicing constructions in Turkish and argues that they are best accounted for by a pro-form analysis. The argument rests on properties of Turkish pseudo-sluicing such as the lack of case connectivity, the presence of a copula in the pseudo-sluice, the absence of island effects, and ungrammaticality under sprouting. All of these characteristics significantly challenge a possible elliptical cleft approach and instead provide evidence for a pro-form analysis in which the wh-word is preceded by a null E-type pronoun, as originally suggested for sluicing-like constructions in Mandarin Chinese (cf. Adams 2004; Adams and Tomioka 2012).