Corpus-based semantic space models, which primarily rely on lexical co-occurrence statistics, have proven effective in modeling and predicting human behavior in a number of experimental paradigms that explore semantic memory representation. The most widely studied extant models, however, are strongly influenced by orthographic word frequency (e.g., Shaoul & Westbury, Behavior Research Methods, 38, 190-195, 2006). One implication is that high-frequency closed-class words can bias co-occurrence statistics. Because these closed-class words are purported to carry primarily syntactic, rather than semantic, information, the performance of corpus-based semantic space models may be improved by excluding closed-class words (using stop lists) from co-occurrence statistics, while retaining their syntactic information through other means (e.g., part-of-speech tagging and/or affixes from inflected word forms). Additionally, very little work has been done to explore the effect of applying morphological decomposition to the inflected forms of words in corpora prior to compiling co-occurrence statistics, despite (controversial) evidence that humans perform early morphological decomposition in semantic processing. In this study, we explored the impact of these factors on corpus-based semantic space models. Morphological decomposition appeared to significantly improve performance in word-word co-occurrence semantic space models, providing some support for the claim that sublexical information (specifically, word morphology) plays a role in lexical semantic processing. An overall decrease in performance was observed in models employing stop lists (i.e., excluding closed-class words). Furthermore, we found some evidence that weakens the claim that closed-class words supply primarily syntactic information in word-word co-occurrence semantic space models.

Human language, and the semantic representation it facilitates, is a complex behavior. To understand language, one needs to know the meanings of words and to retain knowledge of their grammatical application. The former requirement is addressed by lexical semantics, or the study of individual word meanings as constrained by morphology. Here, meaning is defined by context that is likely derived from statistical redundancies in multisensory elements perceived in the environment; that is, from more than those found in analyzing text alone. Text alone is unlikely to ever provide a comprehensive basis for modeling language comprehension; nevertheless, it has been shown that many aspects of perception and cognition can be understood in isolation by modeling specific capacities as computational problems (Anderson, 1990; Marr, 1982). One such approach to understanding semantic representation involves simple mechanisms operating at a large scale. This approach has yielded a rich history of both high-level and derived mechanistic memory models for lexical semantic representations. Many of these mechanistic models