Although most of the research on word identification has focused on monomorphemic words, an increasing number of studies have explored the processing of multimorphemic words since the pioneering work of Taft and Forster (1975, 1976) and Taft (1979). The major issue in this research has been the role that component morphemes play in the identification of multimorphemic words. A variety of models have been proposed for the access and storage of complex words in the mental lexicon. Most fit into three basic categories: (1) direct access models, (2) decompositional models, and (3) dual-route (or dual-access) models, which involve both full-form storage and some form of decomposition within a dual-process system. (Models with a parallel distributed processing architecture, such as that of Seidenberg & McClelland, 1989, are somewhat harder to characterize, but they are probably most similar to the dual-route models in that they typically posit that morpheme-like entities are involved in word recognition in parallel with letter entities.)

In the direct access model (e.g., Butterworth, 1983; Giraudo & Grainger, 2000), a complex word has an individual full-form representation stored in the lexicon, and it is this full-form representation that is involved in initial access (i.e., a complex word is accessed no differently than a simple, monomorphemic word). These models posit that morphological components are activated only after initial access but can influence postlexical processing.

Decompositional models, in contrast, posit that a morphemically complex word is encoded by a process in which the whole word is necessarily decomposed into its parts. The purpose of the decomposition is the extraction of the word's morphemes, and these models often include multiple levels (e.g., word and morpheme) within the lexicon. There are two basic types of decompositional models. One type, which might be termed a fully decompositional model, posits that each morphemic component is accessed and the full-form representation is then constructed from these components. Such a mechanism is clearly almost necessary for understanding a novel complex word, such as mouseball; however, it seems unlikely as an explanation for understanding the meaning of complex words that are already in the lexicon, with the possible exception of words that are completely orthographically and semantically transparent, such as uncover. There are few, if any, compound words that are completely transparent. For example, cowboy, which is often given as an example of a transparent word in English, would, if its meaning were constructed purely from its parts, denote something like a young male cow and thus could be a synonym for calf. That is, cowboy is transparent only in the sense that its meaning is related to cows and to males. One possible way to maintain a fully decompositional model is to posit that the meaning is not necessarily constructed from the parts. Taft's (2004) model is one such model. It has three levels of nodes. The first contains the form codes, the word's orthography and phonology. The second is the lemma level, and a distinction is made at this level between the a...