Connectionist accounts of quasiregular domains, such as spelling-sound correspondences in English, represent exception words (e.g., pint) amidst regular words (e.g., mint) via a graded "warping" mechanism. Warping allows the model to extend the dominant pronunciation to nonwords (regularization) with minimal interference (spillover) from the exceptions. We tested for a behavioral marker of warping by investigating the degree to which participants generalized from newly learned made-up words, which carried either the dominant pronunciation (regular), a subordinate pronunciation (ambiguous), or a previously non-existent pronunciation (exception). The new words were learned over two days, and generalization was assessed 48 hours later using nonword neighbors of the new words in a tempo naming task. The frequency of regularization (a measure of generalization) was directly related to the degree of warping required to learn the pronunciation of the new word. Simulations using the Plaut et al. (1996) model further support a warping interpretation. Our findings highlight the need to develop theories of representation that are integrally tied to how those representations are learned and generalized.

Keywords: quasiregularity; connectionist models; word learning; tempo naming
Generalization from newly learned words reveals structural properties of the human reading system

Mastery of reading in alphabetic languages, such as English, requires an individual to learn correspondences between strings of letters and their pronunciations. Typically, this mapping is consistent: a given string of letters is pronounced the same way across many words (int in mint, hint, print). Although such regularities can simplify reading acquisition, there are also exceptions (e.g., pint) that are inconsistent with these regularities and must also be learned, and these clearly pose a challenge. Nonetheless, most people become proficient readers. Why does learning an exception word like pint not disrupt reading orthographically similar words like mint? This paper investigates how skilled readers represent both regular words and exceptions to the rule while exhibiting only minimal interference between these types of items.

Considerable computational work has been devoted to understanding how quasiregularity is represented in memory in a way that enables accurate reading of both regulars and exceptions. For example, the dual-route cascaded (DRC) model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) maintains that two qualitatively different mechanisms are required: a rule-based system to handle regulars and new words, and a memory-based system to accommodate exceptions. Others have argued that a single mechanism can represent both regularities and the various degrees of inconsistency characteristic of natural language (e.g., Plaut, Seidenberg, Patterson, & McClelland, 1996). A connectionist, parallel distributed processing (henceforth PDP) network that maps between spelling and sound via an intermediate pool ...
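To make the single-mechanism idea concrete, the sketch below trains a tiny feedforward network with one intermediate ("hidden") pool to map toy spelling codes to pronunciations. It is not the Plaut et al. (1996) implementation; the item codings, network sizes, and learning rate are illustrative assumptions. The point it demonstrates is that plain error-driven learning in a single network can master an exception (pint) alongside its regular neighbors (mint, hint, print) with little interference.

```python
# Minimal sketch (assumed codings, not the Plaut et al. 1996 model):
# one hidden pool between spelling and sound, trained by backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy items sharing the "-int" body; only "pint" takes the exception vowel.
onsets = ["m", "h", "pr", "p"]                 # mint, hint, print, pint
X = np.eye(4)                                  # one-hot onset codes (illustrative)
targets = np.array([[0.], [0.], [0.], [1.]])   # 0 = regular /I/, 1 = exception /aI/

# Weights: 4 input units -> 8 hidden units -> 1 output unit.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(5000):                      # plain backprop on squared error
    h = sig(X @ W1 + b1)
    y = sig(h @ W2 + b2)
    dy = (y - targets) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

# The network learns the exception without disrupting its regular neighbors.
for o, p in zip(onsets, sig(sig(X @ W1 + b1) @ W2 + b2).ravel()):
    print(f"{o}int -> P(exception vowel) = {p:.2f}")
```

In this toy setting the output for pint is pulled toward the exception pronunciation while mint, hint, and print remain regular, a simplified analogue of the graded warping described above.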