Do speakers of all languages use segmental speech sounds when they produce words? Existing models of language production generally assume a mental representation of individual segmental units, or phonemes, but the bulk of evidence comes from speakers of European languages in which the orthographic system codes explicitly for speech sounds. By contrast, in languages with non-alphabetic scripts, such as Mandarin Chinese, individual speech sounds are not orthographically represented, raising the possibility that speakers of these languages do not use phonemes as fundamental processing units. We used event-related potentials (ERPs) combined with behavioral measurement to investigate the role of phonemes in Mandarin production. Mandarin native speakers named colored line drawings of objects using color adjective-noun phrases; color and object name either shared the initial phoneme or were phonologically unrelated. Whereas naming latencies were unaffected by phoneme repetition, ERP responses were modulated from 200 ms after picture onset. Our ERP findings thus provide strong support for the claim that phonemic segments constitute fundamental units of phonological encoding, even for speakers of languages that do not encode such units orthographically.
Is the production of written words affected by their phonological properties? Most researchers agree that orthographic codes can be accessed directly from meaning, but the contribution of phonological codes to written word production remains controversial, mainly because studies have focused on languages with alphabetic scripts, and it is difficult to dissociate sound from spelling in such languages. We report results from a picture-word interference task in which Chinese participants wrote the names of pictures while attempting to ignore written distractor words. On some trials, the distractors were phonologically and orthographically related to the picture names; on other trials, the distractors were only phonologically related to the picture names; and on still other trials, the distractors and picture names were unrelated. Priming effects were found for both types of related distractors relative to unrelated distractors. This result constitutes clear evidence that phonological properties constrain orthographic output. Additionally, the results speak to the nature of Chinese orthography, suggesting subsemantic correspondences between sound and spelling.
Previous studies of spoken picture naming using event-related potentials (ERPs) have shown that speakers initiate lexical access within 200 ms after stimulus onset. In the present study, we investigated the time course of lexical access in written, rather than spoken, word production. Chinese participants wrote target object names which varied in word frequency, and written naming times and ERPs were measured. Writing latencies exhibited a classical frequency effect (faster responses for high- than for low-frequency names). More importantly, ERP results revealed that electrophysiological activity elicited by high- and low-frequency target names started to diverge as early as 168 ms after picture onset. We conclude that lexical access during written word production is initiated within 200 ms after picture onset. This estimate is compatible with previous studies on spoken production, which likewise showed a rapid onset of lexical access (i.e., within 200 ms after stimulus onset). We suggest that written and spoken word production share the lexicalization stage.
To what extent is handwritten word production based on phonological codes? A few studies conducted in Western languages have recently provided evidence showing that phonology contributes to the retrieval of graphemic properties in written output tasks. Less is known about how orthographic production works in languages with non-alphabetic scripts such as written Chinese. We report a Stroop study in which Chinese participants wrote the color of characters on a digital graphic tablet; characters were either neutral, or homophonic to the target (congruent), or homophonic to an alternative (incongruent). Facilitation was found from congruent homophonic distractors, but only when the homophone shared the same tone with the target. This finding suggests a contribution of phonology to written word production. A second experiment served as a control experiment to exclude the possibility that the effect in Experiment 1 had an exclusively semantic locus. Overall, the findings offer new insight into the relative contribution of phonology to handwriting, particularly in non-Western languages.
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.