Common processing systems involved during reading and listening were investigated. Semantic, phonological, and physical systems were examined using an experimental procedure that involved simultaneous presentation of two words: one visual and one auditory. Subjects were instructed to attend to only one modality and to make responses on the basis of words presented in that modality. The influence of unattended words on semantic and phonological decisions indicated that these processing systems are common to the two modalities. Decisions in the physical task were based on modality-specific codes operating prior to the convergence of information from the two modalities.

Studies concerned with word processing have tended to approach the topic by investigating either reading or listening exclusively, largely ignoring the question of common processing systems for the two tasks. Yet, at some level of stimulus analysis, printed words and spoken words must share processing. The present research examines the relationship between the internal codes involved during reading and during listening and investigates at what level of stimulus analysis it might be possible to merge the research from these two areas.

Evidence from bisensory tasks bears upon the question of common semantic coding of visually and auditorily presented words. Lewis (1972) and Sen and Posner (1979) have found facilitation in pronunciation latency for attended words in either modality when the same word is simultaneously presented in the unattended modality. Similarly, when an unattended auditory digit is the same as the attended visual digit, pronunciation latency for the digit is facilitated (Greenwald, 1970; Mynatt, 1977). It has also been shown that unattended visual (Lewis, 1972) and auditory (Greenwald, 1970; Mynatt, 1977) words and digits interfere with pronunciation of items presented to the attended modality when the attended and unattended items are semantically related. This influence of semantically related unattended words indicates automatic activation of a semantic code that is shared by the two modalities.

This paper is based on a dissertation submitted to the graduate school of the University of Oregon in partial fulfillment of degree requirements. I am grateful to Michael Posner for his guidance throughout this project. I also wish to thank Ursula Bellugi and Eleanor Gibson for their helpful comments on earlier drafts of the manuscript and Fred Chang for his assistance with data analysis. The investigation and writing of this report were supported by National Institute of General Medical Sciences Training Grant 5 T01 GM 02165 BHS and by National Institutes of Health National Research Service Award 1 F32 NS06109-01 from the Division of Neurosciences and Communicative Disorders and Stroke. Requests for reprints should be sent to Haskins Laboratories, 270 Crown Street, New Haven, CT 06510.

During reading and listening tasks, is there similarly automatic activation of a shared phonological code? For the pr...