The purpose of this study was to explore hemispheric involvement in stop-consonant discrimination. Two experimental designs were used. In the first design, averaged evoked responses (AERs) to stop-consonant-vowel (CV) syllables were combined with AERs to nonspeech stimuli, in a paradigm similar to earlier studies, and were submitted to a principal components analysis and analyses of variance. In the second design, only the CV-syllable AERs were analyzed, in the same manner. When the responses to both CV and nonspeech stimuli were included in the analysis, the results agreed with those of earlier studies. However, when the nonspeech-stimuli AERs were removed from the analysis, the unilateral effects observed in prior studies were not replicated. These results indicate the importance of considering experimental design and task variables before generalizing AER results to speech perception.

This research was based on the author's dissertation under the direction of Harry Hollien at the University of Florida. The author gratefully acknowledges his support of this project. The author's address is Department of Speech and Hearing Sciences, Indiana University, Bloomington, IN 47405.

It has been assumed since the late 1800s that the left hemisphere of the brain is somehow specialized for language. Indeed, the pervasiveness of various types of aphasia following injury to or disease of the left hemisphere in right-handed individuals gives credence to this view. But what exactly is the left hemisphere's role in speech perception? This question has not been an easy one to investigate. "Speech" is composed of many acoustically diverse elements, and hemispheric involvement in processing these elements has been difficult to determine.

Results of dichotic listening studies have revealed a right-ear advantage (REA), corresponding to a presumed left-hemisphere superiority, for certain types of phonetic stimuli (Kimura, 1961). Shankweiler and Studdert-Kennedy (1967) and Cutting (1974) found that an REA existed for stop-consonant-vowel (CV) stimuli. Stop consonants elicited the most marked REA, whereas the liquids /r/ and /l/ elicited a smaller REA (Cutting, 1974). Steady-state vowels did not appear to elicit a significant REA in either study. Molfese (1978) attempted to replicate some of Cutting's results but employed averaged evoked responses (AERs) rather than dichotic listening to demonstrate the differential hemispheric responses. Neuroelectric activity was measured by electrodes at T3 and T4 of the 10-20 electrode system (Jasper, 1958), referenced to linked earlobes. In a paradigm similar to Cutting's (1974) investigation, Molfese used CV syllables with normal (phonetic) transitions, CV syllables with inverted (nonphonetic) transitions, and a bandwidth variant, sine-wave-formant CV analogs, which had both phonetic and nonphonetic transitions. It should be noted that these stimuli, particularly those with sine-wave formants, are not readily comprehended as spoken syllables, although with training they c...