The physiological basis of human cerebral asymmetry for language remains mysterious. We have used simultaneous physiological and anatomical measurements to investigate the issue. Concentrating on neural oscillatory activity in speech-specific frequency bands and exploring interactions between gestural (motor) and auditory-evoked activity, we find, in the absence of language-related processing, that left auditory, somatosensory, articulatory motor, and inferior parietal cortices show specific, lateralized, speech-related physiological properties. With the addition of ecologically valid audiovisual stimulation, activity in auditory cortex synchronizes with left-dominant input from the motor cortex at frequencies corresponding to syllabic, but not phonemic, speech rhythms. Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting that phonemic analysis is a developmentally acquired process.

EEG/functional MRI | natural stimulation | resting state | oscillation

Auditory asymmetry (1) and hand preference are traits humans share with other primate and nonprimate species (2-5). Both have been proposed as the functional origin of human cerebral dominance in speech and language (6, 7). The motor theory of language evolution argues that speech evolved from a preexisting manual language (8) involving lateralized hand/mouth gestures. Such asymmetric control of gesture or pharyngeal musculature could have led to left lateralization of speech and language (9). Conversely, if auditory preceded motor asymmetry in evolution, the alignment of vocalization to gestures (10) might have gradually led to left-lateralized motor and executive language functions (11). Which of these scenarios accounts for asymmetry in speech and language processing remains unknown and controversial, so we set out to find empirical evidence in favor of one or the other.

We obtained simultaneous functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) recordings at rest and while participants watched an ecologically valid stimulus (a movie) to identify where brain activity correlates with electrophysiological oscillations in frequency bands related to syllabic and phonemic components of speech. We then tested for evidence of lateralization of the component-associated frequencies. Our experimental approach was based on two assumptions that have received recent experimental support (12-15). The first is that there are two intrinsic, hardwired auditory speech-sampling mechanisms, working in parallel, at rates that are optimal for syllabic and phonemic parsing of the input (delta-theta (∼4 Hz) and gamma (∼40 Hz) oscillations, respectively) (6, 16). These oscillations shape neuronal firing elicited by auditory stimulation (17), with fast phonemic gamma modulated by slower syllabic theta oscillation...
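
The nesting relationship described above (gamma amplitude modulated by theta phase) is commonly quantified as phase-amplitude coupling. The following is a minimal sketch in Python of one standard approach, the mean-vector-length index computed from Hilbert-transformed, band-passed signals; the filter bands, function names, and synthetic demo are illustrative assumptions for exposition, not the authors' actual analysis pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        # Zero-phase Butterworth band-pass filter.
        b, a = butter(order, [lo, hi], btype="band", fs=fs)
        return filtfilt(b, a, x)

    def pac_mvl(eeg, fs, theta=(3.0, 7.0), gamma=(35.0, 45.0)):
        # Mean-vector-length phase-amplitude coupling: how strongly
        # gamma-band amplitude is modulated by theta-band phase.
        # Band limits here are illustrative choices around ~4 Hz and ~40 Hz.
        phase = np.angle(hilbert(bandpass(eeg, *theta, fs)))  # syllabic-rate phase
        amp = np.abs(hilbert(bandpass(eeg, *gamma, fs)))      # phonemic-rate envelope
        return np.abs(np.mean(amp * np.exp(1j * phase)))

    # Synthetic demo: gamma bursts locked to theta peaks should yield a
    # higher coupling index than gamma of constant amplitude.
    np.random.seed(0)
    fs = 250.0
    t = np.arange(0, 60, 1 / fs)
    theta_wave = np.sin(2 * np.pi * 4 * t)    # ~4 Hz "syllabic" rhythm
    gamma_wave = np.sin(2 * np.pi * 40 * t)   # ~40 Hz "phonemic" rhythm
    coupled = theta_wave + 0.3 * (1 + theta_wave) * gamma_wave \
        + 0.5 * np.random.randn(t.size)
    uncoupled = theta_wave + 0.3 * gamma_wave + 0.5 * np.random.randn(t.size)
    print(pac_mvl(coupled, fs), ">", pac_mvl(uncoupled, fs))

In the coupled signal the gamma envelope rises and falls with the theta cycle, so the amplitude-weighted phase vectors align and the index is large; in the uncoupled signal the vectors cancel and the index approaches zero.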