Visual speech is an integral part of communication. Yet it remains unclear whether semantic information carried by movements of the lips or tongue is represented in the same brain regions that mediate acoustic speech representations. Behaviourally, our ability to understand acoustic speech appears to be independent of our ability to understand visual speech, yet neuroimaging studies suggest that acoustic and visual speech representations largely overlap. To resolve this discrepancy, and to understand whether acoustic and lip-reading speech comprehension are mediated by the same cerebral representations, we systematically probed where the brain represents acoustically and visually conveyed word identities in a human MEG study. We designed a single-trial classification paradigm to dissociate where cerebral representations merely reflect the sensory stimulus and where they are predictive of the participant's percept. In general, the brain regions affording the highest word classification performance were distinct from those in which cerebral representations were predictive of the participant's percept. Across the brain, word representations were largely modality-specific, and auditory and visual comprehension were mediated by distinct left-lateralised ventral and dorsal fronto-temporal regions, respectively. Only within the inferior frontal gyrus and the anterior temporal lobe did auditory and visual representations converge. These results provide a neural explanation for why acoustic speech comprehension is a poor predictor of lip-reading skills, and suggest that the cerebral speech representations that encode word identity may be more modality-specific than is often assumed.

Abstract word count: 226.