“…Findings from other areas of research have suggested the existence of a mechanism or representation common to the processing of speech input from the auditory and visual modalities (Campbell, 1987; Watson, Qiu, Chamberlain, & Li, 1996). Demonstrations of cross-modal interaction have shown that information from different sensory modalities can be combined in perception, as in the McGurk effect (e.g., McGurk & MacDonald, 1976), and that input to one modality can influence processing in another (see, e.g., Robert-Ribes, Schwartz, & Escudier, 1995, for a review). Using magnetoencephalographic recordings, Sams, Aulanko, Hämäläinen, Hari, Lounasmaa, Lu, and Simola (1991) found that visual input specifically from lip movements influenced auditory cortical activity.…”