1991
DOI: 10.3758/bf03207536

Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect

Abstract: Studies of the McGurk effect have shown that when discrepant phonetic information is delivered to the auditory and visual modalities, the information is combined into a new percept not originally presented to either modality. In typical experiments, the auditory and visual speech signals are generated by the same talker. The present experiment examined whether a discrepancy in the gender of the talker between the auditory and visual signals would influence the magnitude of the McGurk effect. A male talker's vo…

Cited by 190 publications (185 citation statements)
References 58 publications (67 reference statements)
“…1983; Green et al. 1991; Green and Gerdeman 1995; Massaro and Cohen 1996; Rosenblum and Saldana 1996; Brancazio and Miller 2005). However, it does not always occur and subject's percept may be consistent with the auditory input with no apparent effect of the visual input (Nath and Beauchamp, 2012; Basu Mallick et al.…”
Section: Introduction
confidence: 99%
“…The combinatorial rules and processes involved have exercised psychologists for many years (see Bernstein et al 2004a for review), but this work has focused on processing at the phoneme level. The findings that phonetic context perceived by eye can shift phonemic category boundaries (Green et al 1991), and that prelinguistic babies are sensitive to McGurk effects (Burnham & Dodd 2004), demonstrate that audio-visual integration can occur 'pre-phonemically'. That said, the phonemic level of linguistic structure offers the most approachable entry point for examining many aspects of the perception of seen speech in the absence of hearing, that is, silent speech-reading.…”
Section: What Does Vision Deliver? The Art of 'Hearing by Eye'
confidence: 99%
“…Green and colleagues performed some of the most convincing of these. Among other things, Green et al (1991) showed that the visual impression of a talker's gender could shift the perception of a clearly heard but ambiguous auditory consonant from 'sh' to 's'. 's' is produced with the tongue immediately behind the teeth, while for 'sh' the place of articulation is more posterior.…”
Section: The Source-Filter Model of Speech: Some Applications to Spee…
confidence: 99%
“…Synchrony. In the McGurk effect, when the two sources are synchronized to within 180 ms (Munhall, Gribble, Sacco, & Ward, 1996), the two sources appear to trigger an identity decision, and the phenomenon is robust: (a) It is unaffected by manipulations of word meaning or sentence context (Sams, Manninen, & Surakka, 1998), (b) it is insensitive to discrepancy between the gender of the face and the voice (Green et al, 1991), and (c) it requires only a minimum of acoustic information (Remez, Fellowes, & Pisoni, 1998). However, beyond 180 ms, the worse the lip-speech synchronization, the weaker the effect (Munhall et al., 1996; Soto-Faraco & Alsius, 2009).…”
Section: Causality and Unity
confidence: 99%
“…This identity cue does not seem to be triggered under all conditions. For example, the McGurk effect (in which visible lip movements alter a listener's perception of spoken syllables; McGurk & MacDonald, 1976) is insensitive to such discrepancies (Green, Kuhl, Meltzoff, & Stevens, 1991; S. Walker, Bruce, & O'Malley, 1995), as long as the lip movements and the syllables are fairly well synchronized.…”
Section: Causality and Unity
confidence: 99%