2004
DOI: 10.1207/s15326969eco1603_1

Crossmodal Source Identification in Speech Perception

Abstract: Four experiments examined the nature of multisensory speech information. In Experiment 1, participants were asked to match heard voices with dynamic visual-alone video clips of speakers' articulating faces. This cross-modal matching task was used to examine whether vocal source matching can be accomplished across sensory modalities. The results showed that observers could match speaking faces and voices, indicating that information about the speaker was available for cross-modal comparisons. In a series of fol…

Cited by 58 publications (132 citation statements) · References 53 publications
“…In fact, initial support for this prediction has been provided by four very recent articles reporting research conducted in parallel with our own study (discussed below; Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c). These four articles reported that, with an XAB methodology, observers were able to successfully match voices to faces and faces to voices.…”
mentioning
confidence: 55%
“…They proposed that in both auditory and visible speech signals, there is speaker-specific articulatory information that specifies both phonetic message and speaker (Kamachi et al, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c). It was thought that this articulatory information conveys an idiosyncratic speaking style that can be specified in the time-varying dimensions of both auditory and visual signals (see also Rosenblum, 2004, 2005).…”
mentioning
confidence: 99%
“…However, it is not so trivial to put forward a specific hypothesis about the degree to which visual and auditory information affect the perception of trustworthiness. There exists a large body of research on cross-modal integration in person identification ([11,12] and many others), but not so much on face-voice information interaction in the case of person evaluation. But since trustworthiness has been shown to be closely related to emotional valence [13], we expected that previous findings on cross-modal integration in the case of emotion expression would apply to the perception of trustworthiness as well.…”
Section: Predictions
mentioning
confidence: 99%
“…Visual information about speech influences what listeners hear (Desjardins et al, 1997; Lachs and Pisoni, 2004; McGurk and MacDonald, 1976; MacDonald and McGurk, 1978; MacDonald et al, 2000; Reisberg et al, 1987). This visible articulatory information on a speaker's face is thought to be a central part of typical perceptual development and to foster native language acquisition (Legerstee, 1990) and has been demonstrated in infancy (Burnham and Dodd, 1998; Meltzoff and Kuhl, 1994; Rosenblum et al, 1997).…”
Section: Introduction
mentioning
confidence: 99%