2004
DOI: 10.1037/0096-1523.30.2.378
Cross-Modal Source Information and Spoken Word Recognition.

Abstract: In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patter…

Cited by 25 publications (39 citation statements) | References 48 publications
“…These auditory tests involved either deleting the phonetically extraneous information from the signal (e.g., sine-wave speech; Kamachi et al., 2003; Lachs & Pisoni, 2004b, 2004c) or testing whether acoustic distortions that hinder phonetic perception also hinder cross-modal matching (Lachs & Pisoni, 2004a). The results of these tests support the conclusion that the idiosyncratic phonetic information contained in the auditory signal is salient for cross-modal matching.…”
supporting
confidence: 71%
“…In fact, initial support for this prediction has been provided by four very recent articles reporting research conducted in parallel with our own study (discussed below; Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c). These four articles reported that, with an XAB methodology, observers were able to successfully match voices to faces and faces to voices.…”
mentioning
confidence: 73%
“…A person's facial identity and vocal identity during speech share the same general dynamic temporal patterns. Videos of a person's face talking without sounds can be reliably matched above chance to audio of a person's speech, and vice versa (Lachs & Pisoni, 2004). This cross-modal identity matching of speech can also occur when different utterances are used for each modality (Kamachi et al, 2003).…”
Section: NIH-PA Author Manuscript
mentioning
confidence: 98%
“…Individuals can estimate body size, age and gender from both faces and voices. There are also strong correlations between the voice and the dynamic face, and thus it is not surprising that identity can be matched cross-modally (Lachs & Pisoni, 2004; Kamachi, Hill, Lander & Vatikiotis-Bateson, 2003). Beyond the obvious use in recognizing individuals, voices and faces contain indexical information that influences other tasks such as speech perception (Nygaard, Sommers & Pisoni, 1994; Nygaard & Pisoni, 1998; Yakel, Rosenblum, & Fortier, 2000).…”
Section: NIH Public Access
mentioning
confidence: 99%