2021
DOI: 10.3758/s13421-021-01198-7
Unimodal and cross-modal identity judgements using an audio-visual sorting task: Evidence for independent processing of faces and voices

Abstract: Unimodal and cross-modal information provided by faces and voices contribute to identity percepts. To examine how these sources of information interact, we devised a novel audio-visual sorting task in which participants were required to group video-only and audio-only clips into two identities. In a series of three experiments, we show that unimodal face and voice sorting were more accurate than cross-modal sorting: While face sorting was consistently most accurate followed by voice sorting, cross-modal sortin…


Cited by 2 publications (1 citation statement)
References 52 publications
“…We argue that this phenomenon is probably related to the difference in the major modality for identity processing between the two groups. For sighted listeners, identity processing is more stable and accurate in the visual modality than that in the auditory modality ( Stevenage et al, 2013 ; Lavan et al, 2022 ). Therefore, to compensate for the relative disadvantage of identity processing in the auditory modality, we need more support from speech information.…”
Section: Discussion
confidence: 99%