2021
DOI: 10.3758/s13414-021-02290-5

Explaining face-voice matching decisions: The contribution of mouth movements, stimulus effects and response biases

Abstract: Previous studies have shown that face-voice matching accuracy is more consistently above chance for dynamic (i.e. speaking) faces than for static faces. This suggests that dynamic information can play an important role in informing matching decisions. We initially asked whether this advantage for dynamic stimuli is due to shared information across modalities that is encoded in articulatory mouth movements. Participants completed a sequential face-voice matching task with (1) static images of faces, (2) dynamic…

Cited by 3 publications (2 citation statements)
References 38 publications
“…example, for a number of trait judgements (Mileva et al., 2020; but see Rezlescu et al., 2015) and independent face and voice spaces, indexing the distinctiveness of identities (Tatz et al., 2020). Similarly, studies probing cross-modal face-voice identity matching report that perceivers can determine with low but above-chance accuracy whether an unfamiliar face and voice belong to the same person or two different people, especially if the face stimulus is dynamic (Mavica & Barenholtz, 2013; Smith, Dunn, Baguley, & Stacey, 2016a; Smith, Dunn, Baguley, & Stacey, 2016b; Stevenage, Hamlin, & Ford, 2017; but see Lavan, Smith, Smith, Jiang, & McGettigan, 2021, showing chance-level performance). As a result, it has been speculated that there is overlapping identity information between the auditory and visual modality, potentially encoded in the dynamic articulatory movements (Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004; Lander, Hill, Kamachi, & Vatikiotis-Bateson, 2007; but see Lavan et al., 2021) or in concordant cues for person-related judgements, such as a person's gender, age, height, or weight (Smith et al., 2016b).…”
Section: Discussion (mentioning)
confidence: 99%
“…For identity perception, this account thus strongly assumes that there is meaningful shared and complementary identity-related information across the auditory and visual modalities. While there are studies that suggest that there is limited shared information, others, however, report low or close to chance-level accuracy for any cross-modal identity matching (see Lavan, Smith, et al., 2021; Smith et al., 2016a, 2016b).…”
Section: Introduction (mentioning)
confidence: 99%