2021
DOI: 10.1111/bjop.12531
Audiovisual identity perception from naturally‐varying stimuli is driven by visual information

Abstract: Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently from one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed…

Cited by 4 publications (7 citation statements)
References 41 publications
“…That is, when forming clusters to represent individual identities, unfamiliar and familiar viewers and listeners rarely combine two different identities into the same perceived identity cluster (for voices, see J. Johnson et al., 2020; Lavan, Burston, & Garrido, 2019a; Lavan, Burston, Ladwa, et al., 2019b; Lavan, Collins, & Miah, 2021a; Lavan, Smith, & McGettigan, 2022; Stevenage et al., 2020; for faces, see Jenkins et al., 2011; J. Johnson et al., 2018; Lavan, Collins, & Miah, 2021a; Lavan, Smith, & McGettigan, 2022; Redfern & Benton, 2017).…”
Section: Discussion
confidence: 99%
“…Johnson et al., 2020; Lavan, Burston, & Garrido, 2019a; Lavan, Burston, Ladwa, et al., 2019b; Lavan, Collins, & Miah, 2021a; Lavan, Smith, & McGettigan, 2022; Stevenage et al., 2020; for faces, see Jenkins et al., 2011; J. Johnson et al., 2018; Lavan, Collins, & Miah, 2021a; Lavan, Smith, & McGettigan, 2022; Redfern & Benton, 2017). Where we have previously seen increased “telling apart” errors in voice sorting studies, these have emerged in direct comparisons of different stimulus sets (i.e., highly expressive clips including whispering, shouting, and emotional speech vs. low-expressive conversational speech; Lavan, Burston, Ladwa, et al., 2019b) or task instructions (i.e., when unfamiliar listeners have been instructed to sort the sounds into a two-identity solution; Lavan et al., 2020).…”
Section: Discussion
confidence: 99%