“…example, for a number of trait judgements (Mileva et al., 2020; but see Rezlescu et al., 2015) and independent face and voice spaces, indexing the distinctiveness of identities (Tatz et al., 2020). Similarly, studies probing cross-modal face-voice identity matching report that perceivers can determine with low but above-chance accuracy whether an unfamiliar face and voice belong to the same person or to two different people, especially if the face stimulus is dynamic (Mavica & Barenholtz, 2013; Smith, Dunn, Baguley, & Stacey, 2016a; Smith, Dunn, Baguley, & Stacey, 2016b; Stevenage, Hamlin, & Ford, 2017; but see Lavan, Smith, Smith, Jiang, & McGettigan, 2021, who report chance-level performance). As a result, it has been speculated that overlapping identity information exists across the auditory and visual modalities, potentially encoded in dynamic articulatory movements (Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004; Lander, Hill, Kamachi, & Vatikiotis-Bateson, 2007; but see Lavan et al., 2021) or in concordant cues for person-related judgements, such as a person's gender, age, height, or weight (Smith et al., 2016b).…”