2011
DOI: 10.1016/j.cortex.2010.03.003
Cross-modal interactions between human faces and voices involved in person recognition

Cited by 103 publications (78 citation statements)
References 67 publications
“…This information may either be associated directly, in an information-driven manner (e.g., through shared, redundant information) that is reinforced through brief learning (e.g., von Kriegstein et al., 2008), or indirectly through feedback via a common resource (e.g., Naci, Taylor, Cusack, & Tyler, 2012; ten Oever et al., 2016). For example, based on the results of a neuroimaging study, Joassin et al. (2011) reported that interactions between voices and faces (static images) are underpinned by activation in supramodal regions of the brain, such as the angular gyrus and hippocampus, that can influence processing in unimodal regions associated with face and voice perception. It is interesting to speculate, however, at what point during information processing for person recognition these interactions between faces and voices arise.…”
Section: Discussion
mentioning
confidence: 99%
“…For example, using probabilistic tractography, Blank, Anwander, and von Kriegstein (2011) reported evidence of direct structural connections between the fusiform face area and voice-sensitive regions of the STS. Furthermore, Joassin et al. (2011) used fMRI to measure cortical activation to voices, faces, and combinations of voices and faces. Their finding, that voice-face combinations produce greater activation than either voices or faces alone in regions including the fusiform gyrus, is consistent with that of Blank et al. (2011).…”
mentioning
confidence: 99%
“…For instance, we are able to integrate the auditory information of what is said and the visual information of who is saying it, so that we can attribute a particular speech to a particular person (Kerlin et al., 2010) and thus take part in a conversation. In view of the obvious importance of these crossmodal processes, many studies have investigated their behavioural and cerebral correlates among healthy participants, notably leading to the identification of several brain areas dedicated to multisensory integration (Joassin et al., 2011a, 2011b; Love et al., 2011). The exploration of crossmodal mechanisms thus constitutes an established field in the experimental psychology and neuroscience domains (Amedi et al., 2005; Calvert et al., 2001; De Gelder & Bertelson, 2003) and has now come to maturity, as illustrated by the proposal of integrative models (e.g.…”
mentioning
confidence: 99%
“…It is now well established that the auditory-visual integration of human faces and voices during the multimodal processing of identity and gender is associated with the activation of a specific network of cortical and subcortical regions. This network includes several regions devoted to the different cognitive processes involved in face and voice categorization tasks, notably (a) the unimodal visual and auditory regions processing the perceived faces and voices, which are inter-connected via a subcortical relay located in the striatum, (b) the left superior parietal gyrus, part of a larger parieto-motor network dispatching attentional resources to the visual and auditory modalities, and (c) the right inferior frontal gyrus sustaining the integration of semantically congruent information into a coherent multimodal representation (Joassin et al., 2011a, 2011b).…”
mentioning
confidence: 99%
“…Lesion and neuroimaging studies have suggested potential candidate cortical areas for the PINs, including the precuneus 8, parietal and hippocampal regions 9-11, the posterior superior temporal sulcus (pSTS) 12,13, or the anterior temporal lobes 14. However, damage to a PIN, as defined in Bruce & Young (1986), would correspond to a patient with a brain lesion that preserves recognition and the feeling of familiarity based on single modalities separately, but who could not retrieve semantic information about the person, nor associate the face and voice of the person; such a patient has not yet been identified 1.…”
mentioning
confidence: 99%