2016 · DOI: 10.1038/srep37494
“Hearing faces and seeing voices”: Amodal coding of person identity in the human brain

Abstract: Recognizing familiar individuals is achieved by the brain by combining cues from several sensory modalities, including the face of a person and her voice. Here we used functional magnetic resonance imaging (fMRI) and a whole-brain, searchlight multi-voxel pattern analysis (MVPA) to search for areas in which local fMRI patterns could result in identity classification as a function of sensory modality. We found several areas supporting face or voice stimulus classification based on fMRI responses, consistent with previo…
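The abstract describes decoding person identity from local fMRI patterns within and across sensory modalities. Purely as illustration, the sketch below shows the core cross-modal decoding idea for a single searchlight sphere (train a classifier on patterns from one modality, test it on the other) using scikit-learn; the simulated arrays, the LinearSVC classifier, and all variable names are assumptions made for this example and do not reproduce the authors' actual searchlight pipeline.

```python
# Minimal sketch of cross-modal identity decoding for one hypothetical
# searchlight sphere. All data here are simulated; the classifier choice
# and variable names are illustrative assumptions only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical trial-by-voxel pattern matrices, one per modality,
# each trial labelled with one of several person identities.
n_trials, n_voxels, n_identities = 40, 50, 4
X_face  = rng.normal(size=(n_trials, n_voxels))   # fMRI patterns for face trials
X_voice = rng.normal(size=(n_trials, n_voxels))   # fMRI patterns for voice trials
y_face  = rng.integers(n_identities, size=n_trials)
y_voice = rng.integers(n_identities, size=n_trials)

# Train on one modality, test on the other: above-chance accuracy in a
# searchlight sphere would suggest a modality-independent ("amodal")
# identity code in that local pattern.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_face, y_face)
cross_modal_accuracy = clf.score(X_voice, y_voice)
print(f"face -> voice identity decoding accuracy: {cross_modal_accuracy:.2f}")
```

In a full whole-brain analysis, such a classifier would be refit within a small sphere centred on every voxel (for example with nilearn's SearchLight), and the resulting accuracy map would be compared against chance with permutation-based statistics.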

Cited by 34 publications (16 citation statements) · References 37 publications
“…Previous studies using face or voice stimuli have found amodal representations of person identity in unimodal and multimodal face-processing regions, including the FFA, superior temporal sulcus, and the ATL (5,16,42,45). In the present paradigm, the only region consistently demonstrating classification of identity across all stimulus types was the ATL.…”
Section: Results
Confidence: 54%
“…Research in nonhuman primates has shown that cells in anterior-ventral temporal cortex are highly sensitive to particular facial identities as well as to facial familiarity (56,57). Previous studies in humans using intracranial recording (16) or fMRI analyses (15,42,45) have suggested that the ATL can distinguish between different people using their faces, voices, or names. Our study extends these findings by using sophisticated multivariate analyses and a wider range of stimulus categories.…”
Section: Discussion
Confidence: 99%
“…The MTG has been shown to respond to identity “cross-classification”, i.e. to both facial and vocal identity processing (Awwad Shiekh Hasan, Valdes-Sosa, Gross, & Belin, 2016), and it is also engaged during self versus other face recognition (Verosky & Todorov, 2010). Methodological differences likely play a role in our results, as limbic responses to negative expressions in depression may depend on the nature of the task.…”
Section: Discussion
Confidence: 99%
“…This could be related to the complex functional organization in the region encompassing the pSTS and the temporo-parietal junction (TPJ) (Patel, Sestieri, & Corbetta 2019). In the right hemisphere, this region has been shown to support high-level social functions such as the representation of identity by integrating both facial and vocal information (Watson et al. 2014; Hasan et al. 2016; Davies-Thompson et al. 2018; Tsantani et al. 2019). In the left hemisphere, this region has mainly been associated with the dorsal language pathway implicated in articulatory and production mechanisms (Hickok & Poeppel 2004).…”
Section: Functional Implications
Confidence: 99%