2016
DOI: 10.1073/pnas.1614763114

Spatiotemporal dynamics of similarity-based neural representations of facial identity

Abstract: Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-…

Cited by 57 publications (62 citation statements)
References 51 publications
“…In ROI analyses we similarly found an early peak in fMRI-EEG correlations in V1, and a later peak in OFA and FFA. Our time trajectories are similar to those reported in a previous MEG study (Vida et al, 2017), which found first peaks for V1, OFA and FFA at around 150 ms and later peaks ~250 ms. Both in our and their results, information in V1 clearly dominated early in time, and later in time, the information from OFA and FFA was relatively more prominent, though mainly not surpassing correlations with V1.…”
Section: Discussion (supporting)
confidence: 89%
“…In left OFA, the correlations remained significant until 480 ms. The overall spatio-temporal structure found here was quite similar to that reported earlier in a MEG study (Vida et al, 2017) looking at the coding of face identities in the brain, finding accurate classifications in the left V1 around 150 ms, and relatively higher decoding accuracy for the right LO and the right FG around 250 ms. To ensure that our results were not due to selecting the five most informative voxels, the ROI analyses were repeated using all the voxels within a given BALSA-area. The results were similar except that the right FFA did not correlate significantly with EEG ( Supplementary Figure 3).…”
Section: Combined EEG-fMRI Reveals Spatiotemporal Pattern of Face Pro… (supporting)
confidence: 86%
“…Fast decoding of object category was achieved at 100 ms from small neuronal populations in primates (Hung & Poggio, 2005) and from invasively recorded responses in human visual cortex (Li & Lu, 2009). Furthermore, recent applications of MVPA to electrophysiological data have resolved face identity processing to early latencies (50-70 ms after stimulus onset; Davidesco et al, 2014; Nemrodov et al, 2016; Vida, Nestor, Plaut, & Behrmann, 2017). In addition to revealing the temporal dynamics of visual processing, multivariate methods have furthered our understanding of the transformations performed by cells in macaque face patches to encode face identity (Chang & Tsao, 2017) and have allowed face reconstruction based on non-invasive neural data in humans (Nemrodov et al, 2018; Nestor, Plaut, & Behrmann, 2016).…”
Section: Introduction (mentioning)
confidence: 99%
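The MVPA approach the passage describes — classifying face identity from multivariate sensor patterns at each latency — can be illustrated with a minimal, self-contained sketch. This is not the pipeline of any of the cited studies; it uses a simple cross-validated nearest-centroid classifier on simulated data, with the array shapes (`n_trials, n_sensors, n_times`) and function name `timepoint_decoding` chosen purely for illustration.

```python
import numpy as np

def timepoint_decoding(X, y, n_folds=5, seed=0):
    """Cross-validated nearest-centroid decoding at each time point.

    X : (n_trials, n_sensors, n_times) sensor-space data
    y : (n_trials,) integer class labels (e.g., face identities)
    Returns an array of decoding accuracies, one per time point.
    """
    rng = np.random.default_rng(seed)
    n_trials, _, n_times = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    classes = np.unique(y)
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for fold in folds:
            train = np.setdiff1d(order, fold)
            # Class centroids estimated from training trials only
            cents = np.stack(
                [X[train][y[train] == c, :, t].mean(axis=0) for c in classes]
            )
            for i in fold:
                # Assign each held-out trial to the nearest centroid
                d = np.linalg.norm(cents - X[i, :, t], axis=1)
                correct += classes[np.argmin(d)] == y[i]
        acc[t] = correct / n_trials
    return acc
```

Run on data where class information is injected only at the final time point, accuracy stays near chance early and rises well above it late — the shape of the time-resolved decoding curves discussed above.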
“…Six dissimilarity matrix (DM) models representing three types of information were used to this end: visual properties, character identity, and social relationships (Figure 2). All models were compared to neural distance across two different facial expressions within each character identity (Vida et al, 2016). The primary model of visual properties was the Euclidean distance between responses of the C2 layer of HMAX, which simulates the complex visual cell response to an input image (Riesenhuber and Poggio, 1999).…”
Section: Representational Similarity Analysis (mentioning)
confidence: 99%
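The representational similarity analysis described above has two steps: build a dissimilarity matrix (DM) from pairwise Euclidean distances between model responses, then compare it to a neural DM, typically via Spearman correlation of the upper triangles. A minimal numpy sketch follows; `feats` stands in for model-layer responses (e.g., HMAX C2 outputs, which are not reproduced here), and the function names are illustrative rather than from any cited toolbox.

```python
import numpy as np

def dissimilarity_matrix(feats):
    """Pairwise Euclidean distances between condition feature vectors.

    feats : (n_conditions, n_features) — one row per stimulus/condition.
    Returns a symmetric (n_conditions, n_conditions) DM with zero diagonal.
    """
    diff = feats[:, None, :] - feats[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def rsa_correlation(dm_model, dm_neural):
    """Spearman correlation between the upper triangles of two DMs.

    Ranks are computed with a double argsort (no tie handling, which is
    adequate for continuous-valued distances); Pearson correlation of the
    ranks then equals the Spearman coefficient.
    """
    iu = np.triu_indices_from(dm_model, k=1)
    a, b = dm_model[iu], dm_neural[iu]
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))
```

Comparing a model DM against itself yields a correlation of 1.0; in practice the neural DM is estimated from response patterns and the correlation quantifies how well the model's representational geometry matches the brain's.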