2022 Conference on Cognitive Computational Neuroscience (CCN 2022)
DOI: 10.32470/ccn.2022.1218-0
Modeling naturalistic face processing in humans with deep convolutional neural networks

Cited by 21 publications (14 citation statements). References 0 publications.
“…the P100 (Luck et al, 1990), N170 (Bentin et al, 1996) and N400 components (Kutas & Federmeier, 2000), respectively (Figure 5a). Within neurotypicals, we found EEG representations peaking in similarity with the visual CNN at mid-layers (fourth and fifth; Jiahui et al, 2022) around mid-level temporal windows. Similarity with semantic computations also peaked around mid-latencies.…”
Section: Relationship With Electrophysiological Brain Components
confidence: 88%
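The layer-by-time comparison described in this excerpt is a standard time-resolved representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each CNN layer and for each EEG time window, then correlate the two sets of RDMs. The sketch below is a minimal, hypothetical illustration of that idea, not the cited paper's pipeline; it uses Pearson correlation between RDM vectors (published analyses often use Spearman rank correlation), and all function and variable names are invented here.

```python
import numpy as np

def rdm(patterns):
    """Condition-by-condition dissimilarity (1 - Pearson r) as a vector.

    patterns: (n_conditions, n_features) array, one pattern per stimulus.
    Returns the upper triangle of the dissimilarity matrix.
    """
    d = 1.0 - np.corrcoef(patterns)
    iu = np.triu_indices_from(d, k=1)
    return d[iu]

def layer_time_similarity(cnn_acts, eeg_data):
    """Correlate each CNN layer's RDM with each EEG time window's RDM.

    cnn_acts: list of (n_conditions, n_units) arrays, one per layer
    eeg_data: (n_times, n_conditions, n_channels) array
    Returns an (n_layers, n_times) matrix; a row peaking at some
    latency means that layer's geometry best matches EEG then.
    """
    layer_rdms = [rdm(acts) for acts in cnn_acts]
    sim = np.empty((len(layer_rdms), eeg_data.shape[0]))
    for t in range(eeg_data.shape[0]):
        eeg_rdm = rdm(eeg_data[t])
        for l, lrdm in enumerate(layer_rdms):
            sim[l, t] = np.corrcoef(lrdm, eeg_rdm)[0, 1]
    return sim
```

A "mid-layer, mid-latency" peak like the one reported would show up as the maximum of `sim` landing at intermediate row and column indices.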
“…Crucially, by associating electrophysiological signals with computational models, we found that the underlying neural computations of PS differed most with respect to the higher-level visual and semantic computations of deep neural network models (DNNs). The late layers of visual DNNs have previously been linked to processing in human infero-temporal cortex (hIT; Güçlü & van Gerven, 2015; Jiahui et al, 2022; Khaligh-Razavi & Kriegeskorte, 2014), peaking in the FFA (Khaligh-Razavi & Kriegeskorte, 2014), and functionally to higher-level visual feature representations such as parts of objects, whole objects, and viewpoint-invariant representations (Güçlü & van Gerven, 2015). These observations are consistent with the impaired whole-face (Ramon et al, 2016) and feature representations (Caldara et al, 2005; Fiset et al, 2017) previously described in patient PS.…”
Section: Discussion
confidence: 99%
“…After obtaining these estimated maps, we calculated correlations between the target participant’s category-selective maps based on his/her own localizer data and the maps estimated from other participants’ data (hyperaligned or anatomically-aligned). We also calculated Cronbach’s alpha values (Feilong et al, 2018; Jiahui et al, 2020, 2022) across the multiple runs to measure the reliability of the category-selective maps for each participant and compared the correlations to the reliability values. To measure the local estimation performance and compare that to local reliabilities, we calculated correlations and Cronbach’s alphas in searchlights with a radius of 15 mm.…”
Section: Methods
confidence: 99%
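The reliability measure named in this excerpt, Cronbach's alpha across repeated localizer runs, has a compact closed form: treat the runs as "items" and the vertices (or searchlight voxels) as "subjects". The sketch below is a generic, hypothetical implementation of that textbook formula, not the cited papers' code.

```python
import numpy as np

def cronbach_alpha(runs):
    """Cronbach's alpha across repeated measurements.

    runs: (n_runs, n_items) array, e.g. one selectivity value per
    vertex (item) per localizer run. Runs play the role of test items;
    variances are taken across vertices.
    """
    runs = np.asarray(runs, dtype=float)
    k = runs.shape[0]                          # number of runs
    run_vars = runs.var(axis=1, ddof=1)        # variance of each run across vertices
    total_var = runs.sum(axis=0).var(ddof=1)   # variance of the summed map
    return (k / (k - 1)) * (1.0 - run_vars.sum() / total_var)
```

Maps that replicate closely across runs yield alpha near 1; the excerpt's searchlight variant simply applies this within each 15 mm neighborhood.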
“…We trained our VAE models using the TensorFlow DisentanglementLib package [40]. To identify the best disentangled model for our fMRI analyses, we performed a hyperparameter search over model architectures (including beta-VAE [14] and FactorVAE [17]), number of latent dimensions (24, 32, 48, and 64), and architecture-specific disentanglement parameters (beta-VAE β ∈ {1, 2, 4, 6, 8, 16}; FactorVAE γ ∈ {10, 20, 30, 40, 50, 100}). For every hyperparameter combination, we performed 10 random initializations.…”
Section: M1 Neural Net Architecture and Training
confidence: 99%
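The sweep quoted above is a plain Cartesian product of architecture, latent dimensionality, disentanglement parameter, and random seed. The grids below copy the values from the excerpt; everything else (the config-dict layout, the `grids` variable) is an illustrative assumption, and DisentanglementLib's own sweep machinery is not shown.

```python
from itertools import product

# Parameter grids as quoted: one disentanglement knob per architecture.
grids = {
    "beta_vae":   {"beta":  [1, 2, 4, 6, 8, 16]},
    "factor_vae": {"gamma": [10, 20, 30, 40, 50, 100]},
}
latent_dims = [24, 32, 48, 64]
n_seeds = 10  # 10 random initializations per combination

configs = []
for arch, params in grids.items():
    (name, values), = params.items()  # single knob per architecture
    for dim, val, seed in product(latent_dims, values, range(n_seeds)):
        configs.append({"arch": arch, "latent_dim": dim, name: val, "seed": seed})

# 2 architectures x 4 latent dims x 6 parameter values x 10 seeds = 480 runs
```

Enumerating the configurations up front like this makes the total training budget explicit before any model is launched.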
“…Recently, deep convolutional neural networks (DCNNs) trained on face recognition have been shown to learn effective face representations that provide a good match to human behavior [8], but such discriminatively trained models are difficult to interpret [9] and provide a poor match to human neural data [10]. Alternatively, deep generative models have been shown to provide a good match to human fMRI face processing data [11].…”
Section: Introduction
confidence: 99%