2022
DOI: 10.1101/2022.07.13.499969
Preprint

High-performing neural network models of visual cortex benefit from high latent dimensionality

Abstract: Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core principles of computational models in neuroscience, while abstracting over the details of model architectures and training paradigms. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. The prevailing view holds that optimal DNNs compress their representations onto low-dimensional manifolds to achieve invariance and robustness, which …


Cited by 21 publications (21 citation statements)
References 91 publications (466 reference statements)
“…Another problem is that DNNs that vary substantially in their architectures support similar levels of prediction (Storrs et al., 2021). Indeed, even untrained networks (models that cannot identify any images) often support relatively good predictions on these datasets (Truzzi & Cusack, 2020), and this may simply reflect the fact that good predictions can be made from many predictors regardless of the similarity of DNNs and brains (Elmoznino & Bonner, 2022). Furthermore, when rank-ordering models in terms of their (often similar) predictions, different outcomes are obtained with different datasets.…”
Section: The Practical Problems With Prediction When Comparing Humans…
confidence: 99%
“…Another factor that may contribute to the neural predictivity score is the effective latent dimensionality of DNNs – that is, the number of principal components needed to explain most of the variance in a DNN's internal representation. Elmoznino and Bonner (2022) have shown that the effective latent dimensionality of DNNs significantly correlates with the extent to which they predict evoked neural responses in both the macaque IT cortex and human visual cortex. Importantly, the authors controlled for other properties of DNNs, such as the number of units in a layer, layer depth, pretraining, training paradigm, and so on, and found that prediction of neural data increases with effective dimensionality irrespective of any of these factors.…”
Section: The Problem With Benchmarks
confidence: 99%
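The definition quoted above suggests a direct estimate: run PCA on a (stimuli × units) activation matrix and count how many components are needed to reach a variance criterion. The following is a minimal Python sketch of that idea; the function name `effective_dimensionality`, the 90% threshold, and the random example data are illustrative assumptions, not values taken from the cited work.

```python
import numpy as np

def effective_dimensionality(features: np.ndarray, var_threshold: float = 0.90) -> int:
    """Count the principal components needed to explain `var_threshold`
    of the variance in a (samples x units) activation matrix.
    The 0.90 threshold is an illustrative choice, not taken from the cited work.
    """
    # Center the activations and take eigenvalues of the covariance matrix.
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov)[::-1], 0.0, None)  # descending, non-negative
    explained = np.cumsum(eigvals) / eigvals.sum()
    # Index of the first cumulative value that reaches the threshold, plus one.
    return int(np.searchsorted(explained, var_threshold) + 1)

# Example on hypothetical layer activations: 1000 images x 512 units,
# with correlations induced by a random mixing matrix.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 512)) @ rng.normal(size=(512, 512))
print(effective_dimensionality(acts))
```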
“…Discussions of auditory cortical functional organization commonly revolve around two proposed principles. The first is that the cortex is organized hierarchically into a sequence of stages corresponding to cortical regions [49-51, 17]. Much of the evidence for hierarchy is associated with speech processing, in that speech-specific responses only emerge outside of primary cortical areas [52-57, 32, 58, 40].…”
Section: Discussion
confidence: 99%
“…Within the large dimensionality of the overall neural space (equal to the number of relevant neurons), only a much smaller vector subspace is actually used for encoding (Ebitz & Hayden, 2021). To quantify this intrinsic dimensionality (ID), we used a previously reported method based on Principal Component Analysis (PCA) (Gao et al., 2017; Elmoznino & Bonner, 2022), sometimes called the participation ratio (Sorscher et al., 2022). This method quantifies intrinsic dimensionality as follows:

$$\mathrm{ID} = \frac{\left(\sum_{i=1}^{M} \lambda_i\right)^2}{\sum_{i=1}^{M} \lambda_i^2}$$

where $\lambda_i$ are the eigenvalues of the neural covariance matrix (i.e., the eigenvalues whose corresponding eigenvectors are the principal components of the dataset), and M is the number of channels (electrodes or magnetometers).…”
Section: Methods
confidence: 99%
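Because the participation ratio depends only on the covariance eigenvalues, it can be computed in a few lines. Below is a minimal Python sketch under that definition; the function name and the synthetic test data are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def participation_ratio(data: np.ndarray) -> float:
    """Participation-ratio estimate of intrinsic dimensionality:
    ID = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the covariance matrix of a (samples x channels) array.
    """
    cov = np.cov(data, rowvar=False)                        # channels x channels
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)   # guard against numerical noise
    return eigvals.sum() ** 2 / np.square(eigvals).sum()

# Sanity checks on synthetic data: an isotropic M-channel signal has ID
# near M, while a rank-one signal plus small noise has ID near 1.
rng = np.random.default_rng(0)
print(participation_ratio(rng.normal(size=(5000, 10))))                 # ~10
rank_one = np.outer(rng.normal(size=5000), np.ones(10))
print(participation_ratio(rank_one + 0.01 * rng.normal(size=(5000, 10))))  # ~1
```

Unlike a fixed variance-threshold count, this measure is a smooth function of the eigenvalue spectrum, which is one reason it is often preferred for comparing dimensionality across models and recordings.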