2022
DOI: 10.1371/journal.pcbi.1009739
Increasing neural network robustness improves match to macaque V1 eigenspectrum, spatial frequency preference and predictivity

Abstract: Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that the robustness of a system to these perturbations could be related to the power law exponent of the eigenspectrum of its set of neural …
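The abstract's central quantity, the power-law exponent of a representation's eigenspectrum, can be estimated directly from model (or neural) responses: compute the covariance of the activations, take its eigenvalues in descending order, and fit a line to the log-log plot of variance versus rank. The sketch below is a minimal NumPy illustration of that procedure, not the paper's exact analysis pipeline; the function name, fit range, and the synthetic test data are assumptions for demonstration.

```python
import numpy as np

def eigenspectrum_exponent(activations, fit_range=(10, 100)):
    """Estimate the power-law exponent alpha of a representation's
    covariance eigenspectrum (variance at rank n ~ n^-alpha).

    activations: (n_stimuli, n_units) array of responses.
    fit_range: eigenvalue ranks (1-indexed) used for the log-log fit.
    """
    X = activations - activations.mean(axis=0)   # center each unit
    cov = X.T @ X / (X.shape[0] - 1)             # unit-by-unit covariance
    eigvals = np.linalg.eigvalsh(cov)[::-1]      # sort descending
    lo, hi = fit_range
    ranks = np.arange(lo, hi + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals[lo - 1:hi]), 1)
    return -slope                                # alpha > 0 for a decaying spectrum

# Synthetic check: responses built with a known 1/n variance spectrum,
# so the recovered exponent should be close to 1.
rng = np.random.default_rng(0)
n_units = 200
scales = np.arange(1, n_units + 1) ** -0.5       # sqrt of 1/n variances
acts = rng.standard_normal((5000, n_units)) * scales
alpha = eigenspectrum_exponent(acts)
```

A shallower spectrum (smaller alpha) means variance is spread across many high-rank dimensions, which the theory cited in the abstract links to greater sensitivity to small input perturbations.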

Cited by 22 publications (23 citation statements)
References 33 publications
“…One recent study [ 33 ] measured the preferred spatial frequency of individual model units via in-silico electrophysiology experiments, and showed that the representations of robust models are more closely aligned with macaque V1 neurons in the distribution of preferred spatial frequencies than those of non-robust models. Based on the eigenspectrum analysis, they suggested that the robustness stems from a smaller proportion of high-frequency-tuned units in the robust models.…”
Section: Discussion (mentioning)
confidence: 99%
“…Such regularizers are common in neural network visualizations, and although we view them as side-stepping the goal of human-model comparison, it was nonetheless of interest to assess their effect. We implemented the regularizer used in a well-known visualization paper (64), which biases the solution of the metamer optimization toward low total pixel variation (encouraging smoothness). As shown in Figure 4e, adding smoothness regularization to the metamer generation procedure for the standard-trained AlexNet model improved the recognizability of its metamers, but not as much as adversarial training did (and did not come close to generating metamers as recognizable as natural images; see Supplementary Figure 8 for examples generated with different regularization coefficients).…”
Section: Results (mentioning)
confidence: 99%
“…The methods we use to synthesize model metamers are not new. Previous neural network visualizations have also used gradient descent on the input to visualize representations (76), in some cases matching the activations at individual stages as we do here (64). However, the significance of these visualizations for evaluating neural network models of biological sensory systems has received relatively little attention.…”
Section: Discussion (mentioning)
confidence: 99%