2020
DOI: 10.1167/jov.20.7.21

Deep neural networks capture texture sensitivity in V2

Abstract: Deep convolutional neural networks (CNNs) trained on visual objects have shown intriguing ability to predict some response properties of visual cortical neurons. However, the factors (e.g., if the model is trained or not, receptive field size) and computations (e.g., convolution, rectification, pooling, normalization) that give rise to such ability, at what level, and the role of intermediate processing stages in explaining changes that develop across areas of the cortical hierarchy are poorly understood. We f…



Cited by 17 publications (26 citation statements)
References 55 publications
“…A comparison with human vision can be made by analyzing the responses of these models to textures of varying classes, such as the textures of the second classification experiment and their spectrally matched noise versions that preserve the amplitude spectrums of the original textures but have randomized phase. Secondary visual cortex shows sensitivity to texture that is absent in V1 ( Freeman et al., 2013 ; Kohler et al., 2016 ; Ziemba et al., 2016 ; Laskar et al., 2020 ). For instance, in an fMRI experiment, the modulation index (see Methods ) for textures versus noise was much larger in V2 than in V1 with an average modulation index of about 0.13 across subjects for V2 ( Freeman et al., 2013 ).…”
Section: Results (mentioning, confidence: 99%)
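The quoted passage compares textures against spectrally matched noise (same amplitude spectrum, randomized phase) and summarizes the V1/V2 difference with a modulation index. A minimal sketch of both steps, assuming grayscale images as NumPy arrays; the function names and details are illustrative, not taken from the cited papers:

```python
import numpy as np

def spectrally_matched_noise(texture, rng=None):
    """Noise image with the texture's amplitude spectrum but random phase.

    Taking the real part of the inverse FFT only approximately preserves
    the amplitude spectrum, but it is a common shortcut for this stimulus.
    """
    rng = np.random.default_rng() if rng is None else rng
    amplitude = np.abs(np.fft.fft2(texture))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=texture.shape)
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real

def modulation_index(r_texture, r_noise):
    """(T - N) / (T + N), the index form used by Freeman et al. (2013)."""
    return (r_texture - r_noise) / (r_texture + r_noise)
```

On this scale, the V2 value of about 0.13 quoted above means texture responses exceed noise responses by roughly 30% (since (1.3 - 1.0) / (1.3 + 1.0) ≈ 0.13).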
“…There is also interest in examining connections to deep convolutional neural networks, which have been shown to capture various cortical neural response properties ( Kriegeskorte, 2015 ; Yamins & DiCarlo, 2016 ; Pospisil et al., 2018 ; Cadena et al., 2019 ; Kindel et al., 2019 ; Laskar et al., 2020 ). Such networks can perform a form of sparse coding by thresholding (setting to zero) responses with the ReLU activation function depending on the values of the bias weights (characterized by Bowren, 2021 ).…”
Section: Discussion (mentioning, confidence: 99%)
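The sparse-coding point in the quoted statement — that ReLU zeroes responses below a bias-dependent threshold — can be illustrated directly. A small sketch with generic names (not the formulation of Bowren, 2021): because max(0, z + b) is zero whenever z ≤ -b, a more negative bias raises the threshold and can only increase the fraction of silent units.

```python
import numpy as np

def relu_responses(x, w, b):
    """Rectified linear layer: pre-activations below -b are set to zero.

    With a negative bias b, the ReLU acts as a threshold, so weak
    responses are zeroed and the population code becomes sparser.
    """
    return np.maximum(0.0, x @ w + b)
```

Lowering `b` from 0 to a negative value therefore never decreases, and typically increases, the proportion of exactly-zero responses.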
“…A comparison to human vision can be made by analyzing the responses of these models to textures of varying classes, such as the textures of the second classification experiment, and spectrally matched noise (here referred as "noise") that preserves the amplitude spectrum of the original textures but has randomized phase. Secondary visual cortex shows sensitivity to textures that is absent in V1 (Freeman et al, 2013;Ziemba et al, 2016;Kohler et al, 2016;Laskar et al, 2020). For instance, in an fMRI experiment, the modulation index (see Methods) for textures versus noise was much larger in V2 than in V1 with an average modulation index of about 0.13 across subjects for V2 (Freeman et al, 2013).…”
Section: Texture Sensitivity (mentioning, confidence: 96%)
“…One approach to modeling cortical neurons, denoted as goal-oriented (or supervised learning), is based on optimizing model goals such as image classification (see e.g., review papers, Geisler, 2008;Yamins and DiCarlo, 2016;Turner et al, 2019). In recent years, deep neural network models optimized for image classification (e.g., Krizhevsky et al, 2012;Dapello et al, 2020) have captured neural processing in cortical visual areas (Kriegeskorte, 2015;Yamins and DiCarlo, 2016), including low and mid level visual cortex (e.g., Cadena et al, 2019;Kindel et al, 2019;Pospisil et al, 2018;Laskar et al, 2020).…”
Section: Introduction (mentioning, confidence: 99%)