2022
DOI: 10.1101/2022.08.04.502850
Preprint

Low-level tuning biases in higher visual cortex reflect the semantic informativeness of visual features

Abstract: Representations of visual and semantic information can overlap in human visual cortex, with the same neural populations exhibiting sensitivity to low-level features (orientation, spatial frequency, retinotopic position) and high-level semantic categories (faces, scenes). It has been hypothesized that this relationship between low-level visual and high-level category neural selectivity reflects natural scene statistics, such that neurons in a given category-selective region are tuned for low-level features or …

Cited by 2 publications (6 citation statements)
References 101 publications (161 reference statements)
“…The pRF estimates are then used to compute a single feature vector for each voxel and each image, by taking a weighted sum between the voxel’s pRF (a 2D Gaussian) and a stack of feature maps corresponding to the image of interest (see St-Yves and Naselaris (2017) for details on this approach). For the Gabor model, these maps are computed at 12 orientations and 8 spatial frequencies, for a total of 96 spatial feature maps (see Henderson et al (2022) for more details on model construction).…”
Section: Methods
confidence: 99%
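The excerpt above describes pooling a stack of feature maps through a voxel's pRF, modeled as a 2D Gaussian, by taking a weighted sum of each map with the pRF. A minimal sketch of that operation follows; the function names, pixel-based grid, and Gaussian parameterization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_prf(grid_size, x0, y0, sigma):
    """A 2D Gaussian pRF sampled on a grid_size x grid_size pixel grid.
    x0, y0, and sigma are in pixels here purely for illustration."""
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()  # normalize so pooling acts as a weighted average

def prf_feature_vector(feature_maps, prf):
    """Pool a stack of feature maps through one pRF.
    feature_maps: (n_features, H, W), e.g. 96 Gabor maps (12 orientations x 8 SFs)
    prf:          (H, W) Gaussian weights
    returns:      (n_features,) feature vector for one voxel/image pair"""
    return np.tensordot(feature_maps, prf, axes=([1, 2], [0, 1]))
```

With 12 orientations and 8 spatial frequencies, `feature_maps` has 96 channels, so each voxel/image pair reduces to a single 96-dimensional vector regardless of image resolution.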
“…For the Gabor model, these maps are computed at 12 orientations and 8 spatial frequencies, for a total of 96 spatial feature maps (see Henderson et al (2022) for more details on model construction).…”
Section: Introduction
confidence: 99%
“…Similarly, when implementing the Gabor model, we also used 4 orientations and 4 spatial frequencies (0.36, 1.03, 2.97, and 8.57 cycles/deg). Each Gabor model feature was computed by filtering the image with two sinusoids that were 90° out of phase, squaring the output of these two filters, summing the two outputs, and taking the square root (see Henderson et al, 2022 for details on construction of a similar model). To extract features from the Alexnet model, we used the pretrained model weights for Alexnet available from the PyTorch model zoo, and we extracted activations from layers Conv1 and Conv2 (following the rectifying nonlinear activation function).…”
Section: Methods
confidence: 99%
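The quadrature-pair computation described above (filter with two sinusoids 90° out of phase, square each output, sum, take the square root) is the standard Gabor energy model. A sketch is given below; the filter parameterization and frequency units (cycles/pixel rather than the cycles/deg quoted in the excerpt) are illustrative assumptions.

```python
import numpy as np

def gabor_pair(size, freq, theta, sigma):
    """Even (cosine) and odd (sine) Gabor filters, 90 degrees out of phase.
    freq is in cycles/pixel, theta in radians; parameterization is illustrative."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)  # coordinate along the grating
    envelope = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    even = envelope * np.cos(2 * np.pi * freq * xr)
    odd = envelope * np.sin(2 * np.pi * freq * xr)
    return even, odd

def gabor_energy(patch, even, odd):
    """Quadrature energy at one location: apply both phase filters,
    square the outputs, sum them, and take the square root."""
    r_even = np.sum(patch * even)
    r_odd = np.sum(patch * odd)
    return np.sqrt(r_even ** 2 + r_odd ** 2)
```

The point of the quadrature pair is phase invariance: a grating at the filter's orientation and frequency yields nearly the same energy whether it arrives in cosine or sine phase.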
“…For both the Gabor and Alexnet models, we incorporated spatial pRF parameters into the construction of the model, similar to how the pRF was incorporated into the texture statistics model. Essentially, this procedure consists of extracting features in each pRF of the grid by taking a dot product of the relevant feature maps with each pRF (for similar approaches, see: Henderson et al, 2022; St-Yves and Naselaris, 2017). To simplify the fitting procedure, as well as making the models more comparable to the texture statistics encoding model, we used the same pRF parameters that had already been estimated using the texture statistics encoding model (see Methods: Model fitting procedure ).…”
Section: Methods
confidence: 99%