Surface visualizations of fMRI data provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate, and the most common visualization techniques rely on unnecessary interpolation that limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software.
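The core idea of mapping a voxel grid onto a surface can be illustrated with a toy sketch. This is not pycortex's actual implementation (which samples per-pixel on the GPU); it is a minimal NumPy analogue that samples a 3D volume at surface-vertex positions with nearest-neighbor lookup, assuming a hypothetical `sample_volume_at_vertices` helper and a 4×4 affine mapping scanner coordinates to voxel indices.

```python
import numpy as np

def sample_volume_at_vertices(volume, vertices, xfm):
    """Sample a 3D voxel grid at surface-vertex positions (nearest neighbor).

    volume   : 3D array of functional data, indexed as (z, y, x)
    vertices : (N, 3) array of vertex coordinates (x, y, z) in scanner space
    xfm      : (4, 4) affine mapping scanner coordinates to voxel coordinates
    """
    # Append a homogeneous coordinate and map vertices into voxel space
    homog = np.hstack([vertices, np.ones((len(vertices), 1))])
    voxel_coords = (xfm @ homog.T).T[:, :3]
    # Round to the nearest voxel and clip to the grid bounds
    idx = np.clip(np.rint(voxel_coords).astype(int), 0,
                  np.array(volume.shape)[::-1] - 1)
    # Index the volume in (z, y, x) order
    return volume[idx[:, 2], idx[:, 1], idx[:, 0]]
```

Nearest-neighbor sampling preserves the original voxel values exactly, which is the fidelity argument the abstract makes against interpolation-heavy pipelines.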
It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we developed an encoding model of 3D scene structure and tested it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.
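The analysis pipeline described above (fit per-voxel encoding weights, then run PCA on the weight matrix to find dominant feature dimensions) can be sketched with synthetic stand-ins. The feature counts, ridge penalty, and data here are all invented for illustration; only the structure of the computation follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 scenes x 10 scene-structure features, 50 voxels
features = rng.standard_normal((200, 10))
true_w = rng.standard_normal((10, 50))
responses = features @ true_w + 0.5 * rng.standard_normal((200, 50))

# Ridge-regression fit of per-voxel encoding weights
lam = 1.0
w = np.linalg.solve(features.T @ features + lam * np.eye(10),
                    features.T @ responses)        # shape: (10 features, 50 voxels)

# PCA across voxels via SVD: principal directions in feature space
w_centered = w - w.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(w_centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of weight variance per component
```

In the study, the leading components of this decomposition were interpreted as distance and openness; here they are arbitrary directions in the synthetic feature space.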
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models, one for each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene each predicted much of the same response variance as a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.
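The logic of the shared- versus unique-variance comparison can be made concrete with a small sketch: fit two competing feature spaces and their concatenation on a training split, score each on a withheld split, and partition held-out R². All data, feature counts, and split sizes below are synthetic illustrations, not the study's actual features.

```python
import numpy as np

rng = np.random.default_rng(1)

n_train, n_test = 1000, 386          # mimic a withheld portion of 1386 scenes
n = n_train + n_test
x_a = rng.standard_normal((n, 8))                          # e.g. Fourier-power-like features
x_b = x_a @ rng.standard_normal((8, 8)) + 0.1 * rng.standard_normal((n, 8))  # correlated feature space
y = x_a @ rng.standard_normal(8) + 0.5 * rng.standard_normal(n)              # one voxel's responses

def held_out_r2(x, y, n_train):
    """Fit ordinary least squares on the training split; report R^2 on the withheld split."""
    w, *_ = np.linalg.lstsq(x[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - x[n_train:] @ w
    return 1 - resid.var() / y[n_train:].var()

r2_a = held_out_r2(x_a, y, n_train)
r2_b = held_out_r2(x_b, y, n_train)
r2_joint = held_out_r2(np.hstack([x_a, x_b]), y, n_train)

shared = r2_a + r2_b - r2_joint      # variance both models explain
unique_a = r2_joint - r2_b           # variance only model A explains
```

Because `x_b` here is built as a noisy mixture of `x_a`, both models predict well individually while the joint model adds little, so nearly all explained variance is shared: the same pattern the abstract reports for the Fourier-power, distance, and object-category models.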
There are two dominant models for the functional organization of brain regions underlying object recognition. One model postulates category-specific modules while the other proposes a distributed representation of objects with generic visual features. Functional imaging techniques relying on metabolic signals, such as fMRI and optical intrinsic signal imaging (OISI), have been used to support both models, but due to the indirect nature of the measurements in these techniques, the existing data for one model cannot be used to rule out the other. Here, we used large-scale multielectrode recordings over a large surface of anterior inferior temporal (IT) cortex and densely mapped stimulus-evoked neuronal responses. We found that IT cortex is subdivided into distinct domains characterized by similar patterns of responses to the objects in our stimulus set. Each domain spanned several millimeters of cortex. Some of these domains represented faces ("face" domains) or monkey bodies ("monkey-body" domains). We also identified domains with low responsiveness to faces ("anti-face" domains). At the same time, recording sites within domains that displayed category selectivity showed heterogeneous tuning profiles across different exemplars within each category. This local heterogeneity was consistent with the stimulus-evoked feature columns revealed by OISI. Taken together, our study revealed that regions with common functional properties (domains) consist of a finer functional structure (columns) in anterior IT cortex. The "domains" and previously proposed "patches" are rather like "mosaics," where a whole mosaic is characterized by overall similarity in stimulus responses and pieces of the mosaic correspond to feature columns.
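The notion of a "domain" as a set of sites with similar stimulus-response patterns can be illustrated by comparing within-group and between-group correlations of response profiles. The site counts, noise level, and two-domain layout below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic response profiles: 20 sites x 100 stimuli, two "domains"
face_profile = rng.standard_normal(100)
body_profile = rng.standard_normal(100)
sites = np.vstack([
    face_profile + 0.3 * rng.standard_normal((10, 100)),   # "face"-domain sites
    body_profile + 0.3 * rng.standard_normal((10, 100)),   # "body"-domain sites
])

# Pairwise correlation of stimulus-response profiles across sites
corr = np.corrcoef(sites)

# Sites within a domain correlate strongly; sites across domains do not
within = corr[:10, :10][np.triu_indices(10, 1)].mean()
between = corr[:10, 10:].mean()
```

The per-site noise term plays the role of the exemplar-level heterogeneity the abstract describes: sites share a domain-level profile while differing in their individual tuning.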
Viewing a sequence of faces of two different people results in a greater blood oxygenation level dependent (BOLD) response in the fusiform face area (FFA) than a sequence of identical faces. Changes in identity, however, necessarily involve changes in the image. Is the release from adaptation a result of a change in face identity per se, or could it be an effect that would arise from any change in the image of a face? Subjects viewed a sequence of two faces that could be of the same or different person, and in the same or different orientation in depth. Critically, the physical similarity of view changes of the same person was scaled by Gabor-jet differences to be equivalent to that produced by an identity change. Both person and orientation changes produced equivalent releases from adaptation in FFA (relative to identical faces), suggesting that FFA is sensitive to the physical similarity of faces rather than to the individuals depicted in the images.
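The Gabor-jet difference used to scale physical similarity can be sketched as follows: sample complex Gabor filter responses ("jets") at a grid of image points and take the Euclidean distance between the concatenated jet vectors of two images. The kernel sizes, scales, and grid points below are arbitrary illustrative choices, not the parameters used in the study.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def gabor_jet(image, points, scales=(4, 8), n_orient=4, size=15):
    """Magnitudes of Gabor responses at each sample point (one 'jet' per point)."""
    jets = []
    for py, px in points:
        patch = image[py - size // 2: py + size // 2 + 1,
                      px - size // 2: px + size // 2 + 1]
        for wl in scales:
            for k in range(n_orient):
                kern = gabor_kernel(size, wl, k * np.pi / n_orient, wl / 2)
                jets.append(abs(np.sum(patch * kern)))
    return np.array(jets)

def jet_dissimilarity(img_a, img_b, points):
    """Euclidean distance between concatenated jets: a physical-similarity scale."""
    return np.linalg.norm(gabor_jet(img_a, points) - gabor_jet(img_b, points))
```

Scaling the stimuli means choosing view-change and identity-change image pairs whose `jet_dissimilarity` values match, so any difference in adaptation release cannot be attributed to raw image change.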