Neural activity in early visual cortex is modulated by luminance contrast. Cortical depth (i.e., laminar) contrast responses have been studied in monkey early visual cortex, but not in humans. In addition to the high spatial resolution needed and the ensuing low signal-to-noise ratio, laminar studies in humans using fMRI are hampered by the strong venous vascular weighting of the fMRI signal. In this study, we measured luminance contrast responses in human V1 and V2 with high-resolution fMRI at 7 T. To account for the effect of intracortical ascending veins, we applied a novel spatial deconvolution model to the fMRI depth profiles. Before spatial deconvolution, the contrast response in V1 showed a slight local maximum at mid cortical depth, whereas V2 exhibited a monotonic signal increase toward the cortical surface. After applying the deconvolution, both V1 and V2 showed a pronounced local maximum at mid cortical depth, with an additional peak in deep grey matter, especially in V1. Moreover, we found a difference in contrast sensitivity between V1 and V2, but no evidence for variations in contrast sensitivity as a function of cortical depth. These findings are in agreement with results obtained in nonhuman primates, but further research will be needed to validate the spatial deconvolution approach.
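To make the deconvolution step concrete, the following toy sketch (in Python, assuming a simple lower-triangular "draining" matrix and an arbitrary draining fraction, neither of which is taken from the study) illustrates the general idea: if ascending veins carry part of the signal from deeper depths toward the surface, the measured depth profile can be written as a matrix product of a draining matrix with the local profile, and the local profile is recovered by inverting that matrix.

```python
import numpy as np

# Toy illustration of depth-wise "spatial deconvolution" for ascending veins.
# Assumption: the measured signal at each depth equals the local signal plus a
# fixed fraction of the local signals from all deeper depths (blood drains from
# white matter toward the pial surface). The model used in the study is more
# detailed; the draining fraction and depth grid here are arbitrary choices.

n_depths = 10        # depth bins from white matter (index 0) to pial surface (index n-1)
drain_fraction = 0.6 # illustrative fraction of deeper-depth signal carried upward

# Build a lower-triangular "draining" matrix D such that measured = D @ local.
D = np.eye(n_depths)
for i in range(n_depths):      # bin i, increasingly superficial
    for j in range(i):         # deeper bins j < i contribute to bin i
        D[i, j] = drain_fraction

def deconvolve_depth_profile(measured):
    """Recover the local (vein-corrected) depth profile from a measured one."""
    return np.linalg.solve(D, measured)

# Example: a local profile with a mid-depth peak gets smeared toward the surface
# by the draining matrix and is recovered exactly by the toy inversion.
local_true = np.array([0.5, 0.7, 1.0, 1.4, 1.8, 1.5, 1.1, 0.9, 0.8, 0.7])
measured = D @ local_true
recovered = deconvolve_depth_profile(measured)
print(np.allclose(recovered, local_true))   # True
```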
High-resolution (functional) magnetic resonance imaging (MRI) at ultra high magnetic fields (7 Tesla and above) enables researchers to study how anatomical and functional properties change within the cortical ribbon, along surfaces and across cortical depths. These studies require an accurate delineation of the gray matter ribbon, which often suffers from inclusion of blood vessels, dura mater and other non-brain tissue. Residual segmentation errors are commonly corrected by browsing the data slice-by-slice and manually changing labels. This task becomes increasingly laborious and prone to error at higher resolutions since both work and error scale with the number of voxels. Here we show that many mislabeled, non-brain voxels can be corrected more efficiently and semi-automatically by representing three-dimensional anatomical images using two-dimensional histograms. We propose both a uni-modal (based on first spatial derivative) and multi-modal (based on compositional data analysis) approach to this representation and quantify the benefits in 7 Tesla MRI data of nine volunteers. We present an openly accessible Python implementation of these approaches and demonstrate that editing cortical segmentations using two-dimensional histogram representations as an additional post-processing step aids existing algorithms and yields improved gray matter borders. By making our data and corresponding expert (ground truth) segmentations openly available, we facilitate future efforts to develop and test segmentation algorithms on this challenging type of data.
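As a rough illustration of the uni-modal representation, the following NumPy sketch maps each voxel into a 2D histogram of intensity versus gradient magnitude (the first spatial derivative) and maps a rectangular selection in histogram space back to a 3D voxel mask. The rectangular selection and the variable names are simplifying assumptions for illustration, not the interactive tools of the actual implementation.

```python
import numpy as np

def intensity_gradient_histogram(volume, bins=200):
    """Map a 3D anatomical image into a 2D histogram of
    intensity vs. gradient magnitude (first spatial derivative)."""
    gra = np.gradient(volume.astype(float))          # one derivative array per axis
    gra_mag = np.sqrt(sum(g ** 2 for g in gra))      # gradient magnitude per voxel
    counts, int_edges, gra_edges = np.histogram2d(
        volume.ravel(), gra_mag.ravel(), bins=bins)
    return counts, int_edges, gra_edges, gra_mag

def voxels_in_histogram_box(volume, gra_mag, int_range, gra_range):
    """Return a 3D boolean mask of voxels whose (intensity, gradient magnitude)
    pair falls inside a rectangular selection in 2D-histogram space."""
    return ((volume >= int_range[0]) & (volume < int_range[1]) &
            (gra_mag >= gra_range[0]) & (gra_mag < gra_range[1]))

# Example with a synthetic volume; in practice `volume` would be a 7 T anatomical
# image and the selected voxels (e.g., bright, high-gradient vessel voxels) would
# be removed from the gray matter label.
volume = np.random.rand(64, 64, 64)
counts, int_edges, gra_edges, gra_mag = intensity_gradient_histogram(volume)
mask = voxels_in_histogram_box(volume, gra_mag, (0.8, 1.0), (0.2, np.inf))
print(mask.sum(), "voxels selected for relabeling")
```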
Human visual surface perception has neural correlates in early visual cortex, but the role of feedback during surface segmentation in human early visual cortex remains unknown. Feedback projections preferentially enter superficial and deep anatomical layers, which provides a hypothesis for the cortical depth distribution of fMRI activity related to feedback. Using ultra-high field fMRI, we report a depth distribution of activation in line with feedback during the (illusory) perception of surface motion. Our results fit with a signal re-entering at superficial depths of V1, followed by a feedforward sweep of the re-entered information through V2 and V3. The magnitude and sign of the BOLD response strongly depended on the presence of texture in the background, and were additionally modulated by the presence of illusory motion perception compatible with feedback. In summary, the present study demonstrates the potential of depth-resolved fMRI for tackling mechanistic questions about perception.
Motion signals can bias the perceived position of visual stimuli. While the apparent position of a stimulus is biased in the direction of motion, electrophysiological studies have shown that the receptive field (RF) of neurons is shifted in the direction opposite to motion, at least in cats and macaque monkeys. In humans, it remains unclear how motion signals affect population RF (pRF) estimates. We addressed this question using psychophysical measurements and functional magnetic resonance imaging (fMRI) at 7 Tesla. We systematically varied two factors: the motion direction of the carrier pattern (inward, outward and flicker motion) and the contrast of the mapping stimulus (low and high stimulus contrast). We observed that while physical positions were identical across all conditions, the presence of low-contrast motion, but not high-contrast motion, shifted perceived stimulus position in the direction of motion. Correspondingly, we found that pRF estimates in early visual cortex were shifted against the direction of motion for low-contrast stimuli but not for high stimulus contrast. We offer an explanation, in the form of a model, for why apertures are perceptually shifted in the direction of motion even though pRFs shift in the opposite direction.

Keywords: visual neuroscience · position perception · population receptive fields · visual field projections

1 Introduction

An important task of the visual system is to infer the location of objects in our environment. A wide range of psychophysical studies shows that motion signals lead to systematic localisation biases [1,2,3,4,5,6,7,8,9]. In illusions called motion-induced position shifts (MIPS), a coherent motion signal shifts the apparent location of a stimulus [1]. For example, when drifting Gabor patches are presented within a stationary aperture, the stimulus appears shifted in the direction of motion [2,6,7]. Such illusions raise the question of how our visual system encodes location and how, in the case of MIPS, the apparent position shift can be explained. Furthermore, they offer a dissociation between the physical and the perceived position of a stimulus that can clarify which neuronal processes correspond to the apparent position of the stimulus.

The magnitude of MIPS is known to depend on spatial and temporal properties of the stimulus. MIPS are larger when the stimulus is shown for a longer duration (tested up to 453 ms; [6]), presented at higher speed [6,9] or at higher eccentricities [10,6,9]. The magnitude of MIPS furthermore depends on the spatial blurring of the presented stimulus. Blurred stimulus edges lead to larger perceptual displacements than sharp edges [4,9], and increasing the size of the Gaussian envelope of a Gabor stimulus yields larger MIPS [4]. Arnold et al. [7] have suggested that MIPS are driven by modulation of the apparent contrast of the stimulus. Supporting this suggestion, they reported perceived position shifts when observers were asked to match the extremities of two contrast envelopes (low-contrast region),...
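For readers unfamiliar with pRF mapping, the following minimal sketch illustrates the standard population receptive field idea that the reported estimates build on: a voxel's response is modeled as the overlap of the stimulus aperture with a 2D Gaussian, whose position and size are fit to the measured time course (here by a crude grid search, without HRF convolution, and with made-up variable names). This is not the analysis pipeline of the study; comparing fitted positions between motion conditions is what would reveal a pRF shift against the direction of motion.

```python
import numpy as np

def gaussian_prf(x_grid, y_grid, x0, y0, sigma):
    """2D isotropic Gaussian population receptive field."""
    return np.exp(-((x_grid - x0) ** 2 + (y_grid - y0) ** 2) / (2.0 * sigma ** 2))

def predict_timecourse(apertures, prf):
    """Predicted response per time point: overlap of the binary stimulus
    aperture with the pRF (HRF convolution omitted for brevity)."""
    return np.tensordot(apertures, prf, axes=([1, 2], [0, 1]))

def fit_prf(apertures, data, x_grid, y_grid, candidates):
    """Crude grid search over candidate (x0, y0, sigma) triplets; returns the
    parameters whose prediction correlates best with the voxel time course."""
    best, best_r = None, -np.inf
    for x0, y0, sigma in candidates:
        pred = predict_timecourse(apertures, gaussian_prf(x_grid, y_grid, x0, y0, sigma))
        r = np.corrcoef(pred, data)[0, 1]
        if r > best_r:
            best, best_r = (x0, y0, sigma), r
    return best

# Minimal synthetic example: a vertical bar aperture sweeping across the visual field.
xs = np.linspace(-8, 8, 41)
x_grid, y_grid = np.meshgrid(xs, xs)
apertures = np.stack([(np.abs(x_grid - pos) < 1.0).astype(float) for pos in xs])
true_prf = gaussian_prf(x_grid, y_grid, 2.0, 0.0, 1.5)
data = predict_timecourse(apertures, true_prf)
candidates = [(x0, 0.0, s) for x0 in np.arange(-6, 6.5, 0.5) for s in (1.0, 1.5, 2.0)]
print(fit_prf(apertures, data, x_grid, y_grid, candidates))   # ~ (2.0, 0.0, 1.5)
```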