When humans perceive a sensation, their brains integrate inputs from sensory receptors and process them based on their expectations. The mechanisms of this predictive coding in the human somatosensory system are not fully understood. We fill a basic gap in our understanding of the predictive processing of somatosensation by examining layer-specific activity during sensory input and predictive feedback in the human primary somatosensory cortex (S1). We acquired submillimeter functional magnetic resonance imaging data at 7T (n = 10) during a task with perceived, predictable, and unpredictable touch sequences. We demonstrate that sensory input from thalamic projections preferentially activates the middle layer, while the superficial and deep layers in S1 are more engaged by cortico-cortical predictive feedback input. These findings are pivotal to understanding the mechanisms of tactile prediction processing in the human somatosensory cortex.
To explore the timing and the underlying neural dynamics of visual perception, we analyzed the relationship between the manual reaction time (RT) to the onset of a visual stimulus and the time course of the evoked neural response simultaneously measured by magnetoencephalography (MEG). The visual stimuli were a transition from incoherent to coherent motion of random dots and an onset of a chromatic grating from a uniform field, which evoke neural responses in different cortical sites. For both stimuli, changes in median RT with changing stimulus strength (motion coherence or chromatic contrast) were accurately predicted, with a stimulus-independent postdetection delay, from the time that the temporally integrated MEG response crossed a threshold (integrator model). In comparison, the prediction of RT was less accurate from the peak MEG latency, or from the time that the nonintegrated MEG response crossed a threshold (level detector model). The integrator model could also account for, at least partially, intertrial changes in RT or in perception (hit/miss) to identical stimuli. Although we examined MEG-RT relationships mainly for data averaged over trials, the integrator model could show some correlations even for single-trial data. The model predictions deteriorated when only early visual responses presumably originating from the striate cortex were used as the input to the integrator model. Our results suggest that the percepts of visual stimulus appearances are established in extrastriate areas [around MT (middle temporal visual area) for motion and around V4 (fourth visual area) for color] ∼150–200 ms before subjects manually react to the stimulus.
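The two models contrasted in this abstract reduce to simple threshold rules: the integrator model triggers when the *cumulative* response crosses a threshold, the level-detector model when the *instantaneous* response does, each followed by a fixed postdetection delay. The following minimal Python sketch illustrates the distinction; all function names, threshold and delay values, and the toy response traces are illustrative assumptions, not the study's MEG data or fitting procedure.

```python
def predict_rt_integrator(response, threshold, delay, dt=1.0):
    """Predicted RT = first time the temporally integrated (cumulative)
    response crosses `threshold`, plus a stimulus-independent
    postdetection delay. Times are in the same units as `dt` and `delay`."""
    total = 0.0
    for i, r in enumerate(response):
        total += r * dt
        if total >= threshold:
            return i * dt + delay
    return None  # integrated response never reached threshold

def predict_rt_level_detector(response, threshold, delay, dt=1.0):
    """Predicted RT = first time the instantaneous response crosses
    `threshold`, plus the same postdetection delay."""
    for i, r in enumerate(response):
        if r >= threshold:
            return i * dt + delay
    return None

# Toy evoked responses sampled every 10 ms (illustrative values only).
# A weaker stimulus evokes a smaller response, so the integrator model
# predicts a later RT: accumulation to threshold takes longer.
strong = [0.0, 1.0, 2.0, 2.0, 1.0, 0.5]
weak   = [0.0, 0.5, 1.0, 1.0, 0.5, 0.25]

rt_strong = predict_rt_integrator(strong, threshold=30.0, delay=150.0, dt=10.0)
rt_weak   = predict_rt_integrator(weak,   threshold=30.0, delay=150.0, dt=10.0)
```

Note that the level-detector model, applied to the same traces, predicts RTs that track only when the response first exceeds a fixed level, which is why it is less sensitive to the sustained strength of the response than the integrator model.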
Occlusion is a primary challenge facing the visual system in perceiving object shapes in intricate natural scenes. Although behavioral, neurophysiological, and modeling studies have shown that occluded portions of objects may be completed at an early stage of visual processing, we have little knowledge of how and where in the human brain this completion is realized. Here, we provide functional magnetic resonance imaging (fMRI) evidence that the occluded portion of an object is indeed represented topographically in human V1 and V2. Specifically, we find topographic cortical responses corresponding to the invisible object rotation in V1 and V2. Furthermore, by investigating neural responses to the occluded target rotation within precisely defined cortical subregions, we could dissociate the topographic neural representation of the occluded portion from other types of neural processing such as object edge processing. We further demonstrate that the early topographic representation in V1 can be modulated by prior knowledge of the whole appearance of an object obtained before partial occlusion. These findings suggest that primary "visual" area V1 has the ability to process not only visible or virtually (illusorily) perceived objects but also "invisible" portions of objects without concurrent visual sensation, such as luminance enhancement, at these portions. The results also suggest that low-level image features and higher-level preceding cognitive context are integrated into a unified topographic representation of the occluded portion in early visual areas.