Bodily training typically induces behavioral and perceptual gains by driving neuroplastic processes and reshaping neural representations. We investigated the effect of a three-day Zen meditation exercise, a purely mental intervention, on somatosensory perception. Tactile spatial discrimination on the right index finger was persistently improved by only six hours of mental–sensory focusing on this finger, suggesting that intrinsic brain activity generated by mental states can alter perception and behavior much as external stimulation does.
A popular model of sensory processing, known as predictive coding, proposes that incoming signals are iteratively compared with top-down predictions along a hierarchical processing scheme. At each step, error signals arising from differences between actual input and prediction are forwarded and recurrently minimized by updating internal models, until they are finally "explained away". However, the neuronal mechanisms underlying such computations, and their limitations in processing speed, are largely unknown. Further, it remains unclear at which step of cortical processing prediction errors are explained away, if at all. In the present study, human subjects briefly viewed the superposition of two orthogonally oriented gratings, followed by the abrupt removal of one orientation after either 33 or 200 milliseconds. Rather than invariably perceiving the remaining orientation, observers rarely, but highly significantly, reported an illusory percept corresponding to the arithmetic difference between the previous and actual orientations. Previous findings in cats using the identical paradigm suggest that such difference signals are inherited from the first steps of visual cortical processing. In light of early modeling accounts of predictive coding, in which visual neurons were interpreted as residual-error detectors signaling the difference between actual input and its temporal prediction based on past input, our data may indicate continued access to residual errors. Such a strategy permits time-critical perceptual decision making across a spectrum of competing internal signals up to the highest levels of processing. Thus, the occasional appearance of a prediction-error-like illusory percept may reveal maintained flexibility at perceptual decision stages when subjects cope with highly dynamic and ambiguous visual stimuli.
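The residual-error interpretation described above can be illustrated with a toy computation (a hypothetical sketch; the function name and the naive "prediction equals past input" rule are our own simplification, not the study's model):

```python
def residual_error(previous_orientation, current_orientation):
    """Toy residual-error signal: if the prediction at time t is simply
    the input at time t-1, the error is the difference between actual
    and predicted orientations (angles in degrees, modulo 180)."""
    prediction = previous_orientation          # naive temporal prediction
    return (current_orientation - prediction) % 180.0

# Superposed orthogonal gratings (e.g. 120 deg and 30 deg); the 120 deg
# component is abruptly removed, leaving only the 30 deg grating.
print(residual_error(120.0, 30.0))  # 90.0 -- the illusory difference percept
```

In this caricature, the rarely reported illusory orientation corresponds to the unexplained residual, consistent with the idea that error signals remain accessible at perceptual decision stages.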
Movement planning based on visual information requires a transformation from a retina-centered into a head- or body-centered frame of reference. It has been shown that such transformations can be achieved via basis function networks [1,2]. We investigated whether basis functions for coordinate transformations can be learned by a biologically plausible neural network. We employed a model network of spiking neurons that learns invariant representations based on spatio-temporal stimulus correlations [3]. The model consists of a three-stage network of leaky integrate-and-fire neurons with biologically realistic conductances. The network has two input layers, corresponding to neurons representing the retinal image and neurons representing the direction of gaze. These inputs are represented in the map layer via excitatory or modulatory connections, respectively, that exhibit Hebbian-like spike-timing-dependent plasticity (STDP). Neurons within the map layer are connected via short-range lateral excitatory connections and unspecific lateral inhibition. We trained the network with stimuli corresponding to typical viewing situations when a visual scene is explored by saccadic eye movements, with gaze direction changing on a faster time scale than object positions in space. After learning, each neuron in the map layer was selective for a small subset of the stimulus space, with excitatory and modulatory connections adapted to achieve a topographic map of the inputs. Neurons in the output layer with a localized receptive field in the map layer were selective for positions in head-centered space, invariant to changes in the retinal image due to changes in gaze direction. Our results show that coordinate transformations via basis function networks can be learned in a biologically plausible way by exploiting the spatio-temporal correlations between visual stimulation and eye position signals under natural viewing conditions.
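The basis-function readout underlying such coordinate transformations can be sketched with a rate-based caricature (an illustrative sketch only; the tuning widths, preferred positions, and readout rule are our own assumptions, not the spiking model's actual parameters). Each map unit multiplies a retinotopic tuning curve by a gaze-direction gain field, and a downstream unit that sums over units whose preferred retinal and gaze positions add up to a fixed head-centered position responds invariantly across retina/gaze combinations:

```python
import numpy as np

def gaussian(x, mu, sigma=5.0):
    """Gaussian tuning curve (degrees)."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

retinal_prefs = np.arange(-40, 41, 10)   # preferred retinal positions (deg)
gaze_prefs = np.arange(-20, 21, 10)      # preferred gaze directions (deg)

def basis_responses(retinal_pos, gaze_dir):
    """Map layer: retinal tuning multiplied by a gaze gain field."""
    r = gaussian(retinal_pos, retinal_prefs)[:, None]
    g = gaussian(gaze_dir, gaze_prefs)[None, :]
    return r * g

def head_centered_readout(retinal_pos, gaze_dir, target=0.0):
    """Output unit: weighted sum over basis units whose preferred
    retinal + gaze positions sum to the target head-centered position
    (head-centered = retinal + gaze)."""
    resp = basis_responses(retinal_pos, gaze_dir)
    sums = retinal_prefs[:, None] + gaze_prefs[None, :]
    weights = gaussian(sums, target)
    return float((weights * resp).sum())

# The same head-centered position (0 deg) yields a similar readout for
# different retinal/gaze combinations that sum to it:
print(head_centered_readout(10.0, -10.0))
print(head_centered_readout(-20.0, 20.0))
```

In the paper's model these multiplicative combinations and readout weights are not hand-set but emerge from STDP driven by the different time scales of gaze changes and object motion.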