Recent studies claim that visual perception of stimulus features, such as orientation, numerosity, and faces, is systematically biased toward visual input from the immediate past [1-3]. However, the extent to which these positive biases truly reflect changes in perception rather than changes in post-perceptual processes is unclear [4, 5]. In the current study we sought to disentangle perceptual and decisional biases in visual perception. We found that post-perceptual decisions about orientation were indeed systematically biased toward previous stimuli and this positive bias did not strongly depend on the spatial location of previous stimuli (replicating previous work [1]). In contrast, observers' perception was repelled away from previous stimuli, particularly when previous stimuli were presented at the same spatial location. This repulsive effect resembles the well-known negative tilt-aftereffect in orientation perception [6]. Moreover, we found that the magnitude of the positive decisional bias increased when a longer interval was imposed between perception and decision, suggesting a shift of working memory representations toward the recent history as a source of the decisional bias. We conclude that positive aftereffects on perceptual choice are likely introduced at a post-perceptual stage. Conversely, perception is negatively biased away from recent visual input. We speculate that these opposite effects on perception and post-perceptual decision may derive from the distinct goals of perception and decision-making processes: whereas perception may be optimized for detecting changes in the environment, decision processes may integrate over longer time periods to form stable representations.
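Serial-dependence biases of this kind are commonly quantified by fitting the first derivative of a Gaussian (DoG) to response errors as a function of the orientation difference between the previous and current stimulus. The sketch below illustrates that fit on simulated data; the variable names (`delta`, `error`) and all numbers are hypothetical, not values from the study.

```python
# Sketch: quantifying serial dependence with a derivative-of-Gaussian (DoG) fit.
# Hypothetical data: `delta` is the orientation difference (previous minus
# current stimulus, in degrees) and `error` is the response error on the
# current trial (response minus stimulus, in degrees).
import numpy as np
from scipy.optimize import curve_fit

def dog(delta, amplitude, width):
    """First derivative of a Gaussian; `amplitude` is the peak height."""
    c = np.sqrt(2) / np.exp(-0.5)  # normalizes so the curve peaks at `amplitude`
    return amplitude * c * width * delta * np.exp(-(width * delta) ** 2)

rng = np.random.default_rng(0)
delta = rng.uniform(-90, 90, size=500)                       # toy differences
error = dog(delta, 2.0, 0.05) + rng.normal(0, 5, size=500)   # toy errors

(amplitude, width), _ = curve_fit(dog, delta, error, p0=[1.0, 0.05])
print(f"bias amplitude: {amplitude:.2f} deg")
# A positive amplitude indicates attraction toward the previous stimulus;
# a negative amplitude indicates repulsion (as in the tilt aftereffect).
```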
Perception can be described as a process of inference, integrating bottom-up sensory inputs and top-down expectations. However, it is unclear how this process is neurally implemented. It has been proposed that expectations lead to prestimulus baseline increases in sensory neurons tuned to the expected stimulus, which in turn affect the processing of subsequent stimuli. Recent fMRI studies have revealed stimulus-specific patterns of activation in sensory cortex as a result of expectation, but this method lacks the temporal resolution necessary to distinguish pre- from poststimulus processes. Here, we combined human magnetoencephalography (MEG) with multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner. We observed a representation of expected stimuli in the neural signal shortly before they were presented, showing that expectations indeed induce a preactivation of stimulus templates. The strength of these prestimulus expectation templates correlated with participants' behavioral improvement when the expected feature was task-relevant. These results suggest a mechanism for how predictive perception can be neurally implemented.

Keywords: prediction | perceptual inference | predictive coding | feature-based expectation | feature-based attention

Perception is heavily influenced by prior knowledge (1-3). Accordingly, many theories cast perception as a process of inference, integrating bottom-up sensory inputs and top-down expectations (4-6). However, it is unclear how this integration is neurally implemented. It has been proposed that prior expectations lead to baseline increases in sensory neurons tuned to the expected stimulus (7-9), which in turn leads to improved neural processing of matching stimuli (10, 11). In other words, expectations may induce stimulus templates in sensory cortex before the actual presentation of the stimulus. Alternatively, top-down influences in sensory cortex may exert their influence only after the bottom-up stimulus has been initially processed, and the integration of the two sources of information may become apparent only during later stages of sensory processing (12). The evidence necessary to distinguish between these hypotheses has been lacking. fMRI studies have revealed stimulus-specific patterns of activation in sensory cortex as a result of expectation (9, 13), but this method lacks the temporal resolution necessary to distinguish pre- from poststimulus periods. Here, we combined magnetoencephalography (MEG) with multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner (14-17). The experimental paradigm was virtually identical to the ones used in our previous fMRI studies that examined how expectations modulate stimulus-specific patterns of activity in the primary visual cortex (9, 11). We trained a forward model to decode the orientation of task-irrelevant gratings from the MEG signal (18, 19) and applied this decoder to trials in which participants expected a grating of a pa...
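To make the time-resolved decoding approach concrete, here is a minimal sketch using MNE-Python's `SlidingEstimator` on simulated MEG-like data. Note that the study itself used a forward (encoding) model for orientation; the logistic-regression classifier, array shapes, and injected signal below are simplifying assumptions for illustration only.

```python
# Sketch: time-resolved decoding of stimulus identity from MEG-like data.
# All data here are simulated; a real analysis would use epoched MEG recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 50, 120        # hypothetical dimensions
y = rng.integers(0, 2, n_trials)                    # two orientation classes
X = rng.normal(size=(n_trials, n_channels, n_times))
X[:, :10, 60:] += y[:, None, None] * 0.5            # signal after "stimulus onset"

clf = make_pipeline(StandardScaler(), LogisticRegression())
decoder = SlidingEstimator(clf, scoring="roc_auc")  # one decoder per time point
scores = cross_val_multiscore(decoder, X, y, cv=5).mean(axis=0)
print(scores.max())
# Above-chance scores (> 0.5) at time points *before* stimulus onset would
# indicate a prestimulus template, as reported in the study.
```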
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. First, we show that, compared to perception, imagery decoding becomes significant later, and that representations at the start of imagery already overlap with those at later time points. This suggests that during imagery, the entire visual representation is activated at once, or that the timing of imagery differs substantially between trials. Second, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 is reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights into the neural mechanisms of visual imagery.
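The overlap analysis described here belongs to the temporal-generalization family: a decoder trained at each perception time point is tested at every imagery time point, and off-diagonal generalization indicates shared representations. Below is a minimal sketch with MNE-Python's `GeneralizingEstimator` on simulated data; all shapes, signal timings, and the choice of classifier are assumptions, not the study's exact pipeline.

```python
# Sketch: temporal generalization from perception to imagery on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import GeneralizingEstimator

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 40, 80
y = rng.integers(0, 2, n_trials)                    # stimulus category
X_perception = rng.normal(size=(n_trials, n_channels, n_times))
X_imagery = rng.normal(size=(n_trials, n_channels, n_times))
X_perception[:, :5, 20:] += y[:, None, None]        # toy "perception" signal
X_imagery[:, :5, 40:] += y[:, None, None] * 0.5     # weaker, later "imagery" signal

gen = GeneralizingEstimator(
    make_pipeline(StandardScaler(), LogisticRegression()), scoring="roc_auc")
gen.fit(X_perception, y)
scores = gen.score(X_imagery, y)   # shape: (n_train_times, n_test_times)
# A broad square of above-chance scores (rather than a narrow diagonal) would
# suggest that the whole representation is active at once during imagery.
```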
A key question within systems neuroscience is how the brain translates physical stimulation into a behavioral response: perceptual decision making. To answer this question, it is important to dissociate the neural activity underlying the encoding of sensory information from the activity underlying its subsequent temporal integration into a decision variable. Here, we adopted a decoding approach to empirically assess this dissociation in human magnetoencephalography recordings. We used a functional localizer to identify the neural signature that reflects sensory-specific processes, and subsequently traced this signature while subjects were engaged in a perceptual decision-making task. Our results revealed a temporal dissociation in which sensory processing was limited to an early time window and consistent with occipital areas, whereas decision-related processing became increasingly pronounced over time and involved parietal and frontal areas. We found that the sensory processing accurately reflected the physical stimulus, irrespective of the eventual decision. Moreover, the sensory representation was stable and maintained over time when it was required for a subsequent decision, but unstable and variable over time when it was task-irrelevant. In contrast, decision-related activity displayed long-lasting sustained components. Together, our approach dissects the neuroanatomically and functionally distinct contributions to perceptual decisions.
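The localizer-based tracing logic can be sketched as follows: train a classifier on localizer trials that isolate sensory-specific activity, then apply it at every time point of the decision-task epochs. This is a simplified stand-in for the study's method; the simulated data, logistic-regression decoder, and time windows are hypothetical.

```python
# Sketch: cross-decoding from a functional localizer to a decision task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_loc, n_task, n_channels, n_times = 150, 150, 40, 100
y_loc = rng.integers(0, 2, n_loc)                    # stimulus in localizer
X_loc = rng.normal(size=(n_loc, n_channels)) + y_loc[:, None] * 0.8
y_task = rng.integers(0, 2, n_task)                  # physical stimulus in task
X_task = rng.normal(size=(n_task, n_channels, n_times))
X_task[:, :, 30:60] += y_task[:, None, None] * 0.8   # transient sensory signal

scaler = StandardScaler().fit(X_loc)
clf = LogisticRegression().fit(scaler.transform(X_loc), y_loc)

# Trace the sensory signature through the task epoch, one time point at a time:
accuracy = np.array([
    clf.score(scaler.transform(X_task[:, :, t]), y_task)
    for t in range(n_times)
])
# `accuracy` peaking early and decaying would mirror the reported temporal
# dissociation between early sensory encoding and later decision-related activity.
```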
A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular in cognitive neuroimaging over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and its dynamic profile. One field in which these techniques have led to particularly novel insights is visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile of the neural representations of the memorized item. Analysis of gaze position, however, revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how they might readily have led to invalid conclusions. Finally, we demonstrate a potential remedy: training the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and thereby avoids eye movement confounds. We conclude by arguing for greater awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
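A basic version of the confound check implied here is to ask whether the memorized item can be decoded from gaze position alone; above-chance accuracy flags eye movements as a potential driver of the MEG decoding. The sketch below uses simulated gaze traces; the drift size, delay window, and variable names are assumptions.

```python
# Sketch: testing for an eye-movement confound by decoding the memorized item
# from gaze position. `gaze` would normally come from a co-registered eye tracker.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_times = 200, 100
y = rng.integers(0, 2, n_trials)                 # memorized item identity
gaze = rng.normal(size=(n_trials, 2, n_times))   # x/y gaze position per trial
gaze[:, 0, 50:] += y[:, None] * 0.3              # systematic horizontal drift

# Decode the memorized item from gaze averaged over the (assumed) delay period:
features = gaze[:, :, 50:].mean(axis=-1)         # (n_trials, 2)
acc = cross_val_score(LogisticRegression(), features, y, cv=5).mean()
print(f"gaze decoding accuracy: {acc:.2f}")      # above 0.5 flags a confound
```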