Event-related potentials (ERPs) are used extensively to investigate the neural mechanisms of attention control and selection. The commonly applied univariate ERP approach, however, has left important questions inadequately answered. Here, we addressed two such questions by applying multivariate pattern classification to multichannel ERPs in two spatial-cueing experiments (N = 56 in total): (1) the impact of cueing strategy (instructional vs. probabilistic) and (2) the neural and behavioral effects of individual differences. Following cue onset, decoding accuracy (cue left vs. cue right) rose above chance earlier and remained higher under instructional cueing (~80 ms) than under probabilistic cueing (~160 ms), suggesting that a unilateral attention focus leads to earlier and more distinct formation of the attentional set. A similar temporal sequence was found for target-related processing (cued targets vs. uncued targets), suggesting earlier and stronger attention selection under instructional cueing. Across the two experiments, individuals with higher decoding accuracy during ~460-660 ms post-cue showed greater attentional modulation of target-evoked N1 amplitude, suggesting that better formation of the anticipatory attentional state leads to better target processing. During target processing, individual differences in decoding accuracy were positively associated with behavioral performance (reaction time), suggesting that stronger selection of task-relevant information leads to better behavioral performance. Taken together, multichannel ERPs combined with machine-learning decoding yield insights into attention control and selection that are not possible with the univariate ERP approach and, alongside that approach, provide a more comprehensive methodology for the study of visual spatial attention.
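For readers unfamiliar with time-resolved decoding, the following is a minimal sketch of the general approach described above, assuming epoched multichannel ERP data. The array shapes, the random placeholder data, and the choice of a regularized logistic-regression classifier are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical epoched ERP data: trials x channels x time points.
# In a real analysis, X would hold preprocessed single-trial epochs and
# y would code the cue direction (0 = cue left, 1 = cue right).
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 300
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Train and test a separate classifier at each time point, using the
# multichannel voltage pattern at that latency as the feature vector.
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()

# Latencies where accuracy exceeds chance (0.5) indicate when the
# cue-evoked attentional set becomes decodable from the scalp topography.
```

In practice, above-chance onsets such as the ~80 ms and ~160 ms latencies reported above would be established with statistical tests across participants (e.g., cluster-based permutation tests), not from a single accuracy curve.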
The perception of opportunities and threats in complex scenes is one of the main functions of the human visual system. In the laboratory, its neurophysiological basis is often studied by having observers view pictures varying in affective content. This body of work has consistently shown that viewing emotionally engaging, compared to neutral, pictures (1) heightens blood flow in limbic structures and frontoparietal cortex, as well as in anterior ventral and dorsal visual cortex, and (2) prompts an increase in the late positive event-related potential (LPP), a scalp-recorded and time-sensitive index of engagement within this network of neural structures. The role of retinotopic visual cortex in this process has, however, been contentious, with competing theoretical accounts predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. The present study used multimodal neuroimaging and machine learning to address this question by examining the large-scale neural representations of affective pictures. Recording EEG and fMRI simultaneously while observers viewed pleasant, unpleasant, and neutral pictures, and applying multivariate pattern analysis to single-trial BOLD activity in retinotopic visual cortex, we obtained three robust findings. First, decoding accuracy for both the unpleasant-versus-neutral and the pleasant-versus-neutral contrasts was well above chance in all retinotopic visual areas, including primary visual cortex. Second, decoding accuracy in ventral visual cortex, but not in early or dorsal visual cortex, was significantly correlated with LPP amplitude. Third, effective connectivity from the amygdala to ventral visual cortex predicted unpleasant-versus-neutral decoding accuracy, whereas effective connectivity from ventral frontal cortex to ventral visual cortex predicted pleasant-versus-neutral decoding accuracy. These results suggest that affective pictures evoked valence-specific multivoxel neural representations in retinotopic visual cortex and that these representations were influenced by reentrant signals from limbic and frontal brain regions.
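The ROI-based multivariate pattern analysis can be sketched in the same spirit. The snippet below is a minimal illustration, assuming single-trial BOLD response estimates restricted to one retinotopic region of interest; the trial and voxel counts, the placeholder data, and the linear support-vector classifier are assumptions for illustration rather than the study's actual implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical single-trial BOLD estimates within one retinotopic ROI
# (e.g., V1): trials x voxels. Labels code picture valence for one
# contrast (0 = neutral, 1 = unpleasant); values here are placeholders.
rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 500
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False, max_iter=5000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

# Cross-validated decoding accuracy for this ROI; repeating the procedure
# per ROI and per valence contrast (unpleasant vs. neutral, pleasant vs.
# neutral) yields the area-wise accuracies compared in the abstract.
acc = cross_val_score(clf, X, y, cv=cv).mean()
print(f"ROI decoding accuracy: {acc:.3f} (chance = 0.5)")
```

Per-ROI accuracies obtained this way can then be correlated across participants with univariate measures such as LPP amplitude, which is the logic behind the brain-behavior relationships reported above.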