The perception of opportunities and threats in complex visual scenes represents one of the main functions of the human visual system. The underlying neurophysiology is often studied by having observers view pictures varying in affective content. It has been shown that viewing emotionally engaging, compared with neutral, pictures (1) heightens blood flow in limbic, frontoparietal, and anterior visual structures and (2) enhances the late positive event-related potential (LPP). The role of retinotopic visual cortex in this process has, however, been contentious, with competing theories predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. Recording simultaneous electroencephalography–functional magnetic resonance imaging while observers viewed pleasant, unpleasant, and neutral affective pictures, and applying multivariate pattern analysis, we found that (1) unpleasant versus neutral and pleasant versus neutral decoding accuracy were well above chance level in retinotopic visual areas, (2) decoding accuracy in ventral visual cortex (VVC), but not in early or dorsal visual cortex, was correlated with LPP, and (3) effective connectivity from amygdala to VVC predicted unpleasant versus neutral decoding accuracy, whereas effective connectivity from ventral frontal cortex to VVC predicted pleasant versus neutral decoding accuracy. These results suggest that affective scenes evoke valence-specific neural representations in retinotopic visual cortex and that these representations are influenced by reentry signals from anterior brain regions.
Computer-aided diagnosis of breast cancer often relies on automatic analysis of histopathology images. Automatic region segmentation in breast cancer images is challenging due to: i) large regional variations, and ii) the high computational cost of pixel-wise segmentation. Deep convolutional neural networks (CNNs) have proven to be effective for image recognition and classification; however, they are often computationally expensive. In this paper, we propose to apply a fast scanning deep convolutional neural network (fCNN) to pixel-wise region segmentation. The fCNN removes the redundant computations in the original CNN without sacrificing its performance. In our experiments it takes only 2.3 seconds to segment an image of size 1000 × 1000. Comparison experiments show that the proposed system outperforms both LBP feature-based and texton-based pixel-wise methods.
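The redundancy that fast scanning removes comes from classifying each pixel by running the network on its own patch: overlapping patches share most of their computation, so a single pass over the whole image produces the same values once. A minimal sketch of this principle for one convolutional filter (illustrative only; the fCNN itself handles multiple layers, pooling, and fragment bookkeeping not shown here):

```python
import numpy as np

def conv_valid(img, k):
    """One 'valid'-mode cross-correlation pass over the whole image."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
k = rng.standard_normal((3, 3))

# Patch-wise route: filter each pixel's 3x3 neighborhood independently,
# recomputing sums that overlapping neighborhoods share.
patchwise = np.array([[np.sum(img[i:i + 3, j:j + 3] * k)
                       for j in range(14)]
                      for i in range(14)])

# Fast-scanning route: one full-image pass yields the identical map.
single_pass = conv_valid(img, k)
print(np.allclose(patchwise, single_pass))  # prints True
```

With pooling layers the mapping is less direct, which is where the fCNN's reorganization of the computation comes in; the example only shows why per-patch evaluation is wasteful.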
The top–down control of attention involves command signals arising chiefly in the dorsal attention network (DAN) in frontal and parietal cortex and propagating to sensory cortex to enable the selective processing of incoming stimuli based on their behavioral relevance. Consistent with this view, the DAN is active during preparatory (anticipatory) attention for relevant events and objects, which, in vision, may be defined by different stimulus attributes including their spatial location, color, motion, or form. How this network is organized to support different forms of preparatory attention to different stimulus attributes remains unclear. We propose that, within the DAN, there exist functional microstructures (patterns of activity) specific for controlling attention based on the specific information to be attended. To test this, we contrasted preparatory attention to stimulus location (spatial attention) and to stimulus color (feature attention), and used multivoxel pattern analysis to characterize the corresponding patterns of activity within the DAN. We observed different multivoxel patterns of BOLD activation within the DAN for the control of spatial attention (attending left vs. right) and feature attention (attending red vs. green). These patterns of activity for spatial and feature attentional control showed limited overlap with each other within the DAN. Our findings thus support a model in which the DAN has different functional microstructures for distinctive forms of top–down control of visual attention.
Feature-based visual attention refers to the preferential selection and processing of visual stimuli based on their nonspatial attributes, such as color or shape. Recent studies have highlighted the inferior frontal junction (IFJ) as a control region for feature but not spatial attention. However, the extent to which IFJ contributes to spatial versus feature attention control remains a topic of debate. We investigated in humans of both sexes the role of IFJ in the control of feature versus spatial attention in a cued visual spatial (attend-left or attend-right) and feature (attend-red or attend-green) attention task using fMRI. Analyzing cue-related fMRI data using both univariate activation and multivoxel pattern analysis, we found the following results in IFJ. First, in line with some prior studies, the univariate activations did not differ between feature and spatial attentional control. Second, in contrast, the multivoxel pattern analysis decoding accuracy was above chance level for feature attention (attend-red vs attend-green) but not for spatial attention (attend-left vs attend-right). Third, while the decoding accuracy for feature attention was above chance level during attentional control in the cue-to-target interval, it was not during target processing. Fourth, the right IFJ and visual cortex (V4) were observed to be functionally connected during feature but not during spatial attention control, and this functional connectivity was positively associated with subsequent attentional selection of targets in V4, as well as with behavioral performance. These results support a model in which IFJ plays a crucial role in top-down control of visual feature but not visual spatial attention.
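The multivoxel pattern analyses reported in these abstracts all follow the same logic: train a classifier on trial-by-trial voxel patterns from two conditions and test whether held-out trials can be labeled above chance. A minimal sketch on synthetic data, using a nearest-centroid classifier with k-fold cross-validation as a stand-in for the authors' actual pipeline (trial counts, voxel counts, and signal strength below are hypothetical):

```python
import numpy as np

def decode_accuracy(X, y, n_folds=5):
    """Cross-validated two-class decoding accuracy with a
    nearest-centroid classifier (a minimal MVPA stand-in)."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(idx, test)
        c0 = X[train][y[train] == 0].mean(axis=0)  # class-0 centroid
        c1 = X[train][y[train] == 1].mean(axis=0)  # class-1 centroid
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        correct += np.sum(pred == y[test])
    return correct / len(y)

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50            # hypothetical sizes
X = rng.standard_normal((n_trials, n_voxels))
y = np.tile([0, 1], n_trials // 2)     # e.g., attend-red vs attend-green
X[y == 1, :10] += 1.0                  # condition-specific pattern in a voxel subset
print(decode_accuracy(X, y))           # well above the 0.5 chance level
```

Above-chance accuracy here means the two conditions evoke distinguishable multivoxel patterns, which is the inference drawn for retinotopic visual areas and the DAN in the abstracts above.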