The Selective Tuning model of visual attention (Tsotsos, 1990) proposes that the focus of attention is surrounded by an inhibitory zone, yielding a center-surround attentional distribution. This attentional suppressive surround inhibits irrelevant information located close to attended information in physical space (e.g., Cutzu and Tsotsos, 2003; Hopf et al., 2010) or in feature space (e.g., Tombu and Tsotsos, 2008; Störmer and Alvarez, 2014; Bartsch et al., 2017). In Experiment 1, we investigated the interaction between location-based and feature-based surround suppression, hypothesizing that attentional surround suppression would be maximal when spatially adjacent stimuli are also represented closely within a feature map. Our results demonstrate that perceptual discrimination is worst when two similar orientations are presented in close proximity to each other, suggesting an interplay between the two surround suppression mechanisms. The Selective Tuning model also predicts that the size of the attentional suppressive surround is determined by the receptive field size of the neuron that optimally processes the attended information. Because receptive field size is tightly associated with stimulus size and eccentricity, Experiment 2 tested the hypothesis that the attentional suppressive surround would grow as stimulus size and eccentricity increase, corresponding to the increase in receptive field size. We show that stimulus eccentricity, but not stimulus size, modulates the size of the attentional suppressive surround. These results hold for both low- and high-level features (e.g., orientations and human faces). Overall, the present study supports the existence of the attentional suppressive surround and reveals new properties of this selection mechanism.
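The interaction hypothesized in Experiment 1 can be illustrated with a toy center-surround ("Mexican-hat") weighting over both spatial distance and feature distance from the attended stimulus. The difference-of-Gaussians form, the additive combination rule, and every parameter value below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def surround_weight(dist, sigma_center, sigma_surround, k=0.6):
    """Center-surround attentional weight: a narrow excitatory
    center minus a broader inhibitory surround (toy parameters)."""
    center = np.exp(-dist**2 / (2 * sigma_center**2))
    surround = k * np.exp(-dist**2 / (2 * sigma_surround**2))
    return center - surround

# Distances from the attended stimulus in space (deg of visual angle)
# and in feature space (deg of orientation difference).
space_d = np.linspace(0.0, 10.0, 201)
feat_d = np.linspace(0.0, 90.0, 201)

# Additive combination of the spatial and featural surrounds
# (an assumption made purely for illustration).
W = (surround_weight(space_d, 1.5, 4.0)[:, None]
     + surround_weight(feat_d, 15.0, 45.0)[None, :])

# The joint weight is lowest at intermediate, nonzero distances on both
# dimensions: suppression is deepest for stimuli that are nearby in
# space AND similar in feature, mirroring the worst discrimination
# observed for similar orientations presented in proximity.
i, j = np.unravel_index(np.argmin(W), W.shape)
```

Under these toy parameters, the deepest trough of `W` sits away from both the attended location (`space_d[i] > 0`) and the attended feature value (`feat_d[j] > 0`), rather than at the focus itself.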
Background: Feature-based attention prioritizes the processing of the attended feature while strongly suppressing the processing of nearby ones. This creates a non-linearity, or "attentional suppressive surround," predicted by the Selective Tuning model of visual attention. However, previously reported effects of feature-based attention on neuronal responses are linear, e.g., feature-similarity gain. Here, we investigated this apparent contradiction using neurophysiological and psychophysical approaches.

Results: Responses of motion direction-selective neurons in area MT/MST of monkeys were recorded during a motion task. When attention was allocated to a stimulus moving in the neurons' preferred direction, response tuning curves showed a minimum for directions 60–90° away from the preferred direction, an attentional suppressive surround. This effect was modeled via the interaction of two Gaussian fields representing narrowly tuned excitatory and widely tuned inhibitory inputs into a neuron, with feature-based attention predominantly increasing the gain of the inhibitory inputs. Using a motion repulsion paradigm in humans, we further showed that feature-based attention produces a similar non-linearity in motion discrimination performance.

Conclusions: Our results link the gain modulation of neuronal inputs and tuning curves, examined through the feature-similarity-gain lens, to the attentional impact on neural population responses predicted by the Selective Tuning model, providing a unified framework for the documented effects of feature-based attention on neuronal responses and behavior.
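The two-Gaussian account in the Results can be sketched as a difference-of-Gaussians tuning curve in which attention multiplies the gain of the broad inhibitory input. All amplitudes, widths, and gain values below are illustrative assumptions, not the parameters fitted to the MT/MST recordings:

```python
import numpy as np

def tuning(direction_deg, inhib_gain=1.0):
    """Response of a model neuron tuned to 0 deg: a narrowly tuned
    excitatory Gaussian minus a broadly tuned inhibitory Gaussian.
    Feature-based attention to the preferred direction is modeled
    as a multiplicative gain (inhib_gain > 1) on the inhibitory
    input, as the abstract describes. Parameters are toy values."""
    d = np.asarray(direction_deg, dtype=float)
    excitation = 1.0 * np.exp(-d**2 / (2 * 30.0**2))
    inhibition = inhib_gain * 0.3 * np.exp(-d**2 / (2 * 80.0**2))
    return 0.1 + excitation - inhibition

directions = np.arange(-180, 181, 5)
unattended = tuning(directions, inhib_gain=1.0)
attended = tuning(directions, inhib_gain=2.0)

# With attention, the tuning curve dips most strongly at intermediate
# off-preferred directions (roughly 60-90 deg away in this toy model),
# reproducing a suppressive surround rather than a purely linear
# feature-similarity gain.
```

Because the inhibitory field is broader than the excitatory one, raising its gain carves a trough at intermediate direction offsets while leaving the far flanks comparatively spared, which is the non-linearity at issue.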
The current study investigated whether training improves visual working memory capacity using individualized adaptive training methods. Two groups of participants were trained on two targeted processes: filtering and consolidation. Before and after training, all participants, including a no-training control group, performed a lateralized change detection task in which one side of the visual display had to be selected and the other side ignored. Across the ten-day training period, the trained participants performed two modified versions of the lateralized change detection task. The number of distractors and the duration of the consolidation period were adjusted individually to increase the difficulty of the filtering and consolidation training, respectively. Results showed that the degree of improvement during training was positively correlated with the increase in memory capacity, and training-induced benefits were most evident at larger set sizes in the filtering training group. These results suggest that visual working memory training is effective, especially when it is adaptive, individualized, and targeted.
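The individualized adjustment described above can be illustrated with a toy staircase for the filtering training, in which the distractor count rises after a correct trial and falls after an error. The one-up/one-down rule, the simulated performance model, and all numbers are assumptions made for illustration; the study's actual adaptive procedure may differ:

```python
import random

def filtering_staircase(n_trials=200, start_distractors=2, seed=0):
    """Toy adaptive rule: add a distractor after a correct trial,
    remove one after an error, keeping each simulated participant
    near their own filtering-difficulty threshold."""
    rng = random.Random(seed)
    n = start_distractors
    history = []
    for _ in range(n_trials):
        # Stand-in performance model: accuracy falls as distractors
        # are added (purely illustrative, not participant data).
        p_correct = max(0.2, 0.95 - 0.05 * n)
        correct = rng.random() < p_correct
        n = max(0, n + (1 if correct else -1))
        history.append(n)
    return history

history = filtering_staircase()
```

An analogous staircase over the consolidation-period duration would implement the consolidation training; in both cases the difficulty variable drifts toward each individual's threshold rather than following a fixed schedule.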
It is well known that simple visual tasks, such as object detection or categorization, can be performed within a short period of time, suggesting that feed-forward visual processing is sufficient. However, more complex visual tasks, such as fine-grained localization, may require the high-resolution information available at early levels of the visual hierarchy. To access this information in a top-down manner, feedback processing must traverse several stages of the hierarchy, and each step in this traversal takes processing time. In the present study, we compared the processing time required to complete object categorization and localization by varying the presentation duration and complexity of natural scene stimuli. We hypothesized that performance would be asymptotic at short presentation durations when feed-forward processing suffices, whereas it would improve gradually with longer presentations if the tasks rely on feedback processing. In Experiment 1, where simple images were presented, both object categorization and localization performance improved sharply up to 100 ms of presentation and then leveled off. These results replicate previously reported rapid categorization effects but do not support a role for feedback processing in localization, indicating that feed-forward processing enables coarse localization in relatively simple visual scenes. In Experiment 2, the same tasks were performed with more attention-demanding and ecologically valid images as stimuli. Unlike in Experiment 1, both object categorization performance and localization precision improved gradually as stimulus presentation duration lengthened. This finding suggests that complex visual tasks requiring visual scrutiny call for top-down feedback processing.