Our mental representation of object categories is hierarchically organized, and our rapid, seemingly effortless categorization ability is crucial for daily behavior. Here, we examine the responses of a large number (>600) of neurons in monkey inferior temporal (IT) cortex to a large set (>1,000) of natural and artificial object images. During the recordings, the monkeys performed a passive fixation task. We found that the categorical structure of objects is represented by the pattern of activity distributed over the cell population. Animate and inanimate objects formed distinguishable clusters in the population code. The global category of animate objects was divided into bodies, hands, and faces. Faces were divided into primate and nonprimate faces, and the primate-face group was divided into human and monkey faces. Bodies of humans, birds, and four-limbed animals clustered together, whereas lower animals such as fish, reptiles, and insects formed another cluster. Thus, the cluster analysis showed that IT population responses reconstruct a large part of our intuitive category structure, including the global division into animate and inanimate objects and further hierarchical subdivisions of animate objects. The representation of categories was distributed in several respects; for example, the similarity of response patterns to stimuli within a category was maintained both by the cells that responded maximally to the category and by the cells that responded weakly to it. These results advance our understanding of the nature of the IT neural code, suggesting an inherently categorical representation that comprises a range of categories, including the amply investigated face category.
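The cluster analysis referred to above can be illustrated with a short, hypothetical sketch: stimuli are grouped by the similarity of the population response vectors they evoke. The placeholder data, distance metric, and linkage method below are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# responses[i, j]: mean firing rate of neuron j to stimulus i
# (placeholder data standing in for >1,000 stimuli x >600 neurons)
responses = rng.gamma(shape=2.0, scale=5.0, size=(100, 60))

# Distance between two stimuli = 1 - correlation of their population vectors
dist = pdist(responses, metric="correlation")

# Agglomerative clustering; the dendrogram exposes nested category structure
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")  # top-level split
print(labels[:10])
```

Cutting the resulting tree at successively finer levels recovers nested splits of the kind described: animate versus inanimate at the top, then bodies, hands, and faces, and so on.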
In everyday life, we efficiently find objects in the world by moving our gaze from one location to another. The efficiency of this process comes from ignoring items that are dissimilar to the target and remembering which target-like items have already been examined. We trained two animals on a visual foraging task in which they had to find a reward-loaded target among five task-irrelevant distractors and five potential targets. We found that both animals performed the task efficiently, ignoring the distractors and rarely examining a particular target twice. We recorded the single-unit activity of 54 neurons in the lateral intraparietal area (LIP) while the animals performed the task. The responses of the neurons differentiated between targets and distractors throughout the trial. Further, the responses marked off targets that had been fixated by a reduction in activity. This reduction acted like inhibition of return in saliency map models: items that had been fixated would no longer be represented by activity high enough to draw an eye movement. This reduction could also be seen as a correlate of reward expectancy; after a target had been identified as not containing the reward, the activity was reduced. Within a trial, responses to the remaining targets did not increase as they became more likely to yield the reward, suggesting that only activity related to an event is updated on a moment-by-moment basis. Together, our data show that all the neural activity required to guide efficient search is present in LIP. Because LIP activity is known to correlate with saccade goal selection, we propose that LIP plays a significant role in the guidance of efficient visual search.
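The inhibition-of-return mechanism invoked above can be sketched in a few lines: once an item is fixated, its activity on the priority map is suppressed so it can no longer win the competition for the next saccade. All values here are illustrative assumptions, not recorded data.

```python
import numpy as np

# Priority map over 6 items: 3 targets (high activity) and 3 distractors (low)
priority = np.array([0.90, 0.85, 0.80, 0.20, 0.15, 0.10])

fixation_order = []
for _ in range(3):
    chosen = int(np.argmax(priority))   # saccade goes to the peak of the map
    fixation_order.append(chosen)
    priority[chosen] *= 0.1             # inhibition of return: suppress the fixated item

print(fixation_order)  # [0, 1, 2] -- each target examined once, none revisited
```

Because suppression drops a fixated target below even the distractor level, the model reproduces the two behavioral signatures reported: distractors are never fixated, and targets are rarely examined twice.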
It has been suggested that one way we create a stable percept of the visual world across multiple eye movements is to pass information from one set of neurons to another around the time of each eye movement. Previous studies have shown that some neurons in the lateral intraparietal area (LIP) exhibit anticipatory remapping: these neurons produce a visual response to a stimulus that will enter their receptive field after a saccade, but before it actually does so. LIP responses during fixation are thought to represent attentional priority, behavioral relevance, or value. In this study, we test whether the remapped response represents this attentional priority by examining the activity of LIP neurons while animals perform a visual foraging task. We find that the population responds more to a target than to a distractor before the saccade that brings the stimulus into the receptive field has even begun. Within 20 ms of the saccade ending, the responses in almost a third of LIP neurons closely resemble the responses that will emerge during stable fixation. Finally, we show that in these neurons, and in the population as a whole, this remapping occurs for stimuli at all locations across the visual field and for both long and short saccades. We conclude that this complete remapping of attentional priority across the visual field could underlie spatial stability across saccades.
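One plausible way to quantify the resemblance described above is to correlate each neuron's remapped response, measured shortly after saccade end, with its stable-fixation response across stimuli and locations. This is an assumed analysis for illustration, not necessarily the authors' exact method; the window lengths, array names, and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_conditions = 40, 20

# stable[i, k]: neuron i's fixation response to condition k (target/distractor x location)
stable = rng.gamma(2.0, 5.0, size=(n_neurons, n_conditions))
# remapped[i, k]: response within ~20 ms of saccade end, before stable vision resumes
remapped = 0.8 * stable + rng.normal(0, 2.0, size=(n_neurons, n_conditions))

# Per-neuron correlation between the remapped and stable response patterns
r = np.array([np.corrcoef(remapped[i], stable[i])[0, 1] for i in range(n_neurons)])
print(f"{np.mean(r > 0.7):.0%} of neurons show remapped responses resembling fixation responses")
```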
When exploring a visual scene, some objects perceptually pop out because of a difference in color, shape, or size. This bottom-up information is an important part of many models describing the allocation of visual attention. It has been hypothesized that the lateral intraparietal area (LIP) acts as a "priority map," integrating bottom-up and top-down information to guide the allocation of attention. Despite a large literature describing top-down influences in LIP, the presence of a pure salience response to a salient stimulus defined by its static features alone has not been reported. We compared LIP responses to colored salient stimuli and to distractors in a passive fixation task. Many LIP neurons responded preferentially to one of the two colored stimuli, yet the mean responses to the salient stimuli were significantly higher than those to the distractors, independent of the features of the stimuli. These enhanced responses were significant within 75 ms, and the mean responses to salient and distractor stimuli were tightly correlated, suggesting a simple gain control. We propose that a pure salience signal rapidly appears in LIP by collating salience signals from earlier visual areas. This signal contributes to the creation of a priority map, which is used to guide attention and saccades.
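The "simple gain control" interpretation can be made concrete with a small sketch: each neuron's mean response to the salient stimulus is modeled as a multiplicative scaling of its response to the distractor, so across neurons the two are tightly correlated with a slope above 1. The data and gain value below are illustrative assumptions, not the recorded results.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 50

distractor = rng.gamma(2.0, 10.0, size=n_neurons)    # mean rates to the distractor
gain = 1.4                                           # assumed multiplicative gain
salient = gain * distractor + rng.normal(0, 2.0, n_neurons)

# Tight correlation plus a slope > 1 is the signature of multiplicative gain control
slope, intercept = np.polyfit(distractor, salient, deg=1)
r = np.corrcoef(distractor, salient)[0, 1]
print(f"slope = {slope:.2f}, r = {r:.2f}")
```

A purely additive salience signal would instead shift the intercept while leaving the slope near 1, so the fitted slope distinguishes the two accounts.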