Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not explored the underlying units over which VSL operates. In a sequence of colored shapes, for example, does VSL operate over each feature dimension independently, or over multidimensional objects in which color and shape are bound together? The studies reported here demonstrate that VSL can be both object-based and feature-based, in systematic ways based on how different feature dimensions covary. For example, when each shape covaried perfectly with a particular color, VSL was object-based: Observers expressed robust VSL for colored-shape sub-sequences at test but failed when the test items consisted of monochromatic shapes or color patches. When shape and color pairs were partially decoupled during learning, however, VSL operated over features: Observers expressed robust VSL when the feature dimensions were tested separately. These results suggest that VSL is object-based, but that sensitivity to feature correlations in multidimensional sequences (possibly another form of VSL) may in turn help define what counts as an object.
Mathematics is a uniquely human capacity. Studies of animals and human infants reveal, however, that this capacity builds on language-independent mechanisms for quantifying small numbers (<4) precisely and large numbers approximately. It is unclear whether animals and human infants can spontaneously tap mechanisms for quantifying large numbers to compute mathematical operations. Moreover, all available work on addition operations in non-human animals has confounded number with continuous perceptual properties (e.g., volume, contour length) that correlate with number. This study shows that rhesus monkeys spontaneously compute addition operations over large numbers, as opposed to continuous extents, and that the limit on this ability is set by the ratio difference between two numbers as opposed to their absolute difference.
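The ratio-dependence claim is a Weber-like signature: discriminability tracks the proportional relation between two quantities rather than their numerical distance. A minimal illustrative sketch follows; the number pairs and the print format are assumptions chosen for illustration, not values from the study.

```python
# Illustration of ratio-based vs. absolute-difference-based discrimination.
# Pairs with the same ratio (e.g., 4 vs 8 and 8 vs 16) are predicted to be
# equally discriminable, even though their absolute differences differ.

pairs = [(4, 8), (8, 16), (16, 32), (9, 10)]

for a, b in pairs:
    ratio = min(a, b) / max(a, b)      # closer to 1.0 = harder to discriminate
    abs_diff = abs(a - b)
    print(f"{a:>2} vs {b:<2}  ratio = {ratio:.2f}  |difference| = {abs_diff}")
```

On this view, 4 vs. 8 and 8 vs. 16 should be equally easy (both 1:2), while 9 vs. 10 should be hard despite its small absolute difference.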
Over repeated exposure to particular visual search displays, subjects are able to implicitly extract regularities that then make search more efficient, a phenomenon known as contextual cueing. Here we explore how the learning involved in contextual cueing is formed, maintained, and updated over experience. During an initial training phase, a group of signal-first subjects searched through a series of predictive displays (where distractor locations were perfectly correlated with the target location), followed with no overt break by a series of unpredictive displays (where repeated contexts were uncorrelated with target locations). A second, noise-first group of subjects encountered the unpredictive displays followed by the predictive displays. Despite the fact that both groups had the same overall exposure to signal and noise, only the signal-first group demonstrated subsequent contextual cueing. This primacy effect indicates that initial experience can result in hypotheses about regularities in displays, or the lack thereof, which then become resistant to updating. The absence of regularities in early stages of training even blocked observers from learning predictive regularities later on.

A major goal of visual processing is to recover information about the structure of the natural environment. Such information comes in several forms. Perhaps most intuitively, visual processing involves recovering the local visual features of objects, and the way those objects and features are arranged into visual scenes. A considerable amount of regularity in visual input, however, is also statistically distributed in both space and time. Accordingly, the visual system also appears to automatically extract regularities in spatial layouts and temporal sequences, including subtle regularities that may not be available to conscious report. Here we focus on determining just how and when such learning is triggered, maintained, and updated over time, focusing on contextual cueing.
Locating objects in the local environment is essential for successful navigation in a complex world, and visual search can operate across a wide variety of environmental conditions and over a remarkable repertoire of useful feature combinations. The central functionality of visual search has attracted considerable empirical investigation and theoretical consideration over the past several decades. Search experiments typically ask participants to locate and respond to a predefined target object in a field of distractors. Designs of this type are well suited for exploring the nature of attentional selection and the time course of processing a variety of simple and complex stimuli. Much of what is known today about visual search has been deduced from search slopes. By adding more distractors to a display containing a single target and observing the corresponding increase in average response time (RT), it is possible to infer the average amount of processing time for each additional distractor. Prominent early theories of attention and visual search made extensive use of evidence from search slopes (e.g., Duncan & Humphreys, 1989; Treisman & Gelade, 1980). However, there remain important questions about search processing that may not be readily addressed using search slopes alone. All processing prior to target identification and response gets lumped under the same RT in traditional studies of visual search, but new methods and analyses can provide a window into search processing prior to target detection. The phenomenon of rapid resumption (Lleras, Rensink, & Enns, 2005), to be discussed at length below, may provide one such source of converging evidence and additional inference.

Visual search requires both attentional selection and several types of memory (Kristjánsson, 2000; Peterson, Kramer, Wang, Irwin, & McCarley, 2001; Shore & Klein, 2000; Woodman & Chun, 2006). It has been suggested that in order to perform typical search tasks, a target template must be held in working memory (Duncan & Humphreys, 1989). In fact, top-down influences on search are essential in most models of visual search. This has led researchers to investigate the possibility of shared resources between visual search and working memory tasks, using dual-task designs. Loading executive working memory impairs visual search efficiency (Han & Kim, 2004), as does loading spatial working memory (Oh & Kim, 2004; Woodman & Luck, 2004). However, actively remembering certain simple feature details, such as color patches, does not seem to affect search slopes (Woodman, Vogel, & Luck, 2001). There is reason to think that, independently of resource sharing, memory impacts search in the form of accumulated (preliminary) evidence within a given trial. At a minimum, extracting the identity of the target must cross a threshold of recognition, and the processing prior to crossing this threshold qualifies as preliminary evidence accumulated. Rapid resumption provides a new way to study the information that accumulates about targets and distractors prior to target detection.
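Because much of the inference described above rests on search slopes, a minimal sketch of how such a slope is typically estimated may be helpful. The set sizes, RT values, and variable names below are illustrative assumptions, not data from the studies cited; the slope is simply the best-fitting per-item increase in mean RT with display size.

```python
# Minimal sketch: estimating a search slope by fitting a line to mean
# response time (RT) as a function of display set size.
# The RT values below are hypothetical placeholders, not real data.

import numpy as np

set_sizes = np.array([4, 8, 12, 16])          # number of items in the display
mean_rts_ms = np.array([520, 610, 705, 790])  # hypothetical mean correct RTs (ms)

# np.polyfit with degree 1 returns [slope, intercept] of the least-squares line.
slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rts_ms, 1)

print(f"Search slope: {slope_ms_per_item:.1f} ms/item")  # ~22.6 ms/item here
print(f"Intercept:    {intercept_ms:.0f} ms")            # baseline (non-search) time
```

A shallow slope is conventionally read as efficient (parallel-like) search and a steep slope as inefficient (serial-like) search, which is exactly the kind of summary measure that rapid resumption is meant to complement.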