Visual working memory (VWM) refers to the ability to encode, store, and retrieve visual information. The two prevailing theories that describe VWM assume that information is stored either in discrete slots or within a shared pool of resources. However, there is not yet a good understanding of the neural mechanisms that would underlie such theories. To address this gap, we provide a computationally realized neural account that uses a pool of shared neurons to store information about one or more distinct stimuli. The binding pool model is a neural network that is essentially a hybrid of the slot and resource theories. It describes how information can be stored and retrieved from a pool of shared resources using a type/token architecture (Bowman & Wyble in Psychological Review 114(1), 38-70, 2007; Kanwisher in Cognition 27, 117-143, 1987; Mozer in Journal of Experimental Psychology: Human Perception and Performance 15(2), 287-303, 1989). The model can store multiple distinct objects, each containing binding links to one or more features. The binding links are stored in a pool of shared resources and, thus, produce mutual interference as memory load increases. Given a cue, the model retrieves a specific object and then reconstructs other features bound to that object, along with a confidence metric. The model can simulate data from continuous report and change detection paradigms and generates testable predictions about the interaction of report accuracy, confidence, and stimulus similarity. The testing of such predictions will help to identify the boundaries of shared resource theories, thereby providing insight into the roles of ensembles and context in explaining our ability to remember visual information.
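The type/token binding mechanism described in this abstract can be sketched in code. The following is a deliberately simplified illustration of the core idea (features bound to object tokens through conjunctive units in a shared pool), not the published implementation; the layer sizes, connection probability, and function names are all our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEAT, N_TOKENS, N_POOL = 360, 4, 2000  # hypothetical layer sizes
P_CONN = 0.1                             # hypothetical connection probability

# Random binary connectivity from feature units and token units into the shared pool
feat_conn = rng.random((N_FEAT, N_POOL)) < P_CONN
tok_conn = rng.random((N_TOKENS, N_POOL)) < P_CONN

pool = np.zeros(N_POOL)  # the shared binding pool; all objects compete for it

def store(feature, token):
    # A binding link is the set of pool units connected to BOTH the active
    # feature and the object's token; storing more objects overlaps in the
    # pool, which is what produces mutual interference as load increases.
    pool[feat_conn[feature] & tok_conn[token]] = 1.0

def retrieve(token):
    # Cue with a token: gate the pool by the token's connections, then
    # project the surviving activity back onto the feature layer.  The
    # winning feature is the estimate; its evidence can serve as a crude
    # confidence metric.
    evidence = (pool * tok_conn[token]) @ feat_conn.T
    return int(np.argmax(evidence)), float(evidence.max())

# Store two objects (e.g., two color values bound to two tokens)
store(feature=120, token=0)
store(feature=300, token=1)
est0, conf0 = retrieve(0)
est1, conf1 = retrieve(1)
```

Because both objects' binding links occupy the same pool, retrieval evidence for one object is partially contaminated by the other, giving graded interference rather than all-or-none slots.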
Working memory is a limited resource. To further characterize its limitations, it is vital to understand exactly what is encoded about a visual object beyond the "relevant" features probed in a particular task. We measured the memory quality of a task-irrelevant feature of an attended object by coupling a delayed estimation task with a surprise test. Participants were presented with a single colored arrow and were asked to retrieve just its color for the first half of the experiment before unexpectedly being asked to report its direction. Mixture modeling of the data revealed that participants had highly variable precision on the surprise test, indicating a coarse-grained memory for the irrelevant feature. Following the surprise test, all participants could precisely recall the arrow's direction; however, this improvement in direction memory came at a cost in precision for color memory even though only a single object was being remembered. We attribute these findings to varying levels of attention to different features during memory encoding.
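The mixture modeling mentioned in this abstract is commonly formalized as a two-component model of delayed-estimation errors: responses either come from a von Mises distribution around the target (with concentration reflecting precision) or from uniform random guessing. A minimal sketch, using simulated data with hypothetical parameters and a crude grid-search fit rather than the authors' actual analysis:

```python
import numpy as np

def mixture_nll(errors, g, kappa):
    """Negative log-likelihood of the two-component mixture:
    with probability (1 - g) the response error (in radians) follows a
    von Mises centered on the target with concentration kappa; with
    probability g it is a uniform random guess."""
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
    lik = (1 - g) * vm + g / (2 * np.pi)
    return -np.log(lik).sum()

# Simulated responses: 70% precise reports, 30% guesses (hypothetical values)
rng = np.random.default_rng(1)
n = 1000
remembered = rng.random(n) < 0.7
errors = np.where(remembered,
                  rng.vonmises(0.0, 8.0, n),
                  rng.uniform(-np.pi, np.pi, n))

# Grid-search maximum likelihood over guess rate g and concentration kappa
gs = np.linspace(0.0, 0.9, 46)
ks = np.linspace(0.5, 20.0, 40)
nlls = [[mixture_nll(errors, g, k) for k in ks] for g in gs]
gi, ki = divmod(int(np.argmin(nlls)), len(ks))
g_hat, k_hat = gs[gi], ks[ki]
```

In this framework, a "coarse-grained memory" for the irrelevant feature would show up as a low concentration (kappa) on the surprise test, distinct from a high guess rate (g), which would instead indicate no memory at all.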
Conventional theories of cognition focus on attention as the primary determinant of working memory contents. However, here we show that about one third of observers could not report the color of a ball that they had just been specifically attending to for 5-59 s. This counterintuitive result was obtained when observers repeatedly counted the passes of one of two differently colored balls among actors in a video and were then unexpectedly asked to report the color of the ball that they had just tracked. Control trials demonstrated that observers' color-report performance increased dramatically once they expected to report it. Critically, most of the incorrect color responses were the distractor ball's color, suggesting memory storage without binding. Therefore, these results, together with other recent findings, argue against two opposing theories: object-based encoding and feature-based encoding. Instead, we propose a new hypothesis: the failure to report color arises because participants may activate only the color representation in long-term memory without binding it to an object representation in working memory.
A major thread of visual cognition has been to explore the characteristics of the attention system by presenting two targets and observing how well they can both be reported as a function of their temporal and spatial separation. This method has illuminated effects such as the attentional blink, the attentional dwell time, competitive interference, sparing, temporal order errors, and localized attentional interference. However, these different effects are typically explored separately, using quite distinct experimental paradigms. In an effort to consolidate our understanding of these various effects into a more comprehensive theory of attention, we present a new method for measuring spatial gradients of interference at different temporal separations between two targets without creating specific expectations about target location. The observed data support theories that there are multiple sources of interference within the visual system. A theoretical model is proposed that illustrates how three distinct forms of interference could arise through the processes of identifying, attending, and encoding visual targets.