Reconsolidation postulates that reactivation of a memory trace renders it susceptible to disruption by treatments similar to those that impair initial memory consolidation. Despite evidence that implicit, or non-declarative, human memories can be disrupted at retrieval, a convincing demonstration of selective impairment in the retrieval of target episodic memories following reactivation is lacking. In human subjects, we demonstrate that when reactivation of a verbal memory through successful retrieval is immediately followed by an emotionally aversive stimulus, its later recall is significantly impaired. This effect is time-dependent and persists for at least 6 days. Thus, in line with a reconsolidation hypothesis, established human episodic memories can be selectively impaired following their retrieval.
Production and comprehension have long been viewed as inseparable components of language. The study of vision, by contrast, has centered almost exclusively on comprehension. Here we investigate drawing, the most basic form of visual production. How do we convey concepts in visual form, and how does refining this skill, in turn, affect recognition? We developed an online platform for collecting large amounts of drawing and recognition data, and applied a deep convolutional neural network model of visual cortex, trained only on natural images, to explore the hypothesis that drawing recruits the same abstract feature representations that support natural visual object recognition. Consistent with this hypothesis, higher layers of this model captured the abstract features of both drawings and natural images most important for recognition, and people learning to produce more recognizable drawings of objects exhibited enhanced recognition of those objects. These findings could explain why drawing is so effective for communicating visual concepts; they suggest novel approaches for evaluating and refining conceptual knowledge, and they highlight the potential of deep networks for understanding human learning.
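To make the feature-comparison idea concrete, here is a minimal sketch of how higher-layer activations of a pretrained convolutional network could be compared for a drawing and a photograph of the same object. It assumes torchvision's ImageNet-trained VGG-16 as a stand-in for the visual cortex model; the architecture, layer index, and image file names used below are illustrative assumptions, not details taken from the abstract.

# Minimal sketch: compare higher-layer CNN features of a drawing and a
# photo of the same object. VGG-16 pretrained on ImageNet is assumed as a
# stand-in for the paper's visual cortex model; the layer cutoff and the
# image files ("dog_drawing.png", "dog_photo.jpg") are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def layer_features(img_path: str, last_layer: int = 30) -> torch.Tensor:
    """Return flattened activations from a high convolutional layer."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        # vgg.features[:30] runs the network through the last conv block
        feats = vgg.features[:last_layer](x)
    return feats.flatten()

drawing = layer_features("dog_drawing.png")  # hypothetical file
photo = layer_features("dog_photo.jpg")      # hypothetical file

# High cosine similarity at upper layers, relative to lower ones, would be
# consistent with drawings and natural images sharing the abstract feature
# representations that support recognition.
print(F.cosine_similarity(drawing, photo, dim=0).item())

Under these assumptions, repeating the comparison at several layer cutoffs would show whether representational overlap between drawings and photographs grows toward the top of the network, mirroring the abstract's claim about higher layers.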
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM (1) enhances VLTM for just that feature (the practice-benefit hypothesis), (2) enhances VLTM for all features (the object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (the feature-competition hypothesis). The results supported the feature-competition hypothesis: the representation of an object in VLTM was biased toward features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.