Learning complex structures from stimuli requires extended exposure and often repeated observation of the same stimuli. Learning induces stimulus-dependent changes in specific performance measures. The same performance measures, however, can also be affected by processes that arise because of extended training (e.g., fatigue) but are otherwise independent of learning. Thus, a thorough assessment of the properties of learning can only be achieved by identifying and accounting for the effects of such processes. Reactive inhibition is a process that modulates behavioral performance measures on a wide range of time scales and often has effects opposite to those of learning. Here we develop a tool to disentangle the effects of reactive inhibition from learning in the context of an implicit learning task, the alternating serial reaction time (ASRT) task. Our method highlights that the magnitude of the effect of reactive inhibition on measured performance is larger than that of the acquisition of statistical structure from the stimuli. We show that the effect of reactive inhibition can be identified not only in population measures but also in the performance of individuals, revealing varying degrees of contribution of reactive inhibition. Finally, we demonstrate that a higher proportion of behavioral variance can be explained by learning once the effects of reactive inhibition are eliminated. These results demonstrate that reactive inhibition has a fundamental effect on behavioral performance, one that can be identified in individual participants and separated from other cognitive processes such as learning.
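The abstract does not spell out the decomposition itself, but the core idea can be illustrated with a minimal sketch. Assuming, purely for illustration, an additive model in which response times combine a slow learning trend that decays across the session with a reactive-inhibition component that builds up within each block and resets at rest breaks, the two contributions can be separated by regression. All parameter values, time constants, and the block layout below are hypothetical, not those of the ASRT study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical session layout: 20 blocks of 85 trials (assumed, for illustration).
n_blocks, trials_per_block = 20, 85
t = np.arange(n_blocks * trials_per_block)   # global trial index
within = t % trials_per_block                # trial index within the current block

# Simulated components (all amplitudes and time constants are made up):
learning = 80.0 * np.exp(-t / 600.0)              # RT cost that shrinks with practice
inhibition = 30.0 * (1 - np.exp(-within / 25.0))  # builds up within a block, resets at rest
rt = 350.0 + learning + inhibition + rng.normal(0.0, 15.0, t.size)

# Separate the two contributions by regressing RT on the assumed basis functions.
X = np.column_stack([np.ones(t.size),
                     np.exp(-t / 600.0),
                     1 - np.exp(-within / 25.0)])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
print("baseline, learning amplitude, inhibition amplitude:", np.round(coef, 1))
```

Under this toy model, the fitted amplitudes recover the simulated learning and inhibition contributions, mirroring the paper's point that within-block inhibition can account for a substantial share of raw performance measures.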
It has been extensively documented that human memory exhibits a wide range of systematic distortions, which have been associated with resource constraints. Resource constraints on memory can be formalised in the normative framework of lossy compression; however, traditional lossy compression algorithms result in qualitatively different distortions from those found in experiments with humans. We argue that the form of these distortions is characteristic of relying on a generative model adapted to the environment for compression. We show that this semantic compression framework can provide a unifying explanation of a wide variety of memory phenomena. We harness recent advances in learning deep generative models, which yield powerful tools for approximating generative models of complex data. We use three datasets, chess games, natural text, and hand-drawn sketches, to demonstrate the effects of semantic compression on memory performance. Our model accounts for memory distortions related to domain expertise, gist-based distortions, contextual effects, and delayed recall.

Author summary: Human memory performs surprisingly poorly in many everyday tasks, as richly documented in laboratory experiments. While constraints on memory resources necessarily imply a loss of information, it is possible to do well or badly relative to the available memory resources. In this paper we recruit information theory, which establishes how to optimally lose information based on prior and complete knowledge of environmental statistics. For this, we address two challenges. First, the environmental statistics are not known to the brain; rather, they have to be learned over time from limited observations. Second, information theory does not specify how different distortions of original experiences should be penalised. We tackle these challenges by assuming that a latent variable generative model of the environment is maintained in semantic memory. We show that compression of experiences through a generative model gives rise to systematic distortions that qualitatively correspond to a diverse range of observations in the experimental literature.

It has long been known that human memory is far from an exact reinstatement of past sensory experience. In fact, memory has been found to be surprisingly poor for even very frequently encountered objects such as coins [1], traffic signs [2], or brand logos [3]. Rather than being random noise, however, the distortions in recalled experience show robust and structured biases. A great number of experiments have shed light on systematic ways in which the distortions in recalled memories can be influenced both by past and future information, as well as by the context of encoding and recall. Canonical examples of past knowledge influencing recall include the experiments of Bartlett [4], where, for folk tales recalled by subjects of a non-matching cultural background, the recalled versions were found to be modified in ways that made the stories more consistent with the subjects' cul...
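A minimal, self-contained illustration of the gist-based distortion described above: if semantic memory is treated as a Gaussian generative model and recall as the posterior mean given a noisy, capacity-limited trace, reconstructions are systematically pulled toward the prior. This toy Gaussian setup is an assumption chosen for clarity; the paper itself works with deep generative models of chess games, text, and sketches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy generative model of the environment: features are Gaussian.
prior_mean, prior_var = 0.0, 1.0
trace_var = 2.0   # large encoding noise stands in for a tight memory budget

x = rng.normal(prior_mean, np.sqrt(prior_var), 1000)      # true experiences
trace = x + rng.normal(0.0, np.sqrt(trace_var), x.size)   # lossy stored traces

# Recall as the posterior mean: shrink each trace toward the prior mean.
w = prior_var / (prior_var + trace_var)
recall = w * trace + (1.0 - w) * prior_mean

print("mean |error| of raw traces   :", np.abs(trace - x).mean().round(3))
print("mean |error| of model recall :", np.abs(recall - x).mean().round(3))
print("shrinkage toward the prior   :", round(1.0 - w, 3))
```

The shrinkage both reduces the average recall error and introduces a systematic bias toward the prototype, which is the qualitative signature of compressing experiences through a generative model.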
Internal models capture the regularities of the environment and are central to understanding how humans adapt to environmental statistics. In general, the correct internal model is unknown to observers; instead, they rely on an approximate model that is continually adapted throughout learning. Experimenters, however, typically assume an ideal observer model, which captures the stimulus structure but ignores the diverging hypotheses that humans form during learning. We combine non-parametric Bayesian methods and probabilistic programming to infer rich and dynamic individualised internal models from response times. We demonstrate that the approach is capable of characterizing the discrepancy between the internal model maintained by individuals and the ideal observer model, and of tracking the evolution of the ideal observer model's contribution to the internal model throughout training. In particular, in an implicit visuomotor sequence learning task the identified discrepancy revealed an inductive bias that was consistent across individuals but varied in strength and persistence.
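As a rough illustration of reading an internal model out of response times, the sketch below assumes (hypothetically; the paper's actual machinery is non-parametric Bayesian inference with probabilistic programming) that response times are affine in the surprisal of each stimulus under a subjective model, taken here to be a mixture of the ideal observer and a maximally uncertain alternative. The mixture weight is then recovered from simulated response times by grid search.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2000
p_ideal = rng.uniform(0.05, 0.95, n)   # ideal-observer predictive probabilities (simulated)
p_alt = np.full(n, 0.5)                # a maximally uncertain alternative model

# Simulate response times under a subjective model that mixes the two.
w_true = 0.7
p_subj = w_true * p_ideal + (1.0 - w_true) * p_alt
rt = 300.0 + 50.0 * -np.log(p_subj) + rng.normal(0.0, 10.0, n)

def fit_error(w):
    """Residual error of the affine RT-surprisal fit for a candidate weight w."""
    p = w * p_ideal + (1.0 - w) * p_alt
    X = np.column_stack([np.ones(n), -np.log(p)])
    _, residuals, *_ = np.linalg.lstsq(X, rt, rcond=None)
    return residuals[0]

grid = np.linspace(0.01, 0.99, 99)
w_hat = grid[np.argmin([fit_error(w) for w in grid])]
print("recovered ideal-observer weight:", round(w_hat, 2))
```

In this toy setting the recovered weight tracks the ground truth, conveying the gist of the approach: how strongly an individual's behavior reflects the ideal observer is itself a quantity that can be estimated from response times.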