In this paper, a formal model of associative learning is presented that incorporates representational and computational mechanisms which, as a coherent whole, enable it to make accurate predictions of a wide variety of phenomena that have so far eluded a unified account in learning theory. In particular, the Double Error Dynamic Asymptote (DDA) model introduces: 1) a fully connected network architecture in which stimuli are represented as temporally clustered elements that associate to each other, so that elements of one cluster engender activity in other clusters, naturally implementing associations between neutral stimuli and mediated learning; 2) a predictor error term within the traditional error-correction rule (the double error), which reduces the rate of learning for expected predictors; 3) a revaluation associability rate that operates on the assumption that outcome predictiveness is tracked over time, so that prolonged uncertainty is learned and attention to initially surprising outcomes declines; and, critically, 4) a biologically plausible variable asymptote, which encapsulates the principle of Hebbian learning by producing stronger associations for similar levels of cluster activity. The outputs of a set of simulations of the DDA model are presented along with empirical results from the literature. Finally, the predictive scope of the model is discussed.
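The double error term (point 2) and variable asymptote (point 4) can be illustrated with a minimal sketch of one trial of such an update. This is a hypothetical illustration, not the DDA model's actual equations: the function name, the specific form of the asymptote (here, one minus the absolute activity difference), and all parameter values are assumptions made for exposition.

```python
import numpy as np

def double_error_step(w, x, y, w_pred, lam=1.0, beta=0.1):
    """One illustrative trial of a double-error delta-rule update.

    w      : weights from predictor elements to the outcome
    x      : activities of the predictor elements on this trial
    y      : activity of the outcome on this trial (0.0 if absent)
    w_pred : weights predicting each predictor element from the others
    """
    # First error: classic outcome-side prediction error (Rescorla-Wagner-like)
    outcome_error = lam * y - w @ x
    # Second error: how surprising each predictor element itself is; a
    # well-predicted predictor yields a small value, slowing its learning
    predictor_error = x - w_pred @ x
    # Hebbian variable asymptote (hypothetical form): associations grow
    # strongest when predictor and outcome activity levels are similar
    asymptote = 1.0 - np.abs(x - y)
    return w + beta * predictor_error * outcome_error * asymptote * x
```

In this sketch, a predictor that is itself fully anticipated by other stimuli (predictor_error near zero) contributes little new learning, which is the sense in which the double error reduces the rate of learning for expected predictors.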
Conditioning, the process by which animals learn to associate two or more events, is one of the most influential paradigms in learning theory. It is nevertheless unclear how current models of associative learning can accommodate complex phenomena without ad hoc representational assumptions. We propose to draw on neural network principles to address this problem.
Keywords: associative learning, classical conditioning, computational modelling, real-time double error correction, variable asymptote

Correspondence concerning this article should be addressed to Esther Mondragón, Centre for Computational and Animal Learning Research, 21 Bardwell Road, St Albans, Hertfordshire AL1 1RQ, United Kingdom. Email: e.mondragon@cal-r.org.

bioRxiv preprint first posted online Oct. 28, 2017; doi: http://dx.doi.org/10.1101/210674. The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC-ND 4.0 International license.

Associative learning aims at understanding the precise mechanisms by which humans and animals learn to relate events in their environment. Associative learning has been replicated across numerous species and procedures (Hall, 2002; Pearce & Bouton, 2001; Turkkan, 1989); its neural correlates have been extensively studied (Gomez et al., 2001; Kobayashi & Poo, 2004; Marschner, Kalisch, Vervliet, Vansteenwegen, & Büchel, 2011; Panayi & Killcross, 2014; Roesch, Esber, Li, Daw, & Schoenbaum, 2012); it has proved to be a core learning mechanism in higher-order cognitive processes such as judgment of causality and categorization (Shanks, 1995) and rule learning (Murphy, Mondragón, & Murphy, 2008); it underpins a good number of clinical models (Haselgrove & Hogarth, 2011; Schachtman & Reilly, 2011); and its evolutionary origins are beginning to be elucidated (Ginsburg & Jablonka, 2010). It is thus paramount that we develop comprehensive, accurate models of associative learning. In classical conditioning, a fundamental pillar of associative learning, the repeated co-occurrence of two stimuli (e.g., an odour or a tone), S1 and S2, is...