2017
DOI: 10.1016/j.beproc.2017.08.004
Methods of comparing associative models and an application to retrospective revaluation

Abstract: Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on…
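To make the strategy described in the abstract concrete, the sketch below fits the free parameters of a toy associative model to hypothetical data by minimizing squared error. It is a minimal illustration, not the paper's actual models or procedure; the Rescorla-Wagner rule, the response data, and the SSE loss are all assumptions chosen for the example.

```python
# Minimal sketch of fitting a model's free parameters by optimization and scoring fit.
# The Rescorla-Wagner rule and the toy data below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

observed = np.array([0.10, 0.30, 0.45, 0.55, 0.62, 0.68])  # hypothetical responding per trial

def rescorla_wagner(params, n_trials):
    """Predicted associative strength for one cue reinforced on every trial."""
    alpha, lam = params            # learning rate and asymptote (free parameters)
    v, preds = 0.0, []
    for _ in range(n_trials):
        v += alpha * (lam - v)     # delta-rule update
        preds.append(v)
    return np.array(preds)

def sse(params):
    """Sum of squared errors between model predictions and observed responding."""
    return np.sum((rescorla_wagner(params, len(observed)) - observed) ** 2)

fit = minimize(sse, x0=[0.2, 0.8], bounds=[(0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
print(fit.x, fit.fun)              # best-fitting parameters and minimized SSE
```

The minimized SSE (or a likelihood derived from it) is what would then feed into a model-comparison metric such as AICc across competing models.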

Cited by 6 publications (5 citation statements). References 62 publications (109 reference statements).
“…When different versions of the ADM were compared, they were fit independently, and residuals were converted into a model selection metric, the corrected Akaike Information Criterion (AICc). Models were selected if they had the fewest free parameters among those with a ΔAICc < 4 (per recommendations of Avila et al., 2009; Burnham, Anderson, & Huyvaert, 2011; Posada & Buckley, 2004; and Witnauer, Hutchings, & Miller, 2017). This ensured that the selected model provided the best balance between parsimony (number of free parameters) and fit (in this case, minimized residuals)…”
Section: Testing the ADM of Paradoxical Choice
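As a rough illustration of the selection rule quoted above (not code from the cited study), the sketch below converts least-squares residuals into AICc and then keeps the model with the fewest free parameters among those within 4 AICc units of the best fit. The model names, SSE values, parameter counts, and the Gaussian-error form of AICc are assumptions made for the example.

```python
# Minimal sketch of the AICc-based selection rule: among models with dAICc < 4,
# keep the one with the fewest free parameters. All numbers are hypothetical.
import math

def aicc(sse, n, k):
    """Corrected AIC for a least-squares fit with n observations and k free
    parameters (Gaussian-error form; constant terms dropped)."""
    aic = n * math.log(sse / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical fits: (model name, residual sum of squares, number of free parameters).
fits = [("ADM-full", 1.8, 5), ("ADM-reduced", 2.0, 3), ("ADM-minimal", 3.9, 2)]
n = 40                                         # hypothetical number of data points

scores = {name: aicc(sse, n, k) for name, sse, k in fits}
best = min(scores.values())
candidates = [(name, k) for (name, sse, k) in fits if scores[name] - best < 4]
selected = min(candidates, key=lambda t: t[1])  # fewest parameters among dAICc < 4
print(scores, "->", selected[0])
```

With these toy numbers the full and reduced models both fall within 4 AICc units of the best score, so the reduced model is selected because it achieves comparable fit with fewer free parameters, which is the parsimony-versus-fit balance the excerpt describes.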
“…Having more parameters clearly gives an intrinsic flexibility in data fitting that more rigid theories do not have, and one should balance this factor against the actual capabilities of the new models to accommodate observations. This is the typical situation where comparative statistics, such as the Bayesian (or Schwarz) Information Criterion (BIC) and the Akaike Information Criterion (AIC), can be extremely useful, as advocated by Witnauer et al. (2017)…”
Section: Data Analysis: Comparison with the RW Model
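The trade-off this excerpt describes can be seen in a brief sketch contrasting the AIC and BIC penalty terms; the log-likelihoods, parameter counts, and sample size below are hypothetical.

```python
# Minimal sketch contrasting the AIC and BIC penalties for model flexibility.
import math

def aic(log_lik, k):
    return -2 * log_lik + 2 * k                # constant penalty of 2 per parameter

def bic(log_lik, k, n):
    return -2 * log_lik + k * math.log(n)      # penalty grows with sample size n

n = 100                                        # hypothetical number of observations
flexible = {"log_lik": -118.0, "k": 6}         # fits better, more free parameters
rigid    = {"log_lik": -123.0, "k": 2}         # fits worse, fewer free parameters

for name, m in [("flexible", flexible), ("rigid", rigid)]:
    print(name, aic(m["log_lik"], m["k"]), bic(m["log_lik"], m["k"], n))
# With these numbers AIC slightly favors the flexible model, while BIC (whose
# per-parameter penalty ln(100) is about 4.6) favors the rigid one, showing how
# both criteria weigh extra flexibility against improved fit.
```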
“…All of the examples in the Journal of the Experimental Analysis of Behavior, Journal of Experimental Psychology: Animal Behavior Processes, and Journal of Experimental Psychology: Animal Cognition and Learning were examined, as were selected papers in the other journals. Some variant of I-T analysis was used to examine models of choice or response strength (Cowie, Davison, & Elliffe, 2014; Hunter & Davison, 1982; Hutsell & Jacobs, 2013; Lau & Glimcher, 2005; MacDonall, 2009; McLean, Grace, & Nevin, 2012; Mitchell, Wilson, & Karalunas, 2015; Navakatikyan, 2007), discounting (Białaszek, Marcowski, & Ostaszewski, 2017; DeHart & Odum, 2015; Jarmolowicz et al., 2018; Rung & Young, 2015; Young, 2017), peak shift (Krägeloh, Elliffe, & Davison, 2006), behavioral momentum (Hall, Smith, & Wynne, 2015; Killeen & Nevin, 2018), associative learning (Cabrera, Sanabria, Shelley, & Killeen, 2009; Hall et al., 2015; Witnauer, Hutchings, & Miller, 2017), the partitioning of behavior into response bouts (Brackney, Cheung, Neisewander, & Sanabria, 2011; Smith, McLean, Shull, Hughes, & Pitts, 2014; Tanno, 2016), navigation strategies (Anselme, Otto, & Güntürkün, 2018), instructional techniques (Warnakulasooriya, Palazzo, & Pritchard, 2007), problem solving by corvids (Cibulski, Wascher, Weiss, & Kotrschal, 2014), models of punishment (Klapes, Riley, & McDowell, 2018), and timing (Beckmann & Young, 2009; Ludvig, Conover, & Shizgal, 2007). This is a partial, not exhaustive, list intended to convey the range of applications…”
Section: Use in Behavior Science