“…The improvements reported by Gershman's (2016) empirical-prior approach in the context of reinforcement-learning modeling are particularly welcome, as they directly address long-standing challenges in this domain. Reinforcement-learning models are widely used to analyze repeated trial-and-error decisions in psychology and neuroscience (e.g., Schulze, van Ravenzwaaij, & Newell, 2015; Erev & Barron, 2005; Barron & Erev, 2003; Niv et al., 2015; Dayan & Daw, 2008; Chase, Kumar, Eickhoff, & Dombrovski, 2015; Dayan & Balleine, 2002). Despite their prominence, these models suffer from well-documented cases of parameter non-identifiability and sloppiness (e.g., Humphries, Bruno, Karpievitch, & Wotherspoon, 2015; Wetzels et al., 2010; but see, e.g., Ahn et al., 2011, 2014; Steingroever, Wetzels, & Wagenmakers, 2013, for examples of satisfactory parameter identifiability).…”
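
The passage references Gershman's (2016) empirical-prior approach without spelling it out; roughly, the idea is to estimate group-level prior distributions over model parameters from the pooled data (empirical Bayes) and then regularize each individual fit by maximum a posteriori (MAP) estimation under those priors, which mitigates the identifiability problems noted above. The Python sketch below is a hypothetical illustration under that reading, not Gershman's actual code: the Beta and Gamma prior hyperparameters are placeholders standing in for empirically estimated ones, and the Q-learning model, two-armed bandit task, and all function names are assumptions introduced here.

    # Minimal sketch: MAP estimation of a softmax Q-learner on a two-armed
    # bandit under a fixed "empirical" prior. Hyperparameters are placeholders;
    # in an empirical-Bayes scheme they would be estimated from group data.
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    def neg_log_posterior(params, choices, rewards):
        """Negative log posterior for learning rate alpha and inverse temperature beta."""
        alpha, beta = params
        if not (0.0 < alpha < 1.0) or beta <= 0.0:
            return np.inf  # reject infeasible parameter values
        Q = np.zeros(2)
        nll = 0.0
        for c, r in zip(choices, rewards):
            logits = beta * Q
            log_p = logits - np.logaddexp(logits[0], logits[1])  # log softmax
            nll -= log_p[c]                   # choice log-likelihood
            Q[c] += alpha * (r - Q[c])        # prediction-error update
        # Hypothetical empirical prior: Beta on alpha, Gamma on beta.
        log_prior = (stats.beta.logpdf(alpha, 2.0, 2.0)
                     + stats.gamma.logpdf(beta, 2.0, scale=3.0))
        return nll - log_prior

    # Toy usage: simulate one subject, then recover parameters by MAP.
    rng = np.random.default_rng(0)
    true_alpha, true_beta = 0.3, 4.0
    p_reward = np.array([0.7, 0.3])
    Q, choices, rewards = np.zeros(2), [], []
    for _ in range(200):
        p = np.exp(true_beta * Q) / np.exp(true_beta * Q).sum()
        c = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[c])
        Q[c] += true_alpha * (r - Q[c])
        choices.append(c)
        rewards.append(r)

    fit = minimize(neg_log_posterior, x0=[0.5, 1.0],
                   args=(choices, rewards), method="Nelder-Mead")
    print("MAP estimates (alpha, beta):", fit.x)

The prior term shrinks extreme estimates toward the group-typical region of parameter space, which is precisely what helps when the likelihood surface alone is flat or "sloppy" in some directions.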