Cognitive modelling shares many features with statistical modelling, making it seem trivial to borrow from the practices of robust Bayesian statistics to protect the practice of robust cognitive modelling. We take one aspect of statistical workflow, prior predictive checks, and explore how it might be applied to a cognitive modelling task. We find that the likelihood alone is not enough to interpret the priors; we also need to incorporate information about the experiment. This suggests that while cognitive modelling might borrow from statistical practices, especially workflow, care must be taken to make the necessary adaptations.

Cognitive modelling aims to create generative, phenomenological models of cognitive processes. In contrast, statistical tools are mostly designed to disentangle a signal from measurement error in the data. This leads to some tension between the tasks statistical tools are designed for and the types of questions cognitive models attempt to answer. In this paper we highlight the importance of remaining aware of this difference in purpose when applying modern (or, for that matter, classical) statistical techniques to cognitive modelling. We focus on Bayesian methods, but we believe the warning holds more generally.

We illustrate this by exploring how an increasingly popular concept from statistical modelling, viewing modelling as a holistic workflow rather than a set of discrete and independent activities, can be applied to cognitive modelling practices. This idea, as outlined
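To make the notion of a prior predictive check concrete, the following is a minimal sketch of the general recipe: draw parameters from the prior, simulate data from the likelihood given those draws, and inspect whether the simulated data are scientifically plausible. The specific model here (a lognormal response-time-style model with normal and half-normal priors) is a hypothetical illustration, not a model used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive(n_draws=1000, n_trials=50):
    """Simulate datasets implied by the priors alone.

    For each draw: sample parameters from the prior, then
    simulate a dataset from the likelihood at those parameters.
    """
    sims = np.empty((n_draws, n_trials))
    for i in range(n_draws):
        mu = rng.normal(0.0, 1.0)           # hypothetical prior on the log-scale mean
        sigma = abs(rng.normal(0.0, 0.5))   # hypothetical half-normal prior on the log-scale sd
        sims[i] = rng.lognormal(mu, sigma, size=n_trials)  # likelihood
    return sims

sims = prior_predictive()
# Summarise the prior predictive distribution of simulated observations
print(np.quantile(sims, [0.05, 0.5, 0.95]))
```

The check itself is the final, human step: deciding whether these simulated quantiles fall in a range that is plausible for the task at hand. As the paper argues, that judgment cannot be made from the likelihood alone; it requires knowledge of the experiment that produced the data.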