Psychology research has become increasingly focused on creating formalized models of psychological processes: models that make exact quantitative predictions about observed data generated by some unknown psychological process, allowing a better understanding of how psychological processes may actually operate. However, using models to understand psychological processes comes with an additional challenge: how do we select the best model from a range of potential models that all aim to explain the same psychological process? A recent article by Navarro (2018; Computational Brain & Behavior) provided a detailed discussion of several broad issues within the area of model selection. Navarro suggested that "one of the most important functions of a scientific theory is ... to encourage directed exploration of new territory" (p. 3), that "understanding how the qualitative patterns in the empirical data emerge naturally from a computational model of a psychological process is often more scientifically useful than presenting a quantified measure of its performance" (p. 6), and that "quantitative measures of performance are essentially selecting models based on their ancillary assumptions" (p. 6). Here, I provide a critique of several of Navarro's points on these broad issues. In contrast to Navarro, I argue that all possible data should be considered when evaluating a process model (i.e., not just data from novel contexts), that quantitative model selection methods provide a more principled and complete way of selecting between process models than visual assessment of qualitative trends, and that the notion of ancillary assumptions that lie outside the model's core explanation is a slippery slope to an infinitely flexible model.