Performance on a dataset is often regarded as the key criterion for assessing NLP models. I will argue for a broader perspective, one that emphasizes scientific explanation. Drawing on a long tradition in the philosophy of science, and on the Bayesian approach to assessing scientific theories, I will argue for a plurality of evaluation criteria. To illustrate these ideas, I will compare several recent models of language production with one another. I will conclude by asking what it would mean for institutional policies if the NLP community took these ideas on board.