Retrospective pretests ask respondents, after an intervention, to report on their pre-intervention aptitudes, knowledge, or beliefs. A primary reason to administer a retrospective pretest is that in some situations program participants may, over the course of an intervention, revise or recalibrate their prior understanding of program content, with the result that their posttest scores are lower than their traditional pretest scores even though their understanding or abilities have increased. This phenomenon is called response-shift bias. The existence of response-shift bias is undisputed, but it does not always occur, and using a retrospective pretest in place of a traditional pretest often introduces new problems. In this commentary, I provide a brief overview of the literature on response-shift bias and discuss common pitfalls in the use and reporting of retrospective pretest results, including failure to consider the multiple factors that may affect all test scores, as well as claims that retrospective pretests are less biased than traditional pretests, provide more accurate estimates of effects, and are necessarily superior to traditional pretests in program evaluation. I comment on the article by Little et al. (2019) in this issue in light of the literature on retrospective pretests and discuss the need for a theoretical framework to guide research on response-shift bias. The goal of the commentary is to provide readers with an informed and critical lens through which to evaluate and use retrospective pretest methods.