The choice of evaluation methods is one of the most vexing questions for evaluators (Szanyi, Azzam, & Galen, 2012). This question is especially pressing in development evaluation, where interventions tend to be highly complex and multiple stakeholders hold competing interests (Holvoet et al., 2018). While one can discern an emerging consensus among evaluation scholars that (quasi-)experimental evidence cannot lay a monopoly claim to the production of the best effectiveness evidence (Stern et al., 2012), this idea is not yet commonly shared among all evaluators, let alone among commissioners of impact evaluation studies. The article by Wendy Olsen presents a strong and persuasive case for considering alternative impact evaluation methods that can help overcome the shortcomings of Randomized Controlled Trials (RCTs). The question remains, however, under which conditions one should opt for such alternative methods. Or, to put it differently: under which conditions can it be "unwise" to resort to such methods in impact evaluations?

The aim of this contribution is to bring some nuance into the methods debate by drawing attention to the broader organizational and institutional context in which impact evaluations take place. The commentary revolves around the idea that the choice of a particular evaluation method will be affected as much by considerations of technical appropriateness as by considerations of political appropriateness. By technical appropriateness, we refer to the ability of the chosen method to answer the impact evaluation question at stake. Political appropriateness, in turn, concerns the fit between the broader institutional setting and specific impact evaluation methods. Any assessment of the merit of particular evaluation methods should ideally consider both angles.