Let's start with a confession: I agree with many of the points made in this article. The problem, not uncommon in positioning pieces of this kind, is that it creates a caricature of a method in order to then prove that its own arguments and methods are better. Most methods can be carried out poorly or thoughtfully, and RCTs are no exception. What bothers me is that RCTs tend to be picked on. There seem to be two reasons for this: they are more clearly understood and defined than arguably any other evaluation method, and they have experienced a boom in funding in recent years, which apparently invites envy. I would argue that these same reasons for picking on RCTs may be among their biggest contributions to the evaluation field!

Experimental and quasi-experimental methods set out clear minimum quality criteria, reporting criteria, and criteria to assess the risk of bias, AND they can be replicated by others and hence do not rely on non-transparent 'expert judgements'.1 This clarity, transparency and replicability is one of their substantial contributions to the evaluation field and an example that other methods should strive to follow. We at the International Initiative for Impact Evaluation (3ie) have recently launched our Research Transparency Policy, which supports independent replication of reported quantitative results and the transparent sharing of de-identified data and code. Given our focus on theory-based, mixed-methods impact evaluations, the next challenge is to similarly enhance the transparency and public availability of the qualitative data and findings upon which evaluations draw. We hope that other organizations and researchers will join us in this endeavor.

Now to the second reason for picking on RCTs. The author of the lead piece feels that funding has disproportionately favored RCTs, and impact evaluations more broadly. She seems to assume a zero-sum, static game, whereas I would venture that the credibility of the evaluation field has been enhanced, which