Executive Summary

Background: Many systematic reviews incorporate nonrandomised studies of effects, sometimes called quasi-experiments or natural experiments. However, the extent to which nonrandomised studies produce unbiased effect estimates is unclear, both in expectation and in practice. The usual way that systematic reviews quantify bias is through "risk of bias assessment" and indirect comparison of findings across studies using meta-analysis. A more direct, practical way to quantify the bias in nonrandomised studies is through "internal replication research", which compares the findings from nonrandomised studies with estimates from a benchmark randomised controlled trial conducted in the same population. Despite the existence of many risk of bias tools, none is conceptualised to assess comprehensively nonrandomised approaches with selection on unobservables, such as regression discontinuity designs (RDDs). The few that are conceptualised with these studies in mind do not draw on the extensive literature on internal replications (within-study comparisons) of randomised trials.

Objectives: Our research objectives were as follows:

Objective 1: to undertake a systematic review of nonrandomised internal study replications of international development interventions.

Objective 2: to develop a risk of bias tool for RDDs, an increasingly common method used in social and economic programme evaluation.

Methods: We used the following methods to achieve our objectives.

Objective 1: We searched systematically for nonrandomised internal study replications of benchmark randomised experiments of social and economic interventions in low- and middle-income countries (L&MICs). We assessed the risk of bias in benchmark randomised experiments and synthesised evidence on the relative bias effect sizes produced by benchmark and nonrandomised comparison arms (one common formalisation of this comparison is sketched below).

Objective 2: We used document review and expert consultation to develop further a risk of bias tool for nonrandomised studies of interventions (ROBINS-I) for application to RDDs.
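For orientation, internal replication research typically quantifies bias by contrasting the effect estimate from the nonrandomised arm with the estimate from the benchmark randomised trial. The following is a minimal sketch of one common formalisation in the within-study comparison literature, assuming both estimates are expressed on a common standardised scale; the notation is illustrative and is not necessarily the specific metric used in this review:

\[
\hat{B} = \hat{\tau}_{\mathrm{NRS}} - \hat{\tau}_{\mathrm{RCT}},
\qquad
\hat{B}_{\mathrm{rel}} = \frac{\hat{\tau}_{\mathrm{NRS}} - \hat{\tau}_{\mathrm{RCT}}}{\hat{\tau}_{\mathrm{RCT}}},
\]

where \(\hat{\tau}_{\mathrm{NRS}}\) denotes the effect estimate from the nonrandomised comparison arm and \(\hat{\tau}_{\mathrm{RCT}}\) the estimate from the benchmark experiment. Under this convention, \(\hat{B}\) close to zero indicates that the nonrandomised design replicates the benchmark, while the sign and magnitude of \(\hat{B}_{\mathrm{rel}}\) describe the direction and relative size of the bias.

---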