Randomized controlled trials (RCTs) are commonly regarded as the 'gold standard' for evaluating educational interventions. While this experimental design is valuable for establishing causal relationships between a tested intervention and its outcomes, its reliance on statistical aggregation typically underplays the situated context in which interventions are implemented. Developing innovative, systematic methods for evaluating implementation and understanding its impact on outcomes is vital to moving educational evaluation research beyond questions of 'what works' towards a better understanding of the mechanisms underpinning an intervention's effects. The current study presents a pragmatic, two-phased approach that combines qualitative data with quantitative analyses to examine the causal relationships between intervention implementation and outcomes. This methodological approach is illustrated in the context of a maths app intervention recently evaluated in an RCT across 11 schools. In phase I, four implementation themes were identified: 'teacher support', 'teacher supervision', 'implementation quality', and 'established routine'. In phase II, 'established routine' was found to predict 41% of the variance in children's learning outcomes with the apps, a finding with significant implications for scaling the intervention. Overall, this two-phased approach offers an innovative means of combining process and impact evaluations to gain a more nuanced understanding of what works in education, and why.