In the 2018 meta-analysis published in Educational Psychology Review, entitled “Null effects of perceptual disfluency on learning outcomes in a text-based educational context” by Xie, Zhou, and Liu, we identify errors and inconsistencies in both the methodological approach and the reported results, specifically regarding coding and effect sizes. While from a technical point of view the meta-analysis aligns with current meta-analytic guidelines (e.g., PRISMA) and conforms to general meta-analytic requirements (e.g., considering publication bias), it exemplifies insufficient practices in the creation and review of meta-analyses. We criticize the lack of transparency and the neglect of open-science practices in the generation and reporting of results, which complicate the evaluation of meta-analytic reproducibility, especially given the flexibility in subjective choices regarding the analytical approach and in the creation of the database. Here we present a framework, applicable to pre- and post-publication review, for improving the Methods Reproducibility of meta-analyses. Based on considerations of the Transparency and Openness Promotion (TOP) guidelines (Nosek et al. Science 348: 1422–1425, 2015), the Reproducibility Enhancement Principles (REP; Stodden et al. Science 354: 1240–1241, 2016), and recommendations by Lakens et al. (BMC Psychology 4: Article 24, 2016), we outline Computational Reproducibility (Level 1), Computational Verification (Level 2), Analysis Reproducibility (Level 3), and Outcome Reproducibility (Level 4). Applying these reproducibility checks to TRANSFER performance as the chosen outcome variable, we found Xie and colleagues’ results to be rather robust. Yet, regarding RECALL performance and the moderator analysis, the identified problems raise doubts about the credibility of the reported results.