Publication bias is the disproportionate representation of studies with large effects and statistically significant findings in the published research literature. If publication bias occurs in single-case research design studies on applied behavior-analytic (ABA) interventions, it can result in inflated estimates of ABA intervention effects. We conducted an empirical evaluation of publication bias for an evidence-based ABA intervention for children diagnosed with autism spectrum disorder: response interruption and redirection (RIRD). We computed effect size estimates for published and unpublished studies using three metrics: percentage of nonoverlapping data (PND), Hedges' g, and the log response ratio (LRR). Omnibus effect size estimates across all three metrics were positive, indicating that RIRD is an effective treatment for reducing problem behavior maintained by nonsocial consequences. We observed larger PND for published than for unpublished studies, small and nonsignificant differences in LRR between published and unpublished studies, and significant differences in Hedges' g between published and unpublished studies, with published studies showing slightly larger effects. We found little, if any, difference in methodological quality between published and unpublished studies. Although RIRD appears to be an effective intervention for problem behavior maintained by nonsocial consequences, our results reflect some degree of publication bias in the RIRD research literature.
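As an illustration of the three effect size metrics named above, the following sketch computes PND, Hedges' g, and the decrease-scored log response ratio (LRR-d) for a single hypothetical baseline/treatment data series. The data values are invented for demonstration only, and the exact computational choices (e.g., scoring PND against the lowest baseline point for a behavior-reduction target, applying the small-sample correction to g) are common conventions rather than the specific procedures used in this study:

```python
import math

def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data for a behavior-reduction target:
    share of treatment-phase points falling below the lowest baseline point."""
    floor = min(baseline)
    return 100.0 * sum(1 for x in treatment if x < floor) / len(treatment)

def hedges_g(baseline, treatment):
    """Standardized mean difference (baseline minus treatment, so positive
    values indicate reduction), with Hedges' small-sample correction."""
    n1, n2 = len(baseline), len(treatment)
    m1, m2 = sum(baseline) / n1, sum(treatment) / n2
    v1 = sum((x - m1) ** 2 for x in baseline) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in treatment) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # small-sample bias correction
    return j * (m1 - m2) / sd_pooled

def lrr_d(baseline, treatment):
    """Log response ratio scored for decreasing behavior:
    ln(baseline mean / treatment mean), so positive values = improvement."""
    m1 = sum(baseline) / len(baseline)
    m2 = sum(treatment) / len(treatment)
    return math.log(m1 / m2)

# Hypothetical session-by-session rates of problem behavior
baseline = [12, 10, 14, 11]
treatment = [5, 3, 4, 2]
print(round(pnd(baseline, treatment), 1))      # 100.0 (all points below 10)
print(round(hedges_g(baseline, treatment), 2)) # 4.74
print(round(lrr_d(baseline, treatment), 2))    # 1.21
```

All three metrics here are positive, mirroring the direction of the omnibus estimates reported above; note that the metrics are on different scales, which is one reason conclusions about publication bias can differ across them.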