Background. In recent years, the veracity of scientific findings has come under intense scrutiny in what has been called the “replication crisis” (sometimes called the “reproducibility crisis” or “crisis of confidence”). This crisis is marked by the propagation of scientific claims that were subsequently contested, found to be exaggerated, or deemed false. The causes of this crisis are many, but include poor research design, inappropriate statistical analysis, and the manipulation of study results. Though it is uncertain whether social work is in the midst of a similar crisis, it is not unlikely, given parallels between the field and adjacent disciplines in crisis.

Objective. This dissertation aims to articulate these problems, as well as foundational issues in statistical theory, in order to scrutinize statistical practice in social work research. In doing so, it parallels recent work in psychology, neuroscience, medicine, ecology, and other scientific disciplines, while introducing a new program of meta-research to the social work profession.

Method. Five leading social work journals were analyzed across a five-year period (2014-2018). In all, 1,906 articles were reviewed, with 310 meeting inclusion criteria. The study was divided into three complementary parts. In Part 1, statistical reporting practices were coded and analyzed (n = 310). In Part 2, using the sample sizes reported in these articles, a power survey was performed for small, medium, and large effect sizes (n = 207). In Part 3, a novel statistical tool, the p-curve, was used to evaluate the evidential value of results from one journal (Research on Social Work Practice) and to assess for bias; results from 39 of the 78 eligible articles were included in the analysis. Data and materials are available at: https://osf.io/45z3h/

Results. Part 1: Notably, 86.1% of the articles reviewed did not report an explicit alpha level, and a power analysis was performed in only 7.4% of articles. Use of p-values was common, being reported in 96.8% of articles, but only 29% of articles reported them in exact form. Only 36.5% of articles reported confidence intervals, with 95% coverage being the most common (reported in 31.3% of all studies). Effect sizes were explicitly reported in the results section or tables in a little more than half of the articles (55.2%). Part 2: The mean statistical power of the articles was 57% for small effects, 88% for medium effects, and 95% for large effects. In total, 61% of studies did not have adequate power (.80) to detect a small effect, 19% did not have adequate power to detect a medium effect, and 7% did not have adequate power to detect a large effect. A robustness test yielded similar but more conservative estimates. Part 3: Both the primary p-curve and a robustness test yielded right-skewed curves, indicating evidential value for the included set of results and no evidence of bias.

Conclusion. Overall, these findings provide a snapshot of the status of contemporary social work research. The results are preliminary but indicate areas where statistical design and reporting can be improved in published research. The results of the power survey suggest that the field has increased its mean statistical power relative to prior decades, though these findings are tentative and subject to numerous limitations. The results of the p-curve demonstrate its potential as a tool for investigating bias within published research, while suggesting that the results included from Research on Social Work Practice have evidential value. In all, this study provides a first step toward a broader and more comprehensive assessment of the field.
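To make the Part 2 procedure concrete, the following is a minimal sketch of a power survey, assuming two-group (independent-samples t-test) designs, Cohen's conventional benchmarks for small, medium, and large effects (d = 0.2, 0.5, 0.8), a two-tailed alpha of .05, and the Python statsmodels library. The per-group sample sizes are hypothetical placeholders, not data from the study.

    # Sketch of a power survey: achieved power at Cohen's benchmark
    # effect sizes, assuming independent-samples t-tests, equal group
    # sizes, and alpha = .05 (two-tailed).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    sample_sizes = [40, 120, 350]  # hypothetical per-group n's
    for n in sample_sizes:
        for label, d in (("small", 0.2), ("medium", 0.5), ("large", 0.8)):
            power = analysis.power(effect_size=d, nobs1=n, ratio=1.0, alpha=0.05)
            print(f"n = {n} per group, {label} effect (d = {d}): power = {power:.2f}")

A study is then flagged as adequately powered for a given effect size when the computed value meets the conventional .80 threshold referenced in the abstract.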
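Likewise, the p-curve's right-skew test used in Part 3 can be sketched as follows, assuming the Fisher's-method variant described by Simonsohn, Nelson, and Simmons (2014): each statistically significant p-value is rescaled to a conditional pp-value, which is uniform under the null of no true effect, and the pp-values are combined into a chi-square statistic; a significant result indicates a right-skewed curve, i.e., evidential value. The p-values below are illustrative, not the study's data.

    # Sketch of a p-curve right-skew test via Fisher's method.
    import numpy as np
    from scipy import stats

    p_values = np.array([0.001, 0.004, 0.012, 0.021, 0.038])  # hypothetical, all < .05
    pp = p_values / 0.05                    # conditional pp-values, uniform under H0
    chi2 = -2.0 * np.sum(np.log(pp))        # Fisher's combining statistic
    df = 2 * len(pp)                        # degrees of freedom = 2k
    p_right_skew = stats.chi2.sf(chi2, df)  # small p => right skew => evidential value
    print(f"chi2({df}) = {chi2:.2f}, p = {p_right_skew:.4f}")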