The general conclusions about the efficacy of read aloud accommodations derived from Li's (2014) meta-analysis were consistent with ours (Buzick & Stone, 2014): that (a) effect sizes are higher for reading assessments than for math assessments for students both with and without disabilities, (b) students with disabilities do not receive a differentially higher benefit from read aloud accommodations on math assessments relative to students without disabilities, (c) read aloud mode can explain some of the differences in effect sizes, (d) there is significant variability in effect sizes that we are not able to explain given the information in available studies, and (e) there is an interaction among moderator variables that we cannot quantify without additional effect sizes from new studies. We both concluded that because students without disabilities, as well as students with disabilities, benefit from read aloud accommodations on reading assessments, additional validity evidence would be needed to address fairness in relation to specific uses of reading test scores (e.g., as a graduation requirement).

The consistency of the general conclusions from each meta-analysis is not surprising given that we both used similar models. The random-effects approach to meta-analysis that we used can be viewed as a special case of the 2-level hierarchical linear modeling (HLM) approach that Li used (see Hox, 2010, chapter 11 for a comparison of the two approaches with an example). In fact, the null HLM model (Model 0 in Li, 2014) is equivalent to the random-effects model. Both assume within- and between-study variation in effect sizes (i.e., effect size differences are due to random error within studies and systematic variation between studies), and both approaches can be used to test for the influence of moderators contributing to differences in effect sizes.
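The random-effects model described above treats each observed effect size as the sum of a grand mean, a study-specific deviation (between-study variation), and sampling error (within-study variation). As a minimal sketch of how such a model is typically estimated, the following shows generic DerSimonian-Laird random-effects pooling; the effect sizes and variances are hypothetical illustrations, not data from either meta-analysis, and neither set of authors necessarily used this exact estimator.

```python
# Generic DerSimonian-Laird random-effects pooling; all numbers are illustrative.
import math

def random_effects_pool(effects, variances):
    """Pool effect sizes under a random-effects model (DerSimonian-Laird)."""
    # Fixed-effect weights (inverse sampling variance) and pooled estimate
    w = [1.0 / v for v in variances]
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures heterogeneity beyond sampling error
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    # Between-study variance (tau^2), truncated at zero
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights combine within- and between-study variance,
    # mirroring the two variance components of the null HLM model
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical standardized mean differences and their sampling variances
d = [0.45, 0.20, 0.60, 0.10]
v = [0.02, 0.03, 0.05, 0.04]
est, se, tau2 = random_effects_pool(d, v)
```

A nonzero tau-squared estimate corresponds to the significant unexplained between-study variability both meta-analyses reported; moderator tests then attempt to account for that variance.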
The two approaches also share the same substantive limitations; in particular, the assumption of independence across effect sizes is difficult to meet when measuring the effect of the read aloud accommodation. The potential sources of bias are also the same in both meta-analyses, given that both attempted to estimate the efficacy of read aloud accommodations for students with and without disabilities taking mathematics or reading assessments.

Our studies differ in the specific information that supports our general conclusions about the efficacy of read aloud accommodations. Li's (2014) meta-analysis included all effect sizes in one model (i.e., combining content areas and student groups) and had a greater focus on the role of moderator variables in explaining the variability in effect sizes derived from previous research. Our primary goal was to estimate the combined effect of the read aloud accommodation and test the statistical significance of the average effect separately by content area and by whether or not students had a disability. Consequently, there are practical differences in the interpretations that can be derived from each meta-analysis.

Li (2014) estimated a ...