Purpose: Dynamic assessments (DAs) of word reading skills (e.g., phonological awareness, decoding) demonstrate predictive validity with word reading outcomes but are characterized by substantial heterogeneity in format, administration method, word type, and symbol type, factors that may affect their validity. This systematic review and meta-analysis examined whether the validity of DAs of word reading skills is affected by these characteristics.

Method: Five electronic databases (Medline, Embase, PsycINFO, ERIC, and CINAHL), 3 preprint repositories (medRxiv, PsyArXiv, and EdArXiv), and the gray literature were searched between March 2022 and March 2023 to identify studies with participants aged 4-10 years that reported a Pearson's correlation coefficient between a DA of word reading skills and a word reading measure. A random effects meta-analysis and 4 subgroup analyses based on DA format, administration method, word type, and symbol type were conducted.

Results: Thirty-two studies from 30 articles were identified. The overall effect size between DAs of word reading skills and word reading measures was large. Mean effect sizes did not differ significantly by format (graduated prompt vs. train-test) or administration method (computer vs. in-person). However, DAs that use nonwords and those that use familiar letters or characters demonstrated significantly stronger correlations with word reading measures than those that use real words and those that use novel symbols, respectively.

Conclusions: Outcomes provide preliminary evidence that DAs of word reading skills using nonwords and familiar letters in their test items are more strongly associated with later word reading ability than those using real words or novel symbols. There were no significant differences between DAs administered in person versus via computer. Results inform the development of novel DAs of word reading and clinical practice in selecting assessment tools.