One of the most robust phenomena studied across the behavioral sciences is numeric anchoring, in which an incidental number encountered before a judgment can influence real-world judgments of quantity, price, legal outcomes, and more. The authors meta-analyze this expansive literature, comprising 2,131 total effect sizes (1,050 comparing high anchors against low anchors), and find a large effect (d = 0.876, 95% CI [0.808, 0.943], I² = 92.96%) that is only slightly reduced by publication-bias corrections. However, evidence for theory-relevant moderators is mixed, aside from reduced effects associated with basic anchoring (i.e., numeric priming), non-diagnostic anchors, the presence of incentives or debiasing interventions, and, more weakly, greater knowledge. The authors supplement the meta-analysis with a large-scale meta-study (N = 1,968) comparing high against low anchors and find evidence for moderation by anchor diagnosticity, extremity, cognitive load, knowledge, and debiasing, but not by the presence of incentives or other exploratory factors (e.g., a page break between the anchor and the judgment). The authors then evaluate existing theories of anchoring against these results, discuss practical methodological points for researchers and practitioners seeking to use anchoring for applied purposes, and close by offering a taxonomy of anchoring processes to guide future anchoring research.