Around the turn of the 20th century, communicable infectious diseases posed a great threat to human populations. Only with the advent of vaccination protocols were many such modern "plagues" brought under control; indeed, today's children are routinely vaccinated against diseases such as measles and smallpox. Karl Pearson, one of the fathers of modern statistics, lived during this period, and given the controversies of the time, he too was interested in the efficacy of newly developed inoculation procedures for combating vulnerability to such infections. In what can probably be called the first meta-analysis, Pearson (1904) collected correlation coefficients to examine whether inoculation against smallpox predicted survival. He then quantitatively aggregated these correlations, yielding an unweighted average correlation of .63, a truly massive effect size with enormous real-world significance. Although meta-analysis lay largely dormant for much of the 20th century, the seminal work of pioneers such as Bob Rosenthal (1976) and Gene Glass (1976) established it as a critical tool for reaching accurate conclusions about empirical findings and for resolving sticky questions about whether and when particular effects manifest. Today, meta-analysis is one of the most popular methods for conducting secondary research and literature reviews: a search of the PsycINFO database revealed that 630 publications with the term "meta-analysis" in the abstract were published in 2008, whereas only 11 were published in 1980.

Although there is no denying that quantitative synthesis of empirical findings is both prevalent and desirable (see, e.g., Cooper & Hedges, 1994), there is much less agreement about how best to aggregate research results statistically. Answering this question involves everything from fairly high-level considerations (e.g., how to conceptualize the research question, how to define relevant research findings, how to determine what qualifies as an appropriate test of the relevant hypotheses) to lower level considerations, such as how to choose a relevant metric and which statistical models to use for aggregating those metrics (e.g., Cooper, Hedges, & Valentine, 2009). Perhaps the most contentious issues revolve around the last question: What statistical procedures are most appropriate for aggregating effect sizes? In this article, I briefly review the two dominant statistical models employed in meta-analyses, emphasize their limitations, and describe a new, no-cost statistical software tool that employs novel statistical techniques (Bonett, 2008, 2009) and overcomes some of the limitations of traditionally used meta-analytic techniques.
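To make the aggregation question concrete, the sketch below contrasts a Pearson-style unweighted average of correlations with a conventional weighted aggregation of Fisher-z-transformed correlations. It is illustrative only: the study correlations and sample sizes are hypothetical, and the code is neither Pearson's calculation, Bonett's (2008, 2009) method, nor the software described later in this article.

```python
import math

# Hypothetical study-level correlations and sample sizes (illustrative only).
r_values = [0.55, 0.68, 0.60, 0.72, 0.58]
n_values = [40, 120, 75, 200, 60]

# Pearson-style aggregation: a simple unweighted average of the correlations.
unweighted_mean_r = sum(r_values) / len(r_values)

# A common alternative: transform each r to Fisher's z, weight by the
# approximate inverse sampling variance (n - 3), average, and back-transform.
z_values = [0.5 * math.log((1 + r) / (1 - r)) for r in r_values]
weights = [n - 3 for n in n_values]
mean_z = sum(w * z for w, z in zip(weights, z_values)) / sum(weights)
weighted_mean_r = math.tanh(mean_z)  # inverse Fisher transformation

print(f"Unweighted mean r: {unweighted_mean_r:.3f}")
print(f"Weighted mean r (via Fisher z): {weighted_mean_r:.3f}")
```

The two summaries can diverge noticeably when study sample sizes vary widely, which is one reason the choice of aggregation procedure is contentious.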
STATISTICAL MODELS IN META-ANALYSIS

In order for a researcher to infer "true" values of a particular effect size (e.g., how strong the link is between smoking and lung cancer in an actual population) based on inherently limited empirical data, he or she must specify a "scheme" (generally a mathematical function) for how a given empirical finding relates to the act...