2002
DOI: 10.1177/019384102236522
Indiscriminate Data Aggregations in Meta-Analysis

Abstract: Whether you are a policy maker or social scientist, you are slowly being drowned in a sea of often inconsistent research data. Proponents of meta-analysis claim that such data can be objectively and usefully summarized for you. The author notes how the assumptions of the meta-analytic model preclude the synthesis of experimental data (which has a clear cause-and-effect logic) with quasi-experimental and/or nonexperimental data (both of which lack such clarity). Yet in the author's review of 64 recent meta-anal…

Cited by 8 publications (1 citation statement) · References 25 publications
“…It has been argued, e.g. by Rhodes (2012), that this approach, also referred to as literature reviews (Briggs, 2005; Lopez-Lee, 2002), is inferior in rigor to meta-analysis: that the approach is "unprincipled in that they use no scientific standards for including studies, apply no probability-based rules for assigning weights, and cannot be replicated" (Rhodes, 2012, p. 24). While we neither are able nor attempt to summarize the average effect size in the evaluations, we recognize that much programmatic learning could still be derived from reviewing the evaluations within a meta-evaluation.…”
Section: Methods
confidence: 99%