2016
DOI: 10.1037/xge0000159
Meta-analysis to integrate effect sizes within an article: Possible misuse and Type I error inflation.

Abstract: In recent years an increasing number of articles have employed meta-analysis to integrate effect sizes of researchers' own series of studies within a single article ("internal meta-analysis"). Although this approach has the obvious advantage of obtaining narrower confidence intervals, we show that it could inadvertently inflate false-positive rates if researchers are motivated to use internal meta-analysis in order to obtain a significant overall effect. Specifically, if one decides whether to stop or continue…
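The mechanism the abstract describes — adding studies one at a time and stopping as soon as the pooled result is significant — can be illustrated with a small simulation. The sketch below is a simplification, not the paper's own procedure: it pools studies with Stouffer's z-score method rather than a full effect-size meta-analysis, and the study size, cap of five studies, and simulation count are arbitrary choices for illustration. Even so, it shows the qualitative point: under a true null effect, the data-dependent stopping rule pushes the false-positive rate well above the nominal 5%.

```python
import math
import random

random.seed(1)


def study_z(n=20):
    """One two-sample study under the null (true effect = 0); returns its z-statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    se = math.sqrt(va / n + vb / n)
    return (ma - mb) / se


def optional_stopping_meta(max_studies=5, crit=1.96):
    """Run studies sequentially; after each one, pool all z-scores
    (Stouffer's method) and stop as soon as the pooled z is 'significant'."""
    zs = []
    for _ in range(max_studies):
        zs.append(study_z())
        pooled = sum(zs) / math.sqrt(len(zs))
        if abs(pooled) > crit:
            return True  # declare a significant overall effect
    return False


sims = 2000
fp = sum(optional_stopping_meta() for _ in range(sims)) / sims
print(f"false-positive rate with optional stopping: {fp:.3f}")
```

Each individual study and each pooled test uses the conventional 5% criterion, yet because the researcher gets up to five data-dependent chances to declare significance, the overall false-positive rate is inflated — the core problem the article identifies.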

Cited by 53 publications (44 citation statements)
References 65 publications (134 reference statements)
“…As would be the case with any random sampling process, there are some exceptions, but the bulk of the published evidence clearly supports the main claim (aligning with the rationale behind a p-curve analysis; Simonsohn et al, 2014). Although we summarize the key results of our prior experiments across several publications here, we only make qualitative comparisons between these results, given recent demonstrations that internal meta-analyses can problematically overstate the strength of evidence for an effect (Ueno et al, 2016;Vosgerau et al, 2018).…”
Section: Overview of Results
confidence: 99%
“…The sampling plan should also preclude the researcher from conducting a particular study multiple times and presenting only the “best” study (i.e., the one with the most desirable results). The use of multiple small studies instead of a larger one is an effective (yet problematic) strategy to find at least one statistically significant result (Bakker et al, 2012), and small underpowered studies can also be pooled by means of a meta-analysis in an ad hoc manner to obtain a statistically significant result (Ueno et al, 2016). Hence, we call the following researcher DF, D7: Failing to specify the sampling plan and allowing for running (multiple) small studies.…”
Section: Design Phase
confidence: 99%
“…The sampling plan should also preclude the researcher from conducting a particular study multiple times and presenting only the "best" study (i.e., the one with the most desirable results). The use of multiple small studies instead of a larger one is an effective (yet problematic) strategy to find at least one statistically significant result, and small underpowered studies can also be pooled by means of a meta-analysis in an ad hoc manner to obtain a statistically significant result (Ueno, Fastrich, & Murayama, 2016). Hence, we call the following researcher DF, D7: Failing to specify the sampling plan and allowing for running (multiple) small studies.…”
Section: Power and Sampling Plan
confidence: 99%