2019 | Preprint
DOI: 10.1101/809715

Sample size evolution in neuroimaging research: an evaluation of highly-cited studies (1990-2012) and of latest practices (2017-2018) in high-impact journals

Abstract: We thank Timothy Myers for an initial check on sample sizes in about half the highly cited papers. We thank Josefína Weinerova for extracting sample size data for 2018. We thank Rik Henson (University of Cambridge) for comments on an earlier version of this manuscript. Author contributions: DS designed the research, extracted and analyzed data, wrote program code and the first draft of the paper. JPAI contributed critical comments on design and data interpretation and revised successive drafts with DS. Competing financi…

Cited by 48 publications (59 citation statements) | References 37 publications
“…There are also some methodological weaknesses in some of the studies reviewed here. Much of the extant literature is based on small sample sizes, which are likely to be underpowered (Szucs and Ioannidis, 2020), and many older studies have also made use of overly liberal multiple-comparison correction methods, which may be subject to inflated type I error rates (Eklund et al., 2016; Cox et al., 2017). However, larger cohort-based functional imaging studies of resilience are starting to emerge, and some key findings have now been replicated in well-powered samples (e.g., Corral-Frías et al., 2015; Silveira et al., 2020).…”
Section: Discussion (mentioning)
Confidence: 99%
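
To make the underpowering claim in this citation statement concrete, here is a minimal Monte Carlo sketch; it is not drawn from any of the cited papers, and the effect size (Cohen's d = 0.5) and group sizes are assumed values for illustration only.

```python
# Hypothetical illustration, not from the cited studies: Monte Carlo estimate
# of two-sample t-test power at the small sample sizes discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, d, alpha=0.05, n_sim=10_000):
    """Fraction of simulated experiments in which the t-test reaches p < alpha."""
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n_per_group)  # control group, unit variance
        b = rng.normal(d, 1.0, n_per_group)    # group shifted by Cohen's d
        hits += stats.ttest_ind(a, b).pvalue < alpha
    return hits / n_sim

# Assumed medium effect (d = 0.5); power climbs from roughly 0.26 to 0.94 with N.
for n in (15, 30, 100):
    print(f"n = {n:3d} per group -> power ~ {estimated_power(n, 0.5):.2f}")
```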
“…In Fig. 1e,f, we show that sampling variability (99% confidence interval [CI] of observed correlations) alone generates nominally significant (p < 0.05), but inflated correlations, which would then be falsely reported [2,27]. We charted sampling variability as a function of sample size (N = 25 to 3,928) for the strongest brain-wide associations as defined in the full sample (N = 3,928, strict denoising).…”
Section: Fig. 1 Effect Sizes and Sampling Variability of Univariate B… (mentioning)
Confidence: 94%
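
The mechanism quoted above can be reproduced with a minimal simulation sketch (not the authors' code): drawing repeated samples from a bivariate normal with an assumed true correlation of r = 0.1 shows that, at small N, only substantially inflated correlation estimates cross p < 0.05.

```python
# Hypothetical sketch, not the authors' analysis: sampling variability alone
# inflates the correlations that reach p < 0.05 at small N.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
TRUE_R = 0.1  # assumed, BWAS-scale true effect

def significant_abs_rs(n, n_sim=5_000):
    """|r| of simulated samples of size n whose Pearson test gives p < 0.05."""
    cov = [[1.0, TRUE_R], [TRUE_R, 1.0]]
    sig = []
    for _ in range(n_sim):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        r, p = stats.pearsonr(x, y)
        if p < 0.05:
            sig.append(abs(r))
    return np.array(sig)

# At N = 25 only estimates several times the true effect come out "significant";
# at N = 1000 the significant estimates sit near the true r = 0.1.
for n in (25, 100, 1000):
    rs = significant_abs_rs(n)
    print(f"N = {n:4d}: median significant |r| ~ {np.median(rs):.2f}")
```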
“…Such brain-wide association studies (BWAS) hold great promise for predicting and reducing psychiatric disease burden and advancing our understanding of the cognitive abilities that underlie humanity's intellectual feats. However, obtaining MRI data remains very expensive (~$1,000/hr), resulting in many small-sample BWAS (e.g., median N = 25 [1,2]), whose results often fail to replicate [1,11–15].…”
Section: Main (mentioning)
Confidence: 99%
“…To have a better overview of the literature, we provide quantitative summaries of our findings on the literature content (Figure 2). We find that validation studies have a median sample size of 13 (Figure 2A), comparable to the median of the most cited fMRI studies during the period of publication included in our meta-analysis (12), but below the current median sample size (20) (Szucs and Ioannidis, 2020). The validation literature uses a wide range of histological markers (with 9 studies using more than one marker; Figure 2F).…”
Section: Results (mentioning)
Confidence: 56%