2018
DOI: 10.1101/285171
Preprint

Small effect size leads to reproducibility failure in resting-state fMRI studies

Abstract: Thousands of papers using resting-state functional magnetic resonance imaging (RS-fMRI) have been published on brain disorders. The results in each paper may have survived correction for multiple comparisons. However, since there have been no robust results from large-scale meta-analyses, we do not know how many of the published results are true positives. The present meta-analytic work included 60 original studies, with 57 studies (4 datasets, 2266 participants) that used a between-group design and 3 studies (1 datas…

Cited by 9 publications
(7 citation statements)
References 39 publications
“…Finally, we employed a single analysis pipeline that was pre-registered to protect against pipeline exploration that would bias us toward positive results. However, variation in analytic processing streams impacts findings and/or data quality in task-based ( Botvinik-Nezer et al, 2020 ) and resting-state ( Ciric et al, 2016 ) functional neuroimaging, and individual studies of between-group rs-fc differences using single analytic approaches may be prone to error ( Jia et al, 2018 ). These same meta-science studies find that meta-analyses examining unthresholded statistical maps across processing streams and replication samples reveal patterns that are robust.…”
Section: Results (mentioning; confidence: 99%)
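The robustness check described in the statement above — pooling unthresholded statistical maps across processing streams and replication samples — can be sketched with Stouffer's method. This is a minimal illustration, not any study's actual pipeline; the z-values and the three-pipeline setup are hypothetical:

```python
import numpy as np

def stouffer_combine(z_maps):
    """Combine unthresholded z-maps (one per pipeline or sample)
    into a single meta-analytic z-map via Stouffer's method."""
    z = np.asarray(z_maps, dtype=float)   # shape: (n_maps, n_voxels)
    return z.sum(axis=0) / np.sqrt(z.shape[0])

# Hypothetical z-maps from three processing pipelines (5 voxels each).
maps = [
    [2.1, 0.3, -0.5, 1.8, 0.1],
    [1.9, -0.2, 0.4, 2.2, -0.3],
    [2.4, 0.1, -0.1, 1.5, 0.2],
]
combined = stouffer_combine(maps)
# Voxels 0 and 3 show consistent effects across pipelines and gain
# evidence; the noisy voxels do not.
```

Consistent effects accumulate across maps, while pipeline-specific noise is diluted by the square-root denominator — which is why patterns surviving this pooling are considered robust.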
“…Two-sample t-tests between the ADHD and TDC groups were performed in each cohort of the ADHD-200 dataset. As Jia et al (2018) recently reported, neither stringent nor liberal multiple comparison correction could control false discoveries across multiple studies when the effect sizes were relatively small. Reproducibility of the results across multiple cohorts is therefore more important for recovering the ground truth.…”
Section: T-tests On Amplitude Of Low-frequency Fluctuation Maps Of Ea… (mentioning; confidence: 99%)
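The analysis quoted above can be sketched as a voxelwise two-sample t-test followed by Benjamini–Hochberg FDR correction. The data here are simulated — the effect size, voxel count, and group sizes are all hypothetical — purely to illustrate the mechanics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_per_group = 200, 30

# Simulated amplitude maps: a small group difference (d = 0.2) in the
# first 50 voxels, no true difference elsewhere (values hypothetical).
adhd = rng.normal(0.0, 1.0, (n_per_group, n_voxels))
adhd[:, :50] += 0.2
tdc = rng.normal(0.0, 1.0, (n_per_group, n_voxels))

# Voxelwise two-sample t-tests between the groups.
t, p = stats.ttest_ind(adhd, tdc, axis=0)

# Benjamini-Hochberg step-up FDR correction at q = 0.05.
order = np.argsort(p)
ranked = p[order] * n_voxels / (np.arange(n_voxels) + 1)
passed = np.zeros(n_voxels, dtype=bool)
below = np.nonzero(ranked <= 0.05)[0]
if below.size:
    passed[order[: below[-1] + 1]] = True

# With d = 0.2 and n = 30 per group, few (often zero) of the 50 true
# effects survive correction in any single cohort - the reproducibility
# problem the citing authors describe.
```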
“…How to balance type I and type II errors is a major issue in the field. Recently, a study (Jia et al., 2018) treated the meta-analytic results as the robust ground truth and found that the between-group results of each original study showed high false negative rates (median 99%), high false discovery rates (median 86%) and low accuracy (median 1%), regardless of whether stringent or liberal multiple comparison correction was used. These observations suggest that multiple comparison correction does not control false discoveries across multiple studies when the effect sizes are relatively small.…”
Section: Discussion (mentioning; confidence: 99%)
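The false negative rates quoted above can be illustrated with a back-of-the-envelope power calculation. Under a normal approximation to the two-sample t-test (a rough sketch; the effect size, group size, and voxel count are hypothetical), a small effect combined with Bonferroni correction leaves almost no per-voxel power:

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha):
    """Normal approximation to two-sided two-sample t-test power."""
    nd = NormalDist()
    ncp = d * math.sqrt(n_per_group / 2)       # noncentrality parameter
    z_crit = nd.inv_cdf(1 - alpha / 2)         # two-sided critical value
    return 1 - nd.cdf(z_crit - ncp) + nd.cdf(-z_crit - ncp)

# Hypothetical numbers: d = 0.2, n = 30 per group, 10,000 voxels,
# Bonferroni-corrected alpha.
alpha_corr = 0.05 / 10_000
power = approx_power(0.2, 30, alpha_corr)
# Per-voxel power is far below 1%, so essentially every true effect is
# missed - consistent with the high false negative rates quoted above.
```

Relaxing to an uncorrected alpha of 0.05 only raises per-voxel power to roughly 12% with these numbers, which is why the quoted study found that neither stringent nor liberal correction rescues single small-sample studies.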