2021
DOI: 10.1002/hbm.25374

Improving the replicability of neuroimaging findings by thresholding effect sizes instead of p‐values

Abstract: The classical approach for testing statistical images using spatial extent inference (SEI) thresholds the statistical image based on the p-value. This approach has an unfortunate consequence on the replicability of neuroimaging findings because the targeted brain regions are affected by the sample size: larger studies have more power to detect smaller effects. Here, we use simulations based on the preprocessed Autism Brain Imaging Data Exchange (ABIDE) to show that thresholding statistical images by effect size…
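To illustrate the contrast the abstract describes, here is a minimal sketch (not the paper's code) comparing p-value thresholding with effect-size thresholding of a voxelwise one-sample t-test. The simulated data, the d ≥ 0.5 cutoff, and the t-to-Cohen's-d conversion are assumptions made for the example only.

```python
# Minimal illustrative sketch: p-value vs. effect-size thresholding of a
# voxelwise one-sample t-test. Data, cutoffs, and shapes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 200, 10_000
# Simulated subject-by-voxel maps with a small true mean effect everywhere.
data = rng.normal(loc=0.05, scale=1.0, size=(n_subjects, n_voxels))

t_stat, p_val = stats.ttest_1samp(data, popmean=0.0, axis=0)
cohens_d = t_stat / np.sqrt(n_subjects)  # d = t / sqrt(n) for a one-sample test

p_mask = p_val < 0.001    # p-value threshold: passes more voxels as n grows
d_mask = cohens_d >= 0.5  # effect-size threshold: targets a fixed magnitude

print(f"voxels passing p < .001: {p_mask.sum()}")
print(f"voxels passing d >= 0.5: {d_mask.sum()}")
```

With a large sample, many voxels with tiny effects clear the p-value threshold, while the effect-size threshold keeps targeting regions of a fixed minimum magnitude regardless of n, which is the replicability argument the abstract makes.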

Cited by 9 publications (2 citation statements) · References 32 publications
“…The problems of thresholding p ‐values and other test statistics have been addressed at length in the current statistics literature (e.g., Wasserstein et al, 2019), as well as from the specific perspective of fMRI by Chen et al (2022). Vandekar and Stephens (2021), by contrast, suggest that thresholding effect sizes rather than p ‐values can lead to increased replicability. Several other recent works have also proposed approaches for replicable research specifically in the context of large‐sample fMRI studies (Abraham et al, 2017; Abrol et al, 2017).…”
Section: Reproducibility and Replicability (citation type: mentioning)
confidence: 99%
“…This criterion is chosen for various reasons. First, using effect size as a threshold has been shown to improve replicability in neuroimaging findings (Vandekar and Stephens, 2021). Second, a recent study found that effect sizes of association between two variables in ABCD were mostly around 0.03 to 0.09 (Owens et al, 2021).…”
Section: Measurements (citation type: mentioning)
confidence: 99%