2017
DOI: 10.1371/journal.pone.0176124
A comparison of multiple testing adjustment methods with block-correlation positively-dependent tests

Abstract: In high dimensional data analysis (such as gene expression, spatial epidemiology, or brain imaging studies), we often test thousands or more hypotheses simultaneously. As the number of tests increases, the chance of observing some statistically significant tests is very high even when all null hypotheses are true. Consequently, we could reach incorrect conclusions regarding the hypotheses. Researchers frequently use multiplicity adjustment methods to control type I error rates—primarily the family-wise error r…
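To make the abstract's point concrete: if all m null hypotheses are true and each test is run independently at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^m, which grows rapidly with m. A minimal R sketch (illustrative arithmetic only, not from the paper):

    alpha <- 0.05
    m     <- c(1, 10, 100, 1000)   # number of simultaneous tests
    fwer  <- 1 - (1 - alpha)^m     # P(at least one type I error)
    round(fwer, 3)                 # 0.050 0.401 0.994 1.000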

Citation Types: 0 supporting, 51 mentioning, 0 contrasting
Year Published: 2018–2024

Cited by 67 publications (51 citation statements)
References 21 publications
“…Post hoc tests were performed by recoding of the reference category until all 10 pairwise comparisons of the distraction effects between the age groups were conducted. We controlled the false discovery rate (FDR) at a level of 5% using the two-step Benjamini-Hochberg procedure (Benjamini, Krieger, & Yekutieli, 2006), because it provides a good compromise between statistical power and FDR control (Stevens, Al Masud, & Suyundikov, 2017). All LMMs were conducted using the packages lme4 (Bates, Maechler, Bolker, & Walker, 2015) and lmerTest (Kuznetsova, Brockhoff, Haubo, & Christensen, 2016) in R. In order to maximize precision of random-effect estimates, the models were estimated using restricted maximum likelihood where appropriate.…”
Section: Discussion (mentioning)
confidence: 99%
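A minimal R sketch of the workflow this statement describes, under stated assumptions: the data frame dat and the variables y, age_group, and subject are hypothetical placeholders, and the two-stage Benjamini-Hochberg adjustment is applied through the Bioconductor multtest package, which exposes it as proc = "TSBH" in mt.rawp2adjp().

    library(lme4)       # Bates, Maechler, Bolker, & Walker, 2015
    library(lmerTest)   # adds Satterthwaite p-values to lmer() summaries
    library(multtest)   # Bioconductor; mt.rawp2adjp() implements TSBH

    # Linear mixed model fitted by REML, random intercept per subject
    fit <- lmer(y ~ age_group + (1 | subject), data = dat, REML = TRUE)
    summary(fit)

    # Suppose rawp holds the 10 pairwise p-values collected by recoding
    # the reference category and refitting (placeholder values shown):
    rawp <- c(0.001, 0.004, 0.008, 0.020, 0.031,
              0.047, 0.120, 0.300, 0.500, 0.650)
    adj  <- mt.rawp2adjp(rawp, proc = "TSBH", alpha = 0.05)
    adjp <- adj$adjp[order(adj$index), "TSBH_0.05"]  # back in input order
    which(adjp <= 0.05)   # comparisons significant at an FDR of 5%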
“…We conducted descriptive and inferential statistics using IBM SPSS Statistics 23 (IBM Corporation, New York, USA). The dependence of shear strength on the enamel preparation protocol was analysed by one-way analysis of variance (ANOVA) after checking for normality of residuals and homogeneity of variances (α=5%), followed by post hoc pairwise comparisons with Sidak p-value correction [36]. We analysed the fractography patterns resulting from the enamel preparation protocols applying Fisher's exact test, as 50-75 % of the expected frequencies were less than five.…”
Section: Discussion (mentioning)
confidence: 99%
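The analysis in this statement was run in SPSS; a rough R equivalent is sketched below under stated assumptions: the data frame enamel and the columns shear, protocol, and fracture are invented names, and Bartlett's test stands in for the homogeneity-of-variances check.

    # One-way ANOVA with assumption checks (alpha = 5%)
    fit <- aov(shear ~ protocol, data = enamel)
    shapiro.test(residuals(fit))                    # normality of residuals
    bartlett.test(shear ~ protocol, data = enamel)  # homogeneity of variances
    summary(fit)

    # Post hoc pairwise comparisons; base R's pairwise.t.test() offers no
    # "sidak" option, so apply the Sidak correction manually:
    pt    <- pairwise.t.test(enamel$shear, enamel$protocol,
                             p.adjust.method = "none")
    m     <- sum(!is.na(pt$p.value))            # number of comparisons
    sidak <- pmin(1, 1 - (1 - pt$p.value)^m)    # Sidak-adjusted p-values

    # Fisher's exact test for the fractography patterns, appropriate when
    # many expected cell counts fall below five:
    fisher.test(table(enamel$protocol, enamel$fracture))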
“…However, the large number of tests performed with the Medieval are likely to increase type I error rates (also the binomial tests are not independent, because the finding of IBD between the Medieval and a given population is likely to influence subsequent tests) 58 . In similar cases of block-positive dependence among tests, it has been shown that the best option to control the false discovery rate (FDR) 59 is to use the two-stage Benjamini-Hochberg (TSBH) procedure 60 . We subsequently adjusted the p-values between observed and expected IBD blocks with the TSBH procedure; a nominal type I error rate (5%) was used to estimate the number of true null hypotheses in the two-stage TSBH with the R multtest package 61 .…”
Section: Population Enrichment (mentioning)
confidence: 99%
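For reference, the two-stage logic behind TSBH can be written out in a few lines of R. This is an illustrative sketch following Benjamini, Krieger, & Yekutieli (2006), not the multtest source: stage 1 runs Benjamini-Hochberg at the reduced level q/(1+q) to estimate the number of true null hypotheses m0, and stage 2 reruns it at that level inflated by m/m0.

    tsbh_reject <- function(p, q = 0.05) {
      m  <- length(p)
      q1 <- q / (1 + q)                    # stage-1 level
      bh <- p.adjust(p, method = "BH")     # BH-adjusted p-values
      r1 <- sum(bh <= q1)                  # stage-1 rejections
      if (r1 == 0) return(rep(FALSE, m))   # reject nothing
      if (r1 == m) return(rep(TRUE, m))    # reject everything
      m0 <- m - r1                         # estimated true null count
      bh <= q1 * m / m0                    # stage 2: BH at level q1 * m / m0
    }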