2013
DOI: 10.1080/01621459.2013.829002

Discovering Findings That Replicate From a Primary Study of High Dimension to a Follow-Up Study

Abstract: We consider the problem of identifying whether findings replicate from one study of high dimension to another, when the primary study guides the selection of hypotheses to be examined in the follow-up study as well as when there is no division of roles into the primary and the follow-up study. We show that existing meta-analysis methods are not appropriate for this problem, and suggest novel methods instead. We prove that our multiple testing procedures control for appropriate error-rates. The suggested FWER c…


Cited by 30 publications (62 citation statements)
References 24 publications
“…Bogomolov and Heller considered the statistical problem of identifying whether findings replicate from one study of high dimension to another. To assess replicability from a validation study, Heller et al. defined the false discovery rate (FDR) for replicability analysis as the expected proportion of false replicability claims among all those called replicated.…”
Section: Introduction
confidence: 99%
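The citation statement above defines the replicability FDR as the expected proportion of false replicability claims among all features declared replicated. A common way to operationalize this (a sketch of a partial-conjunction approach, not the paper's exact procedure; all data below are hypothetical) is to require signal in both studies by taking the larger of the two p-values per feature, then applying Benjamini-Hochberg to those maxima:

```python
import numpy as np

def bh_rejections(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean rejection mask at FDR level q."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    thresh = q * np.arange(1, m + 1) / m
    below = sorted_p <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= q*i/m
        reject[order[: k + 1]] = True
    return reject

# Hypothetical p-values for 100 features measured in a primary and a
# follow-up study; the first 5 features carry signal in both studies.
rng = np.random.default_rng(0)
p_primary = np.concatenate([rng.uniform(0, 1e-4, 5), rng.uniform(size=95)])
p_followup = np.concatenate([rng.uniform(0, 1e-4, 5), rng.uniform(size=95)])

# Partial-conjunction sketch: a feature is a candidate replicated finding
# only if it shows signal in BOTH studies, so screen on the maximum p-value.
p_max = np.maximum(p_primary, p_followup)
replicated = bh_rejections(p_max, q=0.05)
print(int(replicated.sum()), "features declared replicated")
```

The maximum p-value is a valid p-value for the "no effect in at least one study" null, which is why BH on `p_max` targets the replicability FDR rather than the ordinary per-study FDR.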
“…Cross-screening performs poorly except when K is large and I is not extremely large, but it often wins decisively in sensitivity analyses with large K and moderate I/K. Cross-screening is related to, though distinct from, a concept of replicability developed by Bogomolov and Heller (2013); see §2.4 and §6 for detailed discussion. Cross-screening rejects hypothesis H_k if either half-sample rejects H_k, thereby strongly controlling the family-wise error rate; however, it achieves Bogomolov-Heller replicability if both halves reject H_k.…”
confidence: 99%
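The distinction in the statement above, "either half rejects" for family-wise error control versus "both halves reject" for a replicability claim, can be sketched in a few lines. Everything here is illustrative (the p-values, the Bonferroni split, and the variable names are assumptions, not from the cited papers):

```python
import numpy as np

# Hypothetical p-values for 4 hypotheses, one vector per half-sample,
# as would arise after each half screens/plans tests for the other.
alpha = 0.05
p_half1 = np.array([0.001, 0.20, 0.004, 0.50])
p_half2 = np.array([0.002, 0.01, 0.30, 0.60])

K1, K2 = len(p_half1), len(p_half2)  # hypotheses tested in each half

# Bonferroni at alpha/2 within each half keeps the overall FWER at alpha.
rej1 = p_half1 <= (alpha / 2) / K1
rej2 = p_half2 <= (alpha / 2) / K2

either = rej1 | rej2  # cross-screening rejection (FWER-controlled)
both = rej1 & rej2    # replicability-style claim: both halves must reject
print("either:", either, "both:", both)
```

With these numbers hypotheses 1 and 3 clear the per-half threshold somewhere, but only hypothesis 1 is rejected by both halves, so only it would support a replicability-style claim.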
“…However, research in the multiple-sequence setting has still focused on binary classification, typically on the problem of determining whether or not signals belong to class 3 of Table 1. This is of great interest because class 3 signals are more likely to constitute replicable scientific findings (Benjamini et al., 2009; Bogomolov and Heller, 2013).…”
Section: Related Work
confidence: 99%