2020
DOI: 10.1101/2020.04.26.048306
Preprint

The case for formal methodology in scientific reform

Abstract: Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, some of these reform attempts suffer from the same mistakes and over-generalizations they purport to address. Considering the costs of allowing false claims to become canonized, we argue for more rigor and nuance in methodological reform. By way of example, we present a formal analysis of three common claims in the meta…



Cited by 68 publications (93 citation statements)
References 127 publications (168 reference statements)
“…It works well when the correct analytic plan is known and can be specified a priori, and when the data are unlikely to deviate in surprising ways from the assumptions of that plan. However, preregistrations may be a particularly brittle solution in that a misspecified analysis plan will produce biased estimates, yet deviations (e.g., to correct the revealed misspecification) introduce the very analytic flexibility they are meant to eliminate (Devezer et al, 2020). Other possible solutions include adjusting the alpha level of preregistered analysis plans to account for specific conditional possibilities (see section 2.4 for a discussion of this problem with regard to massively univariate neuroimaging data and 4.1 for further detail on preregistration), and sensitivity analyses (see section 3.3 in which we discuss specification curves as an exploratory method).…”
Section: Reducing Analytic Flexibility (mentioning)
confidence: 99%
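The brittleness claim in this statement is concrete enough to illustrate with a small simulation. The sketch below is illustrative only and is not taken from the cited papers; the data-generating process, variable names, and parameter values are all assumptions. A "preregistered" straight-line model is fit to data that actually contain a quadratic term, and its estimates are compared with a deviating model that matches the data-generating process.

```python
# Minimal sketch (assumptions throughout): a rigidly preregistered but
# misspecified analysis plan yields biased estimates, while deviating from
# the plan reintroduces analytic flexibility.
import numpy as np

rng = np.random.default_rng(0)

n = 500
x = rng.uniform(0, 2, size=n)
# True data-generating process includes a quadratic term.
y = 1.0 + 0.5 * x + 1.5 * x**2 + rng.normal(scale=1.0, size=n)

# "Preregistered" (misspecified) plan: simple linear regression of y on x.
X_prereg = np.column_stack([np.ones(n), x])
beta_prereg, *_ = np.linalg.lstsq(X_prereg, y, rcond=None)

# Deviating plan that matches the data-generating process.
X_full = np.column_stack([np.ones(n), x, x**2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

print("Preregistered (misspecified) intercept/slope:", beta_prereg.round(2))
print("Correctly specified coefficients:            ", beta_full.round(2))
# The misspecified slope absorbs much of the omitted quadratic term, so
# sticking to the plan gives a distorted estimate of the x effect, while
# deviating to fix the misspecification is exactly the post hoc flexibility
# preregistration was meant to rule out.
```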
“…Despite these challenges, the benefits of preregistration can still be reaped by striving for the highest level of transparency, even after data collection has begun or been completed. The OSF motto is that a preregistration is "a plan, not a prison" (for a counterpoint, see Devezer et al., 2020). As new methodological or practical considerations come to light, preregistrations can be amended by creating a (timestamped) addendum that is linked to the original preregistration (under the same OSF repository with an updated version number), which justifies modifications to the original analysis plan (e.g., "thresholding criteria were updated to use a new approach based on a recent paper, and this was done prior to data analysis").…”
Section: Improving Practices and Inferences in DCN (mentioning)
confidence: 99%
“…If playing with data to produce a researcher's preferred results to meet institutionalized incentive criteria were not bad enough, some researchers may only pursue a hypothesis test in a dataset if they first observe some 'desirable' pattern in that data. In doing this they have invalidated the p-value test from the outset, because they have conditioned their test on prior conditions that do not represent the distribution of potential outcomes, i.e., they have inflated the Type I error by some (most likely unknown) amount, as discussed in Devezer et al (2020).…”
Section: Figure 1, How to Visualize Bias in P-values (mentioning)
confidence: 99%
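The inflation mechanism described in this statement can also be made concrete with a short simulation. The sketch below is illustrative only; the threshold, sample sizes, and variable names are assumptions rather than anything specified in the cited papers. Under a true null hypothesis, a t-test is run only when the observed group difference already looks "desirable", and the rejection rate among the tests that are actually run exceeds the nominal alpha.

```python
# Minimal sketch (assumptions throughout): conditioning on a "desirable"
# pattern in the data before testing inflates the Type I error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

alpha = 0.05
n_per_group = 20
n_sims = 20_000
peek_threshold = 0.3  # "desirable" pattern: observed difference looks promising

ran_test, rejected = 0, 0
for _ in range(n_sims):
    # Both groups come from the same distribution: the null is true.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    if abs(a.mean() - b.mean()) < peek_threshold:
        continue  # pattern not "desirable": the test is never run
    ran_test += 1
    _, p = stats.ttest_ind(a, b)
    rejected += p < alpha

print(f"Nominal alpha: {alpha}")
print(f"Type I error among tests actually run: {rejected / ran_test:.3f}")
# Only datasets that already show a sizable difference reach the test, so the
# rejection rate among the tests that are run is well above the nominal alpha,
# even though the null hypothesis is true in every simulated dataset.
```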