In an attempt to increase the reliability of empirical findings, psychological scientists have recently proposed a number of changes in the practice of experimental psychology. Most current reform efforts have focused on the analysis of data and the reporting of findings for empirical studies. However, a large contingent of psychologists build models that explain psychological processes and test psychological theories using formal psychological models. Some, but not all, recommendations borne out of the broader reform movement bear upon the practice of behavioral or cognitive modeling. In this article, we consider which aspects of the current reform movement are relevant to psychological modelers, and we propose a number of techniques and practices aimed at making psychological modeling more transparent, trusted, and robust.

Keywords: Cognitive modeling | Reproducibility | Open science | Robustness | Model comparison

"You never want a serious crisis to go to waste . . . This crisis provides the opportunity for us to do things that you could not before."
Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, some of these reform attempts suffer from the same mistakes and over-generalizations they purport to address. Considering the costs of allowing false claims to become canonized, we argue for more rigor and nuance in methodological reform. By way of example, we present a formal analysis of three common claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects are characterized by poor statistical practice. We show that none of these three claims is correct in general, and we explore when they do and do not hold.

[…] an opportunity: reformers are in an opportune position to take criticism and self-correct before allowing false claims to be canonized as methodological facts (Nissen et al., 2016).

In this paper we advocate for the necessity of statistically rigorous and scientifically nuanced arguments to make proper methodological claims in the reform literature. Toward this aim, we evaluate three examples of methodological claims that have been advanced and well accepted (as implied by the large number of citations) in the reform literature:

1. Reproducibility is the cornerstone of, or a demarcation criterion for, science.
2. Using data more than once invalidates statistical inference.
3. Exploratory research uses "wonky" statistics.

Each of these claims suffers from some of the problems outlined earlier and, as a result, has contributed to methodological half-truths (or untruths). We evaluate each claim using statistical theory against a broad philosophical and scientific background.
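Claim 2 above, that using data more than once invalidates inference, can be probed with a toy simulation. The sketch below is not from the paper; the setup (selecting the predictor most correlated with a null outcome, then testing that same predictor) and all names and parameters are illustrative. It shows the nuance the authors call for: naive reuse of the same data for selection and testing inflates the false-positive rate, while a principled form of reuse-avoidance (sample splitting) keeps it near the nominal level.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(0)

def pearson(X, y):
    """Column-wise Pearson correlations between the columns of X and y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

def selected_pvalue(X, y, holdout):
    """Select the predictor most correlated with y, then test it.
    holdout=False reuses all data for both steps (double dipping);
    holdout=True selects on one half and tests on the other half."""
    n = len(y)
    sel = slice(0, n // 2) if holdout else slice(0, n)
    tst = slice(n // 2, n) if holdout else slice(0, n)
    j = int(np.argmax(np.abs(pearson(X[sel], y[sel]))))
    r = pearson(X[tst, [j]], y[tst])[0]
    m = len(y[tst])
    t = r * np.sqrt((m - 2) / (1 - r**2))
    return 2 * t_dist.sf(abs(t), df=m - 2)   # two-sided p-value

def false_positive_rate(holdout, trials=1000, n=100, p=20, alpha=0.05):
    """Fraction of pure-noise datasets where the selected predictor
    nonetheless tests 'significant' at level alpha."""
    hits = 0
    for _ in range(trials):
        X = rng.standard_normal((n, p))
        y = rng.standard_normal(n)   # null: y independent of every column of X
        hits += selected_pvalue(X, y, holdout) < alpha
    return hits / trials

print("naive double use :", false_positive_rate(holdout=False))  # far above 0.05
print("sample splitting :", false_positive_rate(holdout=True))   # close to 0.05
```

The point of the sketch is that "the data were used twice" is not by itself what breaks inference; what matters is whether the test statistic's null distribution accounts for the selection step.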
While we focus on these three claims, we believe our call for rigor and nuance can reach further with the following emphasis: Statistics is a formal science whose methodological claims follow from probability calculus. Methodological claims are either proved mathematically or by simulation before being advanced for the use of scientists. Most valid methodological advances are incremental, and they rarely ever provide simple prescriptions to complex inference problems. Norms issued on the basis of bold claims about new methods might be quickly adopted by empirical scientists as heuristics and might alter scientific practices. However, advancing such reforms in the absence of formal proofs sacrifices rigor for boldness and can lead to unforeseeable scientific consequences. We believe that hasty revolution may hold science back more than it helps move it forward. We hope that our approach may facilitate scientific progress that stands on firm ground, supported by theory or evidence.

Claim 1: Reproducibility is the cornerstone of, or a demarcation criterion for, science.

A common asser...
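One way to see why reproducibility alone cannot demarcate science is a toy simulation of the "reproducible but false" case. This sketch is not from the papers excerpted here; the scenario (every lab shares the same systematic instrument bias) and all parameters are illustrative. A false effect replicates in nearly every independent lab, because replication only averages out independent errors, not shared ones.

```python
import numpy as np

rng = np.random.default_rng(1)

def replication_rate(shared_bias, n=100, labs=50, crit=1.984):
    """Fraction of independent labs that reject 'true value = 0' with a
    one-sample t-test, when every lab's instrument carries the same
    systematic bias. crit is the two-sided 5% t critical value, df = 99."""
    rejections = 0
    for _ in range(labs):
        x = rng.normal(loc=shared_bias, scale=1.0, size=n)  # truth is 0; bias shifts it
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        rejections += abs(t) > crit
    return rejections / labs

print(replication_rate(shared_bias=0.5))  # a false effect that replicates in almost every lab
print(replication_rate(shared_bias=0.0))  # unbiased case: rejections stay near the 5% level
```

Under a shared bias, reproducibility measures the consistency of the procedure, not the truth of the claim, which is why it cannot serve as a cornerstone on its own.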
Consistent confirmations obtained independently of each other lend credibility to a scientific result. We refer to results satisfying this consistency as reproducible and assume that reproducibility is a desirable property of scientific discovery. Yet seemingly science also progresses despite irreproducible results, indicating that the relationship between reproducibility and other desirable properties of scientific discovery is not well understood. These properties include early discovery of truth, persistence on truth once it is discovered, and time spent on truth in a long-term scientific inquiry. We build a mathematical model of scientific discovery that presents a viable framework to study its desirable properties, including reproducibility. In this framework, we assume that scientists adopt a model-centric approach to discover the true model generating data in a stochastic process of scientific discovery. We analyze the properties of this process using Markov chain theory, Monte Carlo methods, and agent-based modeling. We show that the scientific process may not converge to truth even if scientific results are reproducible and that irreproducible results do not necessarily imply untrue results. The proportion of different research strategies represented in the scientific population, scientists' choice of methodology, the complexity of truth, and the strength of signal contribute to this counter-intuitive finding. Important insights include that innovative research speeds up the discovery of scientific truth by facilitating the exploration of model space, and that epistemic diversity optimizes across desirable properties of scientific discovery.
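The abstract's model-centric framework can be caricatured in a few lines of code. The following is a heavily simplified sketch, not the authors' actual model: a single "community" state walks over a ring of candidate models, proposing either a neighboring model (incremental research) or a random one (innovative research), and adopting the candidate when its noisy fit score beats the incumbent's. All names, the ring topology, and the parameter values are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def discovery_chain(K=20, true_model=0, signal=2.0, innovate=0.1, steps=2000):
    """Markov-chain caricature of model-centric discovery over K models.
    Each step proposes a candidate: a random model with prob `innovate`
    (innovation), else a neighbor on the ring (incremental work). The
    candidate is adopted if its noisy fit score beats the incumbent's.
    Returns (step at which truth is first reached, fraction of steps on truth)."""
    def score(m):
        # only the true model carries signal; everything else is noise
        return (signal if m == true_model else 0.0) + rng.normal()

    current = int(rng.integers(K))
    first_hit, on_truth = None, 0
    for t in range(steps):
        if rng.random() < innovate:
            cand = int(rng.integers(K))                  # innovative jump
        else:
            cand = (current + rng.choice([-1, 1])) % K   # incremental step
        if score(cand) > score(current):
            current = cand
        if current == true_model:
            on_truth += 1
            if first_hit is None:
                first_hit = t
    return (first_hit if first_hit is not None else steps), on_truth / steps

# Averaging over runs: more innovation tends to find the true model sooner,
# echoing the abstract's point about exploration of model space.
for inn in (0.02, 0.5):
    hits = [discovery_chain(innovate=inn)[0] for _ in range(100)]
    print(f"innovate={inn}: mean first-hit step = {np.mean(hits):.0f}")
```

Even this caricature reproduces one qualitative finding: because acceptance is noisy, the chain can also leave the true model after reaching it, so time spent on truth and speed of discovery are distinct quantities to optimize.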
Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological reform attempts suffer from mistakes and over-generalizations similar to the ones they aim to address. We argue that this can be attributed in part to a lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (i) that reproducibility is the cornerstone of science; (ii) that data must not be used twice in any analysis; and (iii) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.