Doing research inevitably involves making numerous decisions that can influence research outcomes in ways that lead to overconfidence in statistical conclusions. One proposed method to increase the interpretability of research findings is preregistration, which involves documenting analytic choices in a public, third-party repository before the data can exert any influence. To investigate whether preregistration in psychology lives up to this potential, we examined all articles published in Psychological Science with a Preregistered badge between February 2015 and November 2017 and assessed their adherence to the corresponding preregistration plans. We observed deviations from the plan in all studies, and, more importantly, in all but one study at least one of these deviations was not fully disclosed. We discuss examples and possible explanations, and highlight good practices for preregistering research.
Preregistration is a method to increase research transparency by documenting research decisions in a public, third-party repository before the data can exert any influence. It is becoming increasingly popular in all subfields of psychology and beyond. Adherence to the preregistration plan may not always be feasible, and is not even necessarily desirable, but without disclosure of deviations, readers who do not carefully consult the preregistration plan might get the incorrect impression that the study was conducted and reported exactly as planned. In this paper, we investigated adherence and disclosure of deviations for all articles published with the Preregistered badge in Psychological Science between February 2015 and November 2017, and shared our findings with the corresponding authors for feedback. Two out of 27 preregistered studies contained no deviations from the preregistration plan. In one study, all deviations were disclosed. Nine studies disclosed none of their deviations. We mainly observed (un)disclosed deviations from the plan regarding the reported sample size, exclusion criteria, and statistical analyses. This closer look at a first generation of preregistrations reveals possible hurdles in reporting preregistered studies and provides input for future reporting guidelines. We discuss the results and possible explanations, and provide recommendations for preregistered research.
Just as teachers give students exams to assess their mastery of a subject, researchers submit their theories to empirical tests. And just as a high score on a test is not by itself sufficient to believe in a student's mastery of a subject, researchers need severe tests to make reliable inferences from observations to theories. In this paper, we provide an explication of the concept of severity and show how it underlies three current methodological crises in psychology: the theory crisis, the measurement crisis, and the generalizability crisis. Our detailed account reinforces the importance of designing tests that can prove you wrong, and should assist empirical researchers in evaluating the severity of their own tests.
To test scientific hypotheses in the social sciences, the substantive hypothesis is formulated together with auxiliary hypotheses, so that predictions about observable quantities can be logically derived from their conjunction, the complete hypothesis. Auxiliary hypotheses are statistical and theoretical claims about the variables of interest: for instance, beliefs about how the observed data were generated, the cause
Study preregistration has become increasingly popular in psychology, but its effectiveness in restricting potentially biasing researcher degrees of freedom remains unclear. We used an extensive protocol to assess the strictness of preregistrations and the consistency between preregistration and publications of 300 preregistered psychology studies. We found that preregistrations often lack methodological details and that undisclosed deviations from preregistered plans are frequent. Combining the strictness and consistency results highlights that biases due to researcher degrees of freedom are prevalent and likely in many preregistered studies. More comprehensive registration templates typically yielded stricter and hence better preregistrations. We did not find that effectiveness of preregistrations differed over time or between original and replication studies. Furthermore, we found that operationalizations of variables were generally more effectively preregistered than other study parts. Inconsistencies between preregistrations and published studies were mainly encountered for data collection procedures, statistical models, and exclusion criteria. Our results indicate that, to unlock the full potential of preregistration, researchers in psychology should aim to write stricter preregistrations, adhere to these preregistrations more faithfully, and more transparently report any deviations from the preregistrations. This could be facilitated by training and education to improve preregistration skills, as well as the development of more comprehensive templates.