Survey researchers take great care to measure respondents’ answers in an unbiased way, but how successful are we as a field at remedying unintended and intended biases in our own research? Preregistration practices have been shown to improve the validity of inferences drawn from studies. Despite this, only three of the 83 articles published in POQ and IJPOR in 2020 feature explicitly stated preregistered hypotheses or analyses. This manuscript aims to show survey methodologists how preregistration and replication (where possible) serve the broader mission of survey methodology. To that end, we present a practical example of how unknown biases in analysis strategies, absent preregistration or replication, inflate type I errors. In an initial data collection, our analysis showed that the visual layout of battery-type questions significantly decreased data quality. Yet after we committed to preregistering the hypotheses and analysis plans and replicating the study, none of the results replicated, even though the procedure, sample provider, and analyses were kept identical. This manuscript illustrates how preregistration and replication practices may, in the long term, help unburden the academic literature of follow-up publications built on type I errors.