This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
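To make the variance-accounting claim concrete, the sketch below shows how an "unexplained share of variance" is typically computed: regress the reported effect sizes on researcher traits and coded analytical decisions, then inspect 1 − R². This is a minimal illustration on synthetic data, not the study's actual code; all variable names (effect, expertise, estimator, and so on) are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the study's results data: one row per reported
    # model, with a standardized effect estimate, researcher traits, and
    # coded analytical decisions (all names are illustrative).
    rng = np.random.default_rng(0)
    n = 1200
    df = pd.DataFrame({
        "effect": rng.normal(0, 1, n),
        "expertise": rng.integers(1, 6, n),
        "prior_belief": rng.integers(1, 6, n),
        "estimator": rng.choice(["ols", "logit", "multilevel"], n),
        "n_covariates": rng.integers(0, 12, n),
    })

    m1 = smf.ols("effect ~ expertise + prior_belief", data=df).fit()
    m2 = smf.ols("effect ~ expertise + prior_belief + C(estimator) + n_covariates",
                 data=df).fit()
    print(f"unexplained after researcher traits:  {1 - m1.rsquared:.3f}")
    print(f"unexplained after traits + decisions: {1 - m2.rsquared:.3f}")

In the study itself, the second of these numbers stayed above 0.95 even with all identifiable decisions coded, which is what the abstract means by a hidden universe of uncertainty.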
Associations between unemployment and policy attitudes are typically explained with reference to the economic self-interest of the unemployed. Preferences for labour market policies (LMP) and egalitarian preferences are the prime example and the focus of this study. Its aim is to challenge this causal self-interest argument: self-interest-consistent associations of unemployment with policy preferences are neither necessarily driven by self-interest nor necessarily causal. To that end, this article first confronts the self-interest argument with a broader perspective on attitudes. Given that predispositions (e.g., value orientations) are stable and shape more specific policy attitudes, it is at least questionable whether people change their policy attitudes simply because they are laid off. Second, the article develops a non-causal account of associations between unemployment and policy attitudes, arguing that these may be spurious associations driven by individuals' socioeconomic background: a person's entire socioeconomic background simultaneously shapes both the risk of becoming unemployed ('selection into unemployment') and distinct political socialisation experiences from early childhood onwards. Third, the article uses methods inspired by a counterfactual account of causality to test the non-causal claims. Analyses are carried out on the fourth wave of the European Social Survey, applying entropy balancing to control for selection bias. After controlling for selection bias, unemployment effects on egalitarian orientations remain significant in only two of the 31 analysed countries, and effects on active LMP attitudes remain in only six. Attitudes towards passive LMP are a partial exception, as effects remain in a third of the countries. Robustness checks and Bayes factor replications, which show evidence for the absence of unemployment effects, support the general impression from these initial analyses. After discussing the article's results and limitations, its broader implications are considered. On the one hand, the article offers a new perspective on the conceptualisation and measurement of unemployment risk. On the other hand, its theoretical argument, as well as its treatment of the resulting selection bias, can be applied broadly. The article can thus inform many other research questions regarding the (ir)relevance of individual life events for political attitudes and political behaviour.
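Entropy balancing, the reweighting method named above (Hainmueller 2012), solves for control-unit weights that stay as close as possible to uniform while exactly matching the treated group's covariate moments. Below is a minimal first-moment sketch on synthetic data; the variable roles (unemployed respondents as treated, employed as controls) and all names are illustrative, not the article's actual specification.

    import numpy as np
    from scipy.optimize import minimize

    def entropy_balance(X_control, target_means):
        """Weights for control units whose covariate means hit target_means."""
        d = X_control - target_means                # distance from treated means

        def dual(lam):                              # convex dual of the KL problem
            return np.log(np.mean(np.exp(d @ lam)))

        lam = minimize(dual, np.zeros(X_control.shape[1]), method="BFGS").x
        w = np.exp(d @ lam)
        return w / w.sum()                          # normalised weights

    rng = np.random.default_rng(0)
    X_t = rng.normal(0.5, 1.0, size=(200, 3))       # treated, e.g. unemployed
    X_c = rng.normal(0.0, 1.0, size=(800, 3))       # controls, e.g. employed
    w = entropy_balance(X_c, X_t.mean(axis=0))
    print(np.round(X_t.mean(axis=0), 3))            # treated covariate means
    print(np.round(w @ X_c, 3))                     # reweighted controls match

A weighted outcome comparison (or weighted regression) on the reweighted sample then estimates the unemployment effect net of observed selection.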
In an era of mass migration, social scientists, populist parties and social movements raise concerns over the future of immigration-destination societies. What impact does this have on policy and social solidarity? Comparative cross-national research, relying mostly on secondary data, has produced findings that point in different directions. Selective model reporting and a lack of replicability threaten this literature, and the heterogeneity of countries obscures attempts to clearly define data-generating models. P-hacking and HARKing lurk among standard research practices in this area. This project employs crowdsourcing to address these issues, drawing on replication, deliberation, meta-analysis and the power of many minds at once. The Crowdsourced Replication Initiative has two main goals: (a) to better investigate the link between immigration and social policy preferences across countries, and (b) to develop crowdsourcing as a social science method. The Executive Report provides short reviews of research on social policy preferences and immigration and of the methods and impetus behind crowdsourcing, plus a description of the entire project. Three main areas of findings will appear in three papers, which are registered as pre-analysis plans (PAPs) or in progress.
Are self-interest or presumably stable value orientations and other predispositions the main drivers of social policy attitudes? This article contributes to this debate by moving beyond its binary framing. It differentiates between attitude changes driven by self-interest that are in line with pre-existing predispositions and those that are not. Empirically, the article focuses on changes in labour market policy attitudes after employment transitions and changes in job insecurity. More precisely, it differentiates between attitude changes within three subgroups: (a) people whose self-interest after an employment transition reinforces their prior predispositions; (b) people without strong prior predispositions, who are thus unconstrained by them; and (c) people whose self-interest after an employment transition contradicts their prior predispositions. Panel analyses with fixed effects use the 1997 and 2002 waves of the German SOEP. Main effects suggest an important role for self-interest, showing significant attitudinal reactions after most of the transitions and perception changes. Subgroup analyses, however, yield a more mixed picture: attitude changes appear within different subgroups after different transitions and perception changes. This mixed empirical picture suggests caution when interpreting attitudinal change or stability after changing material circumstances as a sign of the relative importance of self-interest or predispositions.
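The fixed-effects design mentioned above identifies attitude change from within-person variation over time, netting out stable traits such as socioeconomic background. A minimal sketch with the linearmodels package on a synthetic person-year panel follows; the variable names (lmp_support, unemployed) and the data are hypothetical stand-ins for a SOEP extract, not the article's actual code.

    import numpy as np
    import pandas as pd
    from linearmodels.panel import PanelOLS

    # Synthetic person-year panel (illustrative, not the SOEP itself)
    rng = np.random.default_rng(1)
    n, t = 500, 6
    df = pd.DataFrame({
        "pid": np.repeat(np.arange(n), t),
        "year": np.tile(np.arange(1997, 1997 + t), n),
    })
    traits = np.repeat(rng.normal(0, 1, n), t)       # stable person traits
    df["unemployed"] = (rng.random(n * t) < 0.08).astype(int)
    df["lmp_support"] = traits + 0.3 * df["unemployed"] + rng.normal(0, 1, n * t)
    df = df.set_index(["pid", "year"])

    # Within-person estimate: attitude change after a transition, purged of
    # stable person-level confounders via entity (and year) fixed effects.
    res = PanelOLS.from_formula(
        "lmp_support ~ unemployed + EntityEffects + TimeEffects", data=df
    ).fit(cov_type="clustered", cluster_entity=True)
    print(res.params["unemployed"], res.std_errors["unemployed"])

The subgroup comparison (a)-(c) would then re-estimate this model within strata defined by pre-transition predispositions.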
The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication included an experimental condition: a "transparent" group received the original study and code, while an "opaque" group received the same underlying study but with only a methods section and a description of the regression coefficients, without their size or significance, and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators' workflows reveals many causes of non-verification, for which two categories are hypothesized: routine and non-routine. After correcting non-routine errors in the research process, to ensure that the results reflect a level of quality that should be present in 'real-world' research, the verification rate was 96.1% in the transparent group and 92.4% in the opaque group. Two conclusions follow: (1) although high, the verification rate suggests that it would take a minimum of three replicators per study to achieve replication reliability of at least 95% confidence, assuming the ecological validity of this controlled setting; and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The latter suggests that idiosyncratic researcher variability may help explain part of the "reliability crisis" in social and behavioral science, and it underlines the importance of transparent and well-documented workflows.
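One plausible reading of the "minimum of three replicators" arithmetic, assuming independent replicators and a majority-vote decision rule (our assumptions, not necessarily the paper's exact calculation): with a per-replicator verification rate of 92.4%, a single replicator falls short of 95% reliability, while a majority of three exceeds it.

    from math import comb

    def majority_reliability(p: float, k: int) -> float:
        """P(a majority of k independent replicators is correct), k odd."""
        need = k // 2 + 1
        return sum(comb(k, j) * p**j * (1 - p)**(k - j)
                   for j in range(need, k + 1))

    for p in (0.924, 0.961):        # corrected opaque / transparent rates
        for k in (1, 3):
            print(f"p={p}, k={k}: {majority_reliability(p, k):.3f}")
    # p=0.924: k=1 gives 0.924 (< 0.95); k=3 gives about 0.984 (>= 0.95)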