Figure: Is there a reproducibility crisis? (1,576 researchers surveyed) 52% yes, a significant crisis; 38% yes, a slight crisis; 3% no, there is no crisis; 7% don't know.

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology [1] and cancer biology [2], found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.

The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. "At the current time there is no consensus on what reproducibility is or should be." But just recognizing that is a step forward, he says. "The next step may be identifying what is the problem and to get a consensus."
Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of thirteen classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, ten effects replicated consistently. One effect (imagined contact reducing prejudice) showed weak support for replicability, and two effects (flag priming influencing conservatism and currency priming influencing system justification) did not replicate. We compared whether conditions such as lab versus online administration or U.S. versus international sample predicted effect magnitudes. By and large they did not. The results of this small sample of effects suggest that replicability is more dependent on the effect itself than on the sample and setting used to investigate the effect.

Investigating variation in replicability: A "Many Labs" Replication Project

Replication is a central tenet of science; its purpose is to confirm the accuracy of empirical findings, clarify the conditions under which an effect can be observed, and estimate the true effect size (Brandt et al., 2013; Open Science Collaboration, 2012). Successful replication of an experiment requires the recreation of the essential conditions of the initial experiment. This is often easier said than done: an enormous number of variables may influence experimental results, and yet only a few are tested. In the behavioral sciences, many effects have been observed in one cultural context but not in others. Likewise, individuals within the same society, or even the same individual at different times (Bodenhausen, 1990), may differ in ways that moderate any particular result.

Direct replication is infrequent, resulting in a published literature that sustains spurious findings (Ioannidis, 2005) and a lack of identification of the eliciting conditions for an effect. While there are good epistemological reasons for assuming that observed phenomena generalize across individuals and contexts in the absence of contrary evidence, the failure to directly replicate findings is problematic for theoretical and practical reasons. Failure to identify moderators and boundary conditions of an effect may result in overly broad generalizations of true effects across situations (Cesario, 2013) or across individuals (Henrich, Heine, & Norenzayan, 2010). Similarly, overgeneralization may lead observations made under laboratory conditions to be inappropriately extended to ecological contexts that differ in important ways (Henry, MacLeod, Phillips, & Crawford, 2004). Practically, attempts to closely replicate research findings can reveal important differences in what is considered a direct replication (Schmidt, 2009), thus leading to refinements of the initial theory (e.g., Aronson, 1992; Greenwald et al., 1986). Close replication can also lead to the clarification of tacit methodological knowledge that is necessary to elicit the effect of interest (Collins,...
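To make the analytic idea in the abstract concrete, here is a minimal illustrative sketch (not the Many Labs analysis code) of aggregating one effect across many independent samples with inverse-variance weights and then checking a simple lab-versus-online moderator. All effect sizes, sample sizes, and setting labels below are simulated assumptions.

```python
# Illustrative sketch: pooling one effect across independent samples and
# comparing the weighted mean effect by study setting. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample effect sizes (Cohen's d), sample sizes, and settings.
d = rng.normal(loc=0.30, scale=0.10, size=36)     # observed effect per sample
n = rng.integers(low=80, high=300, size=36)       # participants per sample
setting = rng.choice(["lab", "online"], size=36)  # where each sample was run

# Approximate sampling variance of d for a two-group design with n/2 per group.
var_d = 4.0 / n + d**2 / (2.0 * n)
w = 1.0 / var_d                                   # inverse-variance weights

# Fixed-effect aggregate across all samples.
d_pooled = np.sum(w * d) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
print(f"aggregate d = {d_pooled:.2f} (SE {se_pooled:.2f})")

# Crude moderator check: does the weighted mean effect differ by setting?
for s in ("lab", "online"):
    mask = setting == s
    d_s = np.sum(w[mask] * d[mask]) / np.sum(w[mask])
    print(f"{s:>6}: weighted mean d = {d_s:.2f} across {mask.sum()} samples")
```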
Why are American landscapes (e.g., housing developments, shopping malls) so uniform, despite the well-known American penchant for independence and uniqueness? We propose that this paradox can be explained by American mobility: Residential mobility fosters familiarity-seeking and familiarity-liking, while allowing individuals to pursue their personal goals and desires. We reason that people are drawn to familiar objects (e.g., familiar, national chain stores) when they move. We conducted 5 studies to test this idea at the levels of society, individuals, and situations. We found that (a) national chain stores do better in residentially mobile places than in residentially stable places (controlling for other economic and demographic factors; Study 1); (b) individuals who have moved a lot prefer familiar, national chain stores to unfamiliar stores (Studies 2a and 2b); and (c) a residential mobility mindset enhances the mere exposure and familiarity-liking effect (Studies 4 and 5). In Study 5, we demonstrated that the link between mobility and familiarity-liking was mediated by anxiety evoked by mobility.
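As a rough illustration of the mediation claim in Study 5 (mobility's link to familiarity-liking running through anxiety), the sketch below estimates a simple indirect effect with a percentile bootstrap. The data and variable names are simulated assumptions, not the authors' materials or analysis.

```python
# Illustrative mediation sketch: mobility -> anxiety -> familiarity-liking.
# All data below are simulated; only the structure of the analysis is shown.
import numpy as np

rng = np.random.default_rng(1)
n = 500

mobility = rng.normal(size=n)                                   # predictor X
anxiety = 0.5 * mobility + rng.normal(size=n)                   # mediator M
liking = 0.4 * anxiety + 0.1 * mobility + rng.normal(size=n)    # outcome Y

def ols_slopes(y, X):
    """Return the non-intercept coefficients from an OLS fit of y on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(anxiety, mobility)[0]                              # X -> M path
b = ols_slopes(liking, np.column_stack([anxiety, mobility]))[0]   # M -> Y, controlling X
indirect = a * b                                                  # indirect (mediated) effect

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_b = ols_slopes(anxiety[idx], mobility[idx])[0]
    b_b = ols_slopes(liking[idx], np.column_stack([anxiety[idx], mobility[idx]]))[0]
    boot.append(a_b * b_b)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect:.3f}, 95% CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```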
Interest in unintended discrimination that can result from implicit attitudes and stereotypes (implicit biases) has stimulated many research investigations. Much of this research has used the Implicit Association Test (IAT) to measure association strengths that are presumed to underlie implicit biases. It had been more than a decade since the last published treatment of recommended best practices for research using IAT measures. After an initial draft by the first author, and continuing through three subsequent drafts, the 22 authors and 14 commenters contributed extensively to refining the selection and description of recommendation-worthy research practices. Individual judgments of agreement or disagreement were provided by 29 of the 36 authors and commenters. Of the 21 recommended practices for conducting research with IAT measures presented in this article, all but two were endorsed by 90% or more of those who felt knowledgeable enough to express agreement or disagreement; only 4% of the totality of judgments expressed disagreement. For two practices that were retained despite more than two judgments of disagreement (four for one, five for the other), the bases for those disagreements are described in presenting the recommendations. The article additionally provides recommendations for how to report procedures of IAT measures in empirical articles.