[Figure: survey of 1,576 researchers — 52% Yes, a significant crisis; 38% Yes, a slight crisis; 3% No, there is no crisis; 7% Don't know.]

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature. Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology [1] and cancer biology [2], found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence. The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. "At the current time there is no consensus on what reproducibility is or should be." But just recognizing that is a step forward, he says. "The next step may be identifying what is the problem and to get a consensus."
We outline the need to, and provide a guide on how to, conduct a meta-analysis on one's own studies within a manuscript. Although conducting a "mini meta" within one's manuscript has been argued for in the past, this practice is still relatively rare and adoption is slow. We believe two deterrents are responsible. First, researchers may not think that it is legitimate to do a meta-analysis on a small number of studies. Second, researchers may think a meta-analysis is too complicated to do without expert knowledge or guidance. We dispel these two misconceptions by (1) offering arguments on why researchers should be encouraged to do mini metas, (2) citing previous articles that have conducted such analyses to good effect, and (3) providing a user-friendly guide on calculating some meta-analytic procedures that are appropriate when there are only a few studies. We provide formulas for calculating effect sizes and converting effect sizes from one metric to another (e.g., from Cohen's d to r), as well as annotated Excel spreadsheets and a step-by-step guide on how to conduct a simple meta-analysis. A series of related studies can be strengthened and better understood if accompanied by a mini meta-analysis.
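The kind of calculation the abstract describes can be sketched in a few lines of code rather than a spreadsheet. The following is a minimal illustration, not the authors' own procedure: it uses the standard conversion between Cohen's d and Pearson r under the assumption of two equal-sized groups, and a simple fixed-effect (inverse-variance weighted) average of d across studies using the usual large-sample variance approximation. The function names (`d_to_r`, `r_to_d`, `fixed_effect_mean_d`) are illustrative, not from the article.

```python
import math

def d_to_r(d):
    """Convert Cohen's d to Pearson r (assumes two equal-sized groups)."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r):
    """Convert Pearson r back to Cohen's d (same equal-n assumption)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def fixed_effect_mean_d(studies):
    """Inverse-variance weighted mean of Cohen's d across a few studies.

    `studies` is a list of (d, n1, n2) tuples; the variance of d uses
    the common large-sample approximation
    var(d) ~= (n1 + n2)/(n1 * n2) + d**2 / (2 * (n1 + n2)).
    """
    num = den = 0.0
    for d, n1, n2 in studies:
        var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
        weight = 1.0 / var
        num += weight * d
        den += weight
    return num / den

# Hypothetical example: three small related studies from one manuscript.
studies = [(0.40, 20, 20), (0.60, 25, 25), (0.20, 30, 30)]
overall_d = fixed_effect_mean_d(studies)
```

Because the weights are inverse variances, larger studies pull the combined estimate toward their own effect sizes, which is exactly why even a three-study "mini meta" can be more informative than eyeballing the individual results.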
While the gender gap in mathematics and science has narrowed, men pursue these fields at a higher rate than women. In this study, 165 men and women at a university in the northeastern United States completed implicit and explicit measures of science stereotypes (association between male and science, relative to female and humanities), and gender identity (association between the concept "self" and one's own gender, relative to the concept "other" and the other gender), and reported plans to pursue science-oriented and humanities-oriented academic programs and careers. Although men were more likely than women to plan to pursue science, this gap in students' intentions was completely accounted for by implicit stereotypes. Moreover, implicit gender identity moderated the relationship between women's stereotypes and their academic plans, such that implicit stereotypes only predicted plans for women who strongly implicitly identified as female. These findings illustrate how an understanding of implicit cognitions can illuminate between-group disparities as well as within-group variability in science pursuit.
Sexual harassment is pervasive and has adverse effects on its victims, yet perceiving sexual harassment is wrought with ambiguity, making harassment difficult to identify and understand. Eleven preregistered, multimethod experiments (total N = 4,065 participants) investigated the nature of perceiving sexual harassment by testing whether perceptions of sexual harassment and its impact are facilitated when harassing behaviors target those who fit with the prototype of women (e.g., those who have feminine features, interests, and characteristics) relative to those who fit less well with this prototype. Studies A1–A5 demonstrate that participants’ mental representation of sexual harassment targets overlapped with the prototypes of women as assessed through participant-generated drawings, face selection tasks, reverse correlation, and self-report measures. In Studies B1–B4, participants were less likely to label incidents as sexual harassment when they targeted nonprototypical women compared with prototypical women. In Studies C1 and C2, participants perceived sexual harassment claims to be less credible and the harassment itself to be less psychologically harmful when the victims were nonprototypical women rather than prototypical women. This research offers theoretical and methodological advances to the study of sexual harassment through social cognition and prototypicality perspectives, and it has implications for harassment reporting and litigation as well as the realization of fundamental civil rights. For materials, data, and preregistrations of all studies, see https://osf.io/xehu9/.
Psychological science is at an inflection point: The COVID-19 pandemic has exacerbated inequalities that stem from our historically closed and exclusive culture. Meanwhile, reform efforts to change the future of our science are too narrow in focus to fully succeed. In this article, we call on psychological scientists—focusing specifically on those who use quantitative methods in the United States as one context for such conversations—to begin reimagining our discipline as fundamentally open and inclusive. First, we discuss whom our discipline was designed to serve and how this history produced the inequitable reward and support systems we see today. Second, we highlight how current institutional responses to address worsening inequalities are inadequate, as well as how our disciplinary perspective may both help and hinder our ability to craft effective solutions. Third, we take a hard look in the mirror at the disconnect between what we ostensibly value as a field and what we actually practice. Fourth and finally, we lead readers through a roadmap for reimagining psychological science in whatever roles and spaces they occupy, from an informal discussion group in a department to a formal strategic planning retreat at a scientific society.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.