[Survey chart] Is there a reproducibility crisis? 52% Yes, a significant crisis; 38% Yes, a slight crisis; 3% No, there is no crisis; 7% Don't know (1,576 researchers surveyed).
More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.
Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology [1] and cancer biology [2], found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.
The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. "At the current time there is no consensus on what reproducibility is or should be." But just recognizing that is a step forward, he says. "The next step may be identifying what is the problem and to get a consensus."
Interpreting a failure to replicate is complicated by the fact that the failure could be due to the original finding being a false positive, unrecognized moderating influences between the original and replication procedures, or faulty implementation of the procedures in the replication. One strategy to maximize replication quality is involving the original authors in study design. We (N = 21 labs, N = 2,220 participants) experimentally tested whether original author involvement improved replicability of a classic finding from Terror Management Theory (Greenberg et al., 1994). Our results were non-diagnostic of whether original author involvement improves replicability because we were unable to replicate the finding under any conditions. This suggests that the original finding was either a false positive or that the conditions necessary to obtain it are not yet understood or no longer exist. Data, materials, analysis code, preregistration, and supplementary documents can be found on the OSF page: https://osf.io/8ccnw/
We propose a generic model that explains why political systems tend toward certain outcomes. The model identifies possible economic and psychological paths toward change in a metaphorical political economy consisting of farmers, bandits, and soldiers. In addition to economic factors, we also consider how two psychological factors, broadly categorized as group identity and exposure to violence, affect the behavior of the metaphorical agents. We find that although outcomes tend to be similar with and without the psychological influences, those influences accelerate the adjustment process and create additional policy space for interventions. A further methodological contribution of the paper is the use of summary performance measures represented as phase plots, which could be applied to advantage in other system dynamics analyses. Copyright © 2014 System Dynamics Society
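The abstract does not give the model's equations, but the idea of summarizing a simulation with a phase plot can be conveyed with a toy sketch. Everything below is an illustrative assumption, not the paper's model: a minimal three-stock farmers/bandits/soldiers simulation with made-up rates, plotted as a phase plot of two state variables.

# Toy sketch only: all dynamics and parameters are assumptions for illustration;
# the paper's actual system dynamics model is not specified in the abstract.
import numpy as np
import matplotlib.pyplot as plt

def simulate(steps=2000, dt=0.01,
             recruit=0.02,   # assumed rate at which farmers turn to banditry
             deter=0.05,     # assumed rate at which soldiers suppress bandits
             enlist=0.01,    # assumed rate at which farmers enlist as soldiers
             decay=0.03):    # assumed attrition rate of soldiers
    farmers, bandits, soldiers = 0.8, 0.15, 0.05   # initial population shares
    history = []
    for _ in range(steps):
        d_bandits = recruit * farmers * bandits - deter * soldiers * bandits
        d_soldiers = enlist * farmers * bandits - decay * soldiers
        d_farmers = -(d_bandits + d_soldiers)      # shares sum to one
        farmers += dt * d_farmers
        bandits += dt * d_bandits
        soldiers += dt * d_soldiers
        history.append((farmers, bandits, soldiers))
    return np.array(history)

traj = simulate()
# Phase plot: one state variable traced against another, a compact summary of
# where the simulated system heads (e.g., banditry versus security provision).
plt.plot(traj[:, 1], traj[:, 2])
plt.xlabel("bandit share")
plt.ylabel("soldier share")
plt.title("Phase plot of a toy farmers/bandits/soldiers model")
plt.show()

Reading such a plot as a summary performance measure means looking at the shape of the trajectory (spirals, fixed points, limit cycles) rather than at individual time series, which is presumably the kind of compact comparison across scenarios the authors have in mind.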
People typically apply the concept of intentionality to actions directed at achieving desired outcomes. For example, a businessperson might intentionally start a program aimed at increasing company profits. However, if starting the program leads to a foreknown and harmful side effect (e.g., to the environment), the side effect is frequently labeled as intentional even though it was not specifically intended or desired. In contrast, positive side effects (e.g., helping the environment) are rarely labeled as intentional. One explanation of this side-effect effect—that harmful (but not helpful) side effects are labeled as intentional—is that moral considerations influence whether people view actions as intentional or not, implying that bad outcomes are perceived as more intentional than good outcomes. The present research, however, shows that people redefine questions about intentionality to focus on agents’ foreknowledge in harming cases and on their lack of desire or intention in helpful cases, suggesting that the same intentionality question is being interpreted differently as a function of side effect valence. Consistent with this, removing foreknowledge lowers the frequency of labeling harming as intentional without affecting whether people label helping as intentional. Likewise, increasing agents’ desire to help or avoid harming increases rates of labeling helping as intentional without affecting rates of labeling harming as intentional. In summary, divergent decisions to label side effects as intentional or not appear to reflect differences in the criteria people use to evaluate each case, resulting in different interpretations of what questions about intentionality are asking.