Problematisation: In recent years, psychology has been going through a crisis of sorts. Research methods and practices have come under increased scrutiny, and many issues have been identified as contributing to the low replicability and reproducibility of psychological research.
Implications: As a consequence, researchers are increasingly called upon to overhaul and improve their research processes. Various stakeholders within the scientific community are arguing for more openness and rigour in industrial and organisational (I-O) psychological research. A lack of transparency and openness further fuels criticism of the credibility and trustworthiness of I-O psychology, which in turn undermines the evidence-based practices it supports. Furthermore, traditional gatekeepers such as grant agencies, professional societies and journals are adapting their policies in an effort to curtail these trends.
Purpose: The purpose of this opinion paper is therefore to stimulate an open dialogue with the South African Journal of Industrial Psychology (SAJIP) contributing authors, its editorial board and its readership about the challenges associated with the replication crisis in psychology. It further discusses how the identified issues affect I-O psychology and how they could be managed through open science practices and other structural improvements within the SAJIP.
Recommendations: We enumerate several easily implementable open science practices, methodological improvements and editorial policy enhancements that can strengthen credibility and transparency within the SAJIP. Relying on these, we recommend changes to current practices that researchers and the SAJIP can take up to improve the reproducibility and replicability of I-O psychological science.
In this study, the normative ratings of the International Affective Picture System (IAPS; Center for the Study of Emotion and Attention [CSEA], 1995) were compared with ratings from a Bosnian sample. Seventy-two psychology undergraduates from the University of Sarajevo (Bosnia and Herzegovina) rated valence, dominance and arousal for a stratified sample of 60 pictures selected from the IAPS. Reliability coefficients indicate that the self-report ratings are internally consistent. The affective ratings from our sample correlated strongly with the North American ratings (.95, .81 and .91 for valence, arousal and dominance, respectively). Consistent with expectations, mean valence and dominance ratings did not differ significantly between the Bosnian and North American samples. Furthermore, plotting the Bosnian valence and arousal ratings yields a boomerang-shaped distribution similar to that of the North American affective ratings. Taken together, the findings from the Bosnian sample confirm the cross-cultural validity of the IAPS.
Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one factor that affects people's trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult, and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy, and effort is therefore uncorrelated with the perceived quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people's trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.
Many decisions rest on people’s ability to make estimates of unknown quantities. In these judgments, the aggregate estimate of a crowd of individuals is often more accurate than most individual estimates. Remarkably, similar principles apply when multiple estimates from the same person are aggregated, and a key challenge is to identify strategies that improve the accuracy of people’s aggregate estimates. Here, we present the following strategy: combine people’s first estimate with their second estimate, made from the perspective of someone they often disagree with. In five preregistered experiments (N = 6,425 adults; N = 53,086 estimates) with populations from the United States and United Kingdom, we found that this strategy produced more accurate estimates than when people simply made a second guess or made their second estimate from the perspective of someone they often agree with. These results suggest that disagreement, often highlighted for its negative impact, is a powerful tool for producing accurate judgments.