Traditional metric indicators of scientific productivity (e.g., the journal impact factor and the h-index) have been heavily criticized as invalid and as fueling a culture that rewards the quantity, rather than the quality, of a person’s scientific output. There is now widespread demand for concrete alternatives to current academic evaluation practices. In a previous report, we laid out four basic principles for more responsible research assessment in academic hiring and promotion processes (Schönbrodt et al., 2022). The present paper offers a specific proposal for how these principles may be implemented in practice: We argue for broadening the range of relevant research contributions and propose concrete quality criteria (including ready-to-use online templates) for published research articles, data sets, and research software. These criteria are intended primarily for the first phase of the assessment process, where they serve to establish a minimum threshold of methodological rigor that candidates must pass in order to be considered further for hiring and promotion. The second phase of the assessment process then focuses on the actual content of candidates’ research output and necessarily relies on more narrative means of assessment. We hope that this proposal will engage our colleagues in the field in a discussion of how to replace current invalid evaluation criteria with ones that relate more closely to scientific quality.
This target article is part of a theme bundle including open peer commentaries (https://doi.org/10.5964/ps.9227) and a rejoinder by the authors (https://doi.org/10.5964/ps.7961). We point out ten steps that we think will go a long way toward improving personality science. The first five steps focus on fostering consensus regarding (1) research goals, (2) terminology, (3) measurement practices, (4) data handling, and (5) the current state of theory and evidence. The other five steps focus on improving the credibility of empirical research through (6) formal modeling, (7) mandatory preregistration for confirmatory claims, (8) replication as a routine practice, (9) planning for informative studies (e.g., in terms of statistical power), and (10) making data, analysis scripts, and materials openly available. The current, quantity-based incentive structure in academia clearly stands in the way of implementing many of these practices, resulting in a research literature of sometimes questionable utility and/or integrity. As a solution, we propose a more quality-based reward scheme that explicitly weights published research by its Good Science merits. Scientists need to be increasingly rewarded for doing good work, not just lots of work.
The use of journal impact factors and other metric indicators of research productivity, such as the h-index, has been heavily criticized for being invalid for the assessment of individual researchers and for fueling a detrimental “publish or perish” culture. Multiple initiatives call for developing alternatives to existing metrics that better reflect quality (instead of quantity) in research assessment. This report, written by a task force established by the German Psychological Society, proposes how responsible research assessment could be done in the field of psychology. We present four principles of responsible research assessment in hiring and promotion and suggest a two-phase assessment procedure that combines the objectivity and efficiency of indicators with a qualitative, discursive assessment of shortlisted candidates. The main aspects of our proposal are (a) to broaden the range of relevant research contributions to include published data sets and research software, along with research papers, and (b) to place greater emphasis on quality and rigor in research evaluation.
Emotion regulation (ER) can be conceptualized as any process by which individuals modify their emotional experiences, expressions, and physiology (Gross, 1998). Individuals encounter situations every day in which they have to regulate their emotions. To do so, people can choose from a variety of strategies: situation selection, situation modification, attentional deployment, cognitive change, and response modulation.