This review of the international literature on evaluation systems, evaluation practices and the (mis)use of metrics was written as part of a larger review commissioned by the Higher Education Funding Council for England (HEFCE) to inform its independent assessment of the role of metrics in research evaluation (2014-2015). The literature on evaluation systems, evaluation practices and the effects of indicator use is extremely heterogeneous: it comprises hundreds of sources published in different media, spread across disciplines, and with considerable variation in the nature of the evidence. A condensation of the state of the art in relevant research is therefore highly timely. Our review presents the main strands in the literature, focusing on empirical material about the possible effects of evaluation exercises, the 'gaming' of indicators, and strategic responses by scientific communities and others to the requirements of research assessments. To increase its visibility and availability, an adapted and updated version of the review is presented here as a stand-alone document, with the authorisation of HEFCE.
This document presents the Bonn PRINTEGER Consensus Statement: Working with Research Integrity—Guidance for research performing organisations. The statement aims to complement existing instruments by focusing specifically on institutional responsibilities for strengthening integrity, and it takes into account the daily challenges and organisational contexts of most researchers. It intends to make research integrity challenges recognisable from the work-floor perspective and provides concrete advice on organisational measures to strengthen integrity. The statement, concluded on 7 February 2018, provides guidance on the following key issues:
- Providing information about research integrity
- Providing education, training and mentoring
- Strengthening a research integrity culture
- Facilitating open dialogue
- Wise incentive management
- Implementing quality assurance procedures
- Improving the work environment and work satisfaction
- Increasing transparency of misconduct cases
- Opening up research
- Implementing safe and effective whistle-blowing channels
- Protecting the alleged perpetrators
- Establishing a research integrity committee and appointing an ombudsperson
- Making explicit the applicable standards for research integrity
The range and types of performance metrics have recently proliferated in academic settings, with bibliometric indicators being particularly visible examples. One field that has traditionally been hospitable to such indicators is biomedicine. Here the relative merits of bibliometrics are widely discussed, with debates often portraying them as heroes or villains. Despite a plethora of controversies, one of the most widely used indicators in this field is said to be the Journal Impact Factor (JIF). In this article we argue that much of the current debate around researchers’ uses of the JIF in biomedicine can be classed as ‘folk theories’: explanatory accounts told among a community that seldom (if ever) get systematically checked. Such accounts rarely disclose how knowledge production itself becomes more or less consolidated around the JIF. Using ethnographic materials from different research sites in Dutch University Medical Centers, this article sheds new empirical and theoretical light on how performance metrics variously shape biomedical research on the ‘shop floor.’ Our detailed analysis underscores the need for further research into the constitutive effects of evaluative metrics.
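As context for the abstract above: the JIF is conventionally computed over a two-year citation window. The following is a sketch of that standard formulation, with the symbols C and N introduced here purely for illustration:

% Journal Impact Factor of a journal for year Y (standard two-year window)
% C_Y(y): citations received in year Y by items the journal published in year y
% N_y:    number of citable items the journal published in year y
\[
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

Which items count as 'citable' in the denominator is itself contested, a frequently noted source of the kind of contention around the indicator that the article describes.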