This review of the international literature on evaluation systems, evaluation practices and metrics (mis-)uses was written as part of a larger review commissioned by the Higher Education Funding Council for England (HEFCE) to inform their independent assessment of the role of metrics in research evaluation (2014-2015). The literature on evaluation systems, practices and effects of indicator uses is extremely heterogeneous: it comprises hundreds of sources published in different media, spread across disciplines, and with considerable variation in the nature of the evidence. A condensation of the state of the art in relevant research is therefore highly timely. Our review presents the main strands in the literature, with a focus on empirical material about the possible effects of evaluation exercises, 'gaming' of indicators, and strategic responses by scientific communities and others to the requirements of research assessments. To increase its visibility and availability, an adapted and updated version of the review is presented here as a stand-alone piece, with the authorisation of HEFCE.
Over the past decades, science funding has shifted from recurrent block funding towards project funding mechanisms. However, our knowledge of how project funding arrangements influence the organizational and epistemic properties of research is limited. Studying this relation requires a bridge between science policy studies and science studies. Recent studies have analyzed the relation between the affordances and constraints of project grants and the epistemic properties of research. However, the potentially very different affordances and constraints of funding arrangements such as awards, prizes and fellowships have not yet been taken into account. Drawing on eight case studies of funding arrangements in high-performing Dutch research groups, this study compares the institutional affordances and constraints of prizes with those of project grants, and their effects on the organizational and epistemic properties of research. We argue that the prize case studies diverge from project-funded research in three ways: 1) more flexible use, and adaptation of use, of funds during the research process compared to project grants; 2) investments in the larger organization with effects beyond the research project itself; and 3), closely related, greater deviation from epistemic and organizational standards. The increasing dominance of project funding arrangements in Western science systems is therefore argued to be problematic for epistemic and organizational innovation. Funding arrangements that provide resources without requiring scholars to submit a project proposal remain crucial for enabling researchers and research groups to deviate from epistemic and organizational standards.
How are "interesting" research problems identified and made durable by academic researchers, particularly in situations defined by multiple evaluation principles? Building on two case studies of research groups working on rare diseases in academic biomedicine, we explore how group leaders arrange their groups to encompass research problems that latch onto distinct evaluation principles by dividing and combining work into "basicoriented" and "clinical-oriented" spheres of inquiry. Following recent developments in the sociology of (e)valuation comparing academics to capitalist entrepreneurs in pursuit of varying kinds of worth, we argue that the metaphor of the portfolio is helpful in analyzing how group leaders manage these different research lines as "alternative investment options" from which they were variously hoping to capitalize. We argue portfolio development is a useful concept for exploring how group leaders fashion "entrepreneurial" practices to manage and exploit tensions between multiple matrices of (e)valuation and conclude with suggestions for how this vocabulary can further extend analysis of epistemic capitalism within science and technology studies.
This paper presents a new method for identifying scholars who have a Twitter account, based on bibliometric data from the Web of Science (WoS) and Twitter data from Altmetric.com. The method reliably identifies matches between Twitter accounts and scholarly authors. It matches elements such as author names, usernames, handles, and URLs, followed by a rule-based scoring system that weights the common occurrence of these elements in the activities of Twitter users and scholars. The method then matches the Twitter accounts against a database of millions of disambiguated bibliographic profiles from WoS. This paper describes the implementation and validation of the matching method and verifies it through a precision-recall analysis. We also explore the geographical, disciplinary, and demographic variations in the distribution of scholars matched to a Twitter account. This approach represents a step forward in the development of more advanced forms of social media studies of science, opening an important door for studying the interactions between science and social media in general, and the activities of scholars on Twitter in particular.
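The abstract above describes the matching logic only in prose. As a rough illustration of how such a rule-based scoring system could be assembled, the following is a minimal sketch in Python under stated assumptions: the field names, weights, and acceptance threshold are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a rule-based author-Twitter matching score.
# The weights, thresholds, and field names are illustrative assumptions,
# not the scoring rules actually used in the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TwitterAccount:
    handle: str                 # e.g. "@jdoe_sci"
    display_name: str           # e.g. "Jane Doe"
    profile_url: Optional[str]  # URL listed on the Twitter profile, if any


@dataclass
class AuthorProfile:
    full_name: str              # disambiguated WoS name, e.g. "Doe, Jane"
    homepage: Optional[str]     # institutional or personal page, if known


def score_match(account: TwitterAccount, author: AuthorProfile) -> float:
    """Return a heuristic score; higher means more likely the same person."""
    surname, _, given = author.full_name.partition(",")
    surname, given = surname.strip().lower(), given.strip().lower()
    name = account.display_name.lower()
    handle = account.handle.lstrip("@").lower()

    score = 0.0
    if surname and surname in name and given and given in name:
        score += 2.0   # full name appears in the display name
    elif surname and surname in name:
        score += 1.0   # surname only
    if surname and surname in handle:
        score += 1.0   # surname embedded in the handle
    if (account.profile_url and author.homepage
            and account.profile_url.rstrip("/") == author.homepage.rstrip("/")):
        score += 3.0   # identical self-reported URL

    return score


# A pair is accepted as a match when its score reaches a tuned threshold.
THRESHOLD = 3.0


def is_match(account: TwitterAccount, author: AuthorProfile) -> bool:
    return score_match(account, author) >= THRESHOLD
```

In practice, such a threshold would be tuned on a manually labeled validation set, which is also the kind of data against which a precision-recall analysis, as described in the abstract, would be computed.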