Citations are increasingly used as performance indicators in research policy and within the research system. Usually, citations are assumed to reflect the impact of the research or its quality. What is the justification for these assumptions, and how do citations relate to research quality? These and similar issues have been addressed through several decades of scientometric research. This article provides an overview of some of the main issues at stake, including theories of citation and the interpretation and validity of citations as performance measures. Research quality is a multidimensional concept, in which plausibility/soundness, originality, scientific value, and societal value are commonly perceived as key characteristics. The article investigates how citations may relate to these various research quality dimensions. It is argued that citations reflect aspects related to scientific impact and relevance, although with important limitations. By contrast, there is no evidence that citations reflect other key dimensions of research quality. Hence, an increased use of citation indicators in research evaluation and funding may imply less attention to these other dimensions, such as solidity/plausibility, originality, and societal value.
When distributing grants, research councils use peer expertise as a guarantee for supporting the best projects. However, there are no clear norms for assessments, and there may be large variation in which criteria reviewers emphasize, and how those criteria are weighted. The determinants of peer review may therefore be accidental, in the sense that who reviews what research, and how reviews are organized, may determine outcomes. This paper deals with how the review process affects the outcome of grant review. The case study considers the procedures of the Research Council of Norway, which practises several different grant-review models and is consequently especially suited for exploring the implications of different models. Data sources are direct observation of panel meetings, interviews with panel members, and study of applications and review documents. A central finding is that rating scales and budget restrictions matter more than review guidelines for the kinds of criteria the reviewers apply. The decision-making methods applied by the review panels when ranking proposals are found to have substantial effects on the outcome. Some ranking methods tend to favour uncontroversial and safe projects, whereas other methods give better chances for scholarly pluralism and controversial research.
After more than two decades of external quality assurance, there is increasing interest in questions concerning the impact and effects of this activity. Following an external evaluation of NOKUT, the Norwegian quality assurance agency, this article studies the impact of external quality assurance in detail by analysing quantitative and qualitative feedback from those exposed to evaluations conducted by this agency. The study provides information on the impact of the various methods used, on how impact is perceived by students, staff, and management within universities and colleges, and finally on the areas in which impact may be identified. A major finding is that impacts are perceived as quite similar regardless of the evaluation method.