The purpose of this paper is to give a macro-picture of collaboration in research groups and networks across all academic fields in Norwegian research universities, and to examine the relative importance of membership in groups and networks for individual publication output. To our knowledge, this is a new approach; it provides valuable information on collaborative patterns in a particular national system but is of clear relevance to other national university systems as well. At the system level, conducting research in groups and in networks is equally important, but there are large differences between academic fields. The research group is clearly most important in the field of medicine and health, while undertaking research in an international network is most important in the natural sciences. Membership in a research group and active participation in international networks are likely to enhance publication productivity and the quality of research.
Studies of universities' external engagement have found that individual and discipline-level characteristics explain most of the participation in different kinds of external engagement activities, whereas characteristics at the institutional level are often not studied explicitly. In this paper, we analyse how five different forms of external engagement are influenced by a range of factors, applying a multilevel regression approach to a combined dataset comprising a survey of 4,400 Norwegian academics and detailed data on the 31 higher education institutions where they are employed. The goal is to test whether university-level characteristics matter for participation in different kinds of external engagement when we also control for the influence of individual and discipline-level factors. We find that university-level variables explain few of the differences in external engagement among academic staff in general. Still, there are important nuances, and the multilevel analysis reveals a complex picture of influences on the forms of external engagement among academics. Participation in consultancy and commercialization is particularly influenced by university-level factors.
This paper investigates the use of metrics in the recruitment of professors to academic positions. We analyzed confidential reports containing candidate evaluations in economics, sociology, physics, and informatics at the University of Oslo between 2000 and 2017. These unique data enabled us to explore how metrics were applied in these evaluations in relation to other assessment criteria. Although metrics were important evaluation criteria, they were seldom the most salient criteria in candidate evaluations. Moreover, metrics were applied chiefly as a screening tool to narrow the pool of eligible candidates, not as a replacement for peer review. Contrary to the literature suggesting an escalation of metrics, we foremost detected stable assessment practices with only a modestly increased reliance on metrics. In addition, the use of metrics proved strongly discipline-dependent, with each discipline applying metrics in line with its own evaluation culture. These robust evaluation practices provide an empirical example of how core university processes are chiefly characterized by path-dependency mechanisms, and only moderately by isomorphism. Additionally, the discipline-dependent spread of metrics offers a theoretical illustration of how travelling standards such as metrics are not simply diffused but rather translated to fit the local context, resulting in heterogeneity and context-dependent spread.
This article explores the significance for academic staff members of research groups established and formalised as part of research strategies at university faculties. It also examines levels of participation in such groups and the perceived impact of group-related activities on research quality and researcher training. The study is based on data from a survey and in-depth interviews with academic staff at Norwegian universities, as well as document reviews. It provides evidence that formalised research groups can have a positive effect on the quality of individual research and on researcher training. The study reveals significant differences between fields of science with regard to the importance of such groups for research activities and quality. Nevertheless, it finds that the groups contribute to more institution-based research, including in fields where research has primarily been conducted on an individual basis, such as the humanities. These groups cannot simply be understood as a legitimating device for scientific communities responding to changing funding and steering criteria; rather, they manifest themselves as modes of academic work that serve as a supplement to, rather than a substitute for, other forms of cooperation.
Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed. We provide comparisons across academic fields (cardiology, economics, and physics) and across contexts for assessing research (identifying the best research in the respondents' field, assessing grant proposals, and assessing candidates for positions). A minority of the researchers responding to the survey reported that metrics were reasons for considering something to be the best research. Still, a large majority in all the studied fields indicated that metrics were important or partly important in their review of grant proposals and assessments of candidates for academic positions. In these contexts, the citation impact of the publications and, particularly, the number of publications were emphasized. These findings hold across all fields analysed, although the economists relied more on productivity measures than the cardiologists and the physicists. Moreover, reviewers with high scores on bibliometric indicators seemed to adhere to metrics in their assessments more frequently than other reviewers. Hence, when planning and using peer review, one should be aware that reviewers, in particular those who score high on metrics, find metrics to be a good proxy for the future success of projects and candidates, and rely on metrics in their evaluation procedures despite the concerns in scientific communities about the use and misuse of publication metrics.