Web of Science (WoS) is the world's oldest, most widely used, and most authoritative database of research publications and citations. Built on the Science Citation Index, founded by Eugene Garfield in 1964, it has expanded its selective, balanced, and complete coverage of the world's leading research to around 34,000 journals today. WoS supports a wide range of use cases, from daily search and discovery by researchers worldwide to the supply of analytical data sets and specialized access to raw data for bibliometric partners. This long- and well-established partner network enables the Institute for Scientific Information (ISI) to continue working closely with bibliometric groups around the world, to the benefit of both the community and the services the company provides to researchers and analysts.
This article reviews the nature and use of the journal impact factor and other common bibliometric measures for assessing research in the sciences and social sciences, based on data compiled by Thomson Reuters. Journal impact factors are frequently misused to assess the influence of individual papers and authors, a use for which they were never intended. Thomson Reuters also employs other measures of journal influence, which are contrasted here with the impact factor. Finally, the author comments on the proper use of citation data in general, often as a supplement to peer review. This review may help government policymakers, university administrators, and individual researchers become better acquainted with the potential benefits and limitations of bibliometrics in the evaluation of research.
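For readers unfamiliar with how the metric is computed, the standard two-year impact factor for year Y divides citations received in Y to a journal's content from Y-1 and Y-2 by the number of citable items the journal published in those two years. A minimal sketch in Python (the function name and the example figures are hypothetical):

```python
def two_year_impact_factor(cites_to_prev_two_years: int,
                           citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items (articles and reviews) published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# Example: 1,200 citations in 2023 to papers from 2021-2022,
# across 400 citable items published in those two years.
jif = two_year_impact_factor(1200, 400)
print(jif)  # 3.0
```

Note that the numerator counts citations to all of the journal's content while the denominator counts only citable items, one of several technical details that make the measure easy to misread at the level of individual papers.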
Many academic analyses of good practice in the use of bibliometric data address only technical aspects and fail to appreciate user requirements, expectations, and actual practice. Bibliometric indicators are rarely the only evidence put before any user group. In the present state of knowledge, it is more important to make quantitative evaluation simple, transparent, and readily understood than to focus unduly on precision, accuracy, or scholarly notions of purity. We discuss how the interpretation of 'performance' drawn from accurate but summary bibliometrics can change when the same dataset is iteratively deconstructed and visualized. This matters for research managers with limited resources: investment decisions can easily go awry at governmental, funding-program, and institutional levels. By exploring selected real-life data samples, we also show how the specific composition of each dataset can influence interpretive outcomes.
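As an illustration of why deconstructing a summary statistic matters, the sketch below (with invented citation counts) compares two publication sets whose mean citation rates are identical but whose underlying distributions tell very different stories:

```python
import statistics

# Hypothetical citation counts for two sets of ten papers each.
group_a = [12, 11, 13, 12, 10, 14, 11, 13, 12, 12]   # consistent performance
group_b = [0, 1, 0, 2, 1, 0, 1, 0, 2, 113]           # one outlier paper

for name, cites in [("A", group_a), ("B", group_b)]:
    print(name,
          "mean:", statistics.mean(cites),
          "median:", statistics.median(cites),
          "max:", max(cites))

# Both means equal 12, yet group B's apparent 'performance' rests on a
# single highly cited paper: deconstructing the summary statistic changes
# the interpretation entirely.
```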
Citations can be an indicator of publication significance, utility, attention, visibility, or short-term impact, but analysts need to confirm whether a high citation count for an individual genuinely reflects influence or is a consequence of extraordinary, even excessive, self-citation. It has recently been suggested that research performance is increasingly misrepresented by individuals who self-cite inordinately to inflate their scores and win rewards. In this paper we consider self-referencing and self-citing, describe the typical shape of self-citation patterns for carefully curated publication sets authored by 3,517 Highly Cited Researchers, and quantify the variance in the distribution of self-citation rates within and between all 21 Essential Science Indicators fields. We describe both a generic level of median self-referencing rates, common to most fields, and a graphical, distribution-driven assessment of excessive self-citation that demarcates a threshold without depending on statistical tests or percentiles (since for some fields all values fall within a central 'normal' range). We describe this graphical procedure for identifying exceptional self-citation rates but emphasize the necessity of expert interpretation of the citation profiles of specific individuals, particularly in fields with atypical self-citation patterns.
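The paper's own threshold is graphical and distribution-driven rather than test-based; as a rough stand-in, the sketch below computes self-citation rates for a hypothetical cohort and flags outliers with a conventional Tukey-style fence. The function names, the cohort values, and the choice of fence are all assumptions for illustration, not the authors' method:

```python
def self_citation_rate(total_citations: int, self_citations: int) -> float:
    """Fraction of an author's citations that come from their own papers."""
    return self_citations / total_citations if total_citations else 0.0

def tukey_upper_fence(rates: list[float]) -> float:
    """Flag rates above Q3 + 1.5*IQR (crude index-based quartiles).
    NOTE: a conventional stand-in; the paper uses a graphical,
    distribution-driven threshold instead of a statistical rule."""
    ordered = sorted(rates)
    n = len(ordered)
    q1, q3 = ordered[n // 4], ordered[(3 * n) // 4]
    return q3 + 1.5 * (q3 - q1)

# Hypothetical field cohort: most rates cluster near a ~10% median,
# with one researcher self-citing far more heavily.
rates = [0.08, 0.10, 0.09, 0.12, 0.11, 0.10, 0.07, 0.13, 0.09, 0.46]
fence = tukey_upper_fence(rates)
flagged = [r for r in rates if r > fence]
print(f"upper fence = {fence:.2f}, flagged = {flagged}")
# upper fence = 0.17, flagged = [0.46]
```

Any researcher flagged this way would still require expert review of their citation profile, as the abstract above stresses, since fields with atypical self-citation patterns can produce legitimate outliers.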
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.