For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macro-level bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high (R² ≈ .99). There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database used.
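As an illustrative sketch only (the country names and counts below are hypothetical placeholders, not data from the paper), correlations of this kind between country-level counts drawn from two databases can be computed as follows:

```python
# Hypothetical sketch: correlating country-level output counts from two databases.
# The counts are invented placeholders, not figures from the paper.
import numpy as np
from scipy.stats import pearsonr, spearmanr

wos_papers    = np.array([340_000, 75_000, 62_000, 58_000, 31_000])  # papers per country in WoS
scopus_papers = np.array([355_000, 80_000, 60_000, 61_000, 33_000])  # same countries in Scopus

r, _ = pearsonr(wos_papers, scopus_papers)      # Pearson correlation of the counts
rho, _ = spearmanr(wos_papers, scopus_papers)   # rank correlation of the country ranks

print(f"R^2 between databases: {r**2:.3f}")
print(f"Rank correlation (Spearman rho): {rho:.3f}")
```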
Background and research question
The goal of this paper is to examine how the linguistic coverage of the databases used by bibliometricians affects the capacity to benchmark the work of researchers in the social sciences and humanities. We examine the strong link between bibliometrics and Thomson Scientific's databases and review the differences in the production and diffusion of knowledge between the social sciences and humanities (SSH) and the natural sciences and engineering (NSE). This leads to a re-examination of the debate on the coverage of these databases, more specifically in the SSH. The methods section explains how we compared the coverage of Thomson Scientific's databases in the NSE and SSH with Ulrich's extensive database of journals. Our results show a 20 to 25% overrepresentation of English-language journals in Thomson Scientific's databases compared with the list of journals in Ulrich's. The paper concludes that, because of this bias, Thomson Scientific's databases cannot be used in isolation to benchmark the output of countries in the SSH.
This paper examines the genesis of journal impact measures and how their evolution culminated in the journal impact factor (JIF) produced by the Institute for Scientific Information. The paper shows how the various building blocks of the dominant JIF (published in the Journal Citation Reports, JCR) came into being. It argues that these building blocks were all constructed fairly arbitrarily, or for purposes different from those that govern the contemporary use of the JIF. The result is a faulty method, widely open to manipulation by journal editors and to misuse by uncritical parties. The discussion examines some of the solutions offered to the bibliometric and scientific communities in light of the indicator's current widespread use.
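For reference (this definition is not part of the abstract itself), the standard two-year JIF as published in the JCR for a journal in year Y is:

```latex
\mathrm{JIF}_Y \;=\;
\frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}
     {\text{citable items (articles and reviews) the journal published in years } Y-1 \text{ and } Y-2}
```

Both the two-year citation window and the restriction of the denominator to "citable items" are among the conventions whose origins the paper traces.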
sciences, but that its role in the humanities is stagnant and even tended to diminish slightly during the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be taken when using bibliometric indicators that rely only on journal literature.
Introduction

Bibliometrics and other quantitative methods are being used increasingly in research evaluation because of the growing concern about accountability of public spending in science (King, 1987; Treasury Board of Canada Secretariat, 2001). While the validity and appropriateness of bibliometric methods are largely accepted in the natural sciences, the situation is more complex in the case of the social sciences and humanities. Bibliometricians who evaluate research output in the natural sciences can rely on a well-defined set of core journals that contains the most-cited research and is covered comprehensively by both disciplinary and interdisciplinary databases. The same cannot be said about the social sciences and humanities. Hicks (1999, 2004)