One of the most fundamental issues in academia today is distinguishing legitimate from questionable publishing. While decision-makers and managers treat journals indexed in popular citation indexes such as Web of Science or Scopus as legitimate, they rely on two lists of questionable journals (Beall’s and Cabell’s), one of which has not been updated for several years, to identify so-called predatory journals. The main aim of our study is to reveal how the journals accepted as legitimate by these authorities contribute to the visibility of questionable journals. For this purpose, 65 questionable journals from the social sciences and the 2,338 Web-of-Science-indexed journals that cited them were examined in depth in terms of index coverage, subject categories, impact factors, and self-citation patterns. We analysed 3,234 unique cited papers from questionable journals and 5,964 unique citing papers (6,750 citations of the cited papers) from Web of Science journals. We found that 13% of the questionable papers were cited by WoS journals and that 37% of the citations came from impact-factor journals. The findings show that neither the impact factor of the citing journals nor the size of the cited journals is a good predictor of the number of citations to questionable journals.
This study uses content-based citation analysis to move beyond the simplistic classification of predatory journals. We show that, by analyzing papers not only in terms of the number of citations they receive but also the content of those citations, we can reveal the various roles played by papers published in journals accused of being predatory. To accomplish this, we analyzed the content of 9,995 citances (i.e., citation sentences) from 6,706 papers indexed in the Web of Science Core Collection that cite papers published in so-called “predatory” (or questionable) journals. The analysis revealed that the vast majority of these citances are neutral (97.3%) and that negative citations of articles published in the analyzed journals are almost completely absent (0.8%). Moreover, the countries most frequently mentioned in the citances are India, Pakistan, and Iran, whereas mentions of Western countries are rare. This highlights a geopolitical bias and shows the usefulness of viewing such journals as mislocated centers of scholarly communication. The analyzed journals provide regional data relevant to mainstream scholarly discussions, and the concept of predatory publishing obscures geopolitical inequalities in global scholarly publishing. Our findings also contribute to the further development of content-based citation analysis.
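To make the general technique of content-based citation analysis concrete, the following minimal Python sketch classifies citances as supporting, contrasting, or neutral using a naive cue-phrase heuristic. This is an illustrative assumption, not the study's actual coding procedure (which would typically rely on trained human coders or a trained classifier); the cue lists, function names, and example sentences are all hypothetical.

    # Illustrative sketch: classifying citances (citation sentences) by stance.
    # Cue-phrase lists and examples are hypothetical, not the study's scheme.

    SUPPORTING_CUES = {"consistent with", "confirms", "in line with", "supports"}
    CONTRASTING_CUES = {"contradicts", "in contrast to", "fails to replicate"}

    def classify_citance(citance: str) -> str:
        """Return 'supporting', 'contrasting', or 'neutral' for one citance."""
        text = citance.lower()
        if any(cue in text for cue in CONTRASTING_CUES):
            return "contrasting"
        if any(cue in text for cue in SUPPORTING_CUES):
            return "supporting"
        return "neutral"  # default: most citances carry no explicit stance

    if __name__ == "__main__":
        citances = [
            "Our results are consistent with Smith et al. (2019).",
            "This finding contradicts the earlier survey by Lee (2020).",
            "Several regional studies have examined this question [12, 14].",
        ]
        counts = {"supporting": 0, "contrasting": 0, "neutral": 0}
        for c in citances:
            counts[classify_citance(c)] += 1
        for label, n in counts.items():
            print(f"{label}: {n} ({100 * n / len(citances):.1f}%)")

Run over a real corpus of citances, tallies like these would yield stance distributions of the kind reported above (e.g., 97.3% neutral), although a keyword heuristic this simple would need substantial refinement before being used in earnest.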
This article discusses the use of bibliometric indicators for the assessment of individual academics, focusing on national indicators of productivity in Polish higher education institutions. We analysed whether institutions (N = 768) adopted national templates for their own sets of criteria for intra-institutional evaluations. The study combined an analysis of internal policy documents with semi-structured interviews with deans from institutions in different fields of science. Our findings showed that, despite their high levels of institutional autonomy, the majority of institutions adopted the national criteria for the evaluation of individual academics. The article concludes with recommendations for reducing the negative consequences of the local use of national indicators for the assessment of researchers.