One of the most fundamental issues in academia today is distinguishing between legitimate and questionable publishing. While decision-makers and managers treat journals indexed in popular citation indexes such as Web of Science or Scopus as legitimate, they rely on two lists of questionable journals (Beall’s and Cabell’s), the former of which has not been updated for several years, to identify so-called predatory journals. The main aim of our study is to reveal how journals accepted as legitimate by the authorities contribute to the visibility of questionable journals. For this purpose, 65 questionable journals from the social sciences and 2338 Web-of-Science-indexed journals that cited them were examined in depth in terms of index coverage, subject categories, impact factors, and self-citation patterns. We analysed 3234 unique cited papers from questionable journals and 5964 unique citing papers (6750 citations of the cited papers) from Web of Science journals. We found that 13% of the questionable papers were cited by WoS journals and that 37% of the citations came from impact-factor journals. The findings show that neither the impact factor of the citing journals nor the size of the cited journals is a good predictor of the number of citations to questionable journals.
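To make the citation-share arithmetic above concrete, here is a minimal sketch of the kind of tally involved; the records, field names, and journal names are hypothetical placeholders, not the study’s actual data or matching rules.

```python
from collections import Counter

# Hypothetical citation records: one entry per citation from a
# WoS-indexed journal to a paper in a questionable journal.
citations = [
    {"citing_journal": "Journal A", "has_impact_factor": True},
    {"citing_journal": "Journal B", "has_impact_factor": False},
    {"citing_journal": "Journal A", "has_impact_factor": True},
    {"citing_journal": "Journal C", "has_impact_factor": False},
]

total = len(citations)
with_if = sum(c["has_impact_factor"] for c in citations)
print(f"Citations from impact-factor journals: {with_if / total:.0%} of {total}")

# Per-journal citation counts: the raw material for asking whether
# impact factor or journal size predicts citations to questionable papers.
per_journal = Counter(c["citing_journal"] for c in citations)
print(per_journal.most_common())
```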
This article discusses the open-identity label, i.e., the practice of disclosing reviewers’ names in published scholarly books, which is common in Central and Eastern European countries. The study’s objective is to verify whether the open-identity label is a type of peer-review label (like those used in Finland and Flanders, i.e., the Flemish part of Belgium) and, as such, whether it can serve as a delineation criterion in the various systems used to evaluate scholarly publications. We conducted a two-phase sequential explanatory study. In the first phase, interviews with 20 of the 40 largest Polish publishers of scholarly books were conducted to investigate how Polish publishers control peer reviews and whether the open-identity label can be used to identify peer-reviewed books. In the second phase, two questionnaires were used to analyse perceptions of peer review and open-identity labelling among authors (n = 600) and reviewers (n = 875) of books published by these 20 publishers. The integrated results allowed us to verify publishers’ claims about their peer-review practices. Our findings reveal that publishers do control peer reviews, by providing assessment criteria to reviewers and sending reviews to authors. Publishers rarely ask reviewers for permission to disclose their names, but it is evident to reviewers that this disclosure is part of peer reviewing. This study also shows that only the names of reviewers who accepted manuscripts for publication are disclosed. Most importantly, our analysis shows that the open-identity label used by Polish publishers is a type of peer-review label like those used in Flanders and Finland, and as such it can be used to identify peer-reviewed scholarly books.
This article investigates the structure of 935 conferences organised by OMICS and 296 conferences organised by WASET from 2015 through 2017. The existing literature characterises these events as so-called predatory or questionable conferences, i.e., low-quality academic meetings. We analyse 40,224 presenters, focusing on top-ranked institutions according to three global university rankings (the Academic Ranking of World Universities, the Times Higher Education World University Rankings, and the QS World University Rankings). Our analysis shows that participants in OMICS events were primarily researchers from the United States, India, the United Kingdom, and China, whereas WASET attracted more researchers from Turkey, India, and South Korea. We found that 11.0% of OMICS and 5.7% of WASET presenters were affiliated with institutions ranked in the top 100 of at least one of the three rankings. We also found that both companies mostly organised conferences in cities that are top tourist destinations.
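As an illustration of the affiliation check described above, here is a sketch under the assumption that a presenter counts as top-ranked if their institution appears in the top 100 of at least one ranking; the ranking sets and presenter records are invented for the example.

```python
# Hypothetical top-100 sets for the three rankings (ARWU, THE, QS).
top100 = {
    "ARWU": {"MIT", "Stanford University"},
    "THE": {"MIT", "University of Oxford"},
    "QS": {"MIT", "ETH Zurich"},
}

# Hypothetical presenter records with normalised institution names.
presenters = [
    {"name": "P1", "institution": "MIT"},
    {"name": "P2", "institution": "Unranked University"},
    {"name": "P3", "institution": "ETH Zurich"},
]

# A presenter counts if affiliated with a top-100 institution
# in at least one of the three rankings.
any_top100 = set().union(*top100.values())
share = sum(p["institution"] in any_top100 for p in presenters) / len(presenters)
print(f"Presenters affiliated with a top-100 institution: {share:.1%}")
```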
This article discusses the formula for a new Polish bibliometric indicator, the Polish Impact Factor (Polski Współczynnik Wpływu), from the perspective of the humanities. We examined two prestigious Polish humanities journals (“Pamiętnik Literacki” and “Diametros – An Online Journal of Philosophy”) to test the validity of the assumptions underlying the Polish Impact Factor. We analysed all articles published in 2004–2014 (N = 850 and N = 555, respectively) and all works cited in them (N = 21,805 and N = 8,298, respectively). In interpreting the results, we assumed that citation cultures differ across fields of science. The results show that the formula for the Polish Impact Factor disregards the sources most frequently cited in the humanities, i.e., books and book chapters. Moreover, many citations will not be counted in the calculation of the Polish Impact Factor because of their age, as only works published within the preceding five years are taken into account. We examined the age of the cited works and showed that the majority of citations are older than five years (84.2% and 73.2%, respectively). Our analysis demonstrates that the Polish Impact Factor is not an appropriate tool for the bibliometric evaluation of humanities journals in Poland. The article concludes with a discussion of how this new bibliometric indicator could be improved.
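A minimal sketch of the citation-age filter implied by the five-year window, assuming the window is measured as the difference between the citing and cited publication years; the reference years below are invented for illustration.

```python
# Hypothetical publication years of works cited by an article from 2014.
citing_year = 2014
cited_years = [2013, 2010, 2005, 1998, 1987, 2012]

WINDOW = 5  # the Polish Impact Factor counts only works at most five years old
countable = [y for y in cited_years if citing_year - y <= WINDOW]
share_too_old = 1 - len(countable) / len(cited_years)
print(f"Cited works excluded as older than {WINDOW} years: {share_too_old:.1%}")
```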
This article discusses the use of bibliometric indicators for the assessment of individual academics. We focused on national indicators for the assessment of productivity in Polish higher education institutions and analysed whether institutions (N = 768) adopted national templates for their own sets of criteria in intra-institutional evaluations. The study combined an analysis of internal policy documents with semi-structured interviews with deans from institutions in different fields of science. Our findings showed that, despite their high levels of institutional autonomy, the majority of institutions adopted the national criteria for the evaluation of individual academics. The article concludes with recommendations for reducing the negative consequences of the local use of national indicators in the assessment of researchers.