Leading scholars and publishers from ten countries have agreed a definition of predatory publishing that can protect scholarship. It took 12 hours of discussion, 18 questions and 3 rounds to reach.

When 'Jane' turned to alternative medicine, she had already exhausted radiotherapy, chemotherapy and other standard treatments for breast cancer. Her alternative-medicine practitioner shared an article about a therapy involving vitamin infusions. To her and her practitioner, it seemed to be authentic grounds for hope. But when Jane showed the article to her son-in-law (one of the authors of this Comment), he realized it came from a predatory journal, meaning its promise was doubtful and its validity unlikely to have been vetted.

Predatory journals are a global threat. They accept articles for publication, along with authors' fees, without performing promised quality checks for issues such as plagiarism or ethical approval. Naive readers are not the only victims. Many researchers have been duped into submitting to predatory journals, in which their work can be overlooked. One study that focused on 46,000 researchers based in Italy found that about 5% of them published in such outlets [1]. A separate analysis suggests that predatory publishers collect millions of dollars in publication fees that are ultimately paid out by funders such as the US National Institutes of Health (NIH) [2].

One barrier to combating predatory publishing is, in our view, the lack of an agreed definition. By analogy, consider the historical criteria for deciding whether an abnormal bulge in the aorta, the largest artery in the body, could be deemed an aneurysm, a dangerous ballooning at risk of rupture.
We aimed to develop an in-depth understanding of quality criteria for scholarly journals by analyzing journals and publishers indexed in blacklists of predatory journals and whitelists of legitimate journals, together with the lists' inclusion criteria. To quantify content overlaps between blacklists and whitelists, we employed the Jaro-Winkler string metric. To identify the topics addressed by the lists' inclusion criteria and to derive their underlying concepts, we conducted qualitative coding. We included two blacklists (Beall's and Cabells Scholarly Analytics') and two whitelists (the Directory of Open Access Journals' and Cabells Scholarly Analytics'). The number of journals per list ranged from 1,404 to 12,357, and the number of publishers ranged from 473 to 5,638. Seventy-two journals and 42 publishers were included in both a blacklist and a whitelist. Seven themes were identified in the inclusion criteria: (i) peer review; (ii) editorial services; (iii) policy; (iv) business practices; (v) publishing, archiving, and access; (vi) website; and (vii) indexing and metrics. Business practices accounted for almost half of the blacklists' criteria, whereas whitelists gave more emphasis to criteria related to policy. Criteria could be allocated to four concepts: (i) transparency, (ii) ethics, (iii) professional standards, and (iv) peer review and other services. Whitelists gave most weight to transparency, whereas blacklists focused on ethics and professional standards. Whitelist criteria were easier to verify than those used in blacklists. Both types of list gave little emphasis to the quality of peer review. Overall, the results show that journals and publishers overlap between blacklists and whitelists, that the lists differ in their criteria for quality and in the weight given to different dimensions of quality, and that aspects that are central but difficult to verify receive little attention.

IMPORTANCE Predatory journals are spurious scientific outlets that charge fees for editorial and publishing services that they do not provide. Their lack of quality assurance of published articles increases the risk that unreliable research is published and thus jeopardizes the integrity and credibility of research as a whole. There is increasing awareness of the risks associated with predatory publishing, but efforts to address the problem are hampered by the lack of a clear definition of predatory outlets. Blacklists of predatory journals and whitelists of legitimate journals have been developed but not comprehensively examined. By systematically analyzing these lists, this study provides insights into their utility and delineates the different notions of quality and legitimacy used in scholarly publishing. It thereby contributes to a better understanding of the relevant concepts and provides a starting point for the development of a robust definition of predatory journals.
Background. Despite growing awareness of predatory publishing and research on its market characteristics, the defining attributes of fraudulent journals remain controversial. We aimed to develop a better understanding of quality criteria for scholarly journals by analysing journals and publishers indexed in blacklists of predatory journals and whitelists of legitimate journals, together with the lists' inclusion criteria.

Methods. We searched for blacklists and whitelists in early 2018. Lists that included journals across disciplines were eligible. We used a mixed-methods approach, combining quantitative and qualitative analyses. To quantify overlaps between lists in terms of indexed journals and publishers, we employed the Jaro-Winkler string metric and Venn diagrams. To identify the topics addressed by the lists' inclusion criteria and to derive their broader conceptual categories, we used a qualitative coding approach.

Results. Two blacklists (Beall's and Cabell's) and two whitelists (DOAJ and Cabell's) were eligible. The number of journals per list ranged from 1,404 to 12,357 and the number of publishers from 473 to 5,638. Seventy-three journals and 42 publishers were included in both a blacklist and a whitelist. A total of 198 inclusion criteria were examined. Seven themes were identified: (i) peer review, (ii) editorial services, (iii) policy, (iv) business practices, (v) publishing, archiving and access, (vi) website and (vii) indexing and metrics. Business practices accounted for almost half of the blacklists' criteria, whereas whitelists gave more emphasis to criteria related to policy and guidelines. Criteria were grouped into four broad concepts: (i) transparency, (ii) ethics, (iii) professional standards and (iv) peer review and other services. Whitelists gave more weight to transparency, whereas blacklists focused on ethics and professional standards. The criteria included in whitelists were easier to verify than those used in blacklists. Both types of list gave relatively little emphasis to the quality of peer review.

Conclusions. There is overlap between the journals and publishers included in blacklists and whitelists. Blacklists and whitelists differ in their criteria for quality and the weight given to different dimensions of quality. Aspects that are central but difficult to verify receive insufficient attention.
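Both abstracts name the Jaro-Winkler string metric as the tool used to detect overlapping journal and publisher names across blacklists and whitelists. As a rough illustration of how such fuzzy title matching could work, here is a minimal, self-contained Python sketch; the journal titles, the 0.9 similarity threshold and the function names are illustrative assumptions, not details taken from the study.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity between two strings (0.0 = no match, 1.0 = identical)."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters count as matches if equal and no farther apart than this window.
    window = max(0, max(len1, len2) // 2 - 1)
    matched1 = [False] * len1
    matched2 = [False] * len2
    matches = 0
    for i, ch in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == ch:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters that appear in a different order.
    k = transpositions = 0
    for i in range(len1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2 +
            (matches - transpositions) / matches) / 3.0


def jaro_winkler(s1: str, s2: str, prefix_scale: float = 0.1) -> float:
    """Jaro-Winkler similarity: boosts the Jaro score for a shared prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * prefix_scale * (1.0 - j)


# Hypothetical journal titles; a high score flags a possible overlap between lists.
blacklist = ["journal of advanced clinical research", "international journal of medicine"]
whitelist = ["journal of advanced clinical research", "journal of internal medicine"]

for b in blacklist:
    for w in whitelist:
        score = jaro_winkler(b, w)
        if score >= 0.9:  # threshold chosen for illustration only
            print(f"possible overlap ({score:.2f}): {b!r} ~ {w!r}")
```

In practice, titles would be normalized (lower-cased, punctuation stripped) before comparison, and near-matches above the chosen threshold would be reviewed manually rather than counted automatically.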