This study examines the nature of citations to articles retracted in 2014. Out of 987 retracted articles found in ScienceDirect, Elsevier's full-text database, we selected all articles that received more than 10 citations between January 2015 and March 2016. Since the retraction year was known for only about 83% of the retracted articles, we concentrated on recent citations, which appeared with certainty after the cited paper was retracted. Overall, we analyzed 238 citing documents and classified the context of each citation as positive, negative, or neutral. Our results show that the vast majority of citations to retracted articles are positive, despite the clear retraction notice on the publisher's platform and regardless of the reason for retraction. Positive citations appear even for articles retracted due to ethical misconduct, data fabrication, and false reporting. In light of these results, we list recommendations for publishers that could help minimize the citation of retracted studies as if they were valid.
This article introduces the Multidimensional Research Assessment Matrix of scientific output. Its central premise is that the choice of metrics to be applied in a research assessment process depends on the unit of assessment, the research dimension to be assessed, and the purposes and policy context of the assessment. An indicator may be highly useful within one assessment process, but less so in another. For instance, publication counts are useful tools to help discriminate between those staff members who are research active and those who are not, but are of little value if active scientists are to be compared with one another according to their research performance. This paper gives a systematic account of the potential usefulness and limitations of a set of 10 important metrics, including altmetrics, applied at the level of individual articles, individual researchers, research groups, and institutions. It presents a typology of research impact dimensions and indicates which metrics are the most appropriate to measure each dimension. It introduces the concept of a "meta-analysis" of the units under assessment, in which metrics are not used as tools to evaluate individual units, but to reach policy inferences regarding the objectives and general setup of an assessment process.
A new methodology is proposed for comparing Google Scholar (GS) with other citation indexes. It focuses on the coverage and citation impact of sources, indexing speed, and data quality, including the effect of duplicate citation counts. The method compares GS with Elsevier's Scopus, and is applied to a limited set of articles published in 12 journals from six subject fields, so its findings cannot be generalized to all journals or fields. The study is exploratory, and hypothesis-generating rather than hypothesis-testing. It confirms findings on source coverage and citation impact obtained in earlier studies. The ratio of GS to Scopus citation counts varies across subject fields between 1.0 and 4.0, and Open Access journals in the sample show higher ratios than their non-OA counterparts. The linear correlation between GS and Scopus citation counts at the article level is high: Pearson's R is in the range of 0.8-0.9. A median Scopus indexing delay of two months compared to GS is largely, though not exclusively, due to missing cited references in articles in press in Scopus. The effect of double citation counts in GS, due to multiple citations with identical or substantially similar metadata, occurs in less than 2 per cent of cases. Pros and cons of article-based and what are termed concept-based citation indexes are discussed.
A bibliometric approach to tracking international scientific migration is explored, based on an analysis of the affiliation countries of authors publishing in peer-reviewed journals indexed in Scopus™. The paper introduces a model that relates base concepts in the study of migration to bibliometric constructs, and discusses the potentialities and limitations of a bibliometric approach with respect to both data accuracy and interpretation. Synchronous and asynchronous analyses are presented for 10 rapidly growing countries and 7 scientifically established countries. Rough error rates of the proposed indicators are estimated. It is concluded that the bibliometric approach is promising, provided that its outcomes are interpreted with care, based on insight into the limits and potentialities of the approach, and combined with complementary data obtained, for instance, from researchers' Curricula Vitae or from survey or questionnaire data. Error rates for units of assessment with indicator values based on sufficiently large numbers are estimated to be well below 10 per cent, but can be expected to vary substantially among countries of origin, especially between Asian and Western countries.
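The core bibliometric construct here, inferring a move when an author's affiliation country changes between successive publications, can be sketched as follows. The publication records, country codes, and the `infer_moves` helper are hypothetical illustrations, not the paper's actual model:

```python
# Hypothetical publication history for one author: (year, affiliation country)
records = [(2008, "CN"), (2010, "CN"), (2012, "US"), (2015, "US"), (2018, "CN")]

def infer_moves(records):
    """Infer migration events as (year, from_country, to_country) tuples.

    A move is recorded whenever the affiliation country differs between
    two consecutive publications in chronological order.
    """
    ordered = sorted(records)
    moves = []
    for (y1, c1), (y2, c2) in zip(ordered, ordered[1:]):
        if c1 != c2:
            moves.append((y2, c1, c2))
    return moves

print(infer_moves(records))
# → [(2012, 'CN', 'US'), (2018, 'US', 'CN')]
```

This also makes the abstract's caveats concrete: the inferred move year is only an upper bound (the actual move happened sometime between the two publications), and gaps or multi-affiliation records would need the complementary CV or survey data the authors recommend.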