2021
DOI: 10.3171/2019.11.jns192679
Assessment of the NIH-supported relative citation ratio as a measure of research productivity among 1687 academic neurological surgeons

Abstract: OBJECTIVE Publication metrics such as the Hirsch index (h-index) are often used to evaluate and compare research productivity in academia. The h-index is not a field-normalized statistic and can therefore be dependent on overall rates of publication and citation within specific fields. Thus, a metric that adjusts for this while measuring individual contributions would be preferable. The National Institutes of Health (NIH) has developed a new, field-normalized, article-…

Cited by 31 publications (83 citation statements)
References 21 publications
“…1,32 Additionally, the total number of publications or the h-index may not be the best measure of research productivity, since obtaining research grants, longitudinal basic science projects, and higher-impact research are also important measures of scholarly productivity but are not accounted for in this manuscript. Furthermore, the h-index is not the sole measure of research productivity, as other similar indices have been used for similar purposes, namely the m-index, 30 the Relative Citation Ratio, 33 and the Radicchi index. 34 This is in addition to the fact that Scopus and other sources for the h-index (e.g., Web of Knowledge, Google Scholar, and ResearchGate) can provide conflicting results.…”
Section: Discussion
confidence: 99%
“…In general, research field is a major factor in citation outcomes; for example, Qian et al (2017) show that even sub-fields within computer science have a significant effect on citation rates. Various field-normalized bibliometric methods have been devised and utilized to compensate for such variation and allow fair, accurate evaluation (see Ahlgren and Sjögårde 2015; Bornmann and Haunschild 2016; Reddy et al 2020).…”
Section: Research Field and H-dimension
confidence: 99%
“…The h-index combines frequency of publication and frequency of citation into a single numerical value, which often disadvantages younger authors with few but impactful publications and prioritizes quantity over quality [ 8 ]. Further, because the h-index is not field-normalized, its value is skewed by the size of the academic specialty, which limits the ability to make accurate cross-specialty comparisons [ 8 , 9 ]. For example, those publishing in a larger field, such as internal medicine, are likely to accrue a higher number of citations than publications within a niche subspecialty field.…”
Section: Introduction
confidence: 99%
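The h-index described above (the largest h such that h of an author's papers each have at least h citations) is simple to compute. A minimal sketch in Python, illustrating the quoted criticism that a few highly cited papers cannot push the h-index above the paper count (function and sample counts are illustrative, not from the article):

```python
def h_index(citations):
    """Largest h such that h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# A prolific author with moderate citations:
print(h_index([10, 8, 5, 4, 3]))   # h = 4

# A younger author with few but highly cited papers
# is capped at h = 2, as the quoted passage notes:
print(h_index([100, 100]))         # h = 2
```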
“…The RCR is calculated by dividing a publication's total citations per year by the average citations per year received by NIH-funded papers in the same field [ 13 ]. Dynamic field-normalization through the use of an article's co-citation network separates the RCR from traditional metrics and allows a more accurate comparison of research impact across academic specialties [ 9 , 14 ]. Author-level derivatives of the RCR include the mean RCR and the weighted RCR, calculated by taking the mean and the sum, respectively, of all article-level RCR scores pertaining to a single researcher [ 13 ].…”
Section: Introduction
confidence: 99%
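The arithmetic in the quoted passage can be sketched directly: an article-level RCR is citations per year over the field's expected citations per year, and the author-level derivatives are the mean and the sum of those ratios. A minimal sketch under those definitions (function names and numbers are illustrative; the real field benchmark comes from NIH's co-citation network, which is not modeled here):

```python
def article_rcr(cites_per_year, field_avg_cites_per_year):
    """Article-level RCR: the article's citations/year divided by the
    average citations/year of NIH-funded papers in its field."""
    return cites_per_year / field_avg_cites_per_year

def mean_rcr(rcrs):
    """Author-level mean RCR: mean of all article-level RCRs."""
    return sum(rcrs) / len(rcrs)

def weighted_rcr(rcrs):
    """Author-level weighted RCR: sum of all article-level RCRs."""
    return sum(rcrs)

# Hypothetical author with three articles:
rcrs = [article_rcr(6.0, 3.0),   # cited twice as often as the field -> 2.0
        article_rcr(3.0, 3.0),   # at the field average            -> 1.0
        article_rcr(1.5, 3.0)]   # half the field average          -> 0.5
print(mean_rcr(rcrs))      # 1.1666...
print(weighted_rcr(rcrs))  # 3.5
```

An RCR of 1.0 thus means "cited at the field-expected rate", which is what makes the metric comparable across specialties, unlike raw citation counts.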