2016
DOI: 10.1016/j.joi.2016.01.010
Evaluating paper and author ranking algorithms using impact and contribution awards

Cited by 47 publications (28 citation statements)
References 13 publications
“…This leads us to the conclusion that rescaled PageRank is the best-performing metric overall. With respect to previous works [18,21,29,42] that claimed the superiority of network-based metrics in identifying important papers, our results clarify the essential role of paper age in determining the metrics' performance: rescaled PageRank excels and PageRank performs poorly in identifying MLs shortly after their publication, and the performance of the two methods becomes comparable only 15 years after the MLs are published. Qualitatively similar results are found for an alternative list of APS outstanding papers which only includes works that have led to a Nobel prize for some of the authors (the list is provided in Table S1).…”
Section: Introduction (supporting)
confidence: 75%
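The statement above contrasts plain PageRank with its age-rescaled variant, which removes the bias toward old papers by comparing each paper's score only against papers of similar age. A minimal sketch of that idea follows; the citation graph, damping factor, window size, and helper names are illustrative assumptions, not the cited paper's implementation.

```python
# Sketch of age-rescaled PageRank: compute PageRank on a citation
# graph (edges point citing -> cited), then z-score each paper's
# value against the papers published closest to it in time.
# d=0.5 and window=1000 are illustrative choices, not prescriptions.
from collections import defaultdict
from statistics import mean, pstdev


def pagerank(edges, nodes, d=0.5, iters=100):
    """Plain power-iteration PageRank with dangling-node handling."""
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    out_deg = defaultdict(int)
    for u, _ in edges:
        out_deg[u] += 1
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        # Redistribute mass of papers that cite nothing in the graph.
        dangling = sum(score[u] for u in nodes if out_deg[u] == 0)
        for v in nodes:
            new[v] += d * dangling / n
        for u, v in edges:
            new[v] += d * score[u] / out_deg[u]
        score = new
    return score


def rescaled_pagerank(edges, pub_order, window=1000):
    """z-score each paper's PageRank against the `window` papers
    published closest to it (pub_order lists papers oldest first)."""
    pr = pagerank(edges, pub_order)
    n = len(pub_order)
    half = window // 2
    rescaled = {}
    for i, p in enumerate(pub_order):
        lo = max(0, min(i - half, n - window))
        cohort = [pr[q] for q in pub_order[lo:lo + window]]
        mu, sigma = mean(cohort), pstdev(cohort)
        rescaled[p] = (pr[p] - mu) / sigma if sigma > 0 else 0.0
    return rescaled
```

Because each score is normalized within its own age cohort, a recent paper no longer needs decades of accumulated citations to rank highly, which is the mechanism behind the performance gap the excerpt describes for young milestone papers.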
“…While we have analyzed in detail the presence of age and field bias in the ranking, it still remains to evaluate the actual ranking performance of the newly proposed indicators in artificial data [32] or in real data where the ground truth is provided by some external source [18,33]. Another important issue is the comparison between metrics based on citation count and metrics that take the whole citation network into account to determine papers' score.…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, network centrality metrics can be evaluated according to their ability to identify expert-selected significant nodes. Benchmarks of this kind include (but are not limited to) identification of expert-selected movies [16,245], identification of awarded conference papers [301,302] or of editor-selected milestone papers [165], identification of researchers awarded with international prizes [13,303].…”
Section: Perspectives and Conclusion (mentioning)
confidence: 99%