2015
DOI: 10.1007/s11192-015-1772-6
Computing a journal meta-ranking using paired comparisons and adaptive lasso estimators

Abstract: In a "publish-or-perish culture", the ranking of scientific journals plays a central role in assessing the performance in the current research environment. With a wide range of existing methods for deriving journal rankings, meta-rankings have gained popularity as a means of aggregating different information sources. In this paper, we propose a method to create a meta-ranking using heterogeneous journal rankings. Employing a parametric model for paired comparison data we estimate quality scores for 58 journals…
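As a rough illustration of the paired-comparison approach named in the abstract, the sketch below fits a plain Bradley–Terry model by maximum likelihood to a few invented journal comparisons (Python with NumPy/SciPy assumed). The journal names and comparison data are hypothetical, and the paper's actual estimator additionally applies an adaptive lasso penalty to fuse journals of similar quality, which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical journals and paired comparisons extracted from source rankings:
# a tuple (i, j) means journal i was ranked above journal j in some ranking.
journals = ["J1", "J2", "J3", "J4"]
comparisons = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
               (1, 0), (2, 1), (3, 2)]

def neg_log_likelihood(free_theta):
    # Bradley-Terry: P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)).
    # The first journal's ability is fixed at 0 for identifiability.
    theta = np.concatenate(([0.0], free_theta))
    nll = 0.0
    for i, j in comparisons:
        nll -= theta[i] - np.logaddexp(theta[i], theta[j])
    return nll

result = minimize(neg_log_likelihood, x0=np.zeros(len(journals) - 1), method="BFGS")
scores = np.concatenate(([0.0], result.x))
for name, score in sorted(zip(journals, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:+.3f}")
```

The fitted abilities induce a ranking of the journals; the meta-ranking idea is that each source ranking contributes its implied pairwise comparisons to one joint fit.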

Cited by 13 publications (10 citation statements)
References 46 publications
“…The outcomes are estimates, including their statistical variances, of the likelihood that one CSR motivation would rank more highly than another if both co‐appeared in the same survey, as weighted according to survey sample size (alternative weighting methods as robustness checks are discussed shortly). Bradley–Terry modeling is attractive for its widespread use in the social sciences (Liu et al, 2019; Maystre & Grossglauser, 2017), 5 ease of implementation with common statistical software (Dittrich et al, 2000), flexibility in dealing with partially overlapping rank orders (Vana et al, 2016), allowance of different weights by which underlying rankings can be integrated (Simko & Pechenick, 2010), and strong performance against alternative approaches (Montequin et al, 2020; Negahban et al, 2017; Rajkumar & Agarwal, 2014; Simko & Pechenick, 2010).…”
Section: Methods
confidence: 99%
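For concreteness, the survey-size weighting described above can be written as a weighted Bradley–Terry log-likelihood; the exact scheme used in the cited study is not reproduced here, so treat the notation as illustrative:

\ell(\theta) \;=\; \sum_{s} w_s \sum_{(i \succ j) \in s} \Bigl[\theta_i - \log\bigl(e^{\theta_i} + e^{\theta_j}\bigr)\Bigr], \qquad w_s \propto n_s,

where \theta_i is the ability (quality score) of item i, each survey s contributes the pairs (i \succ j) it ranks, and n_s is its sample size.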
“…We do, however, also analyze and present results for the practitioner literature as a robustness check.

3. By "universal," we do not mean to imply that the CSR motivations in the tripartite schema have a relative self-selected importance that is invariably the same everywhere, only that this relative importance has a high degree of statistical significance in our empirical models even when examined across major distinctions such as time, place, and industry. This is similar to the usage of "worldwide" and "global" to refer to phenomena that are highly international, although not necessarily present in each and every locale in an identical manner and to the same degree.

4. Where not explicitly disclosed, the survey year was assumed to be 1 year prior to survey publication.

5. For example, Bradley-Terry modeling has been used to create a metaranking of the quality of academic journals based on their rankings within dozens of partially overlapping lists provided by governments, universities, and practitioner organizations (Vana et al., 2016).

6. "Ability" is the conventional referent for Bradley-Terry coefficients.

7. The quality of statistical estimates does not increase linearly with sample size. For surveys, for example, the margin of error decreases by the inverse of the square root of sample size.

8. That is, CSR motivations appearing in 10 or more surveys.…”
confidence: 99%
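Footnote 7 above refers to the standard result for a simple random sample: the margin of error of an estimated proportion shrinks only with the square root of the sample size,

\mathrm{MOE} \;\approx\; z_{\alpha/2}\,\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \;\propto\; \frac{1}{\sqrt{n}}.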
“…Among various methods for evaluating journals, meta-ranking is popular because it draws on a large number of information sources. Vana et al [23] described a method for creating meta-rankings from heterogeneous journal rankings. The greatest advantage of meta-ranking is its ability to identify journals of similar quality.…”
Section: B. Other Metrics
confidence: 99%
“…Masarotto and Varin (2012) classified sports teams into groups by maximising the log‐likelihood of the Bradley–Terry model with the general fused lasso penalty. A similar method was used in Vana et al (2016) to create a meta‐ranking using heterogeneous journal rankings. Such penalised estimation techniques were also used in the Bradley–Terry model with ordered responses (Tutz & Schauberger, 2015).…”
Section: Introduction
confidence: 99%
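A schematic form of the penalised objective referred to here, with a fused-lasso-type penalty shrinking ability differences towards zero so that items of similar quality collapse into groups (the notation is illustrative rather than the exact formulation of either cited paper):

\hat{\theta} \;=\; \arg\max_{\theta}\; \ell(\theta) \;-\; \lambda \sum_{i<j} w_{ij}\,\lvert \theta_i - \theta_j \rvert,

where \ell(\theta) is the Bradley–Terry log-likelihood, \lambda \ge 0 controls the amount of fusion, and adaptive weights w_{ij} (e.g. inversely proportional to preliminary unpenalised estimates of \lvert \theta_i - \theta_j \rvert) give the adaptive lasso variant; abilities fused to a common value define groups of journals (or teams) of indistinguishable quality.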