2020
DOI: 10.1371/journal.pone.0229209

Apportionment and districting by Sum of Ranking Differences

Abstract: Sum of Ranking Differences is an innovative statistical method that ranks competing solutions based on a reference point. The reference might arise naturally or can be aggregated from the data; we provide two case studies to feature both possibilities. Apportionment and districting are two critical issues that emerge in relation to democratic elections. Theoreticians have invented clever heuristics to measure malapportionment and the compactness of the shape of constituencies, yet there is no unique best method …

Cited by 13 publications (9 citation statements)
References: 40 publications (49 reference statements)
“…A link to a downloadable program can be found here [22]. Two validation options are available: a randomization test [23] and cross-validation [23] with the application of the Wilcoxon test [24]. Although SRD was primarily developed to solve model-comparison problems, it realizes multicriteria optimization [25], as long as the input matrix is arranged with the alternatives in the columns and the features (criteria) in the rows.…”
Section: Fair Methods (Model) Comparison
confidence: 99%
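The matrix arrangement described above (alternatives in columns, criteria in rows) can be sketched as a minimal SRD computation in Python. The helper names `avg_ranks` and `srd` are hypothetical, and the row-wise average is assumed here as the data-aggregated reference:

```python
import numpy as np

def avg_ranks(values):
    """1-based ranks; tied values share the mean of their rank positions."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values, kind="stable")
    ranks = np.empty(len(values))
    ranks[order] = np.arange(1, len(values) + 1)
    for v in np.unique(values):          # average ranks over tied groups
        tied = values == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def srd(matrix, reference=None):
    """Sum of Ranking Differences for each column (alternative).

    Rows are features/criteria, columns are alternatives. If no reference
    column is supplied, the row-wise average serves as a data-aggregated
    reference. A lower SRD means the column's ranking is closer to the
    reference ranking.
    """
    matrix = np.asarray(matrix, dtype=float)
    if reference is None:
        reference = matrix.mean(axis=1)   # consensus reference
    ref_ranks = avg_ranks(reference)
    return [float(np.abs(avg_ranks(matrix[:, j]) - ref_ranks).sum())
            for j in range(matrix.shape[1])]
```

For a toy 3-criteria, 3-alternative matrix such as `[[1, 2, 3], [2, 1, 3], [3, 3, 1]]`, the third column ranks the criteria most differently from the row-average reference and therefore receives the largest SRD.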
“…The reason for this choice is that we cannot arbitrarily select the best similarity metric a priori. By contrast, the merit of consensus metrics/methodologies has been extensively demonstrated [16, 17, 18]. Briefly, using the average of the 44 similarity metrics is supported by the maximum likelihood principle, which “yields a choice of the estimator as the value for the parameter that makes the observed data most probable” [19].…”
Section: Methods
confidence: 99%
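As an illustration of the consensus idea above, the row-wise average over a few hypothetical metric columns could be computed as follows (the cited study averages 44 similarity metrics in the same way; the numbers here are made up):

```python
import numpy as np

# Hypothetical scores of four item pairs under three similarity metrics
# (one metric per column); the consensus is the row-wise average.
scores = np.array([
    [0.90, 0.85, 0.95],
    [0.40, 0.55, 0.35],
    [0.70, 0.65, 0.80],
    [0.10, 0.20, 0.15],
])
consensus = scores.mean(axis=1)  # one consensus score per item pair
```

The resulting `consensus` vector can then serve as the reference column for an SRD-style comparison.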
“…To classify the different participating laboratories according to their respective overall performance in trueness and repeatability, we performed a Sum of Ranking Differences (SRD; see the Supplementary Materials for the file format) test, a non-parametric multivariate technique recommended when the number of participating laboratories is less than 13 (Héberger and Kollár-Hunek, 2011). SRD has multiple applications (e.g., Andrić and Héberger, 2015; Kalivas et al., 2015; Sziklai and Héberger, 2020); here it allows us to analyze similarities between laboratories by ranking the columns of the input matrix (laboratories) relative to trueness and replicability targets, with the measured environmental variables in the rows. The closer to zero the SRD value, the better the overall performance of the laboratory.…”
Section: Calculation of Systematic and Random Errors and Performance
confidence: 99%
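The randomization test mentioned earlier can be sketched by comparing an observed SRD value against the SRD distribution of uniformly random rankings. This is a minimal sketch, not the published procedure: the problem size (8 criteria) and the observed laboratory SRD of 6 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8              # number of criteria (rows); illustrative assumption
trials = 10_000

ref = np.arange(1, n + 1)   # reference ranking 1..n
# Null distribution: SRD of rankings drawn uniformly at random
null = np.array([np.abs(rng.permutation(ref) - ref).sum()
                 for _ in range(trials)])
threshold = np.percentile(null, 5)   # 5% critical value of the null
observed = 6                         # hypothetical laboratory's SRD
significant = observed <= threshold  # below threshold: better than random
```

An SRD that falls below the 5th percentile of the null distribution is unlikely to have arisen from a random ranking, which is the sense in which a near-zero SRD indicates good overall performance.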