2022
DOI: 10.1111/bmsp.12268
Refinement: Measuring informativeness of ratings in the absence of a gold standard

Abstract: We propose a new metric for evaluating the informativeness of a set of ratings from a single rater on a given scale. Such evaluations are of interest when raters rate numerous comparable items on the same scale, as occurs in hiring, college admissions, and peer review. Our exposition takes the context of peer review, which involves univariate and multivariate cardinal ratings. We draw on this context to motivate an information-theoretic measure of the refinement of a set of ratings, entropic refinement, as well as …
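The paper's exact definition of entropic refinement is not given in the truncated abstract above. As an illustrative sketch only, the basic information-theoretic quantity such a measure builds on is the Shannon entropy of a rater's empirical rating distribution; the function name and the equal-weight empirical estimate below are assumptions for illustration, not the authors' formulation.

```python
from collections import Counter
from math import log2

def rating_entropy(ratings):
    """Shannon entropy (in bits) of the empirical distribution of one
    rater's ratings. Illustrative sketch: the paper's 'entropic
    refinement' metric is not fully specified here."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A rater who spreads scores across the full 1-5 scale carries more
# information (higher entropy) than one who gives every item the same score.
varied = rating_entropy([1, 2, 3, 4, 5])    # log2(5) ≈ 2.32 bits
flat = rating_entropy([3, 3, 3, 3, 3])      # 0.0 bits
```

Intuitively, a rater who assigns the same score to everything provides no basis for distinguishing items, which is the sense in which entropy tracks informativeness.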



Cited by 2 publications (1 citation statement); references 49 publications.
“…The evaluative method described here (collecting both ratings and partial rankings) and the combination of these streams of information into one integrated score provides a coherent way to assess a group of research proposals. While funding schemas like partial lotteries embrace the seemingly stochastic nature of grant review scores and outcomes ([ 51 ]), our approach conversely works to decrease the noise in review outcomes by extracting more meaningful information from reviewers ([ 52 ]). Moreover, the output of this process is a priority list for project funding that is structured optimally for the decision type of many secondary, programmatic funding panels and offers a confidence interval or a probability surrounding a given project’s priority.…”
Section: Discussion (citation type: mentioning; confidence: 99%)