On Strategyproof Conference Peer Review
Preprint, 2018. DOI: 10.48550/arxiv.1806.06266

Cited by 9 publications (17 citation statements); references 31 publications.
“…We test our algorithms on several real-world datasets. The first real-world dataset is a similarity matrix recreated from ICLR 2018 data in [34]; this dataset has n = 2435 reviewers and d = 911 papers. We also run experiments on similarity matrices created from reviewer bid data for three AI conferences from PrefLib dataset MD-00002 [48], with sizes (n = 31, d = 54), (n = 24, d = 52), and (n = 146, d = 176) respectively.…”
Section: Methods (mentioning)
confidence: 99%
“…For all three PrefLib datasets, we transformed "yes," "maybe," and "no response" bids into similarities of 4, 2, and 1 respectively, as is often done in practice [29]. As done in [34], we set loads k = 6 and ℓ = 3 for all datasets since these are common loads for computer science conferences (except on the PrefLib2 dataset, for which we set k = 7 for feasibility). All results are averaged over 10 trials with error bars plotted representing the standard error of the mean, although they are sometimes not visible since the variance is very low.…”
Section: Methods (mentioning)
confidence: 99%
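The bid-to-similarity transformation quoted above can be sketched in a few lines. This is a minimal illustration, not the cited paper's actual code: the function name, the list-of-lists input format, and the use of NumPy are all assumptions; only the mapping ("yes" → 4, "maybe" → 2, "no response" → 1) comes from the quoted text.

```python
import numpy as np

# Mapping taken from the quoted citation statement; everything else
# in this sketch is a hypothetical illustration.
BID_TO_SIMILARITY = {"yes": 4, "maybe": 2, "no response": 1}

def bids_to_similarity_matrix(bids):
    """Convert an (n reviewers x d papers) grid of bid strings into a
    numeric similarity matrix using BID_TO_SIMILARITY."""
    n = len(bids)
    d = len(bids[0]) if n else 0
    S = np.zeros((n, d))
    for i in range(n):
        for j in range(d):
            S[i, j] = BID_TO_SIMILARITY[bids[i][j]]
    return S

# Toy example: 2 reviewers, 3 papers.
bids = [["yes", "maybe", "no response"],
        ["no response", "yes", "yes"]]
S = bids_to_similarity_matrix(bids)
```

A similarity matrix of this shape is the standard input to the reviewer-assignment optimization the citing paper runs on the PrefLib data.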
“…Other proposals involve improvements to various procedural aspects of peer review; these include a recruitment and mentorship pipeline for junior academics to alleviate the scarcity of reviewers (Stelmakh et al, 2020), optimizing reviewer assignment to obtain less noisy reviews (Shah, 2019) and mitigate conflicts of interest (Xu et al, 2018;Jecmen et al, 2020), and better aggregation of subjective opinions on criteria scores to accept/reject recommendations (Noothigattu et al, 2021). Such ideas are complementary to our Stage II reviewing mechanism; they focus on improving paper-reviewer matching or the acceptance decision process, while we focus on incentivizing honest and effortful reviews after reviewers are assigned and before scores are aggregated into accept/reject decisions.…”
Section: Proposals From the Machine Learning Community (mentioning)
confidence: 99%
“…However, if the matched reviewers have a conflict of interest, this would weaken the incentive to report honest and effortful reviews. Thus, related work on optimizing paper-reviewer matches (Xu et al, 2018;Jecmen et al, 2020) to avoid conflicts of interest is complementary and important to the success of our mechanism.…”
Section: Conflicts Of Interest (mentioning)
confidence: 99%