Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.112

What Can We Do to Improve Peer Review in NLP?

Abstract: Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.

Cited by 25 publications (24 citation statements). References 27 publications.
“…Our fast-paced field rewards the contribution of new methods and state-of-the-art results (Rogers and Augenstein, 2020), which often contrasts with controlled comparisons and training multiple models for variance estimation. In this paper, we showed that several methods for vision-and-language representation learning do not significantly differ when compared in a controlled setting.…”
Section: Discussion (mentioning, confidence: 99%)
“…Most task-specific tracks (question answering, summarization, dialogue, etc.) are supposed to receive both engineering and data submissions, but in that setting the interdisciplinary tension may lead to resource papers being voted down simply for being resource papers (Rogers and Augenstein, 2020). Bawden (2019) cites an ACL 2019 reviewer who complained that "the paper is mostly a description of the corpus and its collection and contains little scientific contribution".…”
Section: Moving Forward (mentioning, confidence: 99%)
“…Once papers are written and submitted for peer review, it is pertinent to evaluate them fairly and objectively. This process is far from straightforward, as, among other issues, reviewers have certain biases, including against truly novel research (Rogers and Augenstein, 2020; Bhattacharya and Packalen, 2020). Research has thus focused on automatically generating peer reviews from paper content, as well as on studying how well review scores can be predicted from review texts (Kang et al., 2018; Plank and van Dalen, 2019).…”
Section: The Life Cycle of Scientific Research (mentioning, confidence: 99%)