Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1106
DeepSentiPeer: Harnessing Sentiment in Review Texts to Recommend Peer Review Decisions

Abstract: Automatically validating a research artefact is one of the frontiers in Artificial Intelligence (AI) that directly brings it close to competing with human intellect and intuition. Although criticized sometimes, the existing peer review system still stands as the benchmark of research validation. The present-day peer review process is not straightforward and demands profound domain knowledge, expertise, and intelligence of human reviewer(s), which is somewhat elusive with the current state of AI. However, the p…

Cited by 48 publications (42 citation statements) · References 10 publications
“…In this short paper, we offer solutions to three particularities of this task that the above approaches do not address: a) Often, the recommendations given by the area chair and the reviewers are in disagreement. Whereas previous studies have used either the former (Kang et al., 2018; Wang and Wan, 2018; Ghosal et al., 2019) or a soft label average of the latter (Stappen et al., 2020) for supervision, we show that both signals comprise complementary information. b) Whereas soft labels de-emphasise subjective articles with disagreeing reviews during training (Stappen et al., 2020), we manage to outperform the latter study by explicitly modelling aleatory uncertainty as an auxiliary prediction task.…”
Section: Contributions (contrasting)
confidence: 77%
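The two supervision signals contrasted in this citation statement can be sketched concretely. The snippet below is an illustrative reconstruction, not code from any of the cited papers: `reviewer_soft_label` and the heteroscedastic-loss form (the standard Kendall-and-Gal-style aleatoric regression objective) are assumptions chosen to show how averaging reviewer scores yields a soft label, and how predicting a larger variance can down-weight high-disagreement examples during training.

```python
import math

def reviewer_soft_label(scores, max_score=5.0):
    # Average the individual reviewer scores into a soft acceptance
    # label in [0, 1] (hypothetical helper; score scale is assumed).
    return sum(scores) / (len(scores) * max_score)

def aleatoric_loss(y_true, mu, log_var):
    # Heteroscedastic (aleatoric) regression loss: the model predicts
    # both a mean `mu` and a log-variance `log_var`.  A larger predicted
    # variance shrinks the squared-error term, at the cost of the
    # log-variance penalty, so noisy/subjective examples are softened.
    return 0.5 * math.exp(-log_var) * (y_true - mu) ** 2 + 0.5 * log_var

# Agreeing vs. disagreeing review panels produce different soft labels.
agree = reviewer_soft_label([4, 4, 4])     # ≈ 0.8
disagree = reviewer_soft_label([1, 5, 3])  # ≈ 0.6

# For a large prediction error, predicting high variance lowers the loss,
# which is the down-weighting effect described above.
loss_low_var = aleatoric_loss(1.0, 0.0, math.log(0.1))
loss_high_var = aleatoric_loss(1.0, 0.0, math.log(1.0))
```

Used as an auxiliary prediction task, the variance head gives the network an explicit outlet for reviewer disagreement instead of silently averaging it away in the label.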
“…Finally, we have shown in our study that only a representation of the abstract is required as the input both for acceptance and hallucination modelling. Since previous work (Ghosal et al., 2019) has shown that modelling an article based on the entire paper can be beneficial, we also intend to explore the impact of using such a highly expressive article representation for hallucinating review representations.…”
Section: Discussion (mentioning)
confidence: 99%