2020
DOI: 10.48550/arxiv.2006.04532
Preprint

Detecting Problem Statements in Peer Assessments

Cited by 1 publication (2 citation statements)
References 12 publications
“…Zingle et al. compared rule-based, machine-learning, and deep neural-network methods for detecting suggestions in peer assessments, and the results showed that deep-learning methods outperformed the other, traditional methods [37]. Xiao et al. collected around 20,000 peer-review comments and leveraged different neural networks to detect problems in peer assessments [31].…”
Section: Related Work 2.1 Automated Peer-Review Evaluation
confidence: 99%
“…Zingle et al. [37] utilized different rule-based, machine-learning, and deep-learning methods for detecting suggestions in peer-review comments. However, to the best of our knowledge, no single study exists that investigates using a multi-task learning (MTL) model to detect multiple features simultaneously (as illustrated in Figure 1), although extensive research has been carried out on the topic of automated peer-review evaluation (e.g., [34,32,33,31,30,37,13,19,8]).…”
Section: Introduction
confidence: 99%
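
The last citation statement points at the gap the citing work targets: a multi-task learning (MTL) model that detects several review features at once rather than training one classifier per feature. Below is a minimal sketch of that idea, assuming a shared comment encoder with separate binary heads for problem detection and suggestion detection; the class name, dimensions, and toy batch are hypothetical illustrations and are not taken from the cited papers.

import torch
import torch.nn as nn

class MultiTaskReviewClassifier(nn.Module):
    # Shared text encoder with two task-specific heads: one flags whether a
    # peer-review comment states a problem, the other whether it makes a suggestion.
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=64):
        super().__init__()
        # Shared layers: mean-pooled token embeddings feed a small dense encoder.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.encoder = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # Task-specific heads produce independent binary logits.
        self.problem_head = nn.Linear(hidden_dim, 1)
        self.suggestion_head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids, offsets):
        shared = self.encoder(self.embedding(token_ids, offsets))
        return self.problem_head(shared), self.suggestion_head(shared)

# Toy batch: two comments packed as one flat sequence of token ids.
token_ids = torch.tensor([4, 17, 93, 256, 8, 44])
offsets = torch.tensor([0, 3])                 # comment boundaries within token_ids
problem_labels = torch.tensor([[1.0], [0.0]])
suggestion_labels = torch.tensor([[0.0], [1.0]])

model = MultiTaskReviewClassifier()
criterion = nn.BCEWithLogitsLoss()
problem_logits, suggestion_logits = model(token_ids, offsets)
# Joint objective: summing the per-task losses trains the shared encoder on both signals.
loss = criterion(problem_logits, problem_labels) + criterion(suggestion_logits, suggestion_labels)
loss.backward()

Sharing the encoder lets signal from one label (e.g., suggestions) regularize the representation used for the other (problems), which is the usual motivation for MTL over separate single-task classifiers.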