Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.141

Enhancing Automated Essay Scoring Performance via Fine-tuning Pre-trained Language Models with Combination of Regression and Ranking

Abstract: Automated Essay Scoring (AES) is a critical text regression task that automatically assigns scores to essays based on their writing quality. Recently, the performance of sentence prediction tasks has been largely improved by using Pre-trained Language Models via fusing representations from different layers, constructing an auxiliary sentence, using multitask learning, etc. However, to solve the AES task, previous works utilize shallow neural networks to learn essay representations and constrain calculated scor…
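The core idea in the title, combining a regression objective with a ranking objective while fine-tuning a pre-trained encoder, can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' released code: the encoder name, the `alpha` weighting, the margin, and the use of a pairwise margin-ranking term are all assumptions for illustration, and the paper's exact loss formulation may differ.

```python
# Minimal sketch: fine-tuning a pre-trained encoder for AES with a
# combined regression + ranking loss. Model name, alpha, and margin
# are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
score_head = nn.Linear(encoder.config.hidden_size, 1)  # scalar essay score

mse = nn.MSELoss()
rank = nn.MarginRankingLoss(margin=0.1)  # assumed pairwise ranking term

def combined_loss(essays, gold_scores, alpha=0.5):
    batch = tokenizer(essays, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representation
    pred = score_head(hidden).squeeze(-1)

    # Regression term: fit the gold scores directly.
    loss_reg = mse(pred, gold_scores)

    # Ranking term: every in-batch pair should be ordered like the gold scores.
    i, j = torch.triu_indices(len(pred), len(pred), offset=1)
    target = torch.sign(gold_scores[i] - gold_scores[j])
    keep = target != 0                                  # skip tied pairs
    loss_rank = rank(pred[i][keep], pred[j][keep], target[keep])

    return alpha * loss_reg + (1 - alpha) * loss_rank
```

The weighted sum lets the regression term anchor predictions to the score scale while the ranking term rewards correct relative ordering of essays within a batch.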


Cited by 55 publications (48 citation statements)
References 16 publications
“…This idea might also be added to our approach. A neural network-based approach to fine-tune a BERT model is discussed in [54,56]. [31] states that pre-trained models may generalize better on new essay topics that the trained model has not seen yet than simple neural networks such as LSTMs.…”
Section: Autograding (mentioning; confidence: 99%)
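As one concrete reading of "fine-tune a BERT model" for autograding, the sketch below frames essay scoring as single-output regression with the Hugging Face `transformers` API. The model name, score scale, and `problem_type` choice are assumptions for illustration; the works cited above may configure this differently.

```python
# Sketch: essay scoring as single-output regression with a fine-tuned
# BERT encoder (Hugging Face transformers). Model name, score scale,
# and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=1,               # one scalar output -> regression head
    problem_type="regression",  # built-in loss becomes MSE
)

essays = ["The essay text goes here ...", "Another essay ..."]
gold = torch.tensor([0.8, 0.4])  # normalized gold scores (assumed scale)

batch = tokenizer(essays, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=gold)  # out.loss is MSE against the gold scores
out.loss.backward()                # gradients for one fine-tuning step
```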
“…Then, the embedding representation $\mathbf{w}_t$ corresponding to $w_t$ is calculable as a dot product $\mathbf{w}_t = \mathbf{A} \cdot w_t$.…”
The snippet then surveys model families:
- RNN-based models (Taghipour and Ng 2016; Alikaniotis et al. 2016),
- Hierarchical representation models (Dong and Zhang 2016; Dong et al. 2017),
- Coherence models (Tay et al. 2018; Li et al. 2018; Farag et al. 2018; Mesgar and Strube 2018; Yang and Zhong 2021),
- BERT-based models (Nadeem et al. 2019; Rodriguez et al. 2019; Yang et al. 2020; Mayfield and Black 2020),
- Hybrid models (Dasgupta et al. 2018),
- Robust model (Uto and Okano 2020)
Section: RNN-based Model (mentioning; confidence: 99%)
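The quoted dot-product formulation is simply an embedding lookup: multiplying the embedding matrix $\mathbf{A}$ by a one-hot token vector selects one column. A tiny sketch, with an assumed vocabulary size and embedding dimension:

```python
# Sketch: embedding lookup as a matrix-vector product, w_t = A · w_t.
# Vocabulary size and embedding dimension are arbitrary assumptions.
import numpy as np

vocab_size, embed_dim = 10, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((embed_dim, vocab_size))  # embedding matrix

token_id = 7
w_onehot = np.zeros(vocab_size)
w_onehot[token_id] = 1.0               # one-hot representation of the token

w_embed = A @ w_onehot                 # dot product selects column token_id
assert np.allclose(w_embed, A[:, token_id])
```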
“…5. Furthermore, Yang et al. (2020) proposed fine-tuning the BERT model so that the essay scoring task and an essay ranking task are jointly resolved. As shown in Fig.…”
Section: BERT-based Models (mentioning; confidence: 99%)
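A batch-level ranking term is one way the "essay ranking task" can enter the fine-tuning objective. The sketch below shows a listwise ranking loss (the ListNet top-one approximation) computed over a batch of predicted and gold scores; treat it as an assumed instantiation rather than the exact loss of Yang et al. (2020), which should be checked against the paper.

```python
# Sketch: a listwise ranking loss over a batch of essay scores
# (ListNet top-one approximation). An assumed instantiation, not
# necessarily the exact loss used by Yang et al. (2020).
import torch
import torch.nn.functional as F

def listwise_rank_loss(pred_scores, gold_scores):
    """Cross-entropy between the softmax distributions induced by the scores."""
    p_gold = F.softmax(gold_scores, dim=0)        # target top-one distribution
    log_p_pred = F.log_softmax(pred_scores, dim=0)
    return -(p_gold * log_p_pred).sum()

pred = torch.tensor([0.2, 0.9, 0.4], requires_grad=True)
gold = torch.tensor([0.1, 1.0, 0.5])
loss = listwise_rank_loss(pred, gold)
loss.backward()  # gradients push predictions toward the gold ordering
```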