2020 | DOI: 10.3389/feduc.2020.572367

Explainable Automated Essay Scoring: Deep Learning Really Has Pedagogical Value

Abstract: Automated essay scoring (AES) is a compelling topic in Learning Analytics, primarily because recent advances in AI find it a good testbed for exploring artificial supplementation of human creativity. However, a vast swath of research tackles AES only holistically; few have even developed AES models at the rubric level, the very first layer of explanation underlying the prediction of holistic scores. Consequently, the AES black box has remained impenetrable. Although several algorithms from Explainable…


Citations: Cited by 66 publications (32 citation statements)
References: 28 publications
“…They highlighted the need for using XAI in the educational field. In the context of automated essay scoring, the authors in [34] have studied the impact and trustworthiness of neural networks by means of the SHAP explanation framework [35]. Similar attempts have been made in the domains of computational thinking [36] and knowledge tracing [29].…”
Section: Literature Review
confidence: 99%
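For readers unfamiliar with the SHAP framework mentioned above, the sketch below shows a minimal, illustrative use of the SHAP library to attribute a score prediction to input features. The feature names and the random-forest stand-in model are assumptions made purely for illustration; they are not the architecture or features used in the cited work.

```python
# Minimal sketch (assumed setup, not the cited study's code): use the SHAP
# library to explain predictions of a simple essay-score regressor trained on
# hypothetical hand-crafted features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["word_count", "avg_sentence_len", "spelling_errors", "vocab_richness"]

# Synthetic stand-in data: 200 "essays" with 4 numeric features and a score.
X = rng.normal(size=(200, len(feature_names)))
y = X @ np.array([0.5, 0.3, -0.4, 0.6]) + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature SHAP values, i.e. how much each feature
# pushed an individual prediction above or below the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contributions for the first essay

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Each signed value indicates how strongly a feature pushed that prediction up or down, which is the kind of per-prediction explanation the citing studies examine for trustworthiness.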
“…Implementing a function to check the quality of predicted scores is another future research direction because we are sometimes interested in knowing the reliability or confidence level of scores predicted by AES. Furthermore, taking advantage of the unique property of the proposed method, namely, its high interpretability in terms of rater biases, another future direction is to analyze how rater biases affect the behavior of AES models, for example, by using an explanation model, as in [83].…”
Section: Discussion
confidence: 99%
“…Explainability techniques have also been assessed in the context of autograders (Kumar & Boulanger 2020). Explanations can increase understanding of automatic grading decisions and provide justification of those decisions.…”
Section: Autograding
confidence: 99%