2021
DOI: 10.1177/07356331211038168

An Interpretable Pipeline for Identifying At-Risk Students

Abstract: This paper introduces a novel approach to identifying at-risk students, with a focus on output interpretability, by analyzing learning activities at a finer granularity on a weekly basis. Specifically, the approach converts the predicted outputs from previous weeks into meaningful probabilities that inform the predictions for the current week, preserving the continuity of learning activities. To demonstrate the efficacy of our model in identifying at-risk students, we compare the weekly AUCs and aver…
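The chaining idea in the abstract, feeding the previous week's predicted risk probability into the current week's prediction, can be sketched as follows. This is a minimal illustration, not the paper's fitted model: the logistic weights and the single per-week activity feature are hypothetical stand-ins.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def weekly_risk(activity_by_week, w_activity=-0.8, w_prev=2.0, bias=0.5):
    """Chain weekly predictions: each week's score combines the current
    week's activity level with the previous week's predicted risk
    probability, keeping consecutive weeks linked as the abstract describes.
    Weights here are illustrative, not values from the paper."""
    prev_prob = 0.5  # uninformative prior before any week is observed
    probs = []
    for activity in activity_by_week:
        z = bias + w_activity * activity + w_prev * prev_prob
        prev_prob = sigmoid(z)
        probs.append(prev_prob)
    return probs

# A student whose weekly activity steadily declines sees the predicted
# risk probability rise week over week.
risk_trajectory = weekly_risk([3, 2, 1, 0])
```

In this sketch the previous week's probability enters as an ordinary feature, so any per-week classifier (not just this hand-set logistic form) could play the same role.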

Cited by 9 publications (7 citation statements)
References 39 publications
“…There is not enough evidence to show that data-driven technologies help teachers make less biased decisions. A core premise of research on data-driven technologies for teachers is that they enable teachers to engage in fairer and more equitable decision making and teaching practices (Angeli et al, 2017; Lameras and Arnab, 2022; Uttamchandani and Quick, 2022; Williamson and Kizilcec, 2022). Whilst much literature has demonstrated that biases can potentially be addressed algorithmically (e.g., Pei and Xing, 2021, from this review), here we question the extent to which this automatically leads to less biased decisions by teachers.…”
Section: Discussion
confidence: 99%
“…The framework can similarly be used to analyse interventions that were not designed to explicitly promote debiasing. For example, Figure 3B shows the framework as applied to Pei and Xing's (2021) intervention, which helps instructors identify students at risk of dropping out through the use of machine learning techniques and complex and varied data visualizations. Researchers might consider whether, e.g., training in representations could support instructors in making less biased decisions based on these varied visualizations.…”
Section: Discussion
confidence: 99%
“…These two classifiers are Support Vector Machines (Joachims, 1998) and Naïve Bayes (Chen et al, 2009). It should be noted that these kinds of classifiers were used in some previous studies related to education and computation (Liu et al, 2021; Pei & Xing, 2021). These methods are briefly explained in the next subsections.…”
Section: Classifiers
confidence: 99%
“…[11] suggested interventions for wheel-spinning students based on Shapley values. Finally, [12] explored LIME on ensemble machine learning methods for student performance prediction, [13] integrated LIME explanations in student advising dashboards, and [14] used LIME for interpreting models identifying at-risk students.…”
Section: Introduction
confidence: 99%
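The last statement cites the use of LIME for interpreting models that identify at-risk students. Since LIME itself requires the third-party `lime` package, the sketch below uses a simpler ablation-style local attribution, which shares LIME's goal of explaining a single prediction. The model, feature names, and baseline values are all hypothetical, not drawn from the cited work.

```python
import math

def toy_risk_model(features):
    # Hypothetical at-risk scorer: lower engagement -> higher dropout risk.
    weights = {"logins": -0.10, "quiz_attempts": -0.30, "forum_posts": -0.05}
    z = 1.5 + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # predicted probability of being at risk

def ablation_attribution(model, instance, baseline):
    """Local attribution for one student's prediction: the drop in predicted
    risk when a single feature is replaced by its cohort-baseline value.
    A simpler stand-in for LIME's perturbation-based local explanations."""
    base_score = model(instance)
    attributions = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baseline[name]
        attributions[name] = base_score - model(perturbed)
    return attributions

student = {"logins": 2, "quiz_attempts": 0, "forum_posts": 1}
cohort_average = {"logins": 10, "quiz_attempts": 4, "forum_posts": 3}
explanation = ablation_attribution(toy_risk_model, student, cohort_average)
# Each positive value means the student's low activity on that feature
# pushed the predicted risk above the cohort-average counterfactual.
```

For this toy student, the missing quiz attempts carry the largest attribution, which is the kind of per-feature, per-student explanation that advising dashboards like those in the cited work surface to instructors.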