2022
DOI: 10.1080/10494820.2022.2124425

A systematic review for MOOC dropout prediction from the perspective of machine learning

Cited by 10 publications (4 citation statements) · References 45 publications
“…These research reviews synthesize published studies from the previous decade (2010–2020), and thus provide a great scope of the research field. Still, we find that drop-out, retention, and attrition issues spark great research interest (Chen et al, 2022 ; Chiappe & Castillo, 2021 ; Estrada-Molina & Fuentes-Cancell, 2022 ; Wang et al, 2022 ). These research reviews are prone to emphasizing different explanations as to why learners drop out of MOOCs, which can be attributed to ineffective online course design, lack of belonging, time factors, and hidden costs, etc.…”
Section: Theoretical Perspectives
confidence: 92%
“…In [6], another extension of BPR (EBPR) is proposed where users' consumption behavior such as reading a news article or listening to a music track is used to model users' preferences. Dropout prediction in MOOCs has been extensively studied as a classification problem, and several Machine Learning models were used [2,4]. However, in such examples, time information was mostly discarded.…”
Section: Related Work
confidence: 99%
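The citation statement above frames MOOC dropout prediction as a binary classification problem over learner activity features. As a minimal sketch of that framing (not the method of any cited paper), the example below trains a plain-Python logistic regression on a synthetic toy dataset; the feature names and data are invented for illustration, whereas real studies extract features from clickstream logs in datasets such as XuetangX or KDDCUP.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by stochastic gradient descent (pure Python)."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            for j in range(n_feat):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Return 1 (predicted dropout) if the estimated probability >= 0.5."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy learner records: [videos_watched, forum_posts, active_days], scaled to [0, 1].
X = [[0.9, 0.8, 0.9], [0.8, 0.6, 0.7], [0.1, 0.0, 0.2],
     [0.2, 0.1, 0.1], [0.7, 0.9, 0.8], [0.0, 0.1, 0.0]]
y = [0, 0, 1, 1, 0, 1]  # 1 = dropped out, 0 = completed

w, b = train_logreg(X, y)
preds = [predict(w, b, xi) for xi in X]
print(preds)
```

As the statement notes, formulations like this one typically discard time information: each learner is a single feature vector, so sequence-aware models (e.g. recurrent networks over weekly activity) are a common refinement.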
“…We evaluated our approach by using three widely used publicly available datasets, namely XuetangX [5], KDDCUP [5], and Canvas [14]. Both the KDDCUP and XuetangX anonymized datasets are provided by XuetangX. The hyperparameter tuning procedure is explained in Section 4.2, which also describes the three (preprocessed) publicly available datasets relating to MOOCs that were used to evaluate the proposed approach.…”
Section: Experimental Design 4.1 Datasets Preprocessing and Description
confidence: 99%