Proceedings of the Third (2016) ACM Conference on Learning @ Scale 2016
DOI: 10.1145/2876034.2893444
Deep Neural Networks and How They Apply to Sequential Education Data

Cited by 39 publications (21 citation statements)
References 4 publications
“…Compared to multiple regression analysis, the Long Short-Term Memory achieves far better results, with an accuracy of more than 90% from the point where 40% of the course had been completed, and 100% when two thirds of the course had been completed. Tang, Peterson and Pardos (2016) outline two possible applications of Deep Learning in Intelligent Tutoring Systems. One is word suggestion when a student gets stuck while writing an essay.…”
Section: Results
confidence: 99%
“…For this purpose, the Kaggle platform has been used to obtain datasets for automated essay scoring. In fact, there was a specific competition for this task, called ASAP (https://www.kaggle.com/c/asap-aes), whose dataset has been used in several works [21,40,54]. It consists of essays written in English by students (from Grade 7 to Grade 10), including a score for each one.…”
Section: Discussion
confidence: 99%
“…
Reference | Dataset | Type
Lin and Chi, 2017 [11] | ITS Pyrenees | Specific
Zhang et al., 2017 [49] | ASSISTment and OLI datasets | General
Kim et al., 2018 [26] | Udacity | Specific
Lalwani and Agrawal, 2017 [14] | Funtoot dataset | Specific
Okubo et al., 2017 [24] | Information Science Course dataset | Specific
Guo et al., 2015 [23] | High schools dataset | Specific
Sharada et al., 2018 [22] | ASSISTment 2018 | General
Wang et al., 2017 [12] | Code course dataset | Specific
Tang et al., 2016 [21] | Kaggle Automated Essay Scoring | General
Bendangnuksung and P., 2018 [20] | Kaggle Students' Academic Performance dataset | General
[31] | HarvardX MOOCs | General
Wang et al., 2017 [30] | Code course dataset | Specific
Min et al., 2016 [33] | Game-based virtual learning environment Crystal Island | Specific
Tato et al., 2017 [37] | French corpus | Specific
Yang et al., 2018 [35] | Videos collected in unconstrained environments | Specific
Xing and Du, 2018 [32] | Canvas project management MOOC | Specific
[45] | PODS dataset | Specific
Sales et al., 2018 [46] | 2015 ASSISTments Skill Builder Data | General
… each task and the related works in more detail. The details of the DL implementation in each paper are described in Section 5.…”
Section: Reference Dataset Type
confidence: 99%
“…Khajah, Lindsey, & Mozer (2016) compared deep knowledge tracing (DKT) to more standard "Bayesian knowledge tracing" (BKT) models and showed that BKT could match DKT's performance when augmented with additional features and parameters representing core aspects of the psychology of learning and memory, such as forgetting and individual abilities (Khajah et al., 2016). An ongoing debate in this community is whether flexible models trained on large amounts of data can improve over more heavily structured, theory-based models (Tang et al., 2016; Xiong et al., 2016; Zhang et al., 2017).…”
Section: Introduction
confidence: 99%
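The "Bayesian knowledge tracing" baseline that the excerpt above contrasts with deep knowledge tracing can be sketched from its standard update equations. This is a minimal illustration; the parameter values (slip, guess, learning rate) and the response sequence are made up for demonstration, not taken from any of the cited papers.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update: a Bayes-rule posterior
# over skill mastery after each response, followed by a learning transition.
# All parameter values below are illustrative, not fitted to any dataset.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Return the updated probability that the skill is mastered
    after observing one response (correct=True/False)."""
    if correct:
        # P(known | correct): a known student answers correctly unless slipping
        post = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(known | incorrect): a known student errs only by slipping
        post = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning step: an unmastered skill may transition to mastered
    return post + (1 - post) * p_transit

p = 0.3  # illustrative prior probability of mastery
for obs in [True, True, False, True]:  # one hypothetical response sequence
    p = bkt_update(p, obs)
print(round(p, 3))  # → 0.919
```

In practice the four parameters are fitted per skill (e.g. by EM), which is the structured, theory-based side of the debate the excerpt describes; DKT replaces this hand-specified update with an LSTM learned from the response sequences.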