2020 IEEE 2nd International Conference on Computer Science and Educational Informatization (CSEI) 2020
DOI: 10.1109/csei50228.2020.9142472

Deep Knowledge Tracking based on Attention Mechanism for Student Performance Prediction

Cited by 6 publications (3 citation statements) | References 11 publications
“…This paper compares our model with DKT [4], SAKT [9], and MAKT [11], which have been described in the previous introductory sections. DKT uses a single-layer recurrent neural network for student performance prediction and is a ground-breaking approach in the field of deep knowledge tracing.…”
Section: Methods
confidence: 99%
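
The snippet above only names DKT; as a point of reference, the following is a minimal Python/PyTorch sketch of a single-layer recurrent knowledge-tracing model of that kind. The input encoding and hyperparameters are illustrative assumptions, not details taken from the cited papers.

# Minimal sketch of a DKT-style model: a single-layer LSTM over one-hot
# (skill, correctness) encodings, predicting per-skill correctness at the
# next step. Hyperparameters and the encoding are illustrative assumptions.
import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, num_skills: int, hidden_size: int = 128):
        super().__init__()
        # Input dimension: one-hot over (skill, correct/incorrect) pairs.
        self.lstm = nn.LSTM(2 * num_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, 2 * num_skills)
        h, _ = self.lstm(x)
        # Probability of answering each skill correctly at the next step.
        return torch.sigmoid(self.out(h))

model = DKT(num_skills=50)
x = torch.zeros(1, 10, 100)  # one student, ten interactions
probs = model(x)             # shape (1, 10, 50)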
“…Traditional KT is based on Bayes' theorem and uses a Hidden Markov Model to represent the student's knowledge state as a hidden variable. Hidden Markov Models do not perform well on long-sequence problems because they assume that the present topic depends only on the previous state [9]. Building on BKT, [10] proposed richer extensions that account for individual students.…”
Section: Knowledge Tracing
confidence: 99%
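
For context on the HMM formulation mentioned above, below is a generic sketch of the standard BKT update: the mastery probability is the hidden state of a two-state HMM, revised by Bayes' rule after each observed answer. The parameter values are illustrative defaults, not figures from [9] or [10].

def bkt_update(p_mastery: float, correct: bool,
               p_guess: float = 0.2, p_slip: float = 0.1,
               p_learn: float = 0.15) -> float:
    # Bayes' rule: posterior P(mastered | observed answer).
    if correct:
        num = p_mastery * (1.0 - p_slip)
        denom = num + (1.0 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        denom = num + (1.0 - p_mastery) * (1.0 - p_guess)
    posterior = num / denom
    # Transition: a non-mastered skill may be learned during the attempt.
    return posterior + (1.0 - posterior) * p_learn

p = 0.3  # prior mastery probability
for observed_correct in (True, False):
    p = bkt_update(p, observed_correct)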
“…In order to solve these problems, many researchers have pursued in-depth research on DKT and put forward many new methods. Dong et al. [7] used the Jaccard coefficient to compute the attention weights between knowledge components in their A-DKT model, and combined the LSTM output with the total attention value to obtain the final prediction. Zhang et al. [8] applied feature engineering, using an auto-encoder to reduce the dimensionality of answer time, number of attempts, and first-action features added to the LSTM input layer.…”
Section: Related Work
confidence: 99%
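
To illustrate the Jaccard-based attention idea attributed to Dong et al. [7], here is a small sketch in which the attention weight between the current exercise and each past exercise is the Jaccard coefficient of their knowledge-component (KC) sets, normalized over the history. The function names and the normalization step are assumptions for illustration, not the exact A-DKT formulation.

def jaccard(a: set, b: set) -> float:
    # Jaccard coefficient |a ∩ b| / |a ∪ b| between two KC sets.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def attention_weights(current_kcs: set, history_kcs: list) -> list:
    # Normalize Jaccard similarities over the interaction history.
    sims = [jaccard(current_kcs, past) for past in history_kcs]
    total = sum(sims)
    return [s / total for s in sims] if total > 0 else [0.0] * len(sims)

weights = attention_weights({1, 2}, [{1}, {2, 3}, {4}])
# Past exercises sharing more knowledge components receive higher weight.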