2018
DOI: 10.1016/j.ins.2018.02.053
Personalized learning full-path recommendation model based on LSTM neural networks

Cited by 190 publications (80 citation statements)
References 17 publications
“…LSTM is considered one of the most successful variants of RNN, capable of capturing long-term relationships in a sequence without suffering from the vanishing gradient problem. So far, LSTM models have achieved tremendous success in sequence modelling tasks [20,21].…”
Section: Related Work
confidence: 99%
“…GRU4Rec [11]). For example, Zhou et al [40] introduced a Recurrent Neural Network (RNN) to predict the expectation of the whole path for learner groups. Some studies proposed enhancing the recommendation strategy by explicitly using cognitive structure.…”
Section: Related Work
confidence: 99%
“…genetic epistemology [26]), meanwhile the knowledge structure captures the cognitive relations among the learning items. However, existing methods for adaptive learning utilize only either knowledge level [33,40] or knowledge structure [38,41]. Although these methods have achieved great success in adaptive learning, they have some limitations.…”
Section: Introduction
confidence: 99%
“…The Long-Short Term Memory model (LSTM) is a kind of Recurrent Neural Network (RNN) [10,12]. It addresses the vanishing gradient problem of the RNN model by replacing the hidden-layer nodes of the original RNN with memory units.…”
Section: Long-Short Term Memory
confidence: 99%
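The last excerpt describes the LSTM memory unit that replaces a plain RNN hidden node. As a rough illustration of that idea (not the recommendation model from the cited paper), the sketch below implements one standard LSTM cell step in NumPy: forget, input, and output gates modulate an additively updated cell state, which is the mechanism that lets gradients survive over long sequences. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM memory unit.

    The gates decide what the cell state c forgets, admits,
    and exposes; the additive update of c (rather than a
    repeated matrix multiply) is what mitigates vanishing
    gradients compared with a plain RNN hidden node.
    """
    z = W @ np.concatenate([x, h_prev]) + b  # joint gate projection
    n = h_prev.size
    f = sigmoid(z[0 * n:1 * n])   # forget gate
    i = sigmoid(z[1 * n:2 * n])   # input gate
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:4 * n])   # candidate cell update
    c = f * c_prev + i * g        # additive cell-state update
    h = o * np.tanh(c)            # hidden state passed onward
    return h, c

# Toy run: 3-dim inputs, 2-dim hidden state, a length-5 sequence.
rng = np.random.default_rng(0)
n_in, n_h = 3, 2
W = rng.standard_normal((4 * n_h, n_in + n_h)) * 0.1
b = np.zeros(4 * n_h)
h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.standard_normal((5, n_in)):
    h, c = lstm_cell_step(x, h, c, W, b)
```

In practice a framework layer (e.g. a library LSTM implementation) would handle batching and parameter learning; the sketch only shows the per-step gate arithmetic the excerpt refers to.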