2021
DOI: 10.1504/ijcistudies.2021.115424

Code completion for programming education based on deep learning

Abstract: In solving programming problems, it is difficult for beginners to create a program from scratch. One way to navigate this difficulty is to suggest the next word following an incomplete program. In the present study, we propose a method for code completion characterised by two principal elements: the prediction of the next within-vocabulary word and the prediction of the next referenceable identifier. For the prediction of within-vocabulary words, a neural language model based on an LSTM network with an attention…
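The abstract frames code completion as next-token prediction over an incomplete program with an LSTM-based neural language model. The sketch below (in PyTorch) is a minimal illustration of that idea, not the authors' implementation: the attention mechanism and the referenceable-identifier component described in the abstract are omitted, and the architecture, hyperparameters, and toy vocabulary are assumptions made for demonstration.

import torch
import torch.nn as nn


class CodeCompletionLM(nn.Module):
    """Toy LSTM language model that scores the next within-vocabulary token."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> logits of shape (batch, seq_len, vocab_size)
        emb = self.embed(token_ids)
        out, _ = self.lstm(emb)
        return self.head(out)


# Illustrative token vocabulary; a real system would build it from a code corpus.
vocab = ["<pad>", "int", "main", "(", ")", "{", "}", "return", "0", ";"]
tok2id = {t: i for i, t in enumerate(vocab)}

model = CodeCompletionLM(vocab_size=len(vocab))

# An incomplete program: "int main ( ) { return"
partial = ["int", "main", "(", ")", "{", "return"]
ids = torch.tensor([[tok2id[t] for t in partial]])

with torch.no_grad():
    logits = model(ids)                      # scores for every position
    next_id = logits[0, -1].argmax().item()  # greedy pick for the final position
print("suggested next token:", vocab[next_id])

Before the greedy suggestion is meaningful, such a model would of course need to be trained with a cross-entropy objective on a corpus of tokenised programs; the snippet only shows the prediction path.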


Cited by 13 publications (4 citation statements). References 18 publications.
“…After Zhang and Cao proposed the perceptron model, research on artificial neural networks began to increase, and there was a strong interest in this learning model of biological neural system models [21]. Terada and Watanobe trained deep neural network models with high-dimensional features after the introduction of deep neural networks with a complete representation of correlation information between features of sample data, combined with continuous feature information to form higher-order features and dimensional feature samples [22]. Doleck et al proposed a novel feedforward network learning algorithm for solving large-scale linear equations based on the introduction of the conjugate gradient method [23].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…In [28] a new framework based on the LSTM algorithm was proposed for performing personalized and adaptive learning, giving better results than the Deep Knowledge Tracing (DKT) method. And in [29], a method based on LSTM helps programming students by providing the next word following an incomplete program.…”
Section: Time-series Based Models
Citation type: mentioning (confidence: 99%)
“…Notably, the model exhibits high accuracy in discerning errors within faulty solution codes. Terada et al [16] presented an intriguing model designed for predicting subsequent code sequences to facilitate code completion. Leveraging an LSTM network architecture, their model serves as a valuable resource for novice programmers grappling with the challenge of crafting complete code from scratch.…”
Section: II
Citation type: mentioning (confidence: 99%)