2022
DOI: 10.1109/tse.2020.3018481
Deep Learning Based Program Generation From Requirements Text: Are We There Yet?

Cited by 29 publications (18 citation statements). References 47 publications.
“…Allamanis and Sutton (2013) introduce the GitHub Java Corpus, used for performing language modeling on Java code. Liu et al (2020a) do a smaller-scale analysis of code generation, but with their limited language-specific training data, models "fail to pass even a single predefined test case" on their 300 test problems, while with our large training set and test set, trained models can pass tens of thousands of test cases. Zelle and Mooney (1996) and Tang and Mooney (2001) precede Yu et al (2018) by also facilitating the synthesis of database queries, though more recent program synthesis works such as Wang et al (2019c) use Spider from Yu et al (2018).…”
Section: A Additional Dataset Information (mentioning)
confidence: 99%
“…Liu et al [52] investigate the performance of deep learning-based approaches for generating code from requirement texts. To do so, they assessed five state-of-the-art approaches on a larger and more diverse dataset of pairs of software requirement texts and their validated implementations than the datasets used in the literature.…”
Section: Studies About the Effectiveness of Code Completion Approaches (mentioning)
confidence: 99%
“…Emmet, formerly called "Zen Coding", is a toolkit for front-end (FE) developers. It is widely used to write HTML/CSS code and to improve the FE workflow.…”
Section: Emmet (mentioning)
confidence: 99%
“…The value x of a neuron is kept at its original value when x is greater than 0 and suppressed to 0 when x is less than or equal to 0. Other activation functions we used in our experiments are Tanh in Equation (5) and Softmax in Equation (6). Faster R-CNN [17], as shown in Figure 2A, is an object detection network composed of two modules: a Region Proposal Network (RPN) and a Fast R-CNN [18] detector.…”
Section: CNN (mentioning)
confidence: 99%
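For context on the activations named in that excerpt (ReLU, Tanh, Softmax), a minimal NumPy sketch follows; the function names and the NumPy dependency are assumptions made for illustration and are not taken from the cited paper.

import numpy as np

def relu(x):
    # ReLU: keep positive values, suppress values <= 0 to 0
    return np.maximum(x, 0.0)

def tanh(x):
    # Tanh: squash values into the range (-1, 1)
    return np.tanh(x)

def softmax(x):
    # Softmax: normalize a score vector into a probability distribution
    # (shift by the max for numerical stability)
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)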