2021
DOI: 10.1007/978-3-030-84060-0_4

Text2PyCode: Machine Translation of Natural Language Intent to Python Source Code

Cited by 4 publications (3 citation statements)
References 7 publications

“…These pre-trained vectors are also added while building the vocabulary for the model. The final model is shown in figure 3, taken from [11]. This model has a BLEU score of 32.40 and a ROUGE score of 85.1, which is on par with results in natural language processing.…”
Section: Text2PyCode: Machine Translation of Natural Language
confidence: 98%
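
A minimal sketch of how such BLEU and ROUGE scores can be computed for a generated snippet, assuming NLTK and the rouge-score package; the reference/candidate strings and the smoothing choice are illustrative and not the paper's actual evaluation setup.

# Sketch: scoring a generated Python snippet against a reference with
# BLEU (NLTK) and ROUGE-L (rouge-score). The example strings are
# hypothetical; the cited work's exact evaluation pipeline may differ.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "for i in range ( 10 ) : print ( i )"
candidate = "for i in range ( 10 ) : print ( i )"

# The paper reports aggregate scores over a test set; a single
# sentence-level BLEU with smoothing is shown here for illustration.
bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure

print(f"BLEU: {bleu:.4f}  ROUGE-L F1: {rouge_l:.4f}")
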
See 1 more Smart Citation
“…These pre-trained vectors are also added while building vocabulary for the model. The final model is shown in figure 3 taken from [11]. This model has a BLEU score of 32.40 and a ROUGE score of 85.1 which Is at par with results of natural language processing.…”
Section: Text2pycode: Machine Translationof Naturalmentioning
confidence: 98%
“…The first is the baseline model, which used the implementation of the Sockeye toolkit [10] with 3 sub-layers and 8 heads in the multi-head attention mechanism. Next is the initial model, where the spaCy tokenizer was used; here the encoder and decoder layers of the model were also fed the word embeddings created by following the Word2Vec algorithm [11]. In the final model the word embeddings of the source code were generated using the Gensim library on the CoNaLa Python corpus.…”
Section: Text2PyCode: Machine Translation of Natural Language
confidence: 99%
“…Transformer Models. Bonthu et al. (2021) use a standard transformer architecture (Vaswani et al., 2017) to generate Python source code from natural language intents and report a BLEU score of 32.4 and a ROUGE-L of 82.1 on a custom curated dataset. Liang et al. (2021) set a baseline for a turducken-style code generation task where SQL is embedded in Python.…”
Section: LSTM-based Seq2Seq With Abstract Syntax
confidence: 99%
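
For context, a minimal sketch of a standard transformer encoder-decoder of this kind, built on PyTorch's nn.Transformer; the vocabulary sizes, model dimension, and layer counts are illustrative assumptions rather than the configuration reported by Bonthu et al. (2021), and positional encodings are omitted for brevity.

# Sketch: intent-to-code seq2seq with a standard transformer
# (Vaswani et al., 2017). Hyperparameters are placeholders, not the
# cited paper's settings; positional encodings are omitted for brevity.
import torch
import torch.nn as nn

class Text2CodeTransformer(nn.Module):
    def __init__(self, src_vocab=8000, tgt_vocab=8000, d_model=256,
                 nhead=8, num_layers=3):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each target position attends only to earlier ones.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(
            tgt_ids.size(1))
        h = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids),
                             tgt_mask=tgt_mask)
        return self.out(h)  # logits over the target (Python token) vocabulary

model = Text2CodeTransformer()
src = torch.randint(0, 8000, (2, 12))   # batch of tokenized intents
tgt = torch.randint(0, 8000, (2, 20))   # batch of tokenized code prefixes
print(model(src, tgt).shape)            # torch.Size([2, 20, 8000])
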