2021 · Preprint
DOI: 10.48550/arxiv.2107.12938

Yet Another Combination of IR- and Neural-based Comment Generation

Cited by 3 publications (4 citation statements)
References 0 publications
“…One reason is that our method enhances the AST and code sequence, which can extract the source code semantic information more fully (explained in Section 4.2). LeClair et al. [30] mentioned that the decoder with an attention mechanism is less effective than a transformer. We also find that the BLEU-N of the transformer-based method is on average about 7% higher than code+gnn+BiLSTM.…”
Section: Results and Analysis
Mentioning confidence: 99%
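The BLEU-N comparison quoted above rests on modified n-gram precision between a generated comment and a reference comment. A minimal sketch of that core computation follows; the token sequences are hypothetical examples, and full BLEU additionally combines several n-gram orders geometrically and applies a brevity penalty:

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Modified n-gram precision: fraction of candidate n-grams that
    also occur in the reference, clipped by the reference counts."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    clipped = sum(min(c, ref[g]) for g, c in cand_counts.items())
    return clipped / len(cand)

# Hypothetical generated vs. reference comment tokens
gen = "returns the length of the list".split()
ref = "returns the size of the list".split()
print(round(ngram_precision(gen, ref, 1), 2))  # 5 of 6 unigrams match
print(round(ngram_precision(gen, ref, 2), 2))  # 3 of 5 bigrams match
```

Clipping by reference counts prevents a candidate from inflating its score by repeating a common token such as "the".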
“…The dataset contains around 2.1 million <Java code, comment> pairs [29], which are widely used in lots of SCS generation tasks [10, 20, 30]. We analyzed the dataset from two aspects: (1) statistical length distribution of source codes and their comments (see Figure 9); and (2) a count of the scale of Java code numbers with the same comment (see Figure 10).…”
Section: Experiments Setup
Mentioning confidence: 99%
“…The dataset contains around 2.1 million <Java function, comment> pairs [30], which are widely used in lots of SCS generation tasks [10, 22, 31]. We analyze the dataset from two aspects: (1) statistical length distribution of source codes and their comments (see Figure 8).…”
Section: Dataset Analysis
Mentioning confidence: 99%
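Both citation statements above describe the same two dataset analyses: token-length distributions of codes and comments, and a count of how many distinct code snippets share one comment. A toy sketch of both, using three hypothetical pairs standing in for the 2.1-million-pair corpus:

```python
from collections import Counter

# Hypothetical toy <code, comment> pairs standing in for the real corpus
pairs = [
    ("public int size() { return n; }", "returns the size"),
    ("public int len() { return n; }", "returns the size"),
    ("void clear() { n = 0; }", "removes all elements"),
]

# (1) token-length distribution of source codes and of comments
code_lens = Counter(len(code.split()) for code, _ in pairs)
comment_lens = Counter(len(cmt.split()) for _, cmt in pairs)

# (2) how many code snippets share the same comment text
same_comment = Counter(cmt for _, cmt in pairs)
print(same_comment["returns the size"])  # 2 snippets share this comment
```

On the real dataset these distributions would typically be plotted as histograms, as the figures referenced in the quoted statements do.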
“…Comment synthesis is now an active research area, including many projects such as CodeNN [30], DeepCom [26], Astattgru [40], C BERT [18], Rencos [74], SecNN [42], PLBART [1], CoTexT [54], ProphetNet-X [55], NCS [2], Code2seq [7], Re2Com [71], and many more [19, 24, 25, 27, 28, 38, 39, 41, 49, 50, 66, 67, 69, 70, 72, 73]. All these approaches rely on datasets of aligned code-comment pairs.…”
Section: Introduction
Mentioning confidence: 99%