2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase.2019.00152

Retrieve and Refine: Exemplar-Based Neural Comment Generation

Abstract: Code comment generation is a crucial task in the field of automatic software development. Most previous neural comment generation systems used an encoder-decoder neural network and encoded only information from source code as input. Software reuse is common in software development. However, this feature has not been introduced to existing systems. Inspired by the traditional IR-based approaches, we propose to use the existing comments of similar source code as exemplars to guide the comment generation process.…
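The abstract describes a retrieve-and-refine pipeline: the comment attached to the most similar existing code snippet is retrieved and used as an exemplar to guide a neural decoder. The sketch below illustrates only the retrieval step under simplifying assumptions (lexical similarity via Python's difflib rather than the paper's actual retrieval method); the corpus layout and all function names are illustrative, not taken from the paper.

```python
# Minimal sketch of the "retrieve" step assumed by exemplar-based comment
# generation: given a query code snippet, find the most lexically similar
# snippet in a (code, comment) corpus and return its comment as an exemplar.
from difflib import SequenceMatcher

def tokenize(code: str) -> list[str]:
    # Naive split on whitespace and parentheses; real systems use a code tokenizer.
    return code.replace("(", " ").replace(")", " ").split()

def retrieve_exemplar(query_code: str, corpus: list[tuple[str, str]]) -> str:
    """Return the comment paired with the corpus snippet most similar to query_code."""
    def similarity(a: str, b: str) -> float:
        # Ratio of matching tokens between the two token sequences.
        return SequenceMatcher(None, tokenize(a), tokenize(b)).ratio()

    best_code, best_comment = max(corpus, key=lambda pair: similarity(query_code, pair[0]))
    return best_comment

# Usage: the retrieved exemplar would then be fed to a neural decoder together
# with the query code (the "refine" step), which is not sketched here.
corpus = [
    ("def add(a, b): return a + b", "Add two numbers."),
    ("def read_file(path): return open(path).read()", "Read a file into a string."),
]
print(retrieve_exemplar("def sum(x, y): return x + y", corpus))
```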

Cited by 34 publications (37 citation statements) | References 45 publications
“…To complement the above objective metrics, we also conduct a human evaluation to further assess the quality of the comments generated by the masked training method, data augmentation and normal training method. Generally, we follow the evaluation settings from the previous work [17,46]. Particularly, the comments are examined from three aspects, i.e., similarity, naturalness, and informativeness [46].…”
Section: Human Evaluation (mentioning)
confidence: 99%
“…Generally, we follow the evaluation settings from the previous work [17,46]. Particularly, the comments are examined from three aspects, i.e., similarity, naturalness, and informativeness [46]. Similarity refers to how similar the generated comment is to the reference comment; naturalness measures the grammaticality and fluency; informativeness focuses on the content delivery from code snippet to the generated comments.…”
Section: Human Evaluation (mentioning)
confidence: 99%
“…We note that the ideas similar to our approach for automated log generation have been already applied and proven to be effective in adjacent software engineering tasks such as automated commit message [71] and comment [73] generation. For example, Wei et al [73] used comments of similar code snippets as 'exemplars' to assist in generating comments for new code snippets. Both papers' ideas and application scenarios are analogous to a large extent to those of our work.…”
Section: Practicality in Software Engineering (mentioning)
confidence: 99%
“…Both papers' ideas and application scenarios are analogous to a large extent to those of our work. Similarly, both approaches utilize BLEU [73,73] and ROUGE-L [73] scores for evaluating the quality of the auto-generated text.…”
Section: Practicality in Software Engineering (mentioning)
confidence: 99%
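The citation statement above notes that BLEU and ROUGE-L scores are used to evaluate the quality of auto-generated text against references. As a rough illustration (not the cited papers' exact evaluation scripts), the sketch below computes sentence-level BLEU with NLTK and ROUGE-L via a hand-rolled longest-common-subsequence; the tokenization, smoothing choice, and example strings are assumptions.

```python
# Illustrative scoring of a generated comment against a reference comment.
# BLEU uses NLTK's sentence_bleu; ROUGE-L is computed directly from the
# longest common subsequence (LCS) of the two token sequences.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def rouge_l(reference: list[str], candidate: list[str]) -> float:
    """F1-style ROUGE-L from the LCS length (beta = 1 for simplicity)."""
    m, n = len(reference), len(candidate)
    # Dynamic-programming table for LCS length.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if reference[i] == candidate[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / n, lcs / m
    return 2 * precision * recall / (precision + recall)

reference = "adds two integers and returns the sum".split()
candidate = "returns the sum of two integers".split()

# Smoothing avoids zero scores when a higher-order n-gram has no match.
bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}, ROUGE-L: {rouge_l(reference, candidate):.3f}")
```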
“…To complement the above objective metrics, we also conduct a human evaluation to further assess the quality of the comments generated by the masked training method and data augmentation. Generally, we follow the evaluation settings from the previous work [17,45]. Particularly, the comments are examined from three aspects, i.e., similarity, naturalness, and informativeness [45].…”
Section: Human Evaluation (mentioning)
confidence: 99%