Proceedings of the 26th Conference on Program Comprehension 2018
DOI: 10.1145/3196321.3196334
Deep code comment generation

Cited by 591 publications (620 citation statements)
References 33 publications
“…We compared our approach to three neural network-based baseline methods, including vanilla seq2seq [18], DeepCom [3], and HAD [4]. The last two methods are state-of-the-art methods on comment generation.…”
Section: Methods
mentioning confidence: 99%
“…The code summarization task can be modeled as a machine translation problem, so some models based on the Seq2Seq paradigm [28] were proposed. Hu et al [16] proposed a structure-based traversal (SBT) algorithm in the encoder to flatten an AST and link the tokens in the source code with their AST node types.…”
Section: Code Summarization
mentioning confidence: 99%
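The structure-based traversal (SBT) the quote describes can be sketched in a few lines: each AST subtree is emitted as a bracketed span that repeats the node label at its open and close, so the flat token sequence still encodes the tree's structure and ties source tokens to their node types. This is a minimal illustrative sketch, not the cited paper's implementation; the AST dictionary shape (`type`, `value`, `children`) is an assumption made for the example.

```python
# Hypothetical sketch of a structure-based traversal (SBT): flatten an AST
# into a bracketed token sequence pairing source tokens with node types.
# The AST node shape ({"type", "value", "children"}) is an assumption.

def sbt(node):
    """Return the SBT token sequence for an AST node.

    Each subtree is emitted as ( label ...children... ) label, so the
    tree structure is recoverable from the flat sequence.
    """
    # Leaf tokens are linked to their node type as "Type_value".
    if node.get("value"):
        label = f"{node['type']}_{node['value']}"
    else:
        label = node["type"]
    seq = ["(", label]
    for child in node.get("children", []):
        seq.extend(sbt(child))
    seq.extend([")", label])
    return seq

# Illustrative AST for a `return a;` statement.
ast = {"type": "ReturnStatement",
       "children": [{"type": "Identifier", "value": "a"}]}
print(" ".join(sbt(ast)))
# ( ReturnStatement ( Identifier_a ) Identifier_a ) ReturnStatement
```

The repeated label at the closing bracket is what lets a sequence model distinguish which subtree is being closed, which a plain pre-order traversal would lose.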
“…Referring to the process as "summarization" alludes to a history of work in Natural Language Processing of extractive summarization of documents -early attempts at code summarization involved choosing a set of n important words from code [18], [19] and then converting those words into complete sentences by placing them into sentence templates [2], [20]- [22]. A 2016 survey [23] highlights these approaches around the time that a vast majority of code summarization techniques began to be based on neural networks trained from big data input [10], [14], [24]- [27]. These NN-based approaches have proliferated, but suffer an Achilles' heel of reliance on very large, clean datasets of examples of code comments.…”
Section: A. Source Code Summarization
mentioning confidence: 99%
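The early extractive pipeline the quote describes (pick n important words, then slot them into a sentence template) can be sketched as follows. Everything here is an illustrative assumption: the identifier-splitting heuristic, the frequency-based word ranking, and the template are not taken from any of the cited tools.

```python
# Hypothetical sketch of early extractive code summarization: rank the
# n most frequent identifier terms in a code unit, then place them into
# a fixed sentence template. Heuristics and template are assumptions.
import re
from collections import Counter

def top_terms(code, n=3):
    """Return the n most frequent lowercase terms in the code."""
    terms = []
    for word in re.findall(r"[A-Za-z]+", code):
        # Split camelCase identifiers into their component terms.
        terms += [t.lower() for t in re.findall(r"[A-Z]?[a-z]+", word)]
    return [t for t, _ in Counter(terms).most_common(n)]

def summarize(code):
    """Fill a fixed sentence template with the top-ranked terms."""
    return f"This method deals with {', '.join(top_terms(code))}."

print(summarize("int getItemCount(List items) { return items.size(); }"))
```

Template-based output like this is grammatical by construction, which is exactly the property the later neural approaches traded away in exchange for more natural, varied phrasing.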
“…This application is especially useful for large repositories of legacy code such as the industrial situation described by McMillan et al [13]. A second application is in generating large datasets of code-comment pairs to serve as training data for automatic code summarization tools such as described by LeClair et al [10] and Hu et al [14]. These code summarization tools could reach a much wider audience (e.g.…”
Section: Introduction
mentioning confidence: 99%