2022 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme55016.2022.00016
BashExplainer: Retrieval-Augmented Bash Code Comment Generation based on Fine-tuned CodeBERT

Cited by 17 publications (25 citation statements)
References 41 publications
“…Motivated by the neural machine translation research domain, one intuitive way is to model this problem as the automatic generation task, which can generate the required API for the developer's query directly. Although the popular generation models perform well in software engineering generation tasks (such as source code summarization [1,20,21,45,49], issue title generation [5,22], code generation [44,47], Stack Overflow title generation [23,51]), we find that the performance of this intuitive way is not promising after our preliminary investigation. Figure 1 shows two examples of generating incorrect APIs by using this intuitive generation approach.…”
Section: Introduction (mentioning)
confidence: 82%
“…The second group is deep learning approaches, including CodeBERT [12], UniXcoder [13], and CodeT5 [14]. The last group is hybrid approaches, including Rencos [15] and BashExplainer [30].…”
Section: Baselines (mentioning)
confidence: 99%
“…Li et al [20] combined code comments obtained by information retrieval with the semantic information of the input code to generate code comments. Recently, Yu et al [16] proposed a two-stage hybrid method that generates Bash comments through information retrieval in the first stage and CodeBERT fine-tuning in the second stage.…”
Section: A. Code Comment Generation (mentioning)
confidence: 99%
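The two-stage retrieve-then-generate design described in this quote can be made concrete with a small sketch. The Python snippet below implements only the retrieval stage, assuming a CodeBERT encoder (microsoft/codebert-base) with mean pooling over the last hidden states and cosine-similarity lookup; the corpus pairs and helper names (embed, retrieve_comment) are illustrative assumptions, not the authors' implementation. In BashExplainer, the retrieved comment would then be fed, together with the input code, to a fine-tuned CodeBERT-based generator in the second stage.

```python
# Minimal sketch of the retrieval stage (assumed design, not the authors' code):
# embed Bash snippets with CodeBERT and return the comment of the nearest
# neighbor under cosine similarity. The generation stage (a fine-tuned
# CodeBERT encoder-decoder conditioned on code + retrieved comment) is omitted.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")
encoder.eval()

def embed(code: str) -> torch.Tensor:
    """Mean-pool CodeBERT's last hidden states into one vector per snippet."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)               # (768,)

# Toy retrieval corpus of (bash snippet, human-written comment) pairs.
corpus = [
    ("find . -name '*.log' -delete", "delete all log files under the current directory"),
    ("tar -czf backup.tar.gz ~/data", "archive and compress the data directory"),
]
corpus_vecs = torch.stack([embed(code) for code, _ in corpus])

def retrieve_comment(query_code: str) -> str:
    """Return the comment of the most similar stored snippet."""
    q = embed(query_code).unsqueeze(0)                  # (1, 768)
    sims = torch.nn.functional.cosine_similarity(q, corpus_vecs)
    return corpus[int(sims.argmax())][1]

print(retrieve_comment("find /var -name '*.log' -delete"))
```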
“…Motivation. Classical deep learning models are well established in earlier studies and are often used as baselines in current work [16], [28]. Treating them as experimental subjects better reflects the rigor of our experiments.…”
Section: Research Questions (mentioning)
confidence: 99%