Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022)
DOI: 10.1145/3540250.3558967

An empirical study of deep transfer learning-based program repair for Kotlin projects

Cited by 8 publications (7 citation statements)
References 38 publications

“…The effectiveness of the transformer-based program repair model has been experimentally demonstrated in both encoder-decoder families (Li et al., 2022; Kim et al., 2022b; Wang et al., 2021; Berabi et al., 2021) and decoder-only families (Jesse et al., 2023; Joshi et al., 2022; Prenner and Robbes, 2021), as measured by their correct-patch generation accuracy. The program repair model is trained to transform the input buggy code into fixed code (that is, a patch).…”
Section: Transformer for Program Repair
Confidence: 99%
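
To make the buggy-to-fixed translation framing above concrete, here is a minimal inference sketch in Python using an off-the-shelf encoder-decoder code model from the Hugging Face hub. The checkpoint name, the Kotlin snippet, and the decoding settings are illustrative assumptions, not the exact configurations evaluated in the cited papers.

```python
# Sketch of the seq2seq repair framing: the model reads buggy code and
# generates candidate patches. Checkpoint and settings are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "Salesforce/codet5-base"  # any encoder-decoder code model would do
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Toy Kotlin bug: the branches are swapped, so the smaller value is returned.
buggy = "fun max(a: Int, b: Int): Int { return if (a > b) b else a }"
inputs = tokenizer(buggy, return_tensors="pt", truncation=True)

# Beam search yields several ranked candidate patches; repair pipelines then
# validate them, e.g. by re-running tests or the static analyzer.
candidates = model.generate(**inputs, max_length=64, num_beams=5,
                            num_return_sequences=5)
for c in candidates:
    print(tokenizer.decode(c, skip_special_tokens=True))
```
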
“…In addition to the above technical papers, Kim et al. [73] empirically investigate the performance of TFix in fixing errors from industrial Samsung Kotlin projects detected by the static analysis tool SonarQube. Mohajer et al. [105] conduct a more comprehensive study of LLMs in the static code analysis domain and propose SkipAnalyzer, an LLM-powered tool that performs three related tasks: detecting bugs, filtering false-positive warnings, and patching the detected bugs.…”
Section: Static Warnings
Confidence: 99%
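
As a rough illustration of the pipeline Kim et al. [73] evaluate, the sketch below assembles a TFix-style input from a SonarQube warning: TFix is a fine-tuned T5 that consumes the rule, the analyzer message, and a small code window as one text sequence. The exact prompt layout, helper name, and rule id here are assumptions for illustration.

```python
# Hypothetical helper that packs a SonarQube warning into a TFix-style
# input string; the precise format used by TFix may differ.
def build_tfix_input(rule: str, message: str,
                     prev_line: str, bug_line: str, next_line: str) -> str:
    # TFix conditions the model on the warning plus a window of code
    # around the flagged line; the model then emits the fixed window.
    context = "\n".join([prev_line, bug_line, next_line])
    return f"fix {rule} {message}:\n{context}"

text = build_tfix_input(
    rule="kotlin:S1481",  # assumed rule id: unused local variable
    message='Remove this unused "tmp" local variable.',
    prev_line="fun area(w: Int, h: Int): Int {",
    bug_line="    val tmp = w * h * 2",
    next_line="    return w * h",
)
print(text)  # this string is what the seq2seq model would translate
```
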
“…Similar to most traditional learning-based APR, this type of input regards APR as an NMT task, which translates a sentence from one source language (i.e., buggy code) to another target language (i.e., fixed code). Such a representation directly feeds LLMs with the buggy code snippet and has typically been employed to train LLMs with supervised learning on semantic bugs [28, 101, 206], security vulnerabilities [39, 188], and static warnings [73]. For example, Zhang et al. [188] investigate the performance of three bug-fixing representations (i.e., context, abstraction, and tokenization) to fine-tune five LLMs for vulnerability repair.…”
Section: What Input Forms Are Software Bugs Transformed Into When Uti…
Confidence: 99%
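
A minimal sketch of the supervised, NMT-style fine-tuning step this passage describes: each training example is a (buggy, fixed) pair, and the model is optimized with cross-entropy to translate one into the other. The checkpoint, the toy pair, and the hyperparameters are assumptions.

```python
# Supervised fine-tuning on (buggy, fixed) pairs, seq2seq style.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "Salesforce/codet5-small"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

pairs = [  # toy example; real corpora contain many thousands of pairs
    ("if (x = 0) return", "if (x == 0) return"),
]
model.train()
for buggy, fixed in pairs:
    # text_target tokenizes the fixed code as the labels for the decoder
    batch = tokenizer(buggy, text_target=fixed, return_tensors="pt")
    loss = model(**batch).loss  # cross-entropy against fixed-code tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
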