2021
DOI: 10.48550/arxiv.2105.04123
Preprint

Neural Program Repair with Execution-based Backpropagation

He Ye,
Matias Martinez,
Martin Monperrus

Abstract: Neural machine translation (NMT) architectures have achieved promising results for automatic program repair. Yet, they have the limitation of generating low-quality patches (e.g., patches that do not compile). This is because existing works optimize only a purely syntactic loss function based on characters and tokens, without incorporating program-specific information during neural network weight optimization. In this paper, we propose a novel program repair model called RewardRepair. The core novelty of RewardRepair…
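
The truncated abstract already names the key idea: augmenting the purely syntactic training loss with program-specific feedback obtained by executing candidate patches. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the penalty weights and the boolean compile/test outcomes are assumptions standing in for a real build-and-test harness.

```python
import torch
import torch.nn.functional as F

def execution_aware_loss(logits, target_ids, compiled_ok, tests_ok,
                         w_compile=2.0, w_tests=1.5):
    # Standard syntactic objective: token-level cross-entropy against
    # the ground-truth patch tokens.
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         target_ids.reshape(-1))
    # Program-specific signal: scale the loss by how the decoded
    # candidate patch behaved when compiled and executed.
    # (Weights w_compile/w_tests are illustrative assumptions.)
    if not compiled_ok:
        return w_compile * ce   # strongest penalty: patch does not compile
    if not tests_ok:
        return w_tests * ce     # compiles, but still fails the test suite
    return ce                   # plausible patch: syntactic loss only

# Toy usage: the compile/test outcomes would come from an external
# build-and-test harness run on the decoded candidate patch.
vocab_size, seq_len = 100, 8
logits = torch.randn(1, seq_len, vocab_size, requires_grad=True)
targets = torch.randint(0, vocab_size, (1, seq_len))
loss = execution_aware_loss(logits, targets, compiled_ok=False, tests_ok=False)
loss.backward()  # gradients now carry the execution-based penalty
```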

Cited by 3 publications (3 citation statements)
References 42 publications

“…DL-based APRs have achieved state-of-the-art performance on the program repair task [9,11,45,76,88]. Most of them treat repairing as a neural machine translation task and optimize an encoder-decoder model on a set of bug-fix pairs to learn latent patterns based on supervised learning.…”
Section: Background 2.1 DL-based APR (mentioning)
confidence: 99%
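
For readers unfamiliar with the formulation this statement summarizes, the supervised NMT view of repair can be sketched with any off-the-shelf encoder-decoder; t5-small below is an arbitrary stand-in, not a model used by the cited systems, and the one-line bug-fix pair is invented for illustration.

```python
# Sketch: program repair as supervised sequence-to-sequence translation,
# training an encoder-decoder on (buggy, fixed) token sequences.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One illustrative bug-fix pair (an off-by-one bound made strict).
buggy = "fix bug: if (i <= list.size()) return list.get(i);"
fixed = "if (i < list.size()) return list.get(i);"

inputs = tok(buggy, return_tensors="pt")
labels = tok(fixed, return_tensors="pt").input_ids
# Purely syntactic objective: token-level cross-entropy on the fix.
loss = model(**inputs, labels=labels).loss
loss.backward()
```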
“…This is likely one of the reasons behind the low-quality patches generated through deep learning approaches. RewardRepair developed a new loss function that executes predicted patches and penalizes patches that do not compile, thereby improving over state-of-the-art approaches [38]. These results are promising; however, a discussion with the authors of the above papers revealed a number of uncertainties in the future of deep learning-based APR methods.…”
Section: Automatic Program Repair (mentioning)
confidence: 99%
“…For incremental learning in S4Eq, we use an "instance incremental scenario": for our problem, we keep the output vocabulary constant while creating new data for incremental model updates [58]. Ye et al. discuss using an output verification process (in their case, compilation and testing for program repair) to adjust the loss function in later training iterations [59]; our approach is related in that we also test outputs, but instead of adjusting the training loss we create new training samples, which helps generalize the model to a different problem domain.…”
Section: Incremental and Transfer Learning (mentioning)
confidence: 99%
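
The contrast the citing authors draw, adjusting the training loss versus folding verified outputs back into the training data, can be sketched as follows. All names here (`Sample`, `grow_training_set`, the `model` and `verify` callables) are hypothetical placeholders under assumed interfaces, not code from either paper.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    source: str  # model input, e.g. a buggy program or an expression
    target: str  # desired output, e.g. a fix or an equivalent form

def grow_training_set(model, unlabeled_sources, verify, train_set):
    # Decode a candidate for each new input; keep only candidates that
    # pass the external verifier (build + tests for program repair,
    # equivalence checking in S4Eq's setting).
    accepted = []
    for source in unlabeled_sources:
        candidate = model(source)      # hypothetical decode step
        if verify(source, candidate):  # external check, not a loss term
            accepted.append(Sample(source, candidate))
    # The loss function and output vocabulary stay unchanged; only the
    # training data grows, hence an "instance incremental scenario".
    train_set.extend(accepted)
    return len(accepted)
```

The design point is that verification acts as a data filter rather than a loss term, so the same supervised objective keeps working while the training set drifts toward the new problem domain.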