2022
DOI: 10.48550/arxiv.2201.08698
Preprint

Natural Attack for Pre-trained Models of Code

Zhou Yang, Jieke Shi, Junda He, et al.

Abstract: Pre-trained models of code have achieved success in many important software engineering tasks. However, these powerful models are vulnerable to adversarial attacks that slightly perturb model inputs to make a victim model produce wrong outputs. Current works mainly attack models of code with examples that preserve operational program semantics but ignore a fundamental requirement for adversarial example generation: perturbations should be natural to human judges, which we refer to as the naturalness requirement. In…
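The abstract is truncated before the method, but the naturalness requirement it names points at drawing identifier substitutes from a pre-trained masked language model rather than at random, so that renamed variables still read naturally to humans. Below is a minimal, hypothetical sketch of that idea; the model choice (roberta-base), the fill-mask pipeline, and the variable being renamed are illustrative assumptions, not the paper's actual substitute-generation pipeline:

```python
# Hedged sketch of the "naturalness requirement": candidate variable names
# come from a pre-trained masked language model, so substitutes look natural
# to human judges. Model and example are assumptions for illustration only.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

code = "def mean(values): total = sum(values); return total / len(values)"
# Mask one occurrence of the variable to rename; the LM proposes natural names.
masked = code.replace("total", "<mask>", 1)
candidates = [p["token_str"].strip() for p in fill(masked, top_k=5)]
print(candidates)  # plausible identifiers to try as adversarial substitutes
```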


Cited by 4 publications (37 citation statements)
References 30 publications
“…There was also quite a bit of research centered on leveraging variables and their names to develop effective models of code. Yang et al. focused on adversarially training models of code by changing variable names [37]. Variable names are changed by greedily selecting the tokens that are most influential for correct predictions.…”
Section: Variablesmentioning
confidence: 99%
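The greedy, importance-ordered renaming this citation describes fits in a short sketch. Everything here (score_fn, rename, the candidate substitute lists, the 0.5 stopping threshold) is an illustrative assumption, not the authors' actual implementation of [37]; in the paper's setting the substitutes would themselves come from a naturalness-aware generator, while here they are simply passed in:

```python
# Sketch of a greedy variable-rename attack in the spirit of Yang et al. [37].
# All helper names and thresholds are assumptions, not the paper's API.
from typing import Callable, Dict, List

def greedy_rename_attack(
    code: str,
    variables: List[str],
    candidates: Dict[str, List[str]],       # substitute names per variable (assumed given)
    score_fn: Callable[[str], float],       # victim model's confidence in the correct label
    rename: Callable[[str, str, str], str], # rename(code, old, new) -> perturbed code
) -> str:
    base = score_fn(code)
    # A variable's importance = confidence drop when it is replaced by a placeholder.
    importance = {v: base - score_fn(rename(code, v, "_unk_")) for v in variables}
    # Perturb the most influential variables first (greedy ordering).
    for var in sorted(variables, key=importance.get, reverse=True):
        best_code, best_score = code, score_fn(code)
        for sub in candidates.get(var, []):
            perturbed = rename(code, var, sub)
            score = score_fn(perturbed)
            if score < best_score:           # keep the substitute that hurts the model most
                best_code, best_score = perturbed, score
        code = best_code
        if best_score < 0.5:                 # prediction likely flipped; stop early
            return code
    return code
```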
“…However, it is challenging to improve the robustness of pre-trained models. The latest work by Yang et al. [40] proposed attack strategies that make CodeBERT and GraphCodeBERT perform poorly on adversarial samples. They further combined adversarial samples with the original samples to fine-tune the pre-trained models, without any changes to the model architecture, to improve prediction robustness on downstream tasks.…”
Section: Introductionmentioning
confidence: 99%
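The adversarial fine-tuning recipe in this citation (train on the union of original and adversarial samples, architecture unchanged) can be sketched in a few lines. The batch size, optimizer, learning rate, and the assumption that the model maps token ids to class logits are illustrative choices, not taken from [40]:

```python
# Minimal sketch of adversarial fine-tuning: augment the training set with
# adversarial variants and fine-tune the pre-trained model on the union.
# Hyperparameters and the model interface are assumptions for illustration.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def adversarial_finetune(model, original_ds, adversarial_ds, epochs=3, lr=2e-5):
    loader = DataLoader(ConcatDataset([original_ds, adversarial_ds]),
                        batch_size=16, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for input_ids, labels in loader:
            optimizer.zero_grad()
            logits = model(input_ids)        # assumed: token ids in, class logits out
            loss = loss_fn(logits, labels)
            loss.backward()
            optimizer.step()                 # only the weights change, not the architecture
    return model
```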
“…Research on the adversarial robustness of models of code has attracted attention [218, 243-247]. Generally, these works can be categorized into two groups:…”
Section: Adversarial Robustness On Models Of Codementioning
confidence: 99%
“…However, it is challenging to improve the robustness of pre-trained models. The latest work by Yang et al. [218] proposed attack strategies to make CodeBERT and…”
Section: Introductionmentioning
confidence: 99%