2020
DOI: 10.1609/aaai.v34i01.5469

Generating Adversarial Examples for Holding Robustness of Source Code Processing Models

Abstract: Automated processing, analysis, and generation of source code are among the key activities in the software and system lifecycle. While deep learning (DL) exhibits a certain level of capability in handling these tasks, current state-of-the-art DL models still suffer from robustness issues and can be easily fooled by adversarial attacks. Unlike adversarial attacks on images, audio, and natural language, the structured nature of programming languages brings new challenges. In this paper, we p…

Cited by 77 publications (90 citation statements). References 0 publications.
“…Since we are the first to consider adversarial examples for code comment generation tasks, the literature is short of algorithms for direct comparison. To demonstrate the effectiveness of our approach, we adopt two algorithms as baselines, i.e., the random substitute algorithm and the algorithm based on Metropolis-Hastings sampling [49].…”
Section: Evaluation 4.1 Experiment Setup
confidence: 99%
“…Metropolis-Hastings algorithm. The Metropolis-Hastings sampling-based algorithm was recently used to generate adversarial examples for attacking source code classifiers [49]. Recall that the Metropolis-Hastings algorithm is a classical Markov chain Monte Carlo sampling approach, which can generate desirable examples given the target stationary distribution and a transition proposal.…”
Section: Evaluation 4.1 Experiment Setup
confidence: 99%
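The Metropolis-Hastings procedure recalled in the statement above can be illustrated with a minimal generic sampler. This is a sketch of the classical algorithm, not the attack in [49]; the names `log_target` and `proposal` are illustrative, and a symmetric random-walk proposal is assumed so the proposal ratio cancels:

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps, seed=0):
    """Generic Metropolis-Hastings sampler.

    log_target: log-density of the desired stationary distribution
    proposal:   symmetric transition proposal, (x, rng) -> x'
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = proposal(x, rng)
        # Accept with probability min(1, target(x') / target(x));
        # the symmetric proposal lets us drop the proposal ratio.
        log_alpha = log_target(x_new) - log_target(x)
        if math.log(rng.random() + 1e-300) < log_alpha:
            x = x_new
        samples.append(x)
    return samples

# Example: sample from a standard normal with a random-walk proposal.
samples = metropolis_hastings(
    log_target=lambda x: -0.5 * x * x,
    proposal=lambda x, rng: x + rng.uniform(-1.0, 1.0),
    x0=0.0,
    n_steps=20000,
)
mean = sum(samples) / len(samples)
```

After enough steps the chain's empirical distribution approximates the target, which is what lets such a sampler draw "desirable examples" once the stationary distribution is chosen to favor adversarial candidates.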
“…Each of the transformation operators above is designed to change the structural representation of the source code differently. For example, with VR, we want the NN to understand that even a change in textual information does not affect the semantic meaning of the source code, inspired by a recent finding of Zhang et al. [74], who suggest that source code models should be trained with adversarial examples of token changes to make them more robust.…”
Section: Program Transformation Operators
confidence: 99%
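The VR (variable renaming) operator described above can be sketched as a semantics-preserving token substitution. `rename_variable` is a hypothetical helper for illustration; a production tool would use the language's parser to respect scoping and shadowing rather than regular expressions:

```python
import re

def rename_variable(code, old_name, new_name):
    """Rename one identifier throughout a code snippet.

    Word-boundary matching avoids touching substrings of longer
    identifiers (e.g. `subtotal` when renaming `total`).
    """
    pattern = r"\b" + re.escape(old_name) + r"\b"
    return re.sub(pattern, new_name, code)

snippet = "total = 0\nfor item in items:\n    total = total + item\n"
adversarial = rename_variable(snippet, "total", "x7q")
```

The transformed snippet computes exactly the same result, so a robust model should assign it the same label or comment as the original.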
“…Security in deep learning has been a hot topic recently. Works [223][224][225][226][227][228][229][230] have been proposed to prevent adversarial attacks on deep learning models. On the other hand, we can use deep learning in security to help capture the semantics of a program.…”
Section: Binary Function Semantics Capturing
confidence: 99%