2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST)
DOI: 10.1109/icst49551.2021.00016
A Search-Based Testing Framework for Deep Neural Networks of Source Code Embedding

Cited by 35 publications (15 citation statements) | References 24 publications
“…In the first step of MixCode, in addition to the original data, we utilize multiple code refactoring methods to generate more diverse code data as the candidate data for mixing. MixCode supports 18 types of refactoring methods in the literature [28], [29]. The functionality of each method and a corresponding example are listed in Table VII (in Appendix A).…”
Section: B. Refactoring Methods (mentioning)
Confidence: 99%
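The refactoring operators cited above are semantic-preserving source transformations. Below is a minimal sketch of one such operator, variable renaming, implemented with Python's ast module; the single-operator scope, class name, and var_N naming scheme are illustrative, not MixCode's actual implementation:

```python
# Illustrative semantic-preserving refactoring: rename local variables.
import ast


class RenameVariables(ast.NodeTransformer):
    """Rename every locally bound name to a fresh alias, preserving behavior."""

    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        # Assign a fresh alias the first time a name is bound (Store context).
        if isinstance(node.ctx, ast.Store) and node.id not in self.mapping:
            self.mapping[node.id] = f"var_{len(self.mapping)}"
        # Rewrite every occurrence of an already-mapped name.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node


source = "total = 0\nfor item in range(10):\n    total = total + item\nprint(total)"
tree = RenameVariables().visit(ast.parse(source))
print(ast.unparse(tree))  # same behavior, different identifiers
```

The transformed program computes the same result, which is exactly what makes such variants usable as extra training data with unchanged labels.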
“…The first baseline is the basic data augmentation approach, using the transformed code generated by step 1 in Figure 1 to train the model directly. This data augmentation method is used in existing works [29], [36]. The second baseline is the standard training process without any data augmentation.…”
Section: Methods (mentioning)
Confidence: 99%
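A hedged sketch of that first baseline, under stated assumptions: the augmented training set is simply the union of the original samples and their refactored variants, trained on directly with no mixing step. Here refactor stands in for any operator such as the renamer sketched earlier, and train is a hypothetical placeholder, not an API from the cited works:

```python
# Illustrative direct-augmentation baseline: original data plus refactored
# variants, labels unchanged. `refactor` and `train` are placeholders.
def augment_directly(dataset, refactor, variants_per_sample=1):
    """Return original (code, label) pairs plus refactored copies."""
    augmented = list(dataset)
    for code, label in dataset:
        for _ in range(variants_per_sample):
            # A semantic-preserving transformation keeps the label valid.
            augmented.append((refactor(code), label))
    return augmented

# Usage: train(model, augment_directly(train_set, rename_variables))
```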
“…Beyond boosting the effectiveness (e.g., prediction accuracy) of these models, researchers have also explored the security threats faced by code models. For example, it has been found that applying program semantic-preserving transformations (like renaming variables) to the inputs can make state-of-the-art models produce wrong outputs [7], [8], [9], [11], [31], [32], which is called an adversarial attack. Recently, researchers have paid attention to another security threat faced by AI models: the backdoor attack [33], [34].…”
Section: Backdoor Attacks for Code Models (mentioning)
Confidence: 99%
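The adversarial-attack observation above reduces to a simple check: a model is locally non-robust if a behavior-preserving edit flips its prediction. A minimal sketch, assuming hypothetical model.predict and rename placeholders rather than any cited paper's API:

```python
# Illustrative robustness check: does a semantic-preserving transformation
# change the model's output? `model` and `rename` are assumed placeholders.
def is_adversarial(model, code, rename):
    original_pred = model.predict(code)
    transformed = rename(code)  # behavior-preserving edit, e.g. renaming
    # A prediction flip on semantically identical input is a robustness failure.
    return model.predict(transformed) != original_pred
```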
“…Applis et al. [55] extend metamorphic testing approaches for DNN models to software programs to evaluate the robustness of a code-to-text generation model. Pour et al. [32] focus on the embeddings of source code and propose a search-based testing framework to evaluate their robustness. Zhang et al. [9] propose the Metropolis-Hastings Modifier to generate adversarial examples for code authorship attribution models.…”
Section: Attacking Code Models (mentioning)
Confidence: 99%
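The search-based framework of Pour et al. searches over such transformations rather than sampling them blindly. A hedged sketch of a generic loop in that spirit: hill-climbing over semantic-preserving mutants, guided by the drop in the model's confidence on the true label. The operator list, fitness function, and budget below are illustrative, not the paper's exact algorithm:

```python
# Illustrative search-based robustness test: hill-climb over mutants until
# the model's prediction flips or the budget runs out. `model.confidence`
# and `model.predict` are hypothetical placeholders.
import random


def search_for_failure(model, code, label, operators, budget=100):
    best = code
    best_fitness = model.confidence(best, label)  # lower = harder for the model
    for _ in range(budget):
        candidate = random.choice(operators)(best)  # apply one random mutation
        fitness = model.confidence(candidate, label)
        if fitness < best_fitness:  # keep the variant the model finds harder
            best, best_fitness = candidate, fitness
        if model.predict(best) != label:  # prediction flipped
            return best  # robustness failure found
    return None  # no failure within budget
```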