2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase51524.2021.9678706
Assessing Robustness of ML-Based Program Analysis Tools using Metamorphic Program Transformations

Abstract: Metamorphic testing is a well-established testing technique that has been successfully applied in various domains, including testing deep learning models to assess their robustness against data noise or malicious input. Current metamorphic testing approaches for machine learning (ML) models focus on image processing and object recognition tasks; hence, these approaches cannot be applied to ML models targeting program analysis tasks. In this paper, we extend metamorphic testing approaches for ML models targeting …

Citations: cited by 20 publications (12 citation statements)
References: 22 publications (22 reference statements)
“…There are 196 C code snippets that satisfy the aforementioned constraints. We compute a statistically representative sample size using a popular sample-size calculator with a confidence level of 99% and a confidence interval of 10. We sample 100 code snippets to conduct the user study, which is statistically representative.…”
Section: RQ1 How Natural Are the Adversarial Examples Generated by Al… (mentioning)
Confidence: 99%
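For context, the required sample size here follows from the standard finite-population formula; below is a minimal sketch, assuming the calculator uses the usual 50% response distribution and z = 2.576 for 99% confidence (the excerpt does not name the exact calculator):

```python
import math

def sample_size(population: int, confidence_interval: float,
                z: float = 2.576, p: float = 0.5) -> int:
    """Finite-population sample size; z = 2.576 corresponds to 99% confidence."""
    c = confidence_interval / 100.0          # interval of 10 -> margin of 0.10
    ss = (z ** 2) * p * (1 - p) / (c ** 2)   # infinite-population estimate
    ss /= 1 + (ss - 1) / population          # finite-population correction
    return math.ceil(ss)

print(sample_size(196, 10))  # -> 91, so sampling 100 snippets is sufficient
```

Under these assumptions the required size is about 91, consistent with the excerpt's claim that a sample of 100 snippets is statistically representative.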
“…Pour et al. [34] proposed a testing framework for DNNs of source-code embedding, which can decrease the performance of code2vec [3] on the method-name prediction task by 2.05%. Applis et al. [5] use metamorphic program transformations to assess the robustness of ML-based program analysis tools in a black-box manner.…”
Section: Adversarial Attack on Models of Code (mentioning)
Confidence: 99%
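To make the cited technique concrete, the sketch below applies a toy semantics-preserving transformation (an identifier rename) of the kind such black-box approaches use; real frameworks rewrite the parse tree rather than raw text, so this regex version is only illustrative:

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Rename a variable using whole-word matching so substrings stay untouched.

    Semantics-preserving only if `new` does not collide with an existing
    identifier in the same scope (not checked in this sketch).
    """
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

snippet = "int add(int a, int b) { int sum = a + b; return sum; }"
print(rename_identifier(snippet, "sum", "result"))
# -> int add(int a, int b) { int result = a + b; return result; }
```

A robust model should produce the same prediction for both variants; that equality is exactly the metamorphic relation the cited approaches test.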
“…Therefore, semantic tasks should use upper-level layers to represent code to achieve the best performance. Interestingly, the performance remains uniform in the middle layers (4–10). This can be related to the uniform attention on identifiers and values that we see in Figure 4.…”
Section: Semantic Representation of Code for Code Clone Detection Usi… (mentioning)
Confidence: 53%
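This layer-wise behaviour can be probed with the Hugging Face transformers API; the sketch below mean-pools per-layer hidden states and compares a clone pair, using CodeBERT purely as an example model (the excerpt does not name one):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base",
                                  output_hidden_states=True)

def layer_embeddings(code: str):
    """One mean-pooled vector per layer (embedding layer + 12 encoder layers)."""
    inputs = tok(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

a = layer_embeddings("int max(int x, int y) { return x > y ? x : y; }")
b = layer_embeddings("int biggest(int p, int q) { return p > q ? p : q; }")
for i, (u, v) in enumerate(zip(a, b)):
    print(f"layer {i:2d}: clone similarity = "
          f"{torch.cosine_similarity(u, v, dim=0).item():.3f}")
```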
“…They compare the performance of these models from the perspective of probing classifiers. Other studies [5, 9, 35, 49] show that identifiers are important code entities and can be used in the modeling of Transformer-based models. This work presents the first study in software engineering that analyses the multi-headed attention mechanism of BERT, which has not been done previously [21].…”
Section: Related Work (mentioning)
Confidence: 99%
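A minimal sketch of how such an attention analysis can be reproduced with transformers, again assuming a BERT-style code model (CodeBERT here is an illustrative choice, not the model the excerpt studies); the (batch, heads, seq, seq) attention shape is the library's documented layout:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base",
                                  output_attentions=True)

code = "int counter = 0; counter = counter + 1;"
inputs = tok(code, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
for layer, att in enumerate(out.attentions):   # each: (1, heads, seq, seq)
    received = att.mean(dim=1)[0].mean(dim=0)  # avg over heads, then queries
    top = received.argmax().item()
    print(f"layer {layer:2d}: most-attended token = {tokens[top]!r}")
```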
“…Rabin et al. [31] evaluate whether neural program analyzers like GGNN [30] can generalize to programs modified using semantics-preserving transformations. Applis et al. [55] extend metamorphic testing approaches for DNN models for software programs to evaluate the robustness of a code-to-text generation model. Pour et al. [32] focus on the embeddings of source code and propose a search-based testing framework to evaluate their robustness.…”
Section: Attacking Code Models (mentioning)
Confidence: 99%
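The evaluation loop shared by these studies can be summarised as a metamorphic check: a semantics-preserving transformation must leave the model's output unchanged, and each disagreement counts as a robustness failure. In the sketch below, `predict` and `transform` are placeholders for the model under test and any semantics-preserving rewrite:

```python
from typing import Callable, Iterable

def robustness_score(programs: Iterable[str],
                     predict: Callable[[str], str],
                     transform: Callable[[str], str]) -> float:
    """Fraction of programs whose prediction survives the transformation.

    Metamorphic relation: transform() preserves semantics, so the model
    should predict identically on the original and on the variant.
    """
    total = violations = 0
    for program in programs:
        total += 1
        if predict(program) != predict(transform(program)):
            violations += 1
    return 1.0 - violations / total if total else 1.0

# Demo with stand-ins: a length-based "model" and a whitespace rewrite.
score = robustness_score(
    ["int f(){return 1;}", "void g(int x){x++;}"],
    predict=lambda p: "short" if len(p) < 20 else "long",
    transform=lambda p: p.replace(";", " ;"),
)
print(f"robustness: {score:.2f}")
```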