2019
DOI: 10.1007/s11219-019-09446-5

Classifying generated white-box tests: an exploratory study

Abstract: White-box test generator tools rely only on the code under test to select test inputs, and capture the implementation's output as assertions. If there is a fault in the implementation, it could get encoded in the generated tests. Tool evaluations usually measure fault-detection capability using the number of such fault-encoding tests. However, these faults are only detected if the developer can recognize that the encoded behavior is faulty. We designed an exploratory study to investigate how developers perfor…

Cited by 6 publications (4 citation statements)
References 51 publications
“…We conducted empirical research to analyze test case understandability by exploiting a subset of the test case evaluation results (450 instances) from Honfi's study [44], which involved 30 developers and 15 white-box generated test cases. We extracted 20 test code metrics from the generated test cases and six developer-related metrics from the preliminary survey.…”
Section: Discussion
Confidence: 99%
“…On the developers' side, their performance when classifying white-box test cases has been investigated in terms of whether the output is true (pass) or false (fail) [10]. The experiment involved 106 developers who were asked to classify the outputs of the test cases generated using several methods.…”
Section: Figure 1, Example of Optimized Test Case
Confidence: 99%
“…Several examples in testing software-intensive CPSs [5], [6], [18], [19], [27], [28] highlight this realistic aspect. Furthermore, the usability of automatically generated tests may be hindered by test cases that are not realistic (i.e., strange and difficult to comprehend for developers) [29]. Problem statement.…”
Section: Introduction
Confidence: 99%