Software unit test coverage and adequacy (1997)
DOI: 10.1145/267580.267590

Abstract: Objective measurement of test quality is one of the key issues in software testing. It has been a major research focus for the last two decades. Many test criteria have been proposed and studied for this purpose. Various kinds of rationales have been presented in support of one criterion or another. We survey the research work in this area. The notion of adequacy criteria is examined together with its role in software dynamic testing. A review of criteria classification is followed by a summary of the methods …

Cited by 955 publications (726 citation statements)
References 177 publications
“…As mentioned in the introduction, it is assumed that M is correct, and its correctness has been assured, for example, by using model checking techniques [19] with respect to some properties derived from the requirements. A test generation algorithm Φ using a test selection criterion [20], for example, covering every node if M is graph-based, is applied to M such that Φ(M) generates a test set T = {t₁, t₂, …}…”
Section: Modeling and Model-based Testing (mentioning)
confidence: 99%
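The excerpt above describes criterion-driven test generation from a model M, with node coverage as the example selection criterion. Below is a minimal sketch of that idea in Python; the graph-based model, its state names, and the helper function are illustrative assumptions, not taken from the cited work.

```python
from collections import deque

def generate_node_covering_tests(model, start):
    """Return a list of paths (abstract test cases) that together visit every
    node reachable from `start` in a graph-based model."""
    # Breadth-first search records a shortest-path predecessor for every node.
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for succ in model.get(node, []):
            if succ not in parents:
                parents[succ] = node
                queue.append(succ)
    tests, covered = [], set()
    # Visit later-discovered targets first, so longer paths subsume shorter ones.
    for target in reversed(list(parents)):
        if target in covered:
            continue
        path, node = [], target
        while node is not None:
            path.append(node)
            node = parents[node]
        path.reverse()
        tests.append(path)
        covered.update(path)
    return tests

# Hypothetical graph-based model M: nodes are states, edges are transitions.
M = {"idle": ["connecting"], "connecting": ["connected", "idle"],
     "connected": ["idle"]}
print(generate_node_covering_tests(M, "idle"))
# [['idle', 'connecting', 'connected']] -- every node of M appears in some test
```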
“…A mutant is killed (“distinguished”) by a test case t in T if the observed behavior of P differs from that of the mutant when executed against t. A mutant is equivalent to P if it always behaves the same as P for every case (i.e., every possible input). The test set T is said to be mutation adequate [20] with respect to P and X if every non-equivalent mutant of P generated by applying the mutation operators in X can be killed by at least one test case in T. Mutation testing has also been extended to validate a specification S. To do this, a set of mutation operators is applied to S to generate mutated specifications (S*).…”
Section: Model-based Mutation Testing (MBMT) (mentioning)
confidence: 99%
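As a rough illustration of the killed/equivalent/adequate distinctions in the excerpt, the sketch below treats the program P and its mutants as plain Python functions. The specific program, the two mutation operators, and the test inputs are illustrative assumptions, not drawn from the cited paper.

```python
def P(x):
    return x if x >= 0 else -x          # original program: absolute value

def mutant_relop(x):                    # operator: ">=" replaced by ">"
    return x if x > 0 else -x           # equivalent to P (same output for every input)

def mutant_drop_neg(x):                 # operator: negation removed
    return x if x >= 0 else x           # non-equivalent to P

def is_killed(mutant, tests):
    """A mutant is killed if some test case observes a behavioural difference from P."""
    return any(mutant(t) != P(t) for t in tests)

def mutation_adequate(non_equivalent_mutants, tests):
    """T is mutation adequate w.r.t. P and the operators if every
    non-equivalent mutant is killed by at least one test case in T."""
    return all(is_killed(m, tests) for m in non_equivalent_mutants)

T = [5, 0, -3]
# mutant_relop is equivalent, so it is excluded from the adequacy check;
# deciding equivalence is undecidable in general and is done here by inspection.
print(mutation_adequate([mutant_drop_neg], T))   # True: input -3 kills the mutant
```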
“…This ratio is usually used as a decisive factor in determining the point in time at which to stop testing, that is, to release the SUT, or to improve it and enhance the test set to continue testing [15]. To be more precise, the coverage C is defined as C = |O′|/|O|, where O is a finite set of measuring objects, O′ is the subset of O that has been tested, and |O| represents the number of elements of O.…”
Section: N-switch Faulty Transition Coverage Criterion (mentioning)
confidence: 99%
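As a worked example of the ratio in the excerpt, the snippet below computes C = |O′|/|O| for a hypothetical set of measuring objects (here, transitions identified by strings); the object names are assumptions made only for illustration.

```python
def coverage(measured_objects, tested_objects):
    """Coverage C = |O'| / |O|, where O' is the tested subset of O."""
    O = set(measured_objects)
    O_prime = set(tested_objects) & O              # O' must be a subset of O
    return len(O_prime) / len(O)

O = {"t1->t2", "t2->t3", "t3->t1", "t2->t1"}       # measuring objects, e.g. transitions
tested = {"t1->t2", "t2->t3"}                      # objects exercised so far
print(coverage(O, tested))                         # 0.5
```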
“…Testing, being a complex and costly activity, has motivated much research on efficient techniques to automate all aspects of the software testing process [3]. Of particular interest is the automation of test data generation [4] for functional unit testing, where the idea is to automatically generate a representative set of inputs for a unitary program fragment under test (typically a function or method). Running the considered code unit with the generated test data then offers a rather representative view of its actual behaviour, enabling one to detect errors efficiently.…”
Section: Introduction (mentioning)
confidence: 99%
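As a rough sketch of the idea in the excerpt, the snippet below generates input data for a small unit under test by random sampling and checks a simple property on each run; the unit, the input ranges, and the property oracle are illustrative assumptions (real generators use far more sophisticated strategies).

```python
import random

def unit_under_test(a, b):
    """Toy unit under test: integer midpoint of two numbers."""
    return (a + b) // 2

def generate_test_data(n, low=-1000, high=1000, seed=0):
    """Generate n input pairs for the unit under test (plain random sampling)."""
    rng = random.Random(seed)
    return [(rng.randint(low, high), rng.randint(low, high)) for _ in range(n)]

# Run the unit on every generated input and check a simple property oracle:
# the midpoint must lie between the two operands.
for a, b in generate_test_data(100):
    m = unit_under_test(a, b)
    assert min(a, b) <= m <= max(a, b), (a, b, m)
print("100 generated inputs executed, no property violations observed")
```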
“…In such an approach, the idea is to generate test data that in some way cover a sufficiently large part of the control-flow graph of the code unit under test [4]. In a nutshell, symbolic execution executes the unit under test over symbolic input values instead of concrete ones [5].…”
Section: Introduction (mentioning)
confidence: 99%
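The excerpt contrasts concrete execution with symbolic execution over the unit's control-flow graph. Below is a deliberately tiny, hand-rolled sketch of that idea: the unit's two paths are written out as path conditions over a symbolic input x, and a brute-force "solver" picks one concrete input per path. The unit, the enumerated conditions, and the solver are illustrative assumptions; real engines derive path conditions automatically and discharge them with SMT solvers.

```python
def unit(x):
    """Unit under test with one branch, i.e. two paths in its control-flow graph."""
    if x > 10:
        return x - 10
    return x

# Path conditions over the symbolic input x, one per feasible path.
paths = [
    ("then-branch", lambda x: x > 10),
    ("else-branch", lambda x: not (x > 10)),
]

def solve(path_condition, domain=range(-100, 101)):
    """Toy constraint 'solver': return any concrete value satisfying the condition."""
    return next((v for v in domain if path_condition(v)), None)

for name, condition in paths:
    test_input = solve(condition)
    print(f"{name}: input {test_input} -> output {unit(test_input)}")
# The two generated inputs together cover every branch of the unit.
```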