1995
DOI: 10.1142/s0218194095000241

Reachability Testing: An Approach to Testing Concurrent Software

Abstract: Concurrent programs are more difficult to test than sequential programs because of their non-deterministic behavior. An execution of a concurrent program non-deterministically exercises a sequence of synchronization events called a synchronization sequence (or SYN-sequence). Non-deterministic testing of a concurrent program P executes P with a given input many times in order to exercise distinct SYN-sequences. In this paper, we present a new testing approach called reachability testing. If every execution of P …
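To make the non-deterministic testing idea described in the abstract concrete, the following minimal sketch (not taken from the paper; thread names, the lock-based SYN-event, and the run count are illustrative assumptions) runs a small concurrent program repeatedly with the same input and records which SYN-sequence each run happens to exercise:

// Minimal sketch of non-deterministic testing: repeated runs of the same
// program may exercise different SYN-sequences, with no coverage guarantee.
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentLinkedQueue;

public class NonDeterministicTestDemo {
    public static void main(String[] args) throws InterruptedException {
        Set<String> observedSequences = new HashSet<>();

        for (int run = 0; run < 100; run++) {
            // Record the order in which the two threads acquire the shared lock.
            ConcurrentLinkedQueue<String> synSequence = new ConcurrentLinkedQueue<>();
            Object lock = new Object();

            Thread t1 = new Thread(() -> { synchronized (lock) { synSequence.add("T1:lock"); } });
            Thread t2 = new Thread(() -> { synchronized (lock) { synSequence.add("T2:lock"); } });
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            observedSequences.add(String.join(",", synSequence));
        }

        // Repeated runs may or may not cover all feasible SYN-sequences;
        // reachability testing is aimed at closing exactly this gap.
        System.out.println("Distinct SYN-sequences observed: " + observedSequences);
    }
}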

Cited by 56 publications (52 citation statements); references 0 publications.
“…In the field of the behavior of concurrent programs, Carver et al [12] proposed repeatable deterministic testing, while the idea of systematic generation of all thread schedules for concurrent program testing came with works on reachability testing [13,14]. The VeriSoft model checker [15] applied state exploration directly to executable programs, enumerating states rather than schedules.…”
Section: Related Work
confidence: 99%
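The repeatable deterministic testing mentioned in this citation can be illustrated with a short sketch (my own, not the mechanism of Carver et al. [12]; the latch-based forcing and thread names are assumptions): once a SYN-sequence of interest is chosen, the harness forces the threads to synchronize in that order so the run can be replayed.

// Illustrative replay of the chosen SYN-sequence "T2:lock" then "T1:lock",
// forced with a latch so every run is deterministic and repeatable.
import java.util.concurrent.CountDownLatch;

public class DeterministicReplayDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        CountDownLatch t2Done = new CountDownLatch(1);

        Thread t1 = new Thread(() -> {
            try {
                t2Done.await();                 // block until T2 has performed its event
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            synchronized (lock) {
                System.out.println("T1:lock");  // second event of the forced SYN-sequence
            }
        });

        Thread t2 = new Thread(() -> {
            synchronized (lock) {
                System.out.println("T2:lock");  // first event of the forced SYN-sequence
            }
            t2Done.countDown();                 // release T1 only after T2's event
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Every run now prints "T2:lock" followed by "T1:lock".
    }
}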
“…In the experiment, we applied the notion of forced deterministic testing for concurrent programs [12] to conduct the evaluation.…”
Section: A. Experimental Setup
confidence: 99%
“…This is typically defined in terms of the fraction of program paths tested [7][28]; paths, which in the multi-tasking/distributed scenarios we consider, are defined by the execution order graphs. The derived execution orderings also provide means for detecting violations of basic assumptions during testing, e.g., the execution of a scenario not defined by the EOG may be caused by an exceeded worst case execution time.…”
Section: Coverage
confidence: 99%
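As a rough illustration of the coverage notion in this citation (my own sketch, not from the cited work; the ordering strings and the EOG contents are invented for the example), coverage can be computed as the fraction of orderings defined by the execution order graph (EOG) that testing has exercised, and any observed ordering outside the EOG is flagged as a violated assumption:

// Coverage = (observed orderings that are in the EOG) / (all EOG orderings);
// orderings observed outside the EOG are reported as assumption violations.
import java.util.Set;

public class CoverageDemo {
    public static void main(String[] args) {
        Set<String> eogOrderings = Set.of("T1,T2", "T2,T1");   // orderings defined by the EOG
        Set<String> observed = Set.of("T1,T2", "T2,T1,T3");    // orderings seen during testing

        long covered = observed.stream().filter(eogOrderings::contains).count();
        System.out.printf("Coverage: %.2f%n", (double) covered / eogOrderings.size());

        observed.stream()
                .filter(o -> !eogOrderings.contains(o))
                .forEach(o -> System.out.println("Violation: ordering not in EOG: " + o));
    }
}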