2016
DOI: 10.1145/2858651

Concurrency Testing Using Controlled Schedulers

Abstract: We present an independent empirical study on concurrency testing using controlled schedulers. We have gathered 49 buggy concurrent software benchmarks, drawn from public code bases, which we call SCTBench. We applied a modified version of an existing concurrency testing tool to SCTBench, testing five controlled scheduling techniques: depth-first search, preemption bounding, delay bounding, a controlled random scheduler, and probabilistic concurrency testing (PCT). We attempt to answer several research question…
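
Of the five techniques named in the abstract, PCT can be summarized as a priority-based controlled scheduler: every thread receives a random initial priority, d−1 priority change points are placed at random steps, and at each step the highest-priority runnable thread executes. The Python sketch below is illustrative only: it assumes an abstract program model in which every pending step is always enabled (no blocking) and counts change points over global steps; the name pct_schedule and the toy two-thread program are hypothetical, not taken from SCTBench or the paper's tool.

```python
import random

def pct_schedule(threads, depth, rng=random.Random(0)):
    """One PCT run over an abstract program (minimal sketch).

    threads: dict mapping thread id -> list of step labels (assumed
             always enabled; blocking is omitted for brevity).
    depth:   bug-depth parameter d; d-1 priority change points are used.
    Returns the serialized schedule as a list of (thread id, step) pairs.
    """
    n = len(threads)
    k = sum(len(steps) for steps in threads.values())   # total steps

    # Initial priorities: a random permutation of d, d+1, ..., d+n-1,
    # all higher than any change-point priority (d-1 down to 1).
    prios = dict(zip(threads, rng.sample(range(depth, depth + n), n)))

    # d-1 random change points among the k steps, with priorities d-1..1.
    change_points = dict(zip(rng.sample(range(k), depth - 1),
                             range(depth - 1, 0, -1)))

    remaining = {tid: list(steps) for tid, steps in threads.items()}
    schedule = []
    for step_no in range(k):
        # Run the highest-priority thread that still has steps left.
        tid = max((t for t in remaining if remaining[t]),
                  key=lambda t: prios[t])
        schedule.append((tid, remaining[tid].pop(0)))
        # At a change point, drop the running thread's priority.
        if step_no in change_points:
            prios[tid] = change_points[step_no]
    return schedule

# Example: two hypothetical threads racing on an abstract shared variable.
print(pct_schedule({"T1": ["write x", "unlock"], "T2": ["lock", "read x"]},
                   depth=2))
```
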

Cited by 26 publications (26 citation statements)
References: 54 publications

“…Thus an eager technique would explore a constant number of traces and spend time linear in the parameter n, whereas a lazy technique would explore a number of traces, and spend time exponential in n. □ Benchmarks and Platform. The benchmark programs we use in this section all are parametric in the number of threads, and are taken from SV-COMP [2019], SCTBench [Thomson et al 2016], and from the papers that describe the tools we use [Abdulla et al 2017;Aronis et al 2018;Chalupa et al 2018]. In order for these programs to be handled by all tools, and RCMC in particular, we needed to convert some of them to C11 with SC read and write accesses.…”
Section: Performance Evaluation
Mentioning confidence: 99%
“…Starting from five programs of SCTBench [Thomson et al 2016], the Systematic Concurrency Testing Benchmark Suite, we conducted an experiment in order to see whether the rf equivalence provides some advantage as far as bug finding is concerned. For our experiment, we chose the five reorder_N_bad benchmarks (N ∈ {3,4,5,10,20} is the number of created pthreads).…”
Section: Why the Reads-from Equivalence Matters for SMC
Mentioning confidence: 99%
“…The most straightforward sampling algorithm is random walk: at each step, randomly pick an enabled event to execute. Previous work showed that even such a sampling outperformed the exhaustive search at finding errors in real-world concurrent programs [24]. This can be explained by applying the small-scope hypothesis [12, Sect.…”
Section: Introduction
Mentioning confidence: 96%
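
The random-walk sampler described in the excerpt above is easy to state as a controlled scheduler: at every step, pick one of the currently enabled events uniformly at random and execute it. The sketch below is a hedged illustration under an abstract state/event model; enabled, execute, and the lost-update toy program are assumptions made for the example, not APIs of any cited tool.

```python
import random

def random_walk(initial_state, enabled, execute, rng=random.Random()):
    """One random-walk run: at each step pick a random enabled event.

    enabled(state) -> list of events that can run in `state`
    execute(state, event) -> successor state
    The walk stops when nothing is enabled (termination or deadlock).
    """
    state, trace = initial_state, []
    while True:
        events = enabled(state)
        if not events:
            return state, trace
        event = rng.choice(events)            # uniform among enabled events
        trace.append(event)
        state = execute(state, event)

# Toy usage: two threads each load the shared counter, then store the
# incremented value; a bad interleaving loses one of the updates.
def enabled(state):
    pcs, _, _ = state
    return [t for t, pc in pcs.items() if pc < 2]

def execute(state, t):
    pcs, regs, shared = dict(state[0]), dict(state[1]), state[2]
    if pcs[t] == 0:
        regs[t] = shared                      # load the shared counter
    else:
        shared = regs[t] + 1                  # store the incremented value
    pcs[t] += 1
    return (pcs, regs, shared)

final_state, trace = random_walk(({"T1": 0, "T2": 0}, {}, 0), enabled, execute)
print(trace, "counter =", final_state[2])     # counter == 1 is the lost update
```
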
“…The justification is that common patterns of concurrency bugs require few scheduling constraints and these in turn can be related to few preemptions [3,18]. Delay bounding [6] is another bounding technique that forces the scheduler to always schedule the first non-blocked process out of a total order of all processes.…”
Section: Stateless Model Checking Erlang Concuerror and Bounding
Mentioning confidence: 99%
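
One way to read the delay-bounding description in the excerpt above: the scheduler deterministically runs the first non-blocked thread in a fixed total order, each of at most d delays lets one scheduling decision fall through to the next thread in that order, and the search enumerates where to spend the delays. The Python sketch below reflects that simplified reading (threads never block, and at most one delay is spent per scheduling point); delay_bounded_run, delay_bounded_search, and the toy program are illustrative names, not part of Concuerror or the tool evaluated in the paper.

```python
from itertools import combinations

def delay_bounded_run(threads, delay_steps):
    """One execution under a delay-bounding scheduler (simplified sketch).

    threads:     dict thread id -> list of step labels; the fixed total
                 order over threads is the dict's insertion order, and
                 blocking is ignored for brevity.
    delay_steps: set of global step indices at which one delay is spent;
                 a delay skips the thread the deterministic scheduler
                 would otherwise run.
    """
    remaining = {tid: list(steps) for tid, steps in threads.items()}
    order = list(threads)                      # fixed total order
    schedule, step_no = [], 0
    while any(remaining.values()):
        runnable = [t for t in order if remaining[t]]
        tid = runnable[0]                      # default deterministic choice
        if step_no in delay_steps and len(runnable) > 1:
            tid = runnable[1]                  # spend a delay: skip one thread
        schedule.append((tid, remaining[tid].pop(0)))
        step_no += 1
    return schedule

def delay_bounded_search(threads, bound):
    """Enumerate every run that spends at most `bound` delays."""
    total = sum(len(steps) for steps in threads.values())
    for d in range(bound + 1):
        for points in combinations(range(total), d):
            yield delay_bounded_run(threads, set(points))

# With a bound of 1 delay and 4 steps, only 1 + 4 delay placements are tried,
# rather than every interleaving of the two threads.
toy = {"T1": ["a1", "a2"], "T2": ["b1", "b2"]}
for schedule in delay_bounded_search(toy, bound=1):
    print(schedule)
```
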